
Proc. National Conference on Recent Trends in Mechanical Engineering 2011






KEYNOTE LECTURES



THE WEB OF 7Es:
ENERGY, ECOLOGY, ECONOMY, EMPLOYMENT, EQUITY,
ENTROPY, ETHICS

P. L. Dhar
Former Professor, Department of Mechanical Engineering, IIT Delhi
pldhar@yahoo.com

ABSTRACT
Any attempt at evolving policies to address the global climate change and energy security issues must take a
holistic view of the various factors influencing them. These seven factors (Energy, Ecology, Economy,
Employment, Equity, Entropy and Ethics) constitute a web with intricate interconnections. The paper gives a
glimpse of these interconnections and indicates how a lack of recognition of them has led, in the past, to
suggestions, like reducing the specific fuel consumption of cars, which have eventually proved to be
counterproductive. The paper also illustrates how an appreciation of these interconnections can suggest novel
strategies to address these twin problems. The analysis presented in the paper brings out that the present crisis
of energy, ecology, economy, (un)employment and (in)equity has its roots in a crisis of ethics and values, which
in turn influence the technology choice.
Keywords: Climate change, energy security, economy, equity, entropy, sustainability, ethics

1. INTRODUCTION
The award of the Nobel Peace Prize to the IPCC team has brought the issue of climate change to the centre stage
of international debates on sustainability. Climate change is directly linked to the increase in the carbon-dioxide
content of the earth's atmosphere, which in turn is inextricably linked to the global consumption of fossil
(hydrocarbon) fuels, whose combustion is the prime contributor to this increase. The stocks of these fossil fuels,
which form the backbone of the modern economy, are also being rapidly depleted by their ever increasing
consumption. Recent data [1] indicate that the proven reserve-to-production ratio of the world's oil, the
most versatile of these fossil fuels, was 41.6 years at the end of 2007. Clearly, to ensure its availability for
future generations, there is a need to reduce the consumption of oil, and indeed of the other fossil fuels too, which is
in conformity with the need to reduce the rate of increase of the carbon dioxide content of the atmosphere. However,
the economy of a nation is directly linked to its per capita energy consumption (Figure 1) [4], and most
developing nations resist any such suggestion since it may threaten their development. Thus energy and
climate are strongly linked to the issue of economy. Further, the generic term economy has various attributes
of great social importance, like equity and employment, which are influenced by the type of prime energy source
used in a society. Thus any discourse on energy policy should recognise these, and other not-so-evident,
interconnections. In this paper an attempt has been made to unravel this web of interconnections in the hope that
it will enable policy makers to evolve better policies for sustainable living.
2. ENERGY, ECOLOGY AND ECONOMY
Historically, any change in the prime energy source of a society has resulted in a revolution in its lifestyle.
Thus the domestication of animals, and the resulting easy availability of draft animal power, played a key role in the
transition from the hunter-gatherer society (where human muscle power was the only source of energy) to the
agricultural society. The discovery of large stocks of coal (and the steam engine) heralded the industrial revolution, with
its mechanised production, in the eighteenth century [2]. Slowly, steam boilers and engines replaced animal draft
power, windmills and water wheels. Thus were sown the seeds of the phenomenal increase in the carbon dioxide
content of air, and in the economies of nations. The second industrial revolution [3] of the nineteenth century is
usually associated with numerous discoveries resulting in technological advances, the two most important of
these being the harnessing of electricity and the modern use of petroleum products in internal combustion
engines. International trade on a massive scale, as we see today, has become possible only due to the availability
of convenient liquid fuels to power ships and aeroplanes. Before the advent of mechanised production, trade
was confined to local markets based primarily on barter. The first modern national system of markets was put
in place in England almost concurrently with the industrial revolution and the onset of mechanisation. The
technological innovations that have become possible after the harnessing of electricity, the most versatile form of
energy, are too well known to need elaboration. It would not be an exaggeration to say that globalisation, as we
see it today, would not have been possible without oil and electricity.


An important feature of these changes in energy sources is the progressive increase in the
concentration of energy, the acme of which is reached in nuclear fuels. The centralised production in
heavy industries, which forms the backbone of the modern global economy, has been possible only because of these
concentrated fuels. Centralisation of production gives economies of scale, but it also results in a
centralisation of pollutants beyond the recuperative capacity of the atmosphere. As the sizes of the economies of all
nations increase, so do industrial activity, the consumption of fossil fuels and the consequent global
warming. As indicated in Figure 1 [4], there is a direct correlation between per capita energy consumption and
the GDP of nations. The ecological crises, and the rapid depletion of fossil fuel reserves, are thus the flip side of
the much admired increase in the wealth of nations.
Global warming and the associated climate change, however, influence the economy negatively through
losses caused by the increased intensity of storms, floods and droughts. Rough estimates of their economic
impact during the last few decades, for which the data have been analysed, are shown in Figure 2 [5]. The
impact is clearly quite significant, and as the intensity of such events increases with global warming,
we can expect even greater damage to the economy due to climate change, as predicted by the Stern Review Report
[6]. It may be worth mentioning that just the losses to private insurance companies after a single disaster,
Hurricane Katrina in the US in 2005, are estimated at 40-50 billion dollars.

Figure 1: GDP and per capita energy consumption relationship




Today, promoting consumerism by creating demand through advertising and planned obsolescence [7]
has become an integral part of wealth creation. In such a scenario, rapid depletion of fossil fuels and rapid
escalation of global warming, with all their serious consequences, are unavoidable unless a major shift to non-
hydrocarbon sources of energy takes place soon.
Nuclear energy is now being seen as a clean non-carbon energy source and a resurgence in its use seems
imminent. However, enormous amounts of fossil fuels are used to construct the massive concrete structures of
nuclear reactors, as well as to mine and process uranium ore. Moreover, we need to remember that till now no
solution has been found to the problem of the final disposal of the highly radioactive and toxic waste* generated in
nuclear power plants.

Most renewable energy sources, like solar energy and bio-fuels, are diffused energy sources and thus are better
suited to decentralised production systems. Ideally, therefore, the projected increase in the share of renewable energy
in future should be accompanied by changes in the production systems. This linkage is often not appreciated,
resulting in enormous waste of energy: a net is laid to trap the diffused energy over a large catchment area so
that it can power heavy industries, whose products are then transported to consumers spread all over,
resulting in easily avoidable wastage of energy first at the collection point and then in the transportation of the
produce.
3. ENERGY, ECONOMY, ECOLOGY, EMPLOYMENT, EQUITY

It is important to disaggregate the factors economy and energy, considered above in an omnibus manner.
Both can be divided into two categories: economies based on centralised or decentralised production
systems, and fossil or renewable energies. While fossil fuels are the only suitable source for centralised
production in heavy industries, renewable energy resources are ideally suited to decentralised production
systems. These production systems influence employment and equity differently, as discussed at length in [7]
and shown in the influence diagram of Figure 4.

* To get an idea of the toxicity of plutonium, it may be mentioned that one gram of plutonium, if dispersed uniformly, is sufficient to kill
a million people.
Figure 3: Energy-Ecology-Economy interrelationships (influence diagram linking energy reserves, energy price, energy consumption, the economy and global warming)
Figure 2: Economic losses from disasters

Centralised production in heavy industries is capital
intensive, so only a select few (the already rich) can benefit from it. It uses automation to bring down costs,
making it difficult for the products of small industries to compete. Moreover, it creates far fewer employment
opportunities per unit of capital investment. All these factors exacerbate inequity in society. It needs to be
appreciated that the bulk of the employment opportunities in developing nations are in the unorganized sector (in
India, for example, only 14% of the workforce is in the organized sector), and so when the economy of a nation relying
primarily on heavy industries grows, as has been happening in India, China and other countries in the post-
liberalisation regime, the inequity in the society also grows. This is brought out remarkably in Figure 5a,
adapted from reference [9], which shows how the GINI index, a measure of inequity in society, has been increasing in
most nations after the liberalisation of their economies, which gave a spurt to economic growth.
Clearly, increasing use of renewable energy, which is diffused by its very nature, would not only
improve ecology but also promote employment and equity in society, which are the greatest needs of
modern times. Most sociologists agree that the increased crime and violence in society today is being
sustained by unemployed youth who see it as the only means of redressing the injustice of inequity. Enough
empirical evidence is also available to support this thesis; see, for example, Figure 5b, adapted from [8], which
brings out a strong correlation between increasing inequity and increasing homicides in Canada and the USA.
The increase in strife throughout the world, whether it be the Naxalite problem in India or the recent riots
witnessed in England, is strong evidence of the evil consequences of inequity. Thus, while evolving a
response to the crisis facing mankind today, we need to consider the pentad of energy, ecology, economy,
employment and equity.








Figure 5a: Income disparities in nations after World War II
Figure 4: Energy, Ecology, Economy, Employment, Equity inter-relationships (influence diagram linking % fossil and % renewable energy, centralised and decentralised production, ecology, employment and equity)
Figure 5b: Effect of inequity on homicides in the USA and Canada
4. THE OTHER TWO Es: ENTROPY, ETHICS
Most attempts to address the climate change and energy security issues have been based on the interrelationships
between three of the Es mentioned above, viz. Energy, Ecology and Economy. The focus is generally on
improving energy supply, distribution and usage efficiency through better power plants, newer renewable
sources of energy, fuel-efficient vehicles, efficient lighting systems, efficient air conditioning systems and
improved agricultural crop yields [10]; on reducing emissions by shifting to non-carbon energy (like
nuclear), capturing carbon dioxide, etc.; and on reducing poverty by increasing economic growth. All these
suggestions seem prima facie quite logical. Thus an increase in energy efficiency would certainly reduce the energy
consumption of every artifact at the micro level. However, the social factors emanating from the prevailing
materialistic world view can completely change their overall influence by changing the macro picture, as is
brought out by the following examples:
i) Increased fuel economy of cars boosted their sales, with a net increase in fuel consumption in India. In the US it
has also resulted in an increase in the average distance driven per year, with the result that motor vehicles in the US
consumed 35% more fuel in 2000 than they did in 1980 [11].
ii) Increased recycling of aluminium cans reduced their cost, and this has resulted in a doubling of the number of
canned soft drinks consumed in the USA.
iii) Improvement in the efficiency of lighting systems has motivated the use of higher intensities of illumination in
the UK, with the result that electricity consumption per average km of British roads has increased twenty-five fold in
the last eighty years [11].
In a similar vein we can recall the fact mentioned earlier: increasing economic growth of nations has only
increased the gap between the rich and the poor. Clearly there is a need to account for the influence of the
societal world view on the pentad of energy, ecology, economy, employment and equity. It is proposed to do so by
introducing two more Es, namely Entropy and Ethics.
The term entropy is used here in a generic sense, to indicate the need to look at energy issues from the broader
perspective of the second law of thermodynamics, as extended to economic systems by Georgescu-Roegen [12].
Briefly put, the extended second law states that all natural processes involve exergy and material dissipation,
exergy being the thermodynamic term for the maximum useful work that can be obtained from any energy
source. The rate of dissipation of exergy/matter largely depends upon the lifestyle of the people and the
exergetic efficiency of the artifacts used. This law thus demands a rethink on the whole concept of progress,
since we live on a planet with a finite endowment of energy and material resources [13]. As pointed out by
Georgescu-Roegen, economists have created a myth about value addition through machine and labour, since all that
these can do is to transform the available matter/energy from a usable state to an unusable state, providing some utility
along the way. The second law thus makes it clear that no amount of technological innovation can sustain
limitless growth on a planet with a finite stock of minerals, metals and fossil fuels and a finite flow of solar energy. It
is thus imperative that a shift be made to sustainable levels of consumption. As Daly and Townsend put it [14]:
"The term sustainable development ... makes sense only if it is understood as development without growth,
i.e. qualitative improvement of a physical economic base that is maintained in a steady state by a throughput of
matter-energy that is within the regenerative-assimilative capacities of the ecosystem. ... Currently the term
sustainable development is used as a synonym for the oxymoronic sustainable growth. It must be saved
from this perdition."
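For reference, a standard textbook form of the exergy invoked here (a general thermodynamic definition, not a formula taken from this paper) is the flow exergy per unit mass relative to a dead state at ambient temperature \(T_0\) and pressure \(p_0\):

\[
\psi = (h - h_0) - T_0\,(s - s_0)
\]

where \(h\) and \(s\) are the specific enthalpy and entropy of the stream. Every real (irreversible) process destroys exergy at the rate \(\dot{X}_{dest} = T_0\,\dot{S}_{gen} \ge 0\), which is the quantitative content of the dissipation statement above.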
Today, unfortunately, the term growth has become synonymous with development and progress,
and any mention of limiting growth is seen as a regress. This demands a deeper understanding of the concept
of progress, its raison d'être. We have to ask: what is the purpose of economic growth, development, progress?
The answer, of course, is quite obvious, and was given first by Adam Smith, the father of modern economics:
the well-being and happiness of society. Clearly, if economic growth is accompanied by increasing damage to
ecology, the threat of exhaustion of fossil fuels and climate change, increasing inequity and social tension, and
an increase in crime and violence, such economic growth does not increase the well-being or happiness of
society. For a society to be truly happy, there should be no poverty or stark inequality; people should be
gainfully employed in meaningful vocations; they should relate to each other with loving kindness and
compassion, always ready to help those in distress; there should be safety and security, freedom to lead the kind
of life one wants and to express one's opinion freely without any fear of oppression. This brings us into the realm
of ethics, the seventh E: the basic principles of right conduct, both for individuals and for organizations and
communities. It is interesting to mention here that even Adam Smith, the father of modern free market
economics, recognized the need for markets to function with ethics and morals, as is evident from the following
passage quoted in [15]: "Justice [the human virtue of not
harming others] ... is the main pillar that supports the whole building. If justice is removed, the great fabric of
human society ... which seems to have been under the darling care of Nature ... must in a moment crumble into
atoms. ... Men, though naturally sympathetic, feel so little for others with whom they have no particular
connection in comparison to what they feel for themselves. The misery of one who is merely their fellow creature
is of so little importance to them in comparison to even a small convenience of their own. They have it so much
in their power to hurt him and may have so many temptations to do so that if the principle of justice did not
stand up within them in his defense and overawe them into a respect for his innocence, they would like wild
beasts be ready to fly upon him at all times. Under such circumstances a man would enter an assembly of others
as he enters a den of lions."
If we reflect deeply on the crisis of energy, ecology, economy, employment and equity, we find that its roots
lie more in a crisis of ethics and values than in wrong technology choices. In fact, even these wrong choices have
emanated from a confusion in values, the confusion in understanding due to which one equates happiness with
material acquisitions. Oriental philosophies have always understood happiness as a state of mind which need not
be dependent on matter. Even mainstream economists now recognise quality of life as a multidimensional
concept involving the satisfaction of a variety of needs, which can be broadly classified into physical, emotional,
intellectual and spiritual needs. All these needs are important, and they are not substitutable. One cannot, for
example, satisfy the need for water to quench thirst by food, howsoever tasty. Similarly, one cannot fill the spiritual
vacuum in one's life with an assortment of goods. But while the needs are finite (how much food can one eat,
how many clothes can one wear, how many rooms can one live in?), the wants are infinite. If one cannot exercise self-
control, and/or lacks a proper understanding of human happiness, these wants become the needs; one
stagnates at the level of satisfying these unending physical needs, and gets no opportunity to satisfy the higher
needs. This lack of satisfaction of emotional and other higher needs leads to perpetual discontentment, and one
tries to alleviate that discontentment with physical goods, power and authority. The result: unbridled
consumerism, corruption and egotistic behaviour, which are at the root of the energy-ecology crisis.

These two Es, entropy and ethics, thus provide a framework from which a holistic response to the
energy-ecology crisis can emerge.
5. HOW MUCH ENERGY DO WE NEED?

Smil [11] has presented a detailed analysis of the relationship of per capita energy consumption in various
countries with various quantifiable factors usually associated with quality of life (QOL), like food availability, infant
mortality, female life expectancy, the index of political freedom, and the Human Development Index (HDI) as defined
by UNDP (Figure 6 shows the data for HDI vs energy consumption). As is evident from Figure 6 [11], HDI
(and indeed all the other objective indicators of the quality of life) varies with average per capita energy use in a
nonlinear manner. The rate of increase slows down appreciably beyond an energy consumption of about
1500 kgoe/year (64 GJ/year), with virtually no additional gains accompanying consumption above 2600 kgoe/year
(110 GJ/year). Concluding this analysis, Smil [11] comments: "Annual per capita energy consumption of between
50-70 GJ thus appears to be the minimum for any society where a general satisfaction of essential physical
needs is combined with fairly widespread opportunities for intellectual advancement." This conclusion is drawn
on the basis of objective criteria for QOL generally accepted by economists.
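The saturating trend described here can be visualised in a few lines of code. The following is a purely illustrative sketch (the exponential form and the coefficients hdi_max and scale_gj are assumptions chosen for illustration, not Smil's fitted model) of how marginal HDI gains vanish at high energy use:

```python
import math

def hdi_model(energy_gj: float, hdi_max: float = 0.95, scale_gj: float = 40.0) -> float:
    """Illustrative saturating curve: HDI approaches hdi_max as energy use grows.
    hdi_max and scale_gj are assumed coefficients, not fitted to Smil's data."""
    return hdi_max * (1.0 - math.exp(-energy_gj / scale_gj))

for e in [10, 30, 64, 110, 200, 340]:  # GJ per capita per year
    print(f"{e:4d} GJ/yr -> HDI ~ {hdi_model(e):.2f}")
# Gains flatten out: the step from 110 to 340 GJ/yr adds almost nothing.
```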




A survey on the subjective feeling of satisfaction with personal life, reported by Smil, indicates that it has no
correlation with economic well-being or per capita energy use, as brought out by the following table:

Per capita energy use:                  40 GJ (Thailand)   175 GJ (Germany)   340 GJ (USA)
% of people satisfied with their life:  74%                74%                72%

The same is brought out vividly by Figure 7, adapted from Myers [24], which shows no change in the
percentage of people who are "very happy" even as average incomes rise.
Figure 6: HDI vs per capita use of commercial energy



The reason for this is evident in the light of the above holistic enunciation of QOL, since only a relative satisfaction
of physical needs is sufficient to allow the emergence and satisfaction of the higher (non-physical) needs. Thus
neither on objective nor on subjective grounds is the hunger for increasing energy consumption justified. Clearly
the energy crisis, and its shadow, the crisis of ecology, are a direct result of the confusion between needs and
wants.
6. A HOLISTIC RESPONSE TO THE CRISIS OF 5 Es
A framework for a holistic response to the crisis of 5 Es can now be drawn in the light of the above discussions, and is
shown in the influence diagram of Figure 8, where for the sake of clarity all the factors have been so defined as
to correspond to a + sign on all arrows. At the centre of the policy is a worldwide sustained campaign of
education in sustainable and ethical living, for only this can lead to four fundamental changes of immense
importance, viz. a reduction in wants, willingness to change one's lifestyle in consonance with the non-materialistic
world view, acceptance of decentralized production even if it reduces profits a little, and finally a reduction
in the defence expenditure of nations, in tune with the reduced security threats due to the change in the world view of
people all over the world. Once these four attitudinal changes occur in society, they can release an enormous
amount of resources for mitigating the distress of the world's poor.
Equity would increase, and this would in turn reduce crime and violence, which would make it even easier to
reduce defence expenditure. These four changes would also reduce energy consumption in a variety of ways, and
increase the acceptance of renewable energy sources and various energy conservation measures (which often
demand a little sacrifice of convenience), as indicated in Figure 8. Some of the strategies which can bring
about a quantum change in energy consumption are discussed below.

Figure 7: Average incomes and happiness in the United States, 1957-2002





i) The success story of the Green Revolution from the 1950s to the 1990s has been rightly highlighted as a
great contribution of science, since it increased grain yields by over 250%. Taking a
thermodynamic view of this increase, we can say that the energy output of agriculture also
increased in the same ratio. The second law tells us that this increase could not have come without
an even larger increase in energy input. As pointed out in [16], the average energy input in
modern agriculture is about 50 times that of traditional agriculture, the bulk of the increase coming
from the energy expended in the manufacture of inorganic fertilizer, the operation of field machinery,
irrigation and transportation. An FAO document [17] shows that the energy input per unit yield of rice is
about 80 times more in the USA than in the traditional method of rice production in the Philippines, while the
productive yield per hectare is about 4.5 times more (Figure 9); a back-of-envelope combination of these two
ratios is sketched in the code after this list.
Figure 8: Responding to the crisis of 5Es (influence diagram showing how education in sustainable and ethical living leads to a reduction in wants, willingness to change lifestyle {food, transport, housing, etc.}, acceptance of decentralised production and a reduction in defence expenditure, and thence to acceptance of RES, low energy farming, micro/small enterprises, release of resources, reduced crime and violence, equity, and energy conservation and improvement in ecology)



Today humans are actually eating fossil fuels, e.g. about 400 gallons of oil equivalent are
needed to feed each American [16].Clearly shifting to less energy intensive farming practices like
modern organic farming [18] can substantially reduce the energy requirements in agriculture without
any significant reduction in the farm output. In fact researchers indicate that it should be possible to
have modern agricultural yields with zero fossil energy inputs [19].
ii) Another insight from thermodynamics is that whenever a living being consumes food, the bulk of the
energy released is dissipated to the environment as heat, and only a small fraction (typically ~10%)
gets stored as new biomass [13]. Thus at each level of the food chain there is a multiplying factor of
~10 in energy between a lower and a higher level. As a result, the energy load (and so
also the water requirement) of supporting meat eaters is at least 10 times more than that of vegetarians,
as shown in the ecological pyramid of Figure 10 (and in the sketch after this list). The potential reduction in energy
consumption (and ecological damage) from a switchover to a vegetarian diet can be gauged from the fact that 70% of the
food grain produced in the USA is actually fed to animals for meat production [20].



iii) Housing is another basic need, for which modern technology uses reinforced cement concrete (RCC) with fired bricks. It
is possible to cut the energy needs to one-tenth by building eco-friendly houses in bamboo, a plant
with a remarkably high rate of growth. Colonies of bamboo houses have been built in Colombia and
many other parts of the world, and glimpses of some of these houses can be seen on YouTube
[21].
Figure 9: Energy consumption in agriculture
Figure 10: Ecological pyramid (available energy falls roughly tenfold at each trophic level: 100% → 10% → 1% → 0.1%)

iv) Elimination of energy-wasting habits and technologies, which have become almost a part of our
daily life, can also yield rich dividends. Some of these are: the use of bottled water, frozen foods,
electric geysers and paper towels; international trade in fruit and vegetables (many countries import
and export millions of tons of the same commodity, see reference [22]); remote-controlled
electronic gadgets (which continuously leak electric energy even when not operational); etc.
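The ratios quoted in items (i) and (ii) above combine in a few lines of arithmetic. The sketch below uses only the figures from the text; the multiplication of the two FAO ratios into a per-hectare estimate is our own back-of-envelope step:

```python
# Back-of-envelope arithmetic for items (i) and (ii) above.

# (i) FAO rice data: energy input per unit yield is ~80x higher in the USA
# than in traditional Philippine production, while yield per hectare is ~4.5x.
input_per_unit_yield_ratio = 80.0
yield_per_ha_ratio = 4.5
input_per_ha_ratio = input_per_unit_yield_ratio * yield_per_ha_ratio
print(f"Energy input per hectare, USA vs Philippines: ~{input_per_ha_ratio:.0f}x")

# (ii) Trophic levels: only ~10% of the energy passes to each higher level of
# the food chain, so n levels up only 0.1**n of the original energy remains.
for level in range(4):
    print(f"Trophic level {level}: {100 * 0.1**level:g}% of primary energy")
```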
7. CONCLUDING REMARKS
The problems of global climate change and energy security cannot be ameliorated in isolation from the other
nodes in this intricate web of 7Es. To evolve appropriate policy measures for mitigating these problems, we
need to take into account the feedback loops identified in Figures 3, 4 and 8. The root cause of the crisis staring
mankind in the face is the lack of effective education in sustainable and ethical living. Therefore, technological innovations
like improvements in the energy efficiency of power plants, engines and lighting systems, or the development of renewable
energy sources, though extremely important and the need of the hour, cannot suffice. Improvement in the
efficiency of gadgets may only increase their demand, resulting in a net increase in overall energy
consumption. A society addicted to conveniences will not accept renewable energy technologies, since their use
demands some effort; compare, for example, the effort needed to prepare food in an automatic electric
cooker with that needed to cook food in a solar cooker. Implementation of many decentralized technologies
demands changes in lifestyle and ethical conduct, for otherwise even sound technologies get discredited,
as has happened with the biogas (and improved chulha) programmes in India.
No amount of technological innovation will solve the problems of intra-national and inter-national inequity,
which threaten the very social fabric of society, or reduce the defence expenditures which are consuming a huge
chunk of the planet's resources. The importance of abating inequity is unfortunately not fully appreciated by policy
makers. We continue to believe, in spite of all the evidence to the contrary, that mere economic growth will
reduce it through the trickle-down effect. To reduce inequity, wealth needs to be created in a distributed manner by
promoting rural industrialization based on micro and medium enterprises powered by decentralized renewable
sources of energy. If urgent steps are not taken in this direction, the social climate, which is already under
great strain, may get vitiated to such an extent that the equilibrium of society is disturbed. In
thermodynamics we learn the famous Le Chatelier's principle: if any inhomogeneity develops in a
thermodynamic system in equilibrium, the system responds in a manner that tends to eradicate the original
inhomogeneity. This law, like other laws of nature, can be extended to social systems, with the word
inhomogeneity replaced by inequity. The increasing crime, violence and extremism in society are the
attempts of the social system to eradicate the inequity. But we also know that if a thermodynamic system
becomes unstable, even a minute disturbance can set in motion processes which change its character
altogether, i.e. cause a phase change. In social systems, such a phase change would imply social turmoil or civil
war of the kind we have seen in many nations in the past. The countrywide agitation which we have seen in
India in the last few weeks is an indication of things to come, unless appropriate corrective actions are taken
urgently. Reducing inequity should therefore be a prime concern of any sustainable development policy.
If no effort is made in this direction of education in sustainable and ethical living, the energy-ecology crisis
will aggravate even sooner than current predictions suggest, and many of the changes are likely to be forced on us by
circumstances. Thus the exhaustion of oil would lead to a drastic curtailment, if not stoppage, of all
international trade (and even travel), for no amount of innovation in nuclear technology would enable us to have
aeroplanes and ships working on nuclear power; the risks are just too high for society ever to accept
it. Our lifestyles would have to change, but we would all be suffering because of inner resistance to these
changes. Renewable energy sources, and measures to save energy and water, would be forced on us, and we
would find them hard to accept due to our addiction to conveniences. If the ethical base of society does not
improve, defence expenditures will keep on rising, the resources available for poverty alleviation will
shrink, and thus the vicious cycle of increasing inequity, crime and violence, and defence expenditure (Figure 8)
will only tighten the noose in which society finds itself today.
It is thus imperative for the survival of mankind on this planet that we learn the art of living more with less. As
beautifully put by Tester et al. [23]: "There are many activities that bring pleasure with little need for materials or
commercial energy: looking at a beautiful sunset, listening to music, interacting with friends and family,
expressing and receiving love and friendship, reading, sitting quietly, taking a walk, practicing whatever
spiritual program we may find individually rewarding."
A transition to a new set of values is not such a utopia as it sometimes seems to be. All of us relate, within our
families, to our children, our parents, close relatives and friends with altruism and compassion. We only need to
widen that circle of compassion to embrace all humanity. Such a transition would ensure that mankind is
able collectively to take steps to mitigate the harmful effects of past extravagance quickly, and to
switch over to a sustainable living paradigm with the least distress to all. Historically, energy transitions
have been accompanied by an improving quality of life through increased physical comfort and reduced
drudgery; the energy-ecology crisis is an opportunity to bring about a quantum improvement in the quality of life
through a change in values.
REFERENCES
[1] BP Statistical Review of World Energy, June 2008, available at http://www.bp.com/statisticalreview
[2] Industrial Revolution: en.wikipedia.org/wiki/Industrial_Revolution
[3] Industrial Revolution: library.thinkquest.org/C0116084/IR2.htm
[4] http://secondlawoflife.wordpress.com/2007/05/17/energy-consumption-and-gdp/
[5] NEF Report (2002) Measuring Real Progress: Headline Indicators for a Sustainable World, available at http://www.neweconomics.org/gen/z_sys_PublicationDetail.aspx?PID=112
[6] Stern Review on the Economics of Climate Change (2006), executive summary available at http://www.hm-treasury.gov.uk/6513.htm
[7] Dhar P.L. and Gaur R.R. (1992) Science and Humanism: Towards a Unified World View, Commonwealth Publishers, Delhi.
[8] Daly M., Wilson M. and Vasdev S. (2001) Income inequality and homicide rates in Canada and the United States, Canadian Journal of Criminology, pp. 219-236; also at http://psych.mcmaster.ca/dalywilson/iiahr2001.pdf
[9] Gini index: http://en.wikipedia.org/wiki/Gini_index
[10] IPCC (2007) Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change.
[11] Smil V. (2003) Energy at the Crossroads: Global Perspectives and Uncertainties, MIT Press, Cambridge, USA.
[12] Georgescu-Roegen N. (1976) Energy and Economic Myths: Institutional and Analytical Economic Essays, Pergamon Press, New York.
[13] Dhar P.L. (2008) Engineering Thermodynamics: A Generalized Approach, Chapter 14, Elsevier, New Delhi.
[14] Daly H.E. and Townsend K.N. (1993) Valuing the Earth: Economics, Ecology, Ethics, at http://dieoff.org/page37.htm
[15] Cox E. (2008) at http://blogs.mccombs.utexas.edu/mccombs-today/2008/09/cox-creed-of-greed-not-supported-by-adam-smith/
[16] Eating Fossil Fuels, at http://www.fromthewilderness.com/free/ww3/100303_eating_oil.html
[17] FAO, Environment and Natural Resources Working Paper, at http://www.fao.org/docrep/003/x8054e/x8054e05.htm
[18] Organic farming, at http://en.wikipedia.org/wiki/Organic_farming#Productivity_and_Profitability
[19] Zero Fossil Energy Farming, at http://www.commonwork.org/Downloads/ZEF_Roadmap.pdf
[20] Meat: Now, It's Not Personal, World Watch Magazine, July/August 2004, at http://www.worldwatch.org/system/files/EP174A.pdf
[21] Construction in Bamboo, Colombia, at http://www.youtube.com/watch?v=YSNQj7CDkxE ; http://www.youtube.com/watch?v=SmZR04l_hA8&feature=related ; http://www.youtube.com/watch?v=bxxkdgpoafU
[22] http://www.fao.org/es/ess/toptrade/trade.asp
[23] Tester J.W., Drake E.M., Driscoll M.J., Golay M.W. and Peters W.A. (2005) Sustainable Energy: Choosing Among Options, MIT Press, Cambridge, USA.
[24] Myers D., Psychology of Happiness, at http://www.scholarpedia.org/article/Psychology_of_happiness



ADVANCES IN MANUFACTURING SYSTEMS

Rajnish Prakash
Former Principal, PEC, Chandigarh


ABSTRACT

Advanced manufacturing concepts are characterized by their ability to allow a rapid response to
continuously changing customer requirements. At their core are flexible automation systems that can reduce
product cycle time, increase quality and allow rapid changes in design. Recent high-tech innovations like FMS,
CE, CIM, RP and AI have made it possible to implement flexible automation successfully. The implications
for product design, assembly systems, machine tools and inspection in an automated system are discussed.

Keywords: Manufacturing Systems, Flexible automation, CIM.

INTRODUCTION

Manufacturing is a value-adding activity, where the conversion of materials into products adds value
to the original material. Manufacturing has to adapt itself to meet market needs by innovating and
utilizing the factors of production. Manufacturing has undergone a sea change in its philosophies,
technologies and methods. Integration of manufacturing with electronics, computers and IT gave
birth to new technologies, leading to seamless integration with forward and backward activities like
marketing information, demand forecasting, supply-chain management, human resource management,
product life-cycle management, etc. Flexible technologies are replacing dedicated technologies,
leading to lean and/or agile manufacturing; in manufacturing theory, being both is often referred to
as "leagile". Flexible automation is a basic necessity for ensuring an agile manufacturing environment.

EVOLUTION OF MANUFACTURING POLICY:

Looking back into history, one can mark three stages in the development of manufacturing
technology.

Stage I: dependent on human labour and human intelligence.
Stage II: replacement of human labour by machines, but still relying on human intelligence.
Stage III: human intelligence being replaced by artificial intelligence and integrated with
machine labour.

The evolution of the third stage has been made possible and accelerated by the availability of low-cost
electronic computing and control, telecommunications, and sophisticated measurement and sensor
technologies.

Manufacturing is moving from functions, where people, materials and costs are managed, to
systems, where information, continuous change and time must be managed.
A traditional factory derives its competitive advantage from a combination of (i)
economy of scale, (ii) task specialization, (iii) standardization, and (iv) repetition. Obviously this
is not conducive to accelerated product development and product innovation. Such a
technology no longer offers a competitive advantage in a rapidly changing and different market
environment. The evolution of manufacturing policy over the last century is shown in Fig. 1.


Technology has always shaped itself to meet the needs of the market by utilizing the available basic
factors of production like materials, processes, sources of energy and control systems. In the
present context of the globalization of the economy, new pressures are experienced by industry. These
are:

(i) Shorter market lifetimes and compressed product life cycles.
(ii) Intense competition and declining profit margins.
(iii) Fast technical development and increased demand for augmented products.

Shorter market lifetimes and shorter innovation times lead to increasing demands on a company's
preparedness, adaptability and versatility. Companies must adapt to the new environment and
pursue new strategies, e.g.:

(i) Develop new products with increased frequency and variation.
(ii) Shorten delivery times and reduce costs by all means.
(iii) Ensure high quality during all phases of a product's lifetime, incorporating increasing
levels of product customization.







Drivers for New Approach
There are two primary forces, viz. technology push and competitive pull, driving a change in
the way manufacturers approach product innovation and product development. Whereas
technology push is the result of available enabling technologies, the pull is the outcome of
market requirements due to changing external conditions.

The combination of technology push and competitive pull results in a new approach to effective
manufacturing, through flexible automation, in the current scenario.




Flexible Automation:
Companies which possess the ability to adapt themselves and to react rapidly to changes in their
environment are in a better position than companies with fixed aims and means. The essential
attributes, like enhanced dynamism, greater variance and higher quality, can be attained primarily
through the creation of new production conditions by means of computers, industrial robots and
automation, through the creation of direct information routes between design and production by
means of data-processing techniques, and by choosing equipment and structuring the production system
in the right way.

In the present environment of short delivery times and competitive prices, it has become essential to
optimize flexibility and productivity. Short-term flexibility is the ability to accommodate changes in the existing
product profile; long-term flexibility requires the additional ability to adopt new products. These
objectives can best be achieved through flexible automation, which offers a rapid response to
product innovation, process innovation and shifts in demand.

Flexible Automation (FA) is a type of manufacturing automation which exhibits some form of
"flexibility". Most commonly this flexibility is the capability of making different products in a short
time frame, though there are several other manifestations of flexibility. Flexible automation allows the
production of a variety of part types in small or unit batch sizes. Although FA consists of various
combinations of technology, it most typically takes the form of machining systems,
that is, manufacturing systems where material is removed from a workpiece. The flexibility comes
from the programmability of the computers controlling the machines. Flexible automation is also
observed in assembly systems.

The functional relations in a manufacturing system are shown in Fig. 3. Varying degrees of flexible
automation result from the integration of various technologies.

FIG. 3: FUNCTIONAL REQUIREMENTS IN A MANUFACTURING PLANT

The recent hi-tech innovations like Flexible Manufacturing System (FMS), Computer Integrated
Manufacturing (CIM), Intelligent Manufacturing System (IMS), Artificial Intelligence (AI),
Concurrent Engineering (CE) and Rapid Prototyping (RP) have greatly contributed toward achieving
the objectives of Flexible Automation.

Flexible Manufacturing System (FMS)

Flexible manufacturing is a system which combines micro-electronics and mechanical engineering to bring
economies of scale to batch work.

A central on-line computer controls the machine tools and other workstations, the transfer of components, and
tooling. The computer also provides monitoring and information control. This combination of flexibility and
overall control makes possible the production of a wide range of products in small numbers.

The aim of an FMS is to give production with intermediate variance and intermediate volume many of the
properties, in the form of cost reductions and efficient manufacturing methods, which are to be found in
production lines and dedicated machines, together with the flexibility to be found in free-standing, non-
dedicated machine tools.

Computer Integrated Manufacturing (CIM)
Whereas FMS is a machining technology, CIM must be considered a combined philosophy for design and
production. CIM is the use of computers for the on-line automation, optimization and integration of the total system,
from design to production.

Numerical Control (NC), Computer Aided Design (CAD), CAD/CAM and FMS are steps on the way to CIM.
In a CIM system, engineering support, manufacturing support and process management are all done by computer.
Since relational database technology and communication networking technology have now become achievable
at a reasonable cost, CIM has also become a practical reality.

Intelligent Manufacturing System (IMS)
IMS, by definition, is a system which takes care of the data of intellectual activities in the manufacturing sector
and makes use of them to better fuse men and intelligent machines in the integration of the entire range of
corporate activities, from order booking through design, production and marketing, in a flexible manner
which leads to optimum productivity.

In other words, an IMS plant of the future will deal with global communication and operations to enjoy the benefits
of maximum productive flexibility.

AI: Artificial Intelligence, A Tool for Smart Manufacturing
AI generally relates to the attempt to use computer programming to model the behavioural aspects of human
thinking, learning and problem solving. Expert systems and genetic algorithms are attempts in this direction.
Artificial intelligence, in the near future, may be the single most important and most pervasive ingredient for the
realization of true CIM/IMS systems. The reason for such promise is that the technique is applicable to the
entire range of manufacturing activities. It also holds the key to the sharing of information, or knowledge, among
the many disparate elements of typical FMS/CIM/IMS environments.

Expert systems, developed through artificial intelligence techniques, make it possible to free human
experts from making routine decisions and thereby make them available to use their intelligence to deal with
new challenges.
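As a toy illustration of the genetic-algorithm idea mentioned above, the following minimal sketch evolves a job sequence that minimizes total weighted completion time. The job data and GA parameters are hypothetical, and production scheduling GAs are far more elaborate:

```python
import random

# Hypothetical data: (processing time, delay-cost weight) for each job.
JOBS = [(4, 3), (2, 5), (7, 1), (3, 4), (5, 2)]

def cost(seq):
    """Total weighted completion time of a job sequence (lower is better)."""
    t = total = 0
    for j in seq:
        t += JOBS[j][0]
        total += JOBS[j][1] * t
    return total

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest in b's order."""
    i, k = sorted(random.sample(range(len(a)), 2))
    middle = a[i:k]
    rest = [j for j in b if j not in middle]
    return rest[:i] + middle + rest[i:]

def mutate(seq, rate=0.2):
    if random.random() < rate:          # occasionally swap two jobs
        i, k = random.sample(range(len(seq)), 2)
        seq[i], seq[k] = seq[k], seq[i]
    return seq

pop = [random.sample(range(len(JOBS)), len(JOBS)) for _ in range(30)]
for _ in range(100):                    # generations
    pop.sort(key=cost)
    parents = pop[:10]                  # selection: keep the fittest third
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(20)]
best = min(pop, key=cost)
print("Best sequence:", best, "cost:", cost(best))
```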

Concurrent Engineering (CE)
Concurrent Engineering is a systematic approach to the integrated, concurrent design of products and their
related processes, including manufacture and support. The approach is intended to cause the developers, from
the outset, to consider all elements of the product life cycle.

CE, by getting all concerned departments involved at the preliminary design stage, forces the evaluation of
multiple design changes early in the product definition process. Changes made earlier are significantly less
expensive, easier and faster to implement.
Rapid Prototyping (RP)

Rapid prototyping is a technique in which physical models are created directly from materials, provided in various
forms, completely under the control of solid-model data created within a computer-aided design environment.

There are six different rapid prototyping technologies, employing different principles, currently available in
the market. All six are additive technologies, employing the principle of gradually building up a solid object
from CAD solid modeling data by the successive addition of material under computer control:

Stereolithography
Laminated Object Manufacturing
Selective Laser Sintering
Fused Deposition Modeling
Solid Ground Curing
Desk-Top Modeling

PROBLEMS IN INTEGRATION

The integration process has influenced other areas which have an impact on manufacturing technology and
automation. Advances have taken place in materials and processes as well as in product design. Some
salient developments with futuristic implications are mentioned below.

Octahedral Hexapod Technology:
Technological advancement has been perhaps keenest in the metal-cutting machine tool sector, especially in
terms of coated carbide/cermet tips, polycrystalline diamond tools, CBN and ceramic tools. Amid all these
changes, the structure of the machine remains fundamental to its basic capability. One unconventional development in
the structure of machine tools is the new octahedral hexapod design.

The octahedral hexapod machine employs two principles which work together in combination: an octahedral
structural frame and a hexapod actuator.

The octahedral hexapod is the first truly new approach to machine design since the industrial revolution. This
revolutionary breakthrough has become possible only because of the power of the computer to control the
motion, simultaneously, of each of the six axes of the machine so as to generate the desired motions precisely.
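The computing task alluded to above is essentially the inverse kinematics of a six-strut (Stewart-type) platform: given a commanded position and orientation of the tool platform, find the six strut lengths. A minimal sketch follows; the joint coordinates are invented for illustration, and a real octahedral hexapod uses a different joint layout:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def strut_lengths(base_pts, plat_pts, t, R):
    """Inverse kinematics of a hexapod: length of each of the six struts
    for a commanded platform translation t and orientation R."""
    return [np.linalg.norm(t + R @ p - b) for b, p in zip(base_pts, plat_pts)]

# Hypothetical joint locations: six points on a base circle and on a smaller
# platform circle.
angles = np.radians([0, 60, 120, 180, 240, 300])
base = [np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]
plat = [0.5 * np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]

lengths = strut_lengths(base, plat, t=np.array([0.1, 0.0, 0.8]),
                        R=rot_z(np.radians(5)))
print([f"{length:.3f}" for length in lengths])
```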

Mechatronics:
When computers, and thereby software, are to be used to automate equipment, there is a vital need for an
effective interface between the computer and the equipment. Therefore, development in sensor and actuator
technology is as essential as computer technology itself.

This dual alliance of electronic and computer technology to control mechanical systems is known as
mechatronics. This integrated technology has been used extensively in consumer products to realize the benefits
of mechanical, electronics and information engineering.

Software as a Product Component:
Recent trends demonstrate that software will appear increasingly as a component in the products of the
consumer and capital goods industries. The economic importance of software is growing with its significance
for the functions of mechanical engineering products. Software development already takes up 80% of
the development costs for control systems. Programmable Logic Controller (PLC), Computerized Numerical
Control (CNC) and Personal Computer (PC) control systems now comprise only 20% hardware but 80%
software, with an increasing trend.

Growing competition attaches increased significance to product quality. Since vital functions of the product
are now dependent on the software, the latter becomes an important factor. Its development is
beset with problems of heavy expense and excessive time; these have been causes of delay, frequent
dissatisfaction and high maintenance costs, e.g. more than 60% of software development costs go into
debugging. This software crisis cannot be solved by simple procedures; rather, software development is closely
linked, in methodological terms, with the development of the mechanical and electrical components.

Automated Assembly:
The operation of assembly is quite complex; it is ably done by the human operator through dexterity and
decision-making skills, but it is accompanied by the inherent defects of manual operations, i.e. low speed,
unreliability and unpredictability. So, if overall performance is to be improved, it is essential that
this activity also be automated.

Assembly operations are cumbersome and time consuming if the number of components involved is large,
and would be more so with mechanized systems. Such problems can be minimized by resorting to the
integrated design of components, which is very much possible with die-casting and plastic moulding processes.
Further, to minimize the intelligence required of the system, symmetrical or highly unsymmetrical
components should be designed, so that feeding, orientation and handling by interfacing elements become simple and
foolproof.

The assembly system needed today should meet the needs of smaller batches, increased variety, shorter lead
times and, of course, competitive costs. The trend is towards programmable assembly systems which
cater to similarly shaped components. An intensive study with Group Technology (GT) would be needed to design the
machine/component system.

Robots with changeable grippers and limited intelligence (vision!) are the typical equipment for such
operations. A universal gripper design is vital, and the component design also needs to be examined
extensively. This would minimize the non-recurring and recurring costs involved in assembly automation.

Flexible Inspection
Constantly increasing demands on quality and quality assurance have led to a metrology spiral: more accuracy,
extended ranges and ever newer technologies. The requirement of interchangeability of product
components throughout the world demands identically realized units with traceability to the global metrology system.
But higher accuracy generally kills speed of operation, and the speed of inspection assumes importance in
automated manufacturing units.

Modern image and signal processing can be substituted for conventional techniques in quality assurance and
measurement technology. This technology often enables the measurement of complex product quality
characteristics which is not possible with conventional methods. As a future-oriented instrument, a means
of production automation, and a pace-setting method of quality control, quality assurance and quality planning,
industrial image processing represents a significant potential for the reduction of production costs. In addition
to making measurement and inspection freer from human error, a verifiable gain in effectiveness can be
achieved through increased speed. In this way inspection and measurement can be carried out at production speed.
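As a minimal illustration of image-based gauging of the kind described here, the following sketch measures the width of a bright part in a synthetic image and checks it against tolerance limits. The pixel scale, threshold and tolerances are invented for illustration:

```python
import numpy as np

MM_PER_PIXEL = 0.1            # assumed camera calibration
NOMINAL, TOL = 5.0, 0.2       # target width and tolerance in mm (invented)

def measure_width_mm(image, threshold=128):
    """Width of the bright region along the horizontal axis, in mm."""
    cols = np.where((image > threshold).any(axis=0))[0]
    return (cols.max() - cols.min() + 1) * MM_PER_PIXEL

# Synthetic 'camera frame': a bright rectangular part on a dark background.
frame = np.zeros((80, 120), dtype=np.uint8)
frame[20:60, 35:85] = 255     # part is 50 px wide -> 5.0 mm

width = measure_width_mm(frame)
verdict = "PASS" if abs(width - NOMINAL) <= TOL else "FAIL"
print(f"width = {width:.2f} mm -> {verdict}")
```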

CONCLUSIONS

Manufacturing has undergone a sea change owing to the availability of IT, computers, and communication and
control technologies. The earlier factors offering competitive advantage no longer apply and have given way to
flexible automation. New technologies with varying degrees of integration are emerging in the field of
manufacturing, demanding changes in many other areas. The application of flexible automation demands a
change in outlook while designing products, assembly systems and machine tools. The bottlenecks faced
during the integration and automation process have also been discussed.





















SECTION -1




PRODUCTION OF BIOGAS FROM WASTE FOOD OF HOSTEL MESS

Yashvir Singh¹*, Nishant Kr. Singh², Sumit Kumar³, Piyush Chaudhary⁴

¹,² Mechanical Engg. Dept., Hindustan College of Science and Tech., Mathura-281122, Uttar Pradesh, India
³,⁴ Mechanical Engg. Dept., Institute of Engineering and Technology, Khandari, Agra, Uttar Pradesh, India

* yashvirsingh21@gmail.com


ABSTRACT

The focus of this paper is an organic waste processing facility producing biogas, which is cost
effective and eco-friendly, cuts down on landfill waste, generates a high-quality renewable fuel, and reduces
carbon dioxide and methane emissions. The continuously-fed digester requires the addition of sodium hydroxide
(NaOH) to maintain alkalinity, with pH in the range 6.7 to 9.4. A laboratory-scale experiment
using waste food from the hostel mess of Hindustan College of Science and Technology, Mathura, was carried out
to find the effect of alkali (NaOH) on the volume of biogas generated. Results reveal a high
volume of gas generated when the contents of the digester are maintained at a moderately alkaline
condition.

Keywords: Alkalinity, digester, biogas.

1. INTRODUCTION

Kitchen waste is organic material with a high calorific value and high nutritive value to microbes, which is why the
efficiency of methane production from it can be increased by several orders of magnitude. This means
higher efficiency, while the size of the reactor and the cost of biogas production are reduced. Also, in most cities and towns,
kitchen waste is disposed of in landfills or simply discarded, which causes public health hazards and diseases like
malaria, cholera and typhoid. Inadequate management of waste, such as uncontrolled dumping, bears several adverse
consequences: it pollutes surface water and groundwater through leachate, promotes the
breeding of flies, mosquitoes, rats and other disease-bearing vectors, and emits unpleasant odour and methane,
a major greenhouse gas contributing to global warming.
Mankind can tackle this problem successfully by harnessing the methane; however, till now we have not
benefited from it, because of ignorance of basic science, such as the fact that the output of work depends on the energy
available for doing that work.
The proper disposal of HCST's hostel kitchen waste can thus be done in an eco-friendly and cost-effective way. While
calculating the cost effectiveness of waste disposal, we have to think beyond monetary returns: the
dumping of food, which makes places unhygienic, is also taken good care of, and this adds to the value of
such biogas plants. Using natural agents like microorganisms, kitchen waste and biodegradable waste such as
paper and pulp can be utilized.
Anaerobic digestion is a controlled biological degradation process which allows efficient capture and utilization
of biogas (approximately 60% methane and 40% carbon dioxide) for energy generation. Anaerobic digestion of food
waste is achievable, but different types and compositions of food waste result in varying methane yields,
and thus the effects of mixing various types of food waste, and their proportions, should be determined on a case-by-
case basis.




Figure 1.1 Biogas digester

For optimum performance of the digester, the internal temperature needs to be maintained in the range of 25 to
35.5 °C (certainly above 25 °C) and within a pH range of 6.7 to 9.4. For optimum gas generation, the pH must
be maintained at a reasonably alkaline condition. Four basic types of microorganism are involved in the
production of biogas: hydrolytic bacteria break down complex organic waste into sugars and amino acids;
fermentative bacteria then convert those products into organic acids; acidogenic microorganisms convert the
acids into hydrogen, carbon dioxide and acetate; finally, methanogenic bacteria produce biogas from acetic
acid, hydrogen and carbon dioxide. The whole process takes place in an airtight chamber called a biogas
digester, shown in Fig. 1.1. Figure 1.2 shows the typical composition of biogas.

2. MATERIALS AND METHODS

2.1. Materials
Waste food
Sodium hydroxide
Distilled water


Figure 1.2 Composition of biogas

2.2. Apparatus Used:
- Measuring cylinder
- Beehive for gas collection
- Infra red thermometers
- Retort stand
- Electronic weighing balance
- Biogas digester

2.3. Method
1.5 kg of partially decomposed waste food was accurately weighed using an electronic balance, having first
been allowed to undergo partial decomposition in a compact arrangement with the addition of bio-enzymes and
water. The partially decomposed waste was then introduced into the digester (Fig. 1.1). The digester was
completely sealed and connected to the gas delivery setup (the gas was collected over water in a trough with a
beehive and measuring cylinder). The experimental setup was then left under observation for a specific time
period (precisely 1 to 6 days) at ambient conditions until a decline in gas production was observed. During the
period of the experiment, the temperature and the volume of water displaced by the gas were measured daily.
The contents of the digester were continuously stirred to keep them well mixed. The second part of the
experiment was carried out to study the effect of sodium hydroxide on biogas production. The procedure was
the same as for the effect of time on biogas production, except that a solution of 1, 3 or 5% wt/wt sodium
hydroxide (NaOH) was added to the partially decomposed waste food before it was fed into the digester. The
addition of the sodium hydroxide was aimed at studying the effect of alkaline conditions on biogas generation.
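To make the daily bookkeeping concrete, the following minimal sketch (Python, with purely hypothetical
readings rather than the measured data) shows how the displaced-water volumes recorded each day accumulate
into a cumulative gas yield:

```python
# Minimal sketch: cumulative biogas yield from daily water-displacement readings.
# The daily volumes below are hypothetical placeholders, not the measured data.

daily_displacement_ml = [5.0, 9.0, 14.0, 22.0, 18.0, 11.0]  # ml displaced per day (days 1-6)

total = 0.0
for day, volume in enumerate(daily_displacement_ml, start=1):
    # volume of water displaced ~ volume of gas collected over water
    total += volume
    print(f"Day {day}: daily = {volume:5.1f} ml, cumulative = {total:6.1f} ml")
```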

3. RESULTS AND DISCUSSION

The amount of gas produced was monitored by measuring its volume and the average temperature daily. The
digester temperature remained in the range of 27 to 35.5 °C throughout the period of operation. The results
obtained show that the volume of biogas generated varied from the first day to the sixth day. The gas generated
over the first three days was quite low, though an increase in production was observed daily. After the peak
value of gas production was reached, there was a gradual reduction in the volume of gas produced.
This is because the microorganisms responsible for biogas production had consumed a large amount of the
substrate, with a consequent drop in activity. Moreover, the pH of the digester remained within the range of
6.5 to 6.9, which would also have contributed to the lower volume of gas generated (Table 3.1 and Figure 3.1).
When about 1000 ml of 1, 3 or 5% wt/wt NaOH respectively was added to the partially decomposed waste,
the results obtained show a significant increase in the volume of gas produced compared to that obtained
without the addition of sodium hydroxide. In addition, the pH of the digester contents rose slightly above 7
(Tables 3.2, 3.3, 3.4 and Figs. 3.2, 3.3, 3.4).



Figure 3.1 Biogas production with no addition of NaOH (plots of temperature (°C), pH and gas volume (cm³) versus time in days)

Figure 3.2 Biogas production with addition of 1% wt/wt NaOH (plots of temperature (°C), pH and gas volume (cm³) versus time in days)


Figure 3.3 Biogas production with addition of 3% wt/wt NaOH

Figure 3.4 Biogas production with addition of 5% wt/wt NaOH

4. CONCLUSIONS

The results show that the addition of caustic at different strengths improved the gas yield from the 1% to the
3% wt/wt sodium hydroxide solution; however, a decrease was observed for the 5% wt/wt sodium hydroxide
treatment. Therefore, if the pH of the system is maintained at the value prevailing for the 3% wt/wt caustic
treatment, more gas will be produced.

REFERENCES

[1] Jeffery, A.C., Peter, J.V., William, J.J.B.R. and James, M.G. (1981), Predicting methane fermentation
degradability, Biotechnology and Bioengineering Symposium, 11: 93-117.
[2] Shelef, G., Grynberg, H. and Kimchie, S. (1981), High rate thermophilic aerobic digestion of agricultural
wastes, Biotechnology and Bioengineering Symposium, 11: 341-342.
[3] Alvarez, R., Villica, R. and Liden, G. (2006), Biomass and Bioenergy, 30: 66-75.
[4] Uzodinma, E.O., Ofoefule, A.U., Eze, J.I. and Onwuka, N.D. (2007), Biogas production from blends of
agro-industrial wastes, Trends Appl. Sci. Res., 2(6): 554-558.
[5] Eze, J.I. (1995), Studies on generation of biogas from poultry droppings and rice husk from a locally
fabricated biodigester, M.Sc. dissertation, University of Nigeria, Nsukka, pp. 64-65.











CURVATURE EFFECT ON FLOW DEVELOPMENT
IN RECTANGULAR SIGMOID DIFFUSER

Dr. Suparna Mukhopadhyay 1, Dr. Bireswar Majumdar 2

1 Faculty Member, NTPC PMI, Noida
2 Professor, Power Engg. Dept., Jadavpur University, West Bengal

1 ersuparna@yahoo.com

ABSTRACT

This paper addresses the effect of curvature, especially strong curvature, on the behaviour of flow within a
rectangular-sectioned sigmoid diffuser. The experimental results, obtained with a 5-hole pressure probe and a
hot-wire anemometer, show that the combined effect of diffusion and the typical S-shaped centreline makes the
flow fully three-dimensional, i.e. strong secondary motion is generated across the flow passage. The shifting of
the bulk flow from one wall to the other, as well as the change in turbulence intensity distribution due to the
change in the direction of the centrifugal force, is clear from the velocity and turbulence intensity mappings.

KEY WORDS: Sigmoid Diffuser, Turbulence intensity, Secondary Motion

INTRODUCTION

A sigmoid diffuser stretches the application areas of the simple straight axial-flow diffuser to a large extent. It
converts kinetic energy to pressure energy through diffusion of the gaseous fluid, and at the same time gives a
compact shape to the design of modern equipment. These diffusers are mainly applied in high-speed aircraft
with wing-root and flush-scoop intakes of air-breathing propulsion systems, turbine flumes, compressor
crossovers, etc. The reported literature on sigmoid/S-shaped ducts and diffusers is very meagre, possibly
because of the complexity of the flow behaviour. Johnston [1956] first reported diffuser design and
performance analysis using a computational tool for straight as well as curved diffusers. The first systematic
study of the flow regimes within 90° or part-turn curved diffuser passages was published by Fox and Kline
[1962]. In this paper, a rarely attempted investigation on a small area ratio (AR = 1.5, L/W1 = 9.0, AS = 3.75)
and strong curvature (60°/60°) sigmoid diffuser with the help of traditional instruments (5-hole pressure probe,
hot-wire anemometer, etc.) is reported. The performance of any diffuser is governed by many geometrical and
dynamic parameters, e.g. AR, AS, the divergence angle, L/W1, inlet flow conditions, amount of swirl at inlet,
etc. The performance of a diffuser is defined by the static pressure recovery coefficient (Cpr), the total pressure
loss coefficient and the diffuser effectiveness. The design adopted in most investigations on curved diffusers
was suggested by Stanitz [1953]. Sagi and Johnston [1967] gave an alternative design procedure for 2-D
subsonic diffusers, mainly based on Stanitz [1953]. Majumdar et al. [1997] reported detailed measurements of
the flow characteristics in an S-shaped diffusing duct with AR = 2.0, inlet AS = 6.0 and a 90°/90° turn. Berrier
and Allan [2004] reported an experimental and computational study on two semi-circular and two
semi-elliptical S-duct inlets. They performed compressible flow analysis in the Mach number range of 0.25 to
0.83 and observed that increasing the extent of boundary layer ingestion with a distorted profile decreased the
pressure recovery and increased the distortion. They were also able to capture the pressure recovery and
distortion trends with Mach number and inlet mass flow in the computational study. Gauthier et al. [2001]
studied a 180° bend and reported the presence of centrifugal instability in a curved rectangular duct of small
aspect ratio. They observed that the flow instability generates stationary contra-rotating streamwise vortices.
Researchers are still investigating the different flow phenomena in constant-area curved ducts having different
geometrical parameters.
This paper also reveals the generation of a pair of vortical motions in the first half of the diffuser and the
reversal of their sense of rotation after the inflexion plane. Low-speed fluid occupies less flow area in the top
half of the section close to the concave wall. The energy loss is greater in the 60°/60° diffuser, and it increases
continuously and more sharply because of the larger angle of turn for the same axial length of the diffuser.
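In the standard notation (with mass-averaged static pressure $\bar{p}$ and total pressure $\bar{p}_0$ at the
inlet (1) and exit (2), and uniform flow assumed for the ideal case), these performance parameters are
conventionally defined as

$$C_{pr} = \frac{\bar{p}_2 - \bar{p}_1}{\tfrac{1}{2}\rho \bar{U}_1^2}, \qquad
\zeta = \frac{\bar{p}_{01} - \bar{p}_{02}}{\tfrac{1}{2}\rho \bar{U}_1^2}, \qquad
\eta = \frac{C_{pr}}{C_{pr,\,ideal}}, \quad C_{pr,\,ideal} = 1 - \frac{1}{AR^2}$$

where the generic symbols $\zeta$ and $\eta$ stand for the loss coefficient and effectiveness respectively.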


2. EXPERIMENTAL PROCEDURE

An experimental test rig was developed for the present study to obtain detailed measurements of mean
velocities, static pressure, total pressure, turbulence level, etc. along the entire length of the S-shaped diffuser.
The schematic layout is shown in Fig. 1. The set-up was run by a single-stage centrifugal blower with an air
delivery capacity of 0.6 m³/s, coupled to a 5.5 kW induction motor with a rated speed of 1440 r.p.m.

Figure 1 Experimental set-up
Ambient air was used as the working fluid. The experimental investigation was carried out for a rectangular
cross-sectioned sigmoid (S-shaped) diffuser of area ratio 1.5. The diffuser was designed as suggested by Fox
and Kline [1962]. The centreline of the test diffuser was a pair of circular arcs joined tangentially. A straight
duct was attached at the inlet of the diffuser to make the flow unidirectional at inlet and to reduce the effect of
the downstream curvature at the diffuser inlet. A tailpipe of length 100 mm was also provided at the exit of the
diffuser, to avoid atmospheric effects and to improve the diffuser performance.
The mean velocity and static pressure surveys of all the sections were made with a hemispherical-tip,
cobra-shaped 5-hole probe. The probe was fabricated and calibrated based on Bryer and Pankhurst [1971].
Seven measuring sections were fixed at different angles of turn, namely 0°, 24°, 40°, 60°, -40°, -24° and -0°
from the inlet. The number of measuring stations at each section varied from 5 to 7, from inlet to exit,
depending on the width of the corresponding section.
A constant-temperature hot-wire anemometer, fabricated as described by Majumdar [1994], was used to
measure the turbulence intensity. It was traversed in the same way as the 5-hole probe and was calibrated
against a standard pitot tube. The turbulence intensity at the seven measuring sections of the test diffuser was
measured with a single-wire probe.
The investigation was carried out with a steady uniform inlet velocity of 40 m/s, corresponding to a Reynolds
number of 1.0 × 10⁵ at the diffuser inlet. The contours of all velocities, pressures and other parameters were
drawn using the graphics software package SURFER, available at the Computation Lab, with quantities
normalised by the inlet mass-averaged values. Total pressure loss and static pressure recovery curves (across
the sections along the centreline length) have also been presented using the commercial CFD software Fluent.
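As an illustration of how such probe data reduce to the performance parameters defined earlier, the following
sketch (Python, with hypothetical pressure readings rather than the measured values) computes the pressure
recovery and loss coefficients from mass-averaged inlet and exit pressures:

```python
# Minimal sketch: diffuser performance coefficients from mass-averaged pressures.
# All pressure values are hypothetical placeholders, not the measured data.

rho = 1.2                 # air density, kg/m^3 (ambient, assumed)
U1 = 40.0                 # inlet velocity, m/s (as in the experiment)
q1 = 0.5 * rho * U1**2    # inlet dynamic pressure, Pa

p1, p2 = 0.0, 350.0       # mass-averaged static pressure at inlet/exit, Pa (hypothetical)
p01, p02 = 960.0, 880.0   # mass-averaged total pressure at inlet/exit, Pa (hypothetical)

Cpr = (p2 - p1) / q1      # static pressure recovery coefficient
zeta = (p01 - p02) / q1   # total pressure loss coefficient
print(f"Cpr = {Cpr:.3f}, loss coefficient = {zeta:.3f}")
```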


3. RESULTS AND DISCUSSION

3.1 Experimental Results
The variation of normalized mean velocity at three measuring sections of the 60°/60° test diffuser is shown in
Fig. 2 (experimental, a, b, c). The inlet mean velocity profile is not shown, as it was found to be uniform and
symmetric. At the 40° turn, i.e. before the inflexion [Fig. 2a], the velocity profiles near the convex wall are
seen to be a little distorted close to the parallel walls. Further, they show uniformity with an overall reduction
in magnitude close to the concave wall, with an overall increase in the velocity of the fluid near the
parallel-wall region.
As the flow reaches the inflexion [Fig. 2b], there is a perceptible change compared to the section before the
inflexion. As the bulk flow has moved towards the concave wall, the acceleration of the flow is now along the
concave wall.
Further downstream, after the inflexion [Fig. 2c], the flow instability seen at the previous section has been
enhanced, and low-speed fluid occupies less flow area in the top half of the section close to the concave wall.
However, the non-uniform flow occupies more flow area towards the concave side.

Figure 2 Mean velocity contours (experimental & predicted)
3.2 Predicted Results
Before the inflexion (Fig. 2, predicted a, b, c), the contours show an evenly distributed velocity throughout the
entire cross section except near the top and bottom walls of the diffuser. The accumulation of low-velocity
fluid at the bottom corner close to the concave side can also be observed from Fig. 2c. The development of
such a flow, in the form of a flow instability, was also observed in the experiment.

4. CONCLUSIONS
1. Low-speed fluid occupies less flow area in the top half of the section close to the concave wall.
2. The energy loss is greater in the 60°/60° diffuser, and it increases continuously and more sharply because of
the larger angle of turn for the same axial length of the diffuser.
3. The development of strong secondary motion, and the change in the sense of rotation of these motions after
the inflexion, has been observed.

REFERENCES
[1] Bansod, B. and Bradshaw, P. (1972), The flow in S-shaped ducts, Aeronautical Quarterly, Vol. 23,
Part 2, pp. 131-141.
[2] Dominy, R.G., Kirkham, D.A. and Smith (1998), Flow development through interturbine diffusers,
Trans. ASME, Journal of Turbomachinery, Vol. 120, pp. 298-308.
[3] Fox, R.W. and Kline, S.J. (1962), Flow regimes in curved subsonic diffusers, Trans. ASME, Journal of
Basic Engg., Vol. 84, pp. 303-316.
[4] Guo, R.W. and Seddon, J. (May 1983), Swirl characteristics of an S-shaped air intake with both horizontal
and vertical offset, Aeronautical Quarterly, pp. 130-146.
[5] Majumdar, B., Singh, S.N. and Agrawal, D.P. (1997), Flow characteristics in S-shaped diffusing duct,
International Journal of Turbo and Jet Engines, Vol. 14, pp. 45-57.
[6] Mukhopadhyay, S., Dutta, A., Mullick, A.N. and Majumdar, B. (2001), Effect of five-hole probe tip shape
on its calibration, J. of The Aeronautical Society of India, Vol. 53, No. 4, pp. 271-275.
[7] Rojas, S., Whitelaw, J.H. and Yianneskis, M. (1983), Developing flow in S-shaped diffusers, Part 1: square
to rectangular cross-section diffuser, Report No. FS/83/21, Dept. of Mechanical Engg., Imperial College of
Science & Technology.
[8] Shimizu, Y., Nagafusa, M., Sugino, K. and Nakamura, F. (1986), Studies on performance and internal flow
of twisted S-shaped bend diffusers, Journal of Fluids Engg., Vol. 108, pp. 289-296.
[9] Whitelaw, J.H. and Yu, S.C.M. (1993), Turbulent flow characteristics in an S-shaped diffusing duct, Flow
Measurement and Instrumentation, Vol. 3, pp. 171-179.
[10] Berrier, B.L. and Allan, B.G. (2004), Experimental and computational evaluation of flush-mounted
S-duct inlets, 42nd AIAA Aerospace Sciences Meeting & Exhibit, Reno, Nevada, pp. 1-15.

PERFORMANCE EVALUATION FOR STEAM GENERATION
SYSTEM IN A THERMAL POWER PLANT
Ravinder Kumar 1*, A.K. Sharma 2, P.C. Tiwari 3

1 Research Scholar, 2 Associate Professor, 3 Associate Professor
1,2 Dept. of Mechanical Engineering, DCRUST, Murthal, Haryana, India
3 Dept. of Mechanical Engineering, NIT, Kurukshetra, Haryana, India

1 rav.chauhan@yahoo.co.in, 2 avdhesh_sharma35@yahoo.co.in, 3 pctewari1@rediffmail.com
ABSTRACT

The aim of this study is to develop a performance model for evaluating the performance, or availability, of the
steam generation system in a thermal plant, for decision making using a probabilistic approach. Using the
Markov birth-death approach, differential equations were developed from the transition diagram, taking
constant failure/repair rates for all subsystems, so as to determine the steady-state availability of the system.
Besides this, a performance/availability matrix is developed, which provides the availability levels for different
combinations of failure and repair rates of the subsystems. Based upon these values, maintenance decisions are
taken.

Key words: Performance modeling; Probabilistic approach; Availability matrix; Markov approach; Transition
diagram.
1. INTRODUCTION

Over the years, as engineering systems have become more complex and sophisticated, the performance
evaluation of engineering systems has become increasingly important because of factors such as cost, risk of
hazard, competition, public demand and the usage of new technology. A high reliability level is desirable to
reduce the overall costs of production and the risk of hazards for larger, more complex and sophisticated
systems such as thermal power plants. It is necessary to maintain the steam thermal power plant so as to
provide a reliable and uninterrupted electrical supply for a long time. Considerable efforts have been made by
researchers to provide general strategies for the optimization of thermal power plants by designing components
or equipment with optimal reliability figures. Aven [1] presented some simple approximation formulae for the
availability of standby redundant systems comprising similar units that are preventively maintained. Kumar et
al. [2] developed a mathematical model for calculating the reliability and availability of the crystallizer system
in sugar plants. Zhao [3] developed a generalized availability model for repairable components and series
systems including perfect and imperfect repair. Nag [4] evaluated various reliability parameters such as
reliability, failure density, failure rate and mean time to failure of a hydraulic unit. Nakamura [5] described
maintenance scheduling for pump systems in thermal power stations in order to reduce the maintenance cost
over the whole period of operation, while keeping the current reliability level of the pump system. Verma et al.
[6] worked on the measurement of effectiveness of a 2-component non-identical system; the Markovian
approach was used to derive the reliability characteristics such as the reliability function and the mean time
between failures. Sharma [7] reported an interesting model for the optimization of redundancy in a thermal
power plant using the Genetic Algorithm technique; he also reported that high reliability figures are required
for the most critical components such as the boiler, turbine and condenser units. Tewari et al. [8] analysed the
availability of the bleaching system of a paper plant. Gupta and Tewari [9] described a Markov approach for
predictive modeling and performance evaluation of a thermal power plant.

2. SYSTEM DESCRIPTION
(i) Boiler (A): Consists of a single unit, failure of which results in system failure.

(ii) Condenser (B): Consists of a single unit, failure of which results in system failure.
(iii) Deaerator (C): Failure of which results in system failure.
(iv) Economiser (D): Failure of this reduces the system capacity.

2.1 Assumptions and Notations
I. Failure and repair rates for each subsystem are constant and statistically independent.
II. Not more than one failure occurs at a time.
III. A repaired unit is, performance-wise, as good as new.
IV. The standby units are of the same nature and capacity as the active units.

The notations associated with the transition diagram (Figure 2) are as follows:
I. A, B, C, D: subsystems in good operating state.
II. a, b, c, d: the failed states of A, B, C, D respectively.
III. $\lambda_i$: mean constant failure rates from states A, B, C, D to states a, b, c, d.
IV. $\mu_i$: mean constant repair rates from states a, b, c, d to states A, B, C, D.
V. $P_i(t)$: probability that at time t all units are good and the system is in the ith state.
VI. s: Laplace transform variable.
VII. ': derivative with respect to t.

2.2 Mathematical Analysis of the System
Probability considerations give the following differential equations (Eq. 1 to Eq. 8) associated with the
transition diagram (Figure 2):

$P_0'(t) + (\lambda_1+\lambda_2+\lambda_3+\lambda_4)P_0(t) = \mu_1 P_2(t) + \mu_2 P_3(t) + \mu_3 P_4(t) + \mu_4 P_1(t)$   (1)

$P_1'(t) + (\lambda_1+\lambda_2+\lambda_3+\mu_4)P_1(t) = \lambda_4 P_0(t) + \mu_1 P_5(t) + \mu_2 P_6(t) + \mu_3 P_7(t)$   (2)

$P_2'(t) + \mu_1 P_2(t) = \lambda_1 P_0(t)$   (3)

$P_3'(t) + \mu_2 P_3(t) = \lambda_2 P_0(t)$   (4)

$P_4'(t) + \mu_3 P_4(t) = \lambda_3 P_0(t)$   (5)

$P_5'(t) + \mu_1 P_5(t) = \lambda_1 P_1(t)$   (6)

$P_6'(t) + \mu_2 P_6(t) = \lambda_2 P_1(t)$   (7)

$P_7'(t) + \mu_3 P_7(t) = \lambda_3 P_1(t)$   (8)

Initial conditions at time t = 0: $P_i(0) = 1$ for i = 0 and $P_i(0) = 0$ for i ≠ 0.

2.3 Steady State Availability
The long-run or steady-state availability of the system is obtained by setting $t \to \infty$ and
$d/dt \to 0$ in the differential equations (1) to (8). The limiting probabilities from equations (1)-(8) are:

$(\lambda_1+\lambda_2+\lambda_3+\lambda_4)P_0 = \mu_1 P_2 + \mu_2 P_3 + \mu_3 P_4 + \mu_4 P_1$   (9)

$(\lambda_1+\lambda_2+\lambda_3+\mu_4)P_1 = \lambda_4 P_0 + \mu_1 P_5 + \mu_2 P_6 + \mu_3 P_7$   (10)

$\mu_1 P_2 = \lambda_1 P_0$   (11)

$\mu_2 P_3 = \lambda_2 P_0$   (12)

$\mu_3 P_4 = \lambda_3 P_0$   (13)

$\mu_1 P_5 = \lambda_1 P_1$   (14)

$\mu_2 P_6 = \lambda_2 P_1$   (15)

$\mu_3 P_7 = \lambda_3 P_1$   (16)

Solving the above equations gives

$P_1 = L_1 P_0$, $P_2 = K_1 P_0$, $P_3 = K_2 P_0$, $P_4 = K_3 P_0$, $P_5 = K_1 L_1 P_0$, $P_6 = K_2 L_1 P_0$, $P_7 = K_3 L_1 P_0$,

where $K_1 = \lambda_1/\mu_1$, $K_2 = \lambda_2/\mu_2$, $K_3 = \lambda_3/\mu_3$ and $L_1 = \lambda_4/\mu_4$.

Now using the normalizing condition, i.e. that the sum of all the probabilities is equal to one,

$\sum_{i=0}^{7} P_i = 1$,

we get

$P_0 = \left[1 + K_1 + K_2 + K_3 + L_1 + K_1 L_1 + K_2 L_1 + K_3 L_1\right]^{-1}$.

Now the steady-state availability of the system may be obtained as the summation of the working-state
probabilities, i.e.

$A_V = P_0 + P_1 = [1 + L_1]\,P_0$.

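The closed-form result above is easy to evaluate numerically. The sketch below (Python) plugs in the
representative failure/repair rates quoted in the results section:

```python
# Minimal sketch: steady-state availability of the steam generation system from
# the closed-form Markov solution derived above.
# Subsystems: 1 = boiler, 2 = condenser, 3 = deaerator, 4 = economiser.

lam = {1: 0.0006, 2: 0.006, 3: 0.0025, 4: 0.004}  # failure rates (from Tables 1-3)
mu  = {1: 0.02,   2: 0.1,   3: 0.125,  4: 0.05}   # repair rates (from Tables 1-3)

K1, K2, K3 = lam[1] / mu[1], lam[2] / mu[2], lam[3] / mu[3]
L1 = lam[4] / mu[4]

P0 = 1.0 / (1 + K1 + K2 + K3 + L1 + K1*L1 + K2*L1 + K3*L1)  # full-capacity state
Av = (1 + L1) * P0   # working states: full capacity (P0) + reduced capacity (P1)
print(f"P0 = {P0:.4f}, steady-state availability Av = {Av:.4f}")
```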
3. PERFORMANCE ANALYSIS
The failure and repair rates of the various subsystems of the steam generation system are taken from the
maintenance history sheet of the thermal power plant. The decision support system deals with the quantitative
analysis of all the factors, viz. courses of action and states of nature, which influence the maintenance decisions
associated with the steam generation system. Decision matrices are developed to determine the various
availability levels for different combinations of failure and repair rates. Tables 1, 2 and 3 represent the decision
matrices for the various subsystems of the steam generation system. Accordingly, maintenance decisions can
be made for the various subsystems keeping in view the repair criticality, and the best possible combinations of
failure and repair rates may be selected.




4. RESULTS AND DISCUSSION

Tables 1 to 3 show the effect of the failure and repair rates of the boiler, condenser and deaerator on the
steady-state availability of the steam generation system; the failure of the economiser reduces the capacity of
the system. Table 1 reveals the effect of the failure and repair rates of the boiler subsystem on the availability
of the system. It is observed that, for some known values of the failure/repair rates of the condenser, deaerator
and economiser ($\lambda_2$ = 0.006, $\lambda_3$ = 0.0025, $\lambda_4$ = 0.004, $\mu_2$ = 0.1,
$\mu_3$ = 0.125, $\mu_4$ = 0.05), as the failure rate of the boiler increases from 0.0006 to 0.001 the
availability decreases by about 1.6%. Similarly, as the repair rate of the boiler increases from 0.02 to 0.1, the
availability increases by about 1.99%.
Table 2 reveals the effect of the failure and repair rates of the condenser subsystem on the availability of the
steam generation system. It is observed that, for some known values of the failure/repair rates of the boiler,
deaerator and economiser ($\lambda_1$ = 0.0006, $\lambda_3$ = 0.0025, $\lambda_4$ = 0.004,
$\mu_1$ = 0.02, $\mu_3$ = 0.125, $\mu_4$ = 0.05), as the failure rate of the condenser increases from 0.006
to 0.01 the availability decreases by about 3.14%. Similarly, as the repair rate of the condenser increases from
0.1 to 0.5, the availability increases by about 4.07%.
Table 3 reveals the effect of the failure and repair rates of the deaerator subsystem on the availability of the
system. It is observed that, for some known values of the failure/repair rates of the boiler, condenser and
economiser ($\lambda_1$ = 0.0006, $\lambda_2$ = 0.006, $\lambda_4$ = 0.004, $\mu_1$ = 0.02,
$\mu_2$ = 0.1, $\mu_4$ = 0.05), as the failure rate of the deaerator increases from 0.0025 to 0.0041 the
availability decreases by about 1.03%. Similarly, as the repair rate of the deaerator increases from 0.125 to
0.250, the availability increases by about 0.81%.
5. CONCLUSIONS
The decision support system for the steam generation system unit has been developed with the help of
mathematical modeling using a probabilistic approach, and the decision matrices have also been developed.
These matrices facilitate maintenance decisions at critical points where repair priority should be given to a
particular subsystem of the steam generation system. The decision matrix given in Table 2 clearly indicates
that the condenser is the most critical subsystem as far as the maintenance aspect is concerned. The condenser
should therefore be given top priority, as the effect of its failure and repair rates on the unit availability is much
higher than that of the boiler, deaerator and economiser. Therefore, on the basis of repair rates, maintenance
priority should be given in the following order:

1. Condenser
2. Boiler
3. Deaerator
4. Economiser

REFERENCES:
[1] Aven, Terje (1991), Availability formulae for standby systems of similar units that are preventively
maintained, IEEE Transactions on Reliability, Vol. 39, No. 5, December 1991.
[2] Kumar, D. et al. (1992), Availability of the crystallization system in the sugar industry under common
cause failure, IEEE Transactions on Reliability, Vol. 41, No. 1, 1992.
[3] Zhao, M. (1994), Availability for repairable components and series systems, IEEE Transactions on
Reliability, Vol. 43, No. 2, June 1994.
[4] Nag, N.C. (1999), Simulation of reliability for hydraulic power unit [maintenance of aircraft], M.Tech.
thesis, G.N.D.E. College, Ludhiana, Punjab, 1999.
[5] Nakamura, M. et al. (2001), Decisions for maintenance-intervals of equipment in thermal power stations,
IEEE Transactions on Reliability, Vol. R-50, No. 4, pp. 360-364, 2001.
[6] Verma, S.M. and Reddy, E.M.K. (2003), Reliability indices for 2-component non-identical system, The
Institute of Engineers, Vol. 84, November 2003.
[7] Sharma, A.K. (2006), Reliability optimization of a steam thermal power plant using genetic algorithm
technique, Paper Code IMECE2006-15820, Proceedings of ASME International Mechanical Engineering
Congress and Exposition, Nov. 5-10, 2006, Chicago, Illinois, USA.
[8] Khanduja, Rajiv, Tewari, P.C. and Dinesh Kumar (2008), Availability analysis of bleaching system of
paper plant, Journal of Industrial Engineering, Udyog Pragati, N.I.T.I.E. Mumbai (India), Vol. 32, No. 1,
pp. 24-29.
[9] Tewari, P.C. and Gupta, S. (2010), Markov approach for predictive modeling and performance evaluation
of a thermal power plant, International Journal of Reliability, Quality and Safety Engineering, Vol. 17, No. 1,
pp. 41-55.
[10] Dhillon, B.S., Reliability Engineering in Systems Design and Operation, Van Nostrand Reinhold, New
York, 1983.
[11] Medhi, J., Stochastic Processes, New Age International, New Delhi, 1994.
[12] Dhillon, B.S., Design Reliability: Fundamentals and Applications, CRC Press LLC, 1999.
[13] Srinath, L.S., Reliability Engineering, 3rd edition, East-West Press Pvt. Ltd., New Delhi, India, 1994.
















BASICS OF FLUID DYNAMICS FROM THE PERSPECTIVES OF
CFD: AN ENGINEERING APPROACH

Nishit Bedi
Assistant Professor, School of Mechanical Engineering, MVN Institute of Technology, Faridabad

bedin732@gmail.com


ABSTRACT
CFD is today an equal partner with pure theory and pure experiment in the analysis and solution of fluid
dynamics problems. Before applying computational fluid dynamics in any area, we must fully appreciate the
governing equations and the physics described by them. This paper presents the basic concepts of fluid
dynamics, and the physics behind those concepts, from the viewpoint of CFD.
Keywords: fluid dynamics, CFD
1 INTRODUCTION
The advent of high-speed digital computers, combined with the development of accurate numerical algorithms
for solving physical problems on these computers, has revolutionized the way we study and practice
engineering problems in areas ranging from research to product development for industry. Computers have
thus provided an entirely new third approach, which complements the other two approaches of pure theory and
pure experiment. The future advancement of fluid dynamics will rest upon a proper balance of all three
approaches, with CFD helping to interpret and understand the results of theory and experiment.

1.1 CFD as a Research and Design Tool
CFD results are directly analogous to the wind tunnel results obtained in the laboratory. This provides us with
a tool to handle complex problems encountered in research in two ways.
1) The results can explain physical aspects of a flow field in a manner not achievable in real laboratory
experiments.
Consider subsonic compressible flow over an airfoil, where we want to know the difference between laminar
and turbulent flow at Re = 100,000. For the computer it is a matter of a single run with the turbulence model
switched off, another with the turbulence model switched on, and a comparison of the results (as shown in
Fig. 1(i) and (ii)). The flow is separated over both the top and bottom surfaces of the airfoil, and the flow is
unsteady. Such a comparison is not readily possible in experiments.

Fig.1 (i) Laminar Flow Fig.1 (ii) Turbulent Flow

2) Numerical experiments can be used to ascertain basic phenomenological aspects of an experiment which are
not apparent from the laboratory data.
During the experiments there was uncertainty, based on the experimental observations, as to whether the flow
was laminar or turbulent. Figs. 2(i) and (ii) present the lift and drag coefficients as functions of the angle of
attack for the same airfoil. The laminar flow values are not even close to the experimental measurements, but
the values for turbulent steady flow are in close agreement with the experimental data. Thus, by examining the
CFD data, we have to conclude that the flow is indeed turbulent. CFD can therefore work closely with
experimentation by providing quantitative comparison and helping to interpret basic phenomenological aspects
of the experimental conditions.







Fig.2 (i) Lift coefficient vs angle of attack and (ii) drag coefficient vs angle of attack for the Wortmann airfoil
(experimental results; turbulent and laminar computations)

The particular solutions already existing in theory have the inherent drawback that they are mostly
two-dimensional, whereas the real world of the engineer is three-dimensional. The storage and speed of
today's digital computers allow CFD to operate in a practical fashion. For example, the automobile industry
uses CFD to better understand the physical flow processes and thus design vehicles with improved
performance, as in modern cars and trucks. Fig. 3 shows the velocity flow pattern in the valve plane when the
piston is at BDC during the intake stroke. Today CFD is applied to study all aspects of IC engines, including
combustion, turbulence, and manifold and exhaust pipe flows.







Fig.3 Velocity pattern in the valve plane near bottom dead center of the intake stroke for a piston cylinder
arrangement in an internal combustion engine

1.2 What is CFD?
Every fluid dynamics problem is based on three laws, the conservation of mass, momentum and energy, and
these are expressed in terms of mathematical equations containing differentials and integrals. Computational
fluid dynamics is the art of replacing these differentials and integrals with algebraic forms, which are then
solved to obtain numbers for the flow properties at a finite set of points. The aim of this paper is to provide
some insight into the power and philosophy of CFD and an understanding of the governing equations of fluid
dynamics in forms particularly suited to CFD.

2. GOVERNING EQUATIONS OF FLUID DYNAMICS

In solid, when in motion, every particle has same velocity whereas in liquids, when in motion every particle has
different velocity. Thus to examine any problem in fluids based on Eulers and lagrangian approach we can have
4 models as shown below in fig. 4.





(a) Finite control volume fixed in space with the fluid moving through it. (b) Finite control volume moving
with the fluid, i.e. the same fluid particles are always in the same control volume.

(c) Infinitesimal fluid element fixed in space with the fluid moving through it. (d) Infinitesimal fluid element
moving along a streamline with velocity V equal to the local flow velocity at each point.

Fig.4 Models of the flow

An important comment: in a given CFD algorithm, the use of the equations in one form may lead to success,
whereas the use of an alternative form may result in oscillations in the numerical results, incorrect results or
instabilities. Thus the correct form is a necessity. The fluid flow equations are obtained by applying the
fundamental physical principles to a finite control volume (integral form) or to an infinitesimal fluid element
(differential form); moreover, if that control volume or infinitesimal fluid element is fixed in space, the
equations obtained are in conservation form.
The integral form allows for the presence of discontinuities inside the control volume fixed in space, whereas
the differential form of the governing equations assumes the flow properties are differentiable, and hence
continuous. Thus the integral form is considered more fundamental than the differential form.

2.1 Expressions for governing equations
1) Non-conservation form

Continuity equation:
$$\frac{D\rho}{Dt} + \rho\,\nabla\cdot\mathbf{V} = 0$$

Momentum equation (x, y and z components):
$$\rho\frac{Du}{Dt} = -\frac{\partial p}{\partial x} + \frac{\partial\tau_{xx}}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + \rho f_x$$
$$\rho\frac{Dv}{Dt} = -\frac{\partial p}{\partial y} + \frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\tau_{yy}}{\partial y} + \frac{\partial\tau_{zy}}{\partial z} + \rho f_y$$
$$\rho\frac{Dw}{Dt} = -\frac{\partial p}{\partial z} + \frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\tau_{zz}}{\partial z} + \rho f_z$$

Energy equation:
$$\rho\frac{D}{Dt}\!\left(e + \frac{V^2}{2}\right) = \rho\dot{q} + \frac{\partial}{\partial x}\!\left(k\frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(k\frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(k\frac{\partial T}{\partial z}\right) - \nabla\cdot(p\mathbf{V}) + \nabla\cdot(\overline{\overline{\tau}}\cdot\mathbf{V}) + \rho\,(\mathbf{f}\cdot\mathbf{V})$$

2) Conservation form

Continuity equation:
$$\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{V}) = 0$$

Momentum equation (x, y and z components):
$$\frac{\partial(\rho u)}{\partial t} + \nabla\cdot(\rho u\mathbf{V}) = -\frac{\partial p}{\partial x} + \frac{\partial\tau_{xx}}{\partial x} + \frac{\partial\tau_{yx}}{\partial y} + \frac{\partial\tau_{zx}}{\partial z} + \rho f_x$$
$$\frac{\partial(\rho v)}{\partial t} + \nabla\cdot(\rho v\mathbf{V}) = -\frac{\partial p}{\partial y} + \frac{\partial\tau_{xy}}{\partial x} + \frac{\partial\tau_{yy}}{\partial y} + \frac{\partial\tau_{zy}}{\partial z} + \rho f_y$$
$$\frac{\partial(\rho w)}{\partial t} + \nabla\cdot(\rho w\mathbf{V}) = -\frac{\partial p}{\partial z} + \frac{\partial\tau_{xz}}{\partial x} + \frac{\partial\tau_{yz}}{\partial y} + \frac{\partial\tau_{zz}}{\partial z} + \rho f_z$$

Energy equation:
$$\frac{\partial}{\partial t}\!\left[\rho\left(e + \frac{V^2}{2}\right)\right] + \nabla\cdot\!\left[\rho\left(e + \frac{V^2}{2}\right)\mathbf{V}\right] = \rho\dot{q} + \frac{\partial}{\partial x}\!\left(k\frac{\partial T}{\partial x}\right) + \frac{\partial}{\partial y}\!\left(k\frac{\partial T}{\partial y}\right) + \frac{\partial}{\partial z}\!\left(k\frac{\partial T}{\partial z}\right) - \nabla\cdot(p\mathbf{V}) + \nabla\cdot(\overline{\overline{\tau}}\cdot\mathbf{V}) + \rho\,(\mathbf{f}\cdot\mathbf{V})$$

1. For the momentum and energy equations, the difference between the non-conservation and conservation
forms is only on the left-hand side; the right-hand side of the equations is the same.
2. A Navier-Stokes solution simply means a solution of a viscous flow problem using the full governing
equations; thus a numerical solution usually refers to the solution of the complete system of equations, i.e. the
continuity, momentum and energy equations.
3. In the equations above we have six unknown variables and five equations. It is reasonable to assume the gas
is a perfect gas, so a sixth equation is the perfect gas law $p = \rho R T$, but this introduces temperature as one
more variable. The seventh equation comes from the thermodynamic relation $e = c_v T$.
2.2 Why the governing equations in conservation form are compelling
1) A numerical solution usually refers to the solution of the complete system of equations, i.e. the continuity,
momentum and energy equations; it is therefore desirable that all of them be expressible as a single equation.
The conservation form provides this computer-programming convenience: all the equations can be written as
the single generic equation

$$\frac{\partial U}{\partial t} + \frac{\partial F}{\partial x} + \frac{\partial G}{\partial y} + \frac{\partial H}{\partial z} = J$$

where $U = \left[\rho,\ \rho u,\ \rho v,\ \rho w,\ \rho\left(e + V^2/2\right)\right]^T$ is the column vector of
conserved variables, $F$, $G$ and $H$ are the corresponding flux vectors in the x, y and z directions, and $J$
is the source-term vector (body forces and volumetric heating).

This equation contains a time term; thus, either for an inherently transient problem or for a steady-state
problem, the solution takes the form of a time-marching solution in which the variables are solved
progressively in steps of time, the steady state being reached at large times:

$$\frac{\partial U}{\partial t} = J - \frac{\partial F}{\partial x} - \frac{\partial G}{\partial y} - \frac{\partial H}{\partial z}$$

Marching solutions can also be obtained in a specific spatial direction, i.e. for analyzing the effects of
parameters in a particular direction, steady-state solutions can be marched in that direction. For example, if a
marching solution is carried out in the x direction, then

$$\frac{\partial F}{\partial x} = J - \frac{\partial G}{\partial y} - \frac{\partial H}{\partial z}$$
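As a minimal illustration of the time-marching idea, the sketch below (Python) uses the 1-D linear advection
equation du/dt + a du/dx = 0 as a simple stand-in for the full system above, marching an explicit first-order
upwind scheme forward in steps of time:

```python
# Minimal sketch: time marching with a first-order upwind scheme for
# du/dt + a du/dx = 0 (a simple stand-in for the governing equations).

import numpy as np

a, L, N = 1.0, 1.0, 101           # wave speed, domain length, grid points
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
dt = 0.5 * dx / a                 # CFL number 0.5 for stability

u = np.where(x < 0.5, 1.0, 0.0)   # step initial condition

for _ in range(60):               # march in steps of time
    # upwind difference: each interior point takes information from upstream (left)
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    # u[0] stays at its inflow value (boundary condition)

print(np.round(u[::10], 2))       # the step has advected downstream (and smeared)
```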

2) When using computational methods for flows with shocks, the shock-fitting method generates satisfactory
results for both the conservation and non-conservation forms of the equations. The shock-capturing method,
on the other hand, generates satisfactory and stable results only when the conservation form of the equations is
used, because this form uses flux variables as the dependent variables, in contrast to the primitive variables
used in the non-conservation form; changes in the flux variables are either zero or small across a shock wave,
thus preserving the numerical quality of the shock-capturing method.






3. BEHAVIOR OF PARTIAL DIFFERENTIAL EQUATIONS - EXPRESSING PHYSICS

If the numerical solution of an equation is to be valid, it should obey the general mathematical properties of the
governing equations. At the same time, if the mathematical behavior of the same equation is different at
different locations, it will correspond to different physical behavior.
The general equation of the conic sections in analytic geometry is $ax^2 + bxy + cy^2 + dx + ey + f = 0$.
If $b^2 - 4ac > 0$ the conic is a hyperbola: two real and distinct characteristic directions exist in the xy plane.
These characteristics indicate the domain and boundaries for the solution of hyperbolic equations; Fig. 5(i)
shows such a plane. The significance of these characteristics is that information at point P influences only the
region between the two characteristics: a small disturbance at P is felt at every point ahead of P in region I, and
the properties at P depend only on what happens within region III upstream of P. Point P lies outside region II
and thus does not feel information from point C.
In CFD, hyperbolic partial differential equations are solved using a marching technique: the algorithm starts
from the given initial condition in one direction and the solution is subsequently obtained in steps, marching in
the other direction.














(i) (ii) (iii)
Fig.5 Domain and boundaries for the solution of hyperbolic, parabolic and elliptic equations in two dimensions

If $b^2 - 4ac = 0$ the conic is a parabola: only one characteristic direction exists through point P in the
xy plane. Fig. 5(ii) shows the domain and boundaries for the solution of parabolic equations: if the boundary
conditions are known along ab and cd, the solution will march within this region, starting from the initial line
and moving forward along the positive x axis. The characteristic direction is given by a vertical line through P,
so the information at P influences the entire region downstream of it contained within the boundary conditions.
In CFD, parabolic partial differential equations are also solved using a marching technique: starting with the
initial data line ac, the solution is marched within the boundary conditions in the x direction.

If $b^2 - 4ac < 0$ the conic is an ellipse: the roots obtained are imaginary, i.e. there is no limited domain of
dependence or region of influence, and information is propagated everywhere in all directions. Fig. 5(iii)
shows the domain and boundary for the solution of elliptic equations: if a small disturbance is introduced at
point P, its effect is felt everywhere in the region. As point P influences all points in the domain, the solution
at point P is affected by changes anywhere in the entire closed region. Therefore the solution at point P must
be carried out simultaneously with the solution at all other points in the domain.
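The three cases can be captured in a few lines; the following sketch (Python) classifies a second-order PDE
a·u_xx + b·u_xy + c·u_yy + ... = 0 by its discriminant, mirroring the conic-section analogy above:

```python
# Minimal sketch: classify a second-order PDE by the discriminant b^2 - 4ac.

def classify(a: float, b: float, c: float) -> str:
    d = b * b - 4.0 * a * c
    if d > 0:
        return "hyperbolic: two real characteristics -> marching solution"
    if d == 0:
        return "parabolic: one real characteristic -> marching solution"
    return "elliptic: no real characteristics -> solve all points simultaneously"

print(classify(1.0, 0.0, -1.0))  # wave equation u_xx - u_tt = 0 -> hyperbolic
print(classify(1.0, 0.0, 0.0))   # heat equation u_xx - u_t = 0 -> parabolic
print(classify(1.0, 0.0, 1.0))   # Laplace equation u_xx + u_yy = 0 -> elliptic
```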

4. CONCLUSIONS

CFD can be a very handy player in the team of pure theory and experimentation, strengthening results both in
research and in design; but the word "computational" is only an adjective to "fluid dynamics", and a basic
conceptual understanding of fluid dynamics is a must for the correct interpretation of the results.
The classification of the governing equations into conservation and non-conservation forms grew out of CFD,
and it is convenient to have the governing equations in conservation form because of the stable results
produced by shock-capturing methods.
A physical understanding of the mathematical behavior of equations as hyperbolic, parabolic and elliptic has
been presented. Elliptic equations behave entirely differently from hyperbolic and parabolic equations; thus
the solution of equations of elliptic form should be approached in a different manner from those of hyperbolic
or parabolic form.

REFERENCES

[1] Modi, P.N. and Seth, S.M., Hydraulics and Fluid Mechanics Including Hydraulic Machines, 17th edition,
Rajson Publications, 2009.
[2] Anderson, John D., Fundamentals of Aerodynamics, 4th Indian edition, TMH, 2009.
[3] Cengel, Yunus A., Heat and Mass Transfer: A Practical Approach, 3rd Indian edition, TMH, 2007.
[4] Anderson, John D., Computational Fluid Dynamics: The Basics with Applications, TMH International, 1995.
[5] Moretti, G. and Abbett, M., A Time-Dependent Computational Method for Blunt Body Flows, AIAA J.,
Vol. 4(12), pp. 2136-2141, December 1966.

NANOTRIBOLOGY & ITS ADVANCEMENT FOR INDIA'S
DEVELOPMENT

Sharad Shrivastava 1, Rakesh Uchenia 2, Kunal Sharma 3, Vibhu Tripathi 4

1 Poornima College of Engg., Sitapura, Jaipur; 2 Rajasthan Institute of Engg. & Technology, Bhankrota, Jaipur;
3 Rajasthan Institute of Engg. & Technology, Bhankrota, Jaipur; 4 Suresh Gyan Vihar University, Jagatpura, Jaipur

Sharad_bsf@rediff.com 1, rakeshuchenia@gmail.com 2, V.2006@live.com 3, kunalsharma.mnit@gmail.com 4


ABSTRACT

Feynman once said, "There is plenty of room at the bottom." Nanotechnology means any technological process
at the nanoscale that has applications in the real world. Going from the macro to the nano scale, the surface
area to volume ratio increases considerably, and surface forces such as friction, adhesion, meniscus forces and
viscous drag become increasingly significant. Rapid actuation requires fast-moving interacting surfaces. The
need to miniaturize components has presented challenges, and the importance of tribology at a proportionate
scale is being felt. Materials with low friction and adhesion are desirable. Hence "nanotribology" is today one
of the most important areas of mechanical technology.

Keywords: Adhesion, angstrom level, Friction, lift mode, Nanoscale, Nanolubrication, Scanning.

NANOTRIBOLOGY - The road to no WEAR!

"Tribology" is a combination of two Greek words, "tribo" and "logy": "tribo" means rubbing and "logy" means
knowledge. The Greeks originally applied it to understand the motion of large stones across the earth's surface.
Today tribology plays a critical role in diverse technological areas: in the advanced technological industries of
semiconductors and data storage, tribological studies help to optimize polishing processes and the lubrication
of data storage substrates. Tribology helps to increase the lifespan of mechanical components. However, many
industrial processes require a detailed understanding of tribology at the nanometer scale. The development of
lubricants in the automobile industry depends on the adhesion of nanometer layers or monolayers to the
material surface. The assembly of components can depend critically on the adhesion of materials at the
nanometer length scale.
Nanotribology is the investigation of interfacial processes, on the molecular and atomic scales, occurring
during adhesion, friction, scratching, wear, nanoindentation and thin-film lubrication at sliding surfaces.

NEED OF NANOTRIBOLOGY

Understanding nanotribological behavior can help us control and manipulate matter at the nanoscale. The new
electronic and atomic interactions, as well as the new magnetic and mechanical properties observed at
nano-levels, can be exploited for the synthesis, assembly and processing of nanoscale building blocks,
composites, coatings, porous materials and smart materials with in-built condition-based maintenance and
self-repair, and for self-cleaning surfaces with reduced and controlled friction, wear and corrosion. Specific
needs include:
1. Advanced health care: to modify surfaces in order to create structures that control the interaction between
materials and biological systems.
2. Energy conversion and storage: nanoscale carbide coatings, self-assembled layers for friction control,
materials performance at NANO & MEMS scale as a function of aging, in-situ lubrication study and control,
and advancements in ultra-low-flying head-disk interfaces.
3. Microcraft space exploration and industrialization: to make self-repairing, self-replicating, biomimetic
materials and nanoscale devices which can sustain the movement of sliding surfaces for long periods, as well
as ultra-lightweight and ultra-strong materials with unique properties required for demanding projects.




How to study nanotribology

Nanotribology uses many instruments designed over the last 50 years, such as the SFA, STM, AFM and FFM.
The SFA, or surface force apparatus, was developed in the 1960s and has been commonly used to study the
static and dynamic properties of molecularly thin films sandwiched between two molecularly smooth surfaces.
The scanning tunneling microscope, or STM, was developed in 1981 and has since been used to image clean
conducting surfaces and lubrication molecules; the STM has atomic-level resolution. The atomic force
microscope (AFM) was invented in 1985 and its common uses include:
1. Measuring ultra-small forces between the probe tip and the surface
2. Topographical measurements on the nanoscale
3. Adhesion force measurements
4. Electrostatic force measurements
5. Investigating scratching, wear and indentation
6. Detection of transfer of material
7. Boundary lubrication
8. Fabrication and machining
The friction force microscope, or FFM, is a modified form of the AFM for atomic- and micro-scale studies of
friction and lubrication. The FFM is also known as the LFM (lateral force microscope). It uses a sharp
diamond tip mounted on a stiff cantilever beam.

Surface Force Apparatus (SFA)

The SFA consists of a pair of atomically smooth surfaces, usually mica sheets, which are mounted on crossed
cylinders that can be pressed together to form a circular contact under pressure. The mica surfaces can be
treated to attach molecules of interest, and the surfaces may be immersed completely within a liquid or
maintained in a controlled environment. Actuators attached to either or both of the surface supports are used to
apply a load or shear force and to control the distance of separation between the surfaces. Sensors are attached
to measure the load and friction forces. The separation distance can be measured and controlled to the
angstrom level, while the lateral resolution is limited to the range of several micrometers. The instrument is
thus a model contact in which the contacting geometry is known, the material between the surfaces can be
varied, and the interaction forces can be controlled and measured. The drawbacks of the instrument are that the
lateral resolution is limited and molecular smoothness is required to obtain meaningful results, so the substrate
is usually restricted to mica.



The atomic force microscope was developed by Gerd Binnig et al. in 1985. The AFM relies on a scanning
technique to produce very high resolution, three-dimensional images of sample surfaces. The AFM measures
ultra-small forces (less than 1 nN) acting between the AFM tip, mounted on a flexible cantilever beam, and the
sample surface. These small forces are measured by measuring the motion of a very flexible nano-sized
cantilever beam of ultra-small mass, by a variety of measurement techniques including optical deflection,
optical interference, capacitance and tunneling current. The deflection can be measured to within 0.02 nm, so
for a typical cantilever force constant of 10 N/m a force as low as 0.2 nN can be detected. In high-resolution
AFM operation, the sample is generally scanned rather than the tip, because any cantilever movement would
add vibrations. Tips with a radius ranging from 10 to 100 nm are commonly available.
The friction force microscope, or lateral force microscope (LFM), is designed for atomic-scale and microscale
studies of friction and lubrication. This instrument measures lateral or friction forces (in the plane of the
sample surface and in the direction of sliding). By using a standard or a sharp diamond tip mounted on a stiff
cantilever beam, the AFM is also used in investigations of scratching and wear, indentation, and
fabrication/machining. Surface roughness, including atomic-scale imaging, is routinely measured using the
AFM. Adhesion, friction, wear and boundary lubrication at the interface between two solids, with and without
liquid films, have been studied using the AFM and FFM.
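As a quick check of the numbers just quoted, the minimum detectable force follows directly from Hooke's law:

$$F_{\min} = k\,\delta_{\min} = (10\ \mathrm{N/m}) \times (0.02\ \mathrm{nm})
= (10\ \mathrm{N/m}) \times (2 \times 10^{-11}\ \mathrm{m}) = 0.2\ \mathrm{nN}$$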






AFM TO STUDY NANOTRIBOLOGY (surface roughness and friction force measurements)

Simultaneous measurements of surface roughness and friction force can be made with the AFM. For such
measurements, the sample is mounted on a PZT tube scanner which contains separate electrodes to scan the
sample precisely in the X-Y plane in a raster pattern and to move the sample in the vertical (Z) direction. A
sharp tip at the end of a flexible cantilever is brought into contact with the sample. The normal and frictional
forces applied at the tip-sample interface are measured using a laser beam deflection technique. A laser beam
from a diode laser is directed by a prism onto the back of the cantilever near its free end, tilted downward at
about 10 degrees with respect to the horizontal plane. The beam reflected from the vertex of the cantilever is
directed through a mirror onto a quad photodetector (a split photodetector with four quadrants). The
differential signal from the top and bottom photodiodes provides the AFM signal, which is a sensitive measure
of the cantilever's vertical deflection. Topographic features of the sample cause the tip to deflect in the vertical
direction as the sample is scanned under the tip. This tip deflection changes the direction of the reflected laser
beam, changing the intensity difference between the top and bottom photodetectors (the AFM signal). A
feedback circuit is used to modulate the voltage applied to the PZT scanner to adjust the height of the PZT, so
that the cantilever's vertical deflection remains constant during scanning; the PZT height variation is thus a
direct measure of the surface roughness of the sample. The oscillating amplitude is kept large enough so that
the tip does not get stuck to the sample because of adhesive attractions. For friction (FFM) measurements, the
lateral twisting of the cantilever changes the intensity difference between the signals received in the left and
right quadrants of the photodetector.






Adhesion measurements
Adhesive force measurements are performed in the so-called force calibration mode (force-distance curves). The horizontal axis gives the distance the piezo travels and the vertical axis gives the tip deflection. As the piezo extends, it approaches the tip, which is at this point in free air and hence shows no deflection; this is indicated by the flat portion of the curve. As the tip approaches the sample to within a few nanometers (point A), an attractive force arises between the atoms of the tip surface and the atoms of the sample surface. The tip is pulled towards the sample and contact occurs at point B on the graph. From this point on, the tip is in contact with the surface, and as the piezo extends further the tip gets further deflected; this is represented by the sloped portion of the curve. As the piezo retracts, the tip goes beyond the zero-deflection (flat) line, held by attractive forces in the adhesive regime. At point C in the graph the tip snaps free of the adhesive forces and is again in free air. The horizontal distance between points B and C along the retrace line gives the distance moved by the tip in the adhesive regime. This distance multiplied by the stiffness of the cantilever gives the adhesive force.
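A minimal sketch of how the adhesive force could be extracted from such a retract curve; the cantilever stiffness and the synthetic deflection data below are illustrative assumptions, not measured values:

```python
import numpy as np

k = 0.58                                   # cantilever stiffness, N/m (assumed)
z = np.linspace(0.0, 100.0, 1000)          # piezo retraction distance, nm

# Synthetic retract curve: the tip is held by adhesion (negative deflection)
# until it snaps free at z = 40 nm, then shows zero deflection in free air.
defl = np.where(z < 40.0, -0.05 * z, 0.0)  # deflection, nm (illustrative)

# Adhesive force = cantilever stiffness x maximum pull-off deflection.
pull_off = abs(defl.min())                 # nm
F_adh = k * pull_off                       # (N/m) * (nm) = nN
print(f"Adhesive force ~ {F_adh:.1f} nN")
```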














SCRATCHING, WEAR AND FABRICATION / MACHINING

For microscale scratching, microscale wear, nanofabrication/nanomachining and nanoindentation hardness measurements, an extremely hard tip is required. A three-sided pyramidal single-crystal natural diamond tip with an apex angle of 80 degrees and a radius of about 100 nm, mounted on a stainless steel cantilever beam with a normal stiffness of about 25 N/m, is used at relatively high loads (1 µN to 150 µN). For scratching and wear studies, the sample is generally scanned in a direction orthogonal to the long axis of the cantilever beam (typically at a rate of 0.5 Hz) so that friction can be measured during scratching and wear.

Surface potential measurements
To detect wear precursors and to study the early stages of localized wear, the multimode AFM can be used to measure the potential difference between the tip and the sample by applying a DC bias potential and an oscillating (AC) potential to a conducting tip over a grounded substrate, in a so-called 'nano-Kelvin probe' technique. Mapping of the surface potential is made in the so-called 'lift mode': the measurements are made simultaneously with the topography scan in tapping mode, using an electrically conducting (nickel-coated single-crystal silicon) tip, and the surface potential is then traced over the same topography at a constant distance of 100 nm.



Nanoindentation measurements
For nanoindentation hardness measurements the scan size is set to zero and a normal load is then applied to make the indents using the diamond tip. During this procedure the tip is continuously pressed against the sample surface for about two seconds at various indentation loads. The sample surface is scanned before and after the scratching, wear or indentation, at a low normal load of about 0.3 µN using the same diamond tip, to obtain the initial and final surface topography. An area larger than the indentation region is scanned to observe the indentation marks. However, it can be difficult to identify the boundary of the indentation mark with great accuracy, which makes the direct measurement of contact area somewhat inaccurate. The indentation system consists of a three-plate transducer with electrostatic actuation hardware used for direct application of the normal load, and a capacitive sensor used for measurement of vertical displacement. The AFM head is replaced with this transducer assembly, while the specimen is mounted on the PZT scanner, which remains stationary during indentation experiments. The indent area, and consequently the hardness value, can be obtained from the load-displacement data. Indentation experiments also provide a single-point measurement of the Young's modulus of elasticity.
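A minimal sketch of the hardness arithmetic just described (hardness = peak load / projected contact area); the load and area below are assumed values, not results from the paper:

```python
# hardness = peak indentation load / projected contact area (values illustrative)
peak_load_uN = 100.0        # applied indentation load, microN (assumed)
contact_area_um2 = 0.01     # projected indent area from the image, um^2 (assumed)

# 1 uN / 1 um^2 = 1e-6 N / 1e-12 m^2 = 1 MPa; convert the result to GPa.
hardness_GPa = (peak_load_uN * 1e-6) / (contact_area_um2 * 1e-12) / 1e9
print(f"Hardness ~ {hardness_GPa:.1f} GPa")   # -> 10.0 GPa
```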

Boundary lubrication measurements
The classical approach to lubrication uses freely supported multimolecular layers of liquid lubricants. The liquid lubricants are sometimes chemically bonded to improve their wear resistance. To study the depletion of boundary layers, microscale friction measurements are made as a function of the number of sliding cycles. From the force-distance curve (between the probe and the surface) obtained in AFM measurements, one can also directly obtain the thickness of the lubricating film and an indication of its density. For nanoscale boundary lubrication studies the samples are typically scanned using a Si3N4 tip, generally at a scan rate of 1 Hz and a scanning speed of 2 mm/s. The coefficient of friction is monitored during scanning for the desired number of cycles.

APPLICATIONS OF NANOTRIBOLOGY

(1) Ultra-low flying head-disk interfaces
For extremely high density recording, the head-disk spacing (fly height) is expected to be reduced further. It is estimated that in order to achieve a recording density of 100 Gb/in², the flying height would have to be about 6 to 7 nm. The tribological issues encountered at such low flying heights are enormous: at such flying heights the air bearing force must balance the pre-load, and the adhesion force between the surfaces is no longer insignificant.

(2) Nanolubrication
Lubrication provides fluid pressure to separate the surfaces and avoid contact, or easily sheared chemical films formed on the surface that redistribute the stresses and are sacrificially worn off to protect the surface. Lubrication at the nanoscale requires lubricant molecules that are non-volatile, oxidation and temperature resistant, have good adhesion and cohesion, and are self-repairing or self-regenerating. This leads to an all-organic film, provided such a film can stay in contact under the applied forces and repair itself.
The wear resistance of a monomolecular film can be related directly to the bonding strength and cohesive strength of the monolayer. The well-known Langmuir-Blodgett films behave like solids, i.e. they deform under contact stress. Durability can be improved by introducing a self-repairing, self-regenerating property, which is defined as the ability of the molecules to rearrange themselves into the original state after they have been disrupted by a contact.







BIOFUELS AND THEIR ADVANCEMENT FOR INDIA'S DEVELOPMENT IN THE MODERN ERA

Sharad Shrivastava1, Rakesh Uchenia2, Kunal Sharma3, Nitin Dadu4
1 M.Tech. Student, Production Engineering, Poornima College of Engineering, Sitapura, Jaipur
2 Suresh Gyan Vihar University, Jagatpura, Jaipur
3, 4 Rajasthan Institute of Engg. & Technology, Bhankrota

1 shrivastavasharad1@gmail.com, 2 rakeshuchenia@gmail.com, 3 kunalsharma.mnit@gmail.com, 4 nitin_dadu2003@yahoo.com


ABSTRACT

Today, applications in the transport sector are based on liquid fuels. The advantage of liquid fuels is that they are easy to store; gaseous fuels are harder to store, and even fewer applications exist for solid fuels, which were only used in the past, e.g. for trains. Transport fuels are classified into two basically different categories: fossil fuels, which are mainly based on crude oil and natural gas, and biofuels made from renewable resources. The use of biofuels largely depends on the potential of available feedstock sources, and the overall biofuel potential depends largely on climate, land availability and the productivity of dedicated energy crops. Different primary energy sources are needed for the production of fossil and renewable transport fuels, although mainly crude oil is used for the production of transport fuels today. Biodiesel is similar to fossil diesel, and bioethanol has properties similar to petrol. In order to assess the benefits of utilizing biofuels compared to fossil fuels, their life cycles have to be determined. There is substantial scientific evidence that accelerating global warming is caused by greenhouse gas (GHG) emissions. One of the main greenhouse gases is carbon dioxide (CO2), but nitrous oxide (N2O), methane (CH4) and several other compounds are greenhouse gases whose effect on global warming is even more severe than that of CO2. International trade of biofuels is small compared to international trade of fossil fuels; biofuels are traded mainly between neighboring regions and countries. But since biofuel production is growing continuously, new trading relationships will be established in future, and trade over long distances will also increase.

Keywords: Alcohol fuels, Biomass, Biodiesel, Dimethyl ether, Ethanol, Thermochemical.

INTRODUCTION

Biofuels are drawing increasing attention worldwide as substitutes for petroleum-derived transportation fuels to
help address energy cost, energy security and global warming concerns associated with liquid fossil fuels. The
term biofuel is used here to mean any liquid fuel made from plant material that can be used as a substitute for
petroleum-derived fuel. Biofuels can include relatively familiar ones, such as ethanol made from sugar cane or
diesel-like fuel made from soybean oil, to less familiar fuels such as dimethyl ether (DME) or Fischer-Tropsch
liquids (FTL) made from lignocellulosic biomass.
A relatively recently popularized classification for liquid biofuels includes first-generation and second-
generation fuels. A first-generation fuel is generally one made from sugars, grains, or seeds, i.e. one that uses
only a specific (often edible) portion of the above-ground biomass produced by a plant, and relatively simple
processing is required to produce a finished fuel. Second-generation fuels are generally those made from non-
edible lignocellulosic biomass, either non-edible residues of food crop production (e.g. corn stalks or rice husks)
or non-edible whole plant biomass (e.g. grasses or trees grown specifically for energy). Second-generation fuels
are not yet being produced commercially in any country.
Various biofuels can substitute for common petroleum-derived fuels. Alcohol fuels can substitute for gasoline in spark-ignition engines, while biodiesel, green diesel and DME are suitable for use in compression-ignition engines. The Fischer-Tropsch process can produce a variety of different hydrocarbon fuels, the primary one of which is a diesel-like fuel for compression-ignition engines. Beyond biofuels for the transport sector, the use of biofuels for cooking is a potential application of wide relevance globally, especially in rural areas of developing countries. In all cases, combustion of biofuels for cooking will yield emissions of pollutants that are lower than emissions from cooking with solid fuels.




First-generation biofuels

The most well-known first-generation biofuel is ethanol made by fermenting sugar extracted from sugar cane or
sugar beets, or sugar extracted from starch contained in maize kernels or other starch-laden crops. Similar
processing, but with different fermentation organisms, can yield another alcohol, butanol. Global production of
first-generation bio-ethanol in 2006 was about 51 billion liters with Brazil (from sugar cane) and the United
States (from maize) each contributing about 18 billion liters, or 35 per cent of the total. China and India
contributed 11 per cent to global ethanol production in 2006, and production levels were much lower in other
countries with feedstocks that include cane, corn, and several other sugar or starch crops (sugar beets, wheat,
and potatoes). Many countries are expanding or contemplating expanding their first-generation ethanol
production.
Biodiesel made from oil-seed crops is the other well-known first-generation biofuel. Jatropha, a non-edible-oil tree, is drawing attention for its ability to produce oil seeds on lands of widely varying quality. In India, Jatropha biodiesel is being pursued as part of a wasteland reclamation strategy. From the perspective of petroleum substitution or carbon emissions reduction potential, however, biodiesel fuels derived from oil-bearing seeds are, like starch-based alcohol fuels, limited.

Second-generation biofuels

Second-generation biofuels share the feature of being produced from lignocellulosic biomass, enabling the use
of lower-cost, non-edible feedstocks, thereby limiting direct food vs. fuel competition. Second-generation
biofuels can be further classified in terms of the process used to convert the biomass to fuel: biochemical or
thermochemical. Second-generation ethanol or butanol would be made via biochemical processing, while all
other second-generation fuels would be made via thermochemical processing. The second-generation
thermochemical fuels include methanol, refined Fischer-Tropsch liquids (FTL), and dimethyl ether (DME).
Mixed alcohols can also be made from fossil fuels, but there is no commercial production today due to the immature state of some components of the systems for producing them. The other thermochemical biofuel is green diesel, for which there is no obvious fossil fuel analog. Unrefined fuels, such as pyrolysis oils, are also produced thermochemically, but these require considerable refining before they can be used in engines.
Production pathways to liquid fuels from biomass and, for comparison, from fossil fuels


1 Second-generation biochemical biofuels

The fuel properties of second-generation ethanol or butanol are identical to those of the first-generation equivalents, but because the starting feedstock is lignocellulose, second-generation biochemically-produced alcohol fuels are often referred to as cellulosic ethanol and cellulosic biobutanol. The basic steps for producing these include pre-treatment, saccharification, fermentation, and distillation. Pretreatment is designed to help separate cellulose, hemicellulose and lignin so that the complex carbohydrate molecules constituting the cellulose and hemicellulose can be broken down by enzyme-catalyzed hydrolysis (water addition) into their constituent simple sugars.

Process steps for production of second-generation fuel ethanol



2 Second-generation thermochemical biofuels

Thermochemical biomass conversion involves processes at much higher temperatures and generally higher
pressures than those found in biochemical conversion systems. Key intrinsic characteristics distinguishing
thermochemical from biochemical biofuels are the flexibility in feedstocks that can be accommodated with
thermochemical processing and the diversity of finished fuels that can be produced. Thermochemical production of biofuels begins with gasification or pyrolysis; the principal resulting biofuels are Fischer-Tropsch liquids (FTL), dimethyl ether (DME), and various alcohols. During gasification, biomass is heated so that it is converted into a mixture of combustible and non-combustible gases.


Three thermochemically-produced fuels are getting considerable attention in different parts of the world
today: FTL, DME and alcohol fuel.

Fischer-Tropsch liquid (FTL) is a mixture of primarily straight-chain hydrocarbon compounds (olefins and paraffins) that resembles a semi-refined crude oil. FTL is synthesized by catalytically reacting CO and H2; thus, any feedstock that can be converted into CO and H2 can be used to produce FTL. In particular, coal, natural gas or biomass can be used as a feedstock for FTL production.
Dimethyl ether (DME) is a colorless gas at normal temperatures and pressures, with a slight ethereal odour. It liquefies under slight pressure, much like propane. It is relatively inert, non-corrosive, non-carcinogenic, almost non-toxic, and does not form peroxides on prolonged exposure to air. Its physical properties make it a suitable substitute (or blending agent) for liquefied petroleum gas (LPG, a mixture of propane and butane). If the DME blending level is limited to 15 to 25 per cent by volume, mixtures of DME and LPG can be used with combustion equipment designed for LPG without changes to the equipment. DME is also an excellent diesel engine fuel due to its high cetane number and the absence of soot production during combustion. It is not feasible to blend DME with conventional diesel fuel in existing engines, because DME must be stored under mild pressure to maintain a liquid state. Alcohol fuel made via syngas processing is drawing attention in the United States at present. One such fuel is ethanol (or butanol); a second is a mixture of alcohols that includes a significant fraction of ethanol plus smaller fractions of several higher alcohols. Butanol and the mixed-alcohol fuel have the potential to be used much the way ethanol is used today for blending with gasoline; they are characterized by higher volumetric energy densities and lower vapour pressures than ethanol, making them more attractive as a fuel or blending agent.

First- vs. second-generation biofuels



Implications for trade and development
There exists today a significant demand in industrialized countries for biofuels, driven largely by regulatory mandates for blending biofuels into petroleum fuels. This demand is likely to grow considerably in the years ahead, driven by increasingly ambitious regulatory mandates, sustained high oil prices, and energy security concerns. Biofuel demand in many developing countries will also grow, driven by similar factors. Opportunities for trade in biofuels or biofuel feedstocks will therefore expand, as will the development of biofuel industries in developing countries supplying domestic and/or global markets.
The limitations of first-generation biofuels in terms of direct food vs. fuel conflict, cost-competitiveness, and greenhouse gas emissions reductions are not likely to be substantially different in developing countries than in industrialized countries. While the climate in many developing countries is better suited than in many industrialized countries to growing first-generation biofuel feedstocks, agricultural productivities are generally lower, and the economics of first-generation biofuels may not be much better than can be achieved in industrialized countries, because global commodity markets may set the prices for first-generation biofuel feedstocks. As for second-generation biofuel technologies, given that they are primarily being developed in industrialized countries, the technologies will typically be capital-intensive, labour-minimizing, and designed for large-scale installations to achieve the best economics.






















SIMULATION OF OCEAN WAVE PARTICLE TRAJECTORIES IN TWO DIMENSIONS USING LINEAR THEORY, AND THE CONCEPT OF TSUNAMI WAVES

Venkatesan G1, Prabaharan M2
1 Ecole Centrale de Nantes, France
2 GTRE, Bangalore

1 venkat.aero2007@gmail.com


ABSTRACT

Airy wave theory (linear wave theory) gives a linearised description of the propagation of gravity waves on the surface of a homogeneous fluid layer. The theory assumes that the fluid layer has a uniform mean depth and that the fluid is inviscid, incompressible and irrotational. The theory gives a description of the wave kinematics and dynamics of high enough accuracy; further, several second-order nonlinear properties of surface gravity waves, and their propagation, can be estimated from its results. Airy wave theory uses a potential flow approach to describe the motion of gravity waves on a fluid surface. We use linear wave theory in this paper to calculate the various characteristics of waves. The linear theory is only valid for non-breaking waves of small amplitude, i.e. when the wave height (H) is small compared to the wavelength (L) and the water depth (h). The main objective of this paper is to set up a program simulating particle trajectories under regular 2D waves in linear theory. For the shallow-depth, intermediate-depth and deep-water cases, we estimate the velocity at the mean particle position and the velocities at the actual time-dependent particle position, and we show that a drifting motion appears. In addition, we estimate the mean drift velocities at the free surface for different wave amplitudes. Moreover, this paper also explains some concepts of tsunami waves.

INTRODUCTION
A wave is a kind of propagation which travels through space and time; the propagation represents an amount of transferred energy. At any fixed instant of time the wave profile is sinusoidal along the x direction (shown in Figure 1 below). We can express the wave as

A(x, t) = A0·cos(kx − ωt)

where we use the cosine function instead of the sine as a matter of convenience; the difference is only the phase of the wave.

Figure 1. Sinusoidal wave along x direction at fixed time

Where: a = amplitude of the wave, k = wave number (k = 2π/L), x = coordinate along which the wave propagates, ω = angular frequency (number of radians of the wave that pass a given location per unit time), t = time, T = wave period (time between two crest passages of the same vertical section), H = wave height, L = wavelength, c = phase velocity, u = horizontal particle velocity, w = vertical particle velocity, h = water depth.

GROUP AND PHASE VELOCITY
Regarding the propagation of waves, two different velocities can be associated with a wave: the group velocity and the phase velocity. The phase velocity is the rate at which the wave phase propagates in space for a given frequency. The group velocity is the velocity at which the overall shape (envelope) of the wave amplitudes propagates; it arises from the superposition of simple waves of slightly different frequency and wavelength.

Figure 2. Waves A and B with different frequency and wavelength, each propagating at its own phase velocity. A+B: the group velocity arises from the superposition of waves A and B.

For the group velocity, when we superpose two elementary waves of nearly the same frequency and wavenumber, we have:

η(x, t) = (a/2)·cos[(k + δk)x − (ω + δω)t] + (a/2)·cos[(k − δk)x − (ω − δω)t]
        = a·cos(δk·x − δω·t)·cos(kx − ωt)

Figure 3: The carrier wave and the envelope represent the phase and group velocity along the x direction.

Individual waves propagate at the phase velocity:

C = L/T = ω/k = [(g/k)·tanh(kh)]^(1/2)

The envelope propagates at the group velocity:

Cg = ∂ω/∂k = (C/2)·[1 + kh/(sinh(kh)·cosh(kh))]

For shallow water Cg = C, and for deep water Cg = C/2.
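As a numerical check on these limits, here is a minimal Python sketch of the dispersion relations above; the wavelength and the two depths are illustrative assumptions:

```python
import numpy as np

g = 9.81  # gravitational acceleration, m/s^2

def phase_and_group_velocity(k, h):
    """Linear-theory phase and group velocity for wavenumber k and depth h."""
    C = np.sqrt((g / k) * np.tanh(k * h))                      # phase velocity
    Cg = 0.5 * C * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))  # group velocity
    return C, Cg

k = 2 * np.pi / 100.0              # 100 m wavelength (assumed)
for h in (2.0, 1000.0):            # shallow and deep water (assumed depths)
    C, Cg = phase_and_group_velocity(k, h)
    print(f"h = {h:6.0f} m :  C = {C:6.2f} m/s,  Cg/C = {Cg / C:.2f}")
# shallow water -> Cg/C ~ 1, deep water -> Cg/C = 0.5
```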

Wave Trajectories
In general the particle paths can be determined by integrating the velocity of the particle in time, which means solving the following two equations:

dx/dt = u(x, z, t),   dz/dt = w(x, z, t)                                 (1)

where u and w are the particle velocity components, given by linear theory as

u = (a·g·k/ω) · [cosh k(z + h) / cosh kh] · cos(kx − ωt)                 (2)

w = (a·g·k/ω) · [sinh k(z + h) / cosh kh] · sin(kx − ωt)                 (3)
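The paper's stated objective is a program simulating these trajectories. A minimal sketch of such a program follows; the wave parameters are illustrative assumptions, and the simple explicit Euler integrator is our choice for brevity, not necessarily the authors':

```python
import numpy as np

g, h = 9.81, 10.0                 # gravity (m/s^2), water depth (m) -- assumed
a, L = 0.5, 50.0                  # wave amplitude and wavelength (m) -- assumed
k = 2 * np.pi / L
omega = np.sqrt(g * k * np.tanh(k * h))   # linear dispersion relation

def u(x, z, t):
    """Horizontal particle velocity, Eq. (2)."""
    return a * g * k / omega * np.cosh(k * (z + h)) / np.cosh(k * h) \
           * np.cos(k * x - omega * t)

def w(x, z, t):
    """Vertical particle velocity, Eq. (3)."""
    return a * g * k / omega * np.sinh(k * (z + h)) / np.cosh(k * h) \
           * np.sin(k * x - omega * t)

# Integrate dx/dt = u, dz/dt = w, Eq. (1), with explicit Euler steps.
dt, steps = 0.01, 20000
x, z = 0.0, -2.0                  # initial particle position (m), assumed
xs, zs = [x], [z]
for i in range(steps):
    t = i * dt
    x, z = x + u(x, z, t) * dt, z + w(x, z, t) * dt
    xs.append(x); zs.append(z)

print(f"net horizontal drift after {steps * dt:.0f} s: {xs[-1] - xs[0]:.2f} m")
# xs, zs trace a slightly open elliptical orbit; the slow forward migration
# of its centre is the Stokes drift discussed below. A Runge-Kutta scheme
# would reduce the artificial orbit growth of the Euler method.
```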
These equations cannot be solved analytically because of the way u and w depend on x and z. We use the small-amplitude assumption H/L << 1. To linearize the above equations, we assume the particle paths are closed orbits and introduce a mean particle position (x̄, ζ̄). Moreover, based on first-order theory we assume that the particle excursions Δx and Δζ from x̄ and ζ̄, respectively, are small compared to the wavelength L and the water depth h. We can write the instantaneous particle position as x = x̄ + Δx and z = ζ̄ + Δζ. We now insert this instantaneous particle position into equations (2) and (3) and make a Taylor expansion of the sin, cos, sinh and cosh functions about the mean position (x̄, ζ̄). Terms of higher order are discarded, and equation (1) can then be solved for Δx and Δζ. After performing all these mathematical operations we finally get

[Δx/A(ζ̄)]² + [Δζ/B(ζ̄)]² = 1

with

A(ζ̄) = (H/2)·cosh k(ζ̄ + h)/sinh(kh),   B(ζ̄) = (H/2)·sinh k(ζ̄ + h)/sinh(kh)

leading to the conclusion that for linear waves the particle paths are elliptical, with center (x̄, ζ̄) and with A(ζ̄) and B(ζ̄) as the horizontal and vertical amplitudes respectively. Generally speaking the amplitudes are a function of the depth ζ̄. At the surface the vertical amplitude is equal to H/2 and the horizontal one is equal to (H/2)·coth kh. Below is a figure showing the particle paths, the foci and the amplitudes.

Figure 4. Particle path of the waves

Thus it is clear that in the case of intermediate depth, the particle paths for linear waves are elliptical with center (x̄, ζ̄), and with A(ζ̄) and B(ζ̄) as the horizontal and vertical amplitudes respectively.


Figure 5: Plot of particle trajectory in Intermediate depth

Thus from the above plot we can see that at intermediate depth the motion of the particle follows an elliptical orbit. The elliptical movement of a fluid particle flattens with decreasing depth as it approaches the bottom of the sea; at the bottom, the particles only move back and forth. The trajectory radii decrease with increasing distance below the surface, and the bigger the wave, the larger the size of the orbit. In the case of deep
water waves, we have h/L > 1/2, corresponding to kh = 2πh/L > π, so that

cosh(kh) ≈ sinh(kh) ≈ (1/2)·e^(kh)

Using

cosh k(ζ̄ + h)/sinh(kh) = [cosh(kζ̄)·cosh(kh) + sinh(kζ̄)·sinh(kh)]/sinh(kh) ≈ e^(kζ̄)

we get

A(ζ̄) ≈ (H/2)·e^(kζ̄)   and   B(ζ̄) ≈ (H/2)·e^(kζ̄)

Figure 6: Plot of particle trajectory in deep water depth
It is clear from the above figure that the shape of the closed trajectory for deep-water waves is circular, and that the trajectory radius decreases along the water depth: at the depth z = −L/2 the diameter is approximately only 4% of H. The fluid particles move in circles with constant speed.
STOKES DRIFT
For real waves there is an important effect due to the incomplete closure of the particle paths: when the steepness is large, the trajectory is open instead of closed. This effect is called Stokes drift. It is a net velocity in the direction of wave propagation. For a pure wave motion, the Stokes drift velocity is the average velocity when following a specific fluid parcel as it travels with the fluid flow. For instance, a particle floating at the free surface of water waves experiences a net Stokes drift velocity in the direction of wave propagation. In general, the Stokes drift velocity is the difference between the average Lagrangian flow velocity of a fluid parcel and the average Eulerian flow velocity of the fluid at a fixed position. The Stokes drift is the difference in end positions, after a predefined amount of time (usually one wave period), as derived from descriptions in the Lagrangian and Eulerian coordinates. The end position in the Lagrangian description is obtained by following a specific fluid parcel during the time interval; the corresponding end position in the Eulerian description is obtained by integrating the flow velocity at a fixed position (equal to the initial position in the Lagrangian description) during the same time interval. The Stokes drift velocity equals the Stokes drift divided by the considered time interval. Stokes drift may occur in all instances of oscillatory flow which are inhomogeneous in space (tides and atmospheric waves).
Stokes drift, mathematical description:
The Lagrangian motion of a fluid parcel with position vector x = ξ(α, t) in the Eulerian coordinates is given by

∂ξ/∂t = u(ξ, t)

where ∂ξ/∂t is the partial derivative of ξ(α, t) with respect to t; ξ(α, t) is the Lagrangian position vector of a fluid parcel, in meters; u(x, t) is the Eulerian velocity, in meters per second; x is the position vector in the Eulerian coordinate system, in meters; α is the position vector in the Lagrangian coordinate system, in meters; and t is the time, in seconds. Often, the Lagrangian coordinates α are chosen to coincide with the Eulerian
coordinates x at the initial time t = t0. The average Eulerian velocity vector ūE and the average Lagrangian velocity vector ūL are obtained by time-averaging the Eulerian velocity at a fixed position and the velocity following a fluid parcel, respectively. The Stokes drift velocity ūS is then given by

ūS = ūL − ūE

In the deep-water case the value of the Stokes drift velocity is

ūS ≈ ω·k·a²·e^(2kz)

Based on this equation, the Stokes drift velocity is a nonlinear (quadratic) quantity in terms of the wave amplitude a. Furthermore, the Stokes drift velocity reduces exponentially with the water depth: like the closed trajectories, in the deep-water case only about 4% of the surface value remains at z = −L/4.
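A minimal sketch evaluating this deep-water Stokes drift formula; the amplitude and wavelength are assumed values:

```python
import numpy as np

g = 9.81
a, L = 1.0, 100.0                 # wave amplitude and wavelength, m (assumed)
k = 2 * np.pi / L
omega = np.sqrt(g * k)            # deep-water dispersion relation

def stokes_drift(z):
    """Deep-water Stokes drift velocity U_s = omega * k * a^2 * exp(2kz)."""
    return omega * k * a**2 * np.exp(2 * k * z)

print(f"surface   : {stokes_drift(0.0):.3f} m/s")
print(f"z = -L/4  : {stokes_drift(-L / 4):.3f} m/s")  # ~4% of the surface value
```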

Figure 8: Drift motion for intermediate waves
In the above figure, the light-blue line gives the path of these particles. The elliptical movement of a fluid particle flattens with decreasing depth as it approaches the bottom of the sea. Observe that the wave period experienced by a fluid particle near the free surface is different from the wave period at a fixed horizontal position; this is due to the Doppler shift.

Figure 9: Variation of velocity component (u and w) of the particle (for intermediate waves)



In addition, we see that the horizontal component of velocity is higher than the vertical component of
velocity. The difference in velocity results in a slight displacement along the wave direction. This net motion is
called Stokes drift.

Drift motion in Deep water case

Figure 10: Drift motion for deep water waves

The blue lines are the paths of the fluid particles, and they represent the particle positions after each successive wave period. From the graph it is seen that the mean Eulerian horizontal velocity below the wave trough is zero. We can also say that the drift motion decreases along the water depth: at a depth of about z = −L/2 the drift motion is approximately only 4% of the surface (z = 0) drift motion.


Figure 11: Variation of the velocity components (u and w) of the particle (for deep-water waves)

The above graph shows that the horizontal and vertical velocities of the particle have different magnitudes; this difference of velocities generates the drift motion of the waves.

Mean drift velocities for different wave amplitudes:

Figure 12. Variation of mean drift velocity with wave amplitude.

From the graph it is clear that the mean drift velocity varies quadratically with the amplitude: as the amplitude increases, the mean drift velocity also increases. The Stokes drift velocity decays with depth.

TSUNAMI WAVES
A tsunami is a system of ocean gravity waves formed as a result of a large-scale disturbance of the sea that
occurs in a relatively short duration of time. In the process of the sea water returning by the force of gravity to
an equilibrium position, a series of oscillations both above and below sea level take place, and waves are
generated which propagate outward from the source region. Most tsunamis are caused by earthquakes, with a
vertical disruption of the water column generally caused by a vertical tectonic displacement of the sea bottom
along a zone of fracture in the earth's crust which underlies or borders the ocean floor. For the largest tsunami earthquakes, 100,000 km² or more of seafloor may be vertically displaced by up to several meters or even more.
Other source mechanisms include volcanic eruptions next to or under the ocean, displacement of submarine
sediments, coastal landslides that go into the water, or large-scale explosions in the ocean caused by manmade
detonations or meteor impacts. A tsunami travels outward from the source region as a series of waves. Its speed
depends upon the depth of the water, and consequently the waves undergo accelerations or decelerations in
passing respectively over an ocean bottom of increasing or decreasing depth. By this process the direction of
wave propagation also changes, and the wave energy can become focused or defocused. In the deep ocean,
tsunami waves can travel at speeds of 500 to 1000 kilometers per hour. Near shore, however, a tsunami slows
down to just a few tens of kilometers per hour. The height of a tsunami also depends upon the water depth. A
tsunami that is just a meter in height in the deep ocean can grow to tens of meters at the shoreline. Unlike
familiar wind-driven ocean waves that are only a disturbance of the sea surface, the tsunami wave energy
extends to the ocean bottom. Near shore, this energy is concentrated in the vertical direction by the reduction
in water depth, and in the horizontal direction by a shortening of the wavelength due to the wave slowing down.
Tsunamis, on the other hand, are caused by submarine earthquakes that set off waves with long wavelengths in
water and the most destructive tsunamis are caused by subduction zone earthquakes. A subduction zone is where
two of the earth's rigid tectonic plates are converging towards one another (roughly at few centimetres per year),
and one plate, usually composed of heavier oceanic material, dives beneath the other generally lighter plate of
continental material. At the boundary where the two rub against each other, the lower one drags and flexes the
top one slightly downward. When the flexing exceeds the frictional strength of the inter-plate contact, the upper
plate rebounds to its original position, causing sea-floor displacement much like a released springboard.
This happens so quickly that the sea surface assumes the shape of the sea-floor displacement. The potential
energy of displacement is converted into the kinetic energy of horizontal motion. This disturbance propagates
outward as a tsunami. And the wave height will at best be a couple of metres. Unlike a tidal wave, a tsunami
extends deep down into the ocean waters. That is, a tsunami crest is just the very tip of a very vast mass of water
in motion. Within several minutes of the quake, the initial tsunami will split into one that travels out to the deep
ocean (distant tsunami) and another that travels towards the nearby coast (local tsunami). The height above the
mean sea level (MSL) of the two oppositely travelling tsunamis is about half that of the original tsunami. The
speed at which both travel varies as the square root of the water depth. Therefore, deep ocean tsunamis travel
faster than local tsunamis. In the deep ocean, this wave travels at speeds of 500-1,000 km/hr. That is, the slope
of the wave - which extends hundreds of kilometres is so gentle, that even ships travelling on top of a tsunami
wave will not feel it. Because the momentum of the tsunami is so great, it can travel great distances with little
loss of energy.
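The quoted speeds follow from the long-wave (shallow-water) relation c = sqrt(g·h). A minimal sketch with illustrative depths:

```python
import math

g = 9.81  # m/s^2

def tsunami_speed_kmh(depth_m):
    """Long-wave speed c = sqrt(g*h), converted from m/s to km/h."""
    return math.sqrt(g * depth_m) * 3.6

for h in (4000, 1000, 50, 10):    # ocean depths in metres (illustrative)
    print(f"depth {h:5d} m -> {tsunami_speed_kmh(h):6.0f} km/h")
# ~713 km/h at 4000 m depth, slowing to ~36 km/h in 10 m of water,
# consistent with the 500-1000 km/h and few-tens-of-km/h figures above.
```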
As the tsunamis (both local and distant) approach the shallow coastal waters, their wavelength decreases and the
amplitude increases several fold. As the waves hit against the slope of the coastline, the long waves pile on one
another and the wavelength is reduced while the amplitude increases. As the waves travel over the near-shore
region, a tsunami `run-up' occurs. Run-up is a measure of the height of water observed onshore above MSL.
Tsunamis do not result in breaking waves like the normal surf waves on a beach. They come in like very
powerful and fast local rises in sea level and travel much farther inland than normal waves. Much of the damage
inflicted by tsunamis is on account of strong currents and floating debris. After run-up, part of the tsunami
energy is reflected back to the open ocean. In addition, a tsunami can generate a particular type of waves called
edge waves, which travel back and forth, parallel to the shore. The geometry of the seafloor warping near the
coast has a significant influence on this. These effects result in repetitive arrivals of the tsunami waves at a
particular point on the coast rather than a single wave. Because of the complicated behaviour of the
phenomenon of the waves near the coast, the first run-up of a tsunami is often not the largest, emphasising the
importance of not returning to the beach for several hours after a tsunami hits. In certain cases, the sea can seem
to draw a breath and empty the coast. This is almost immediately followed by a wall of water inundating the
coast.
COMPUTER SIMULATION STUDIES ON A FOUR STROKE CYCLE
SPARK IGNITION ENGINE USING GASOLINE AND PROPANE AS
ALTERNATIVE FUELS.

M. Marouf Wani1, I. K. Pandita2, Shahid Saleem3
Mechanical Engineering Department, National Institute of Technology, Hazratbal, Srinagar, India

1 maroufwani@yahoo.com


ABSTRACT:

This paper describes the results of computational studies on a single-cylinder four-stroke-cycle spark ignition engine using propane as an alternative fuel to petrol. The simulation is done in BOOST, the professional engine simulation software from AVL, Austria. The modeling methodology uses the first law of thermodynamics, treating the engine as an open system when the valves are open and as a closed system when the valves are closed. To include the effect of gas exchange in the intake and exhaust manifolds when the valves are open, the manifolds are modeled using the Navier-Stokes equations. The design parameters are fixed by the engine geometry, and a matrix of operating variables was prepared to carry out the simulation. First, the data was used as per petrol engine needs and results for engine performance were generated; the operation was then revised with data for the proposed propane-fuelled engine and its performance was studied. The software gave successful results in both cases. It was observed that propane, owing to its higher calorific value but lower volumetric efficiency, produced power and brake specific fuel consumption comparable to petrol. The emissions will also be reduced with propane as fuel because of the smaller number of carbon atoms in propane as compared to petrol. It is proposed that propane can be successfully used in petrol engines as an alternative fuel.

Keywords: Engine, Petrol, Propane, Alternate fuels, Simulation, Performance, Emissions
INTRODUCTION

Computer simulation studies help to predict the behavior of the engine in the petrol and propane fuel modes. We prepare models of the engine systems for the petrol and propane fuel modes and feed in the actual data corresponding to the design and operating conditions of the system. This makes it possible to simulate the results without actually performing experiments, saving considerable money and time. Moreover, we can compute results which are very difficult to measure experimentally. Favorable computed results pave the way for further experimental investigations.
The objective is to investigate the feasibility of propane as an alternative fuel in petrol engines. Propane is also chosen as fuel because it has fewer carbon atoms than petrol and will therefore produce less pollution. Table 1 at the end gives the physico-chemical properties of propane and petrol, which help us investigate the feasibility of using propane as an alternative fuel to petrol.


THEORETICAL BASIS

The theoretical background including the basic equations for all elements used in the present model is
summarized below to give a better understanding of the program.
The Cylinder, High Pressure Cycle, Basic Equation
The calculation of the high pressure cycle of an internal combustion engine is based on the first law of thermodynamics:

d(mc·u)/dα = −pc·(dV/dα) + dQF/dα − Σ dQw/dα − hBB·(dmBB/dα)          (Eq.1)

where
d(mc·u)/dα = change of the internal energy in the cylinder,
pc·(dV/dα) = piston work,
dQF/dα = fuel heat input,
Σ dQw/dα = wall heat losses,
hBB·(dmBB/dα) = enthalpy flow due to blow-by,
dmBB/dα = blow-by mass flow.
The first law of thermodynamics for high pressure cycle states that the change of internal energy in the cylinder
is equal to the sum of piston work, fuel heat input, wall heat losses and the enthalpy flow due to blow-by.
Eq.1 is valid for engines with internal and external mixture preparation. However, the terms which take into account the change of gas composition due to combustion are treated differently for internal and external mixture preparation.
For internal mixture preparation it is assumed that
- The fuel added to the cylinder charge is immediately burnt.
- The combustion products mix instantaneously with the rest of cylinder charge and thus form a uniform
mixture.
- As a consequence, the Air-Fuel ratio of the charge diminishes continuously from a high value at the
start of combustion to the final value at the end of combustion.
In order to solve this equation, models for the combustion process and the wall heat transfer, as well as the gas
properties as a function of pressure, temperature, and gas composition are required.
Together with the gas equation

pc = (1/V)·mc·R0·Tc          (Eq.2)

which establishes the relation between pressure, temperature and density, Eq.1 can be solved for the in-cylinder temperature using a Runge-Kutta method. Once the cylinder gas temperature is known, the cylinder gas pressure can be obtained from the gas equation.

Combustion Model

The following equation for the stoichiometric air requirement specifies how much air is required for a complete combustion of 1 kg fuel:

Lst = 137.85·(c/12.01 + h/4.032 + s/32.06 − o/32.00)   [kg Air/kg Fuel]          (Eq.3)
For lean combustion, the total heat supplied during the cycle can be calculated from the amount of fuel in the cylinder and the lower heating value of the fuel. The lower heating value is a fuel property and can be calculated from the following formula:

Hu = 34835·c + 93870·h + 6280·n + 10465·s − 10800·o − 2440·w   [kJ/kg]          (Eq.4)
In rich air fuel mixture combustion, the total heat supplied during the cycle is limited by the amount of air in the
cylinder. The fuel is totally converted to combustion products even if the amount of air available is less than the
amount of stoichiometric air.
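As a cross-check of Eq.3 and Eq.4, here is a minimal sketch evaluating both for propane (C3H8); the mass fractions are computed from atomic weights, and this worked example is ours, not part of the original paper. The results agree with Table 1 to within about two per cent:

```python
# Mass fractions for propane, C3H8: carbon 3*12.011, hydrogen 8*1.008,
# molar mass ~44.1 g/mol; the other fractions are zero.
c, h = 36.03 / 44.1, 8.064 / 44.1
s, o, n, w = 0.0, 0.0, 0.0, 0.0

L_st = 137.85 * (c / 12.01 + h / 4.032 + s / 32.06 - o / 32.00)        # Eq.3
H_u = 34835 * c + 93870 * h + 6280 * n + 10465 * s - 10800 * o - 2440 * w  # Eq.4, kJ/kg

print(f"stoichiometric A/F  = {L_st:.2f}")        # ~15.6, close to 15.67 in Table 1
print(f"lower heating value = {H_u / 1000:.1f} MJ/kg")  # ~45.6, close to 46.44 in Table 1
```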

Heat Release Approach.

The Vibe function is used to approximate the actual heat release characteristics of an engine:

dx/dα = (a/Δαc)·(m + 1)·y^m·e^(−a·y^(m+1))          (Eq.5)

dx = dQ/Q          (Eq.6)

y = (α − α0)/Δαc          (Eq.7)

The integral of the Vibe function gives the fraction of the fuel mass which was burned since the start of combustion:

x = ∫ (dx/dα)·dα = 1 − e^(−a·y^(m+1))          (Eq.8)
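A minimal sketch evaluating the Vibe law of Eq.5 to Eq.8; the Vibe parameter a = 6.9 corresponds to 99.9 per cent mass fraction burned at the end of combustion, while the start of combustion, duration and shape factor below are assumed values:

```python
import numpy as np

a_vibe = 6.9        # Vibe parameter 'a' (99.9% burned at end of combustion)
m = 2.0             # shape factor (assumed)
alpha_0 = -10.0     # start of combustion, deg crank angle (assumed)
d_alpha_c = 50.0    # combustion duration, deg crank angle (assumed)

alpha = np.linspace(alpha_0, alpha_0 + d_alpha_c, 200)
y = (alpha - alpha_0) / d_alpha_c                                   # Eq.7
dx_dalpha = (a_vibe / d_alpha_c) * (m + 1) * y**m \
            * np.exp(-a_vibe * y**(m + 1))                          # Eq.5
x_burned = 1.0 - np.exp(-a_vibe * y**(m + 1))                       # Eq.8

print(f"fraction burned at end of combustion: {x_burned[-1]:.3f}")  # ~0.999
```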

Gas Exchange Process, Basic Equation
The equation for the simulation of the gas exchange process is also the first law of thermodynamics:

d(mc·u)/dα = −pc·(dV/dα) − dQw/dα + Σ (dmi/dα)·hi − Σ (dme/dα)·he          (Eq.9)

The variation of the mass in the cylinder can be calculated from the sum of the in-flowing and out-flowing masses:

dmc/dα = Σ dmi/dα − Σ dme/dα          (Eq.10)

Piston Motion
Piston motion applies to both the high pressure cycle and the gas exchange process.
For a standard crank train the piston motion as a function of the crank angle α can be written as:

s = (r + l)·cos ψ − r·cos(ψ + α) − l·sqrt{1 − [(r/l)·sin(ψ + α) − e/l]²}          (Eq.11)

ψ = arcsin[e/(r + l)]          (Eq.12)
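A minimal sketch of the piston-motion law, Eq.11 and Eq.12; the bore and stroke come from Appendix B, while the con-rod length and pin offset are assumed values:

```python
import numpy as np

r = 0.045           # crank radius = stroke/2 = 90 mm / 2 (Appendix B)
l = 0.150           # con-rod length, m (assumed)
e = 0.0             # piston pin offset, m (assumed)

psi = np.arcsin(e / (r + l))                              # Eq.12
alpha = np.radians(np.linspace(0.0, 360.0, 361))          # crank angle

s = ((r + l) * np.cos(psi) - r * np.cos(psi + alpha)
     - l * np.sqrt(1.0 - ((r / l) * np.sin(psi + alpha) - e / l) ** 2))  # Eq.11

print(f"stroke from Eq.11: {(s.max() - s.min()) * 1000:.1f} mm")  # -> 90.0 mm
```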

Heat Transfer
The heat transfer to the walls of the combustion chamber, i.e. the cylinder head, the piston, and the cylinder liner, is calculated from:

Qwi = Ai·αw·(Tc − Twi)          (Eq.13)

In the case of the liner wall temperature, the axial temperature variation between the piston TDC and BDC positions is taken into account:

TL = TL,TDC·(1 − e^(−c·x))/(c·x)          (Eq.14)

c = ln(TL,TDC/TL,BDC)          (Eq.15)
For the calculation of the heat transfer coefficient, the Woschni 1978 heat transfer model is used.

Woschni Model
The Woschni model published in 1978 for the high pressure cycle is summarized as follows:

αw = 130·D^(−0.2)·pc^(0.8)·Tc^(−0.53)·[C1·cm + C2·(VD·Tc,1)/(pc,1·Vc,1)·(pc − pc,o)]^(0.8)          (Eq.16)

C1 = 2.28 + 0.308·cu/cm
C2 = 0.00324 for DI engines
C2 = 0.00622 for IDI engines

For the gas exchange process, the heat transfer coefficient is given by the following equation:

αw = 130·D^(−0.2)·pc^(0.8)·Tc^(−0.53)·(C3·cm)^(0.8)          (Eq.17)

C3 = 6.18 + 0.417·cu/cm
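A minimal sketch evaluating the gas-exchange correlation Eq.17; the operating values are illustrative, and the pressure-in-bar convention is our assumption (check the convention of the implementation at hand):

```python
# Woschni 1978 gas-exchange heat transfer coefficient, Eq.17.
D = 0.084        # cylinder bore, m (from Appendix B)
p_c = 1.0        # cylinder pressure, bar (assumed unit convention)
T_c = 400.0      # cylinder gas temperature, K (assumed)
c_m = 9.0        # mean piston speed, m/s (assumed)
c_u = 0.0        # circumferential (swirl) velocity, m/s (assumed)

C3 = 6.18 + 0.417 * c_u / c_m
alpha_w = 130.0 * D**-0.2 * p_c**0.8 * T_c**-0.53 * (C3 * c_m)**0.8
print(f"gas-exchange heat transfer coefficient ~ {alpha_w:.0f} W/(m^2 K)")
```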


Fuel Injector
The fuel injector model is based on the calculation algorithm of the flow restriction. This means that the air flow
rate in the fuel injector depends on the pressure difference across the injector and is calculated using the
specified flow coefficients. In addition, the amount of fuel specified is fed into the air flow.
In the case of carburetor model, the fuel flow is set to a specified percentage of the instantaneous mass flow.
For the injector model, a measuring point must be specified at the location of the air flow meter. In this case the
mean air flow at the air flow meter location during the last complete cycle is used to determine the amount of
fuel. As is the case for continuous fuel injection, the fuelling rate is constant over crank angle.
The fuel is added in gaseous form to the pipe flow. No evaporation is considered.


Pipe Flow
The one-dimensional gas dynamics in a pipe are described by the continuity equation

∂ρ/∂t = −∂(ρ·u)/∂x − ρ·u·(1/A)·(dA/dx)          (Eq.18)

the equation for the conservation of momentum

∂(ρ·u)/∂t = −∂(ρ·u² + p)/∂x − ρ·u²·(1/A)·(dA/dx) − FR/V          (Eq.19)

and the energy equation

∂E/∂t = −∂[u·(E + p)]/∂x − u·(E + p)·(1/A)·(dA/dx) + qw/V          (Eq.20)

The wall friction force can be determined from the wall friction factor f:

FR/V = (2/D)·f·ρ·u·|u|          (Eq.21)

Using the Reynolds analogy, the wall heat flow in the pipe can be calculated from the friction force and the difference between the wall temperature and the gas temperature:

qw/V = f·(2/D)·ρ·|u|·cp·(Tw − T)          (Eq.22)
During the course of the numerical integration of the conservation laws defined in Eq.18, Eq.19 and Eq.20, special attention must be paid to the control of the time step. In order to achieve a stable solution, the CFL criterion (the stability criterion defined by Courant, Friedrichs and Lewy) must be met:

Δt ≤ Δx/(u + a)          (Eq.23)
This means that a certain relation between the time step and the lengths of the cells must be met. The time-step-to-cell-size relation is determined at the beginning of the calculation on the basis of the specified initial conditions in the pipes. However, the CFL criterion is checked at every time step during the calculation. If the criterion is not met because of significantly changed flow conditions in the pipes, the time step is reduced automatically.
An ENO scheme is used for the solution of the set of non-linear differential equations discussed above. The
ENO scheme is based on a finite volume approach. This means that the solution at the end of the time step is
obtained from the value at the beginning of the time step and from the fluxes over the cell borders.
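A minimal sketch of the CFL check of Eq.23 for a single pipe cell (all values assumed):

```python
# CFL stability criterion, Eq.23: dt <= dx / (u + a).
u = 80.0      # local flow velocity, m/s (assumed)
a = 340.0     # local speed of sound, m/s (assumed)
dx = 0.02     # cell length, m (assumed)

dt_max = dx / (u + a)
print(f"maximum stable time step: {dt_max * 1e6:.1f} microseconds")  # ~47.6 us
```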

RESULTS AND DISCUSSION

Effect of Speed on Power
Fig.1 below shows the effect of speed on power. It is seen that as the speed increases the power also increases, due to the larger number of power cycles per unit time. Further, it is seen that the power developed with petrol as fuel is slightly higher than with propane, due to the higher volumetric efficiency with petrol.




Effect of Speed on Brake Specific Fuel Consumption (bsfc)
Fig.2 below shows the effect of speed on brake specific fuel consumption (fuel consumed per unit power output). It is seen that the operation of the propane engine is approximately as economical as with petrol fuel. From the physical and chemical properties of petrol and propane it is seen that propane has a higher heating value, but at the same time its volumetric efficiency is lower, as it converts more easily into the gaseous phase on release of pressure. Since the power with each fuel is comparable, the bsfc also remains comparable.





Fig.1 Effect of Speed on Power (Power, kW versus Speed, r.p.m., for Petrol and Propane)

Fig.2 Effect of Speed on Brake Specific Fuel Consumption (bsfc, g/kW.hr versus Speed, r.p.m., for Petrol and Propane)
Effect of Speed on Torque
Fig.3 below shows the effect of speed on torque. It is seen that in each case the maximum torque is produced at 2000 r.p.m. This is because the combustion characteristics and in-cylinder pressure development are best at this speed. Petrol produces slightly better torque due to its higher volumetric efficiency as compared to propane.

Effect of Speed on Exhaust Gas Temperature
Fig.4 below shows the effect of speed on exhaust gas temperature. It is seen from the graph that the exhaust gas temperatures in the cases of petrol and propane are comparable. This is also a clear indication that propane produces comparably high temperatures and comparable power, since its heating value is higher than that of petrol but its volumetric efficiency is lower.


CONCLUSIONS

1. Propane can safely be used in petrol engines as an alternative fuel to conventional petrol.
2. Pollutant formation from engines using propane as fuel will be less, due to the smaller number of carbon atoms in propane.
3. Comparable power is produced by propane.
4. The propane engine operation is approximately as economical as with petrol.
5. Since the octane number of propane is higher than that of petrol, there are fewer chances of knocking. This results in better combustion and also gives way to a slight increase in the compression ratio of the engine, which can boost power further.

REFERENCES
[1] AVL LIST GmbH, Examples, AVL BOOST Version 2009.1.
[2] Richard L. Bechtold, Alternative Fuels Handbook, SAE Publication.
Fig.3 Effect of Speed on Torque (Torque, N.m versus Speed, r.p.m., for Petrol and Propane)

Fig.4 Effect of Speed on Exhaust Gas Temperature (E.G.T., K versus Speed, r.p.m., for Petrol and Propane)

APPENDIX-A

NOMENCLATURE
a = speed of sound
A = pipe cross-section
Aeff = effective flow area
Ai = surface area (cylinder head, piston, liner)
AFCP = air fuel ratio of combustion products
Ageo = geometrical flow area
c = mass fraction of carbon in the fuel
cV = specific heat at constant volume
cp = specific heat at constant pressure
C1 = 2.28 + 0.308·cu/cm
C2 = 0.00324 for DI engines
C2 = 0.00622 for IDI engines
cm = mean piston speed
cu = circumferential velocity
D = cylinder bore
D = pipe diameter
dmi = mass element flowing into the cylinder
dme = mass element flowing out of the cylinder
dvi = inner valve seat diameter (reference diameter)
dmBB/dα = blow-by mass flow
e = piston pin offset
E = energy content of the gas (= ρ·(cV·T + u²/2))
f = fraction of evaporation heat from the cylinder charge
FR = wall friction force
h = mass fraction of hydrogen in the fuel
hBB = enthalpy of blow-by
hi = enthalpy of in-flowing mass
he = enthalpy of the mass leaving the cylinder
Hu = lower heating value
k = ratio of specific heats
l = con-rod length
m = shape factor
ṁ = mass flow rate
mc = mass in the cylinder
mev = evaporating fuel
mpl = mass in the plenum
n = mass fraction of nitrogen in the fuel
o = mass fraction of oxygen in the fuel
p = static pressure
p01 = upstream stagnation pressure
pc,o = cylinder pressure of the motored engine [bar]
pc,1 = pressure in the cylinder at IVC [bar]
ppl = pressure in the plenum
pc = cylinder pressure
p2 = downstream static pressure
qev = evaporation heat of the fuel
qw = wall heat flow
Q = total fuel heat input
QF = fuel energy
Qwi = wall heat flow (cylinder head, piston, liner)
r = crank radius
R0 = gas constant
s = piston distance from TDC
t = time
T = temperature
Tc,1 = temperature in the cylinder at intake valve closing (IVC)
Tc = gas temperature in the cylinder
Twi = wall temperature (cylinder head, piston, liner)
TL = liner temperature
TL,TDC = liner temperature at TDC position
TL,BDC = liner temperature at BDC position
Tw = pipe wall temperature
T01 = upstream stagnation temperature
u = specific internal energy
u = flow velocity
V = cylinder volume
V = cell volume (A·dx)
VD = displacement per cylinder
w = mass fraction of water in the fuel
x = relative stroke (actual piston position related to full stroke)
x = coordinate along the pipe axis
α = crank angle
α0 = start of combustion
Δαc = combustion duration
αw = heat transfer coefficient
ρ = density
μσ = flow coefficient of the port
ψ = crank angle between vertical crank position and piston TDC position
f = wall friction coefficient
Δt = time step
Δx = cell length







APPENDIX-B
Petrol Engine Specifications
Bore 84 mm
Stroke 90 mm
Compression Ratio 9
Number of Cylinders 1

APPENDIX-C
Table 1: Physico-Chemical Properties of Petrol and Propane [2]

Fuel Property                           | Propane | Petrol
Formula                                 | C3H8    | C4 to C12
Molecular weight                        | 44.1    | 100-105
Lower heating value, MJ/kg              | 46.44   | 42.5
Stoichiometric air-fuel ratio, weight   | 15.67   | 14.7
Octane number                           | 110     | 80-98



EXAMINING THE PERFORMANCE OF MPFI AND CARBURETOR SI
ENGINE WITH VARYING PERCENTAGE OF UNLEADED
GASOLINE-ETHANOL BLENDS
Amit Kumar Thakur¹

¹Asst. Professor, Department of Mechanical Engg, Dev Bhoomi Institute of Technology, Dehradun (UA)
amit_t77@yahoo.com
INTRODUCTION

At present the main energy source is fossil fuel, and all kinds of vehicle engines work with fuels produced from
fossil sources. Fossil fuel reserves in the world are limited and are expected to be exhausted in the next 40 years.
The rapid depletion of the world's crude oil reserves and environmental considerations have focused attention on
clean, renewable, sustainable and non-petroleum fuels. The energy crisis and the environmental pollution caused
by fossil fuels necessitate the development of alternative fuels for internal combustion engines. A lot of
investigation has already been done using ethanol as a fuel in SI engines, and in a number of countries the use of
ethanol blends with gasoline is mandatory. A considerable amount of published work is available on Carburetor
type SI engines using gasoline-ethanol fuel blends. In the present investigation, performance analyses were
carried out on an MPFI SI engine as well as an identical Carburetor SI engine by conducting a number of
experiments using gasoline and gasoline-ethanol blends as fuel.
Ethanol, which is one of the renewable energy sources and is obtained from biomass, has been tested intensively
in internal combustion engines. Ethanol was among the first fuels, and the first among the alcohols, used to power
vehicles in the 1890s. The main reason for advocating ethanol is that it can be produced by fermenting and
distilling starch crops that have been converted into simple sugars.
EXPERIMENTAL SETUP
Ethanol is one of the best available alternatives to fossil fuels in IC engines. Its properties suggest that it is a
good IC engine fuel because of its better combustion characteristics and less polluting nature. For the present
investigation, two identical engines were selected, one a Carburetor SI engine and the other an MPFI engine.
Both engine test rigs have the same specifications: both are three-cylinder, four-stroke MARUTI 800 engines,
and the compression ratio of the two engines is the same.
CARBURETOR SI ENGINE
For several decades, carburetors were used on most SI engines as the means of adding fuel to the intake air. The
basic principle on which the carburetor works is extremely simple and well known.

Photographic view of experimental setup (Carburetor SI engine)

The investigation was aimed at the use of ethanol in stationary engines, so it was decided to use the most widely
used Carburetor type SI stationary engine. Its specifications are as follows:
General details: three cylinder, four stroke, spark ignition, water cooled, Carburetor type SI MARUTI 800
engine

Bore = 68.5 mm, stroke = 72 mm
Piston displacement = 796 cc
Compression ratio = 8.7
Maximum output = 37 HP @ 5000 RPM

Photographic view of Carburetor SI engine

MPFI SI Engine
Multipoint port fuel injection is most widely used in modern automobile SI engines.
In a multipoint port fuel injection engine, one or more injectors are mounted by the intake valve(s)
of each cylinder. They spray fuel into the region directly behind the intake valve, sometimes directly onto the
back of the valve face. The injectors are usually timed to spray the fuel into the quasi-stationary air just before
the intake valve opens. Because injection starts before the intake valve is open, there is a momentary pause in
the air flow, and the air velocity does not promote the needed mixing and evaporation enhancement. When the
valve then opens, the fuel vapor and liquid droplets are carried into the cylinder by the onrush of air, often with
the injector continuing to spray. Any backflow of hot residual exhaust gas that occurs when the intake valve
opens also enhances the evaporation of fuel droplets.

Photographic view of experimental setup (MPFI SI engine)
The investigation was likewise aimed at the use of ethanol in stationary engines. The specifications of the MPFI
SI engine are the same as those already given.

Performance of carburetor SI engine using different blends
The effect of ethanol addition to unleaded gasoline on Carburetor type SI engine performance and exhaust
emissions at full throttle opening and constant speed was investigated. The main objective of this investigation
was to analyze the performance of the engine fuelled with ethanol blended gasoline and to find out the best
blend for all loads.
The engine was started and allowed to warm up for a period of 10-20 min. Engine tests were performed
with blends of volumetric ratios from 0% to 50% ethanol in increments of 5%. The engine was operated with
each blend at constant speed with varying load. The mixture was prepared just before the experiments to prevent
the reaction of ethanol with water vapor. The desired speed was maintained by the throttle valve, and the
required engine load was obtained through the dynamometer control. Fuel consumption was measured using a
calibrated burette and a stopwatch. The concentrations of the exhaust emissions were measured using an
exhaust gas analyzer.

For each experiment, two runs were performed, one loading and the other unloading, and the average of the
loading and unloading values was taken for calculating the performance parameters. The variables that were
continuously measured include the force on the dynamometer, the time required to consume 10 cc of fuel blend,
the air-fuel ratio, and the CO, CO2, NOx and HC emissions. From these, parameters such as fuel consumption
rate, brake specific fuel consumption, volumetric efficiency, brake thermal efficiency and mechanical efficiency
were estimated.
Calculations
A load test was done with gasoline and its blends with ethanol as fuel on the Carburetor type SI engine. This
was done to obtain a set of performance parameters. The performance parameters can be calculated as follows:
Brake power
BP (kW) = W * N * 0.736 / C
where, W = spring balance reading in kg
N = speed of the engine in rpm
C = constant = 1000

Mass of fuel consumption
mfc (kg/hr) = X * 0.72 * 3600 / (1000 * T)
where, X = burette reading in cc
0.72 = density of gasoline in gram/cc
T = time taken in seconds

Specific fuel consumption
sfc (kg/kWhr) = mfc / BP

Actual volume of air sucked into the cylinder
V_a (m^3/hr) = C_d * A * sqrt(2 * g * H_a) * 3600
where, H_a = (h/1000) * (ρ_w/ρ_a) = manometer head in metres of air
A = area of orifice = π d^2/4 in m^2
d = orifice diameter = 20 mm
h = manometer reading in mm
ρ_w = density of water = 1000 kg/m^3
ρ_a = density of air = 1.193 kg/m^3
C_d = coefficient of discharge = 0.62
g = acceleration due to gravity = 9.81 m/s^2

Swept volume
V_s (m^3/hr) = (π d^2/4) * L * (N/2) * 60 * 3
where, d = diameter of the bore = 0.0685 m
L = length of the stroke = 0.072 m
N = speed of the engine in rpm
(the factor N/2 accounts for the four-stroke cycle and the factor 3 for the three cylinders)

Volumetric efficiency
η_v (%) = (V_a / V_s) * 100

Brake thermal efficiency
η_bth (%) = (BP * 3600 / (mfc * C_v)) * 100
where, C_v = calorific value of gasoline = 44000 kJ/kg

Mechanical efficiency
η_mech (%) = (BP / IP) * 100
where, IP = indicated power in kW

Air-Fuel ratio
A/F = ṁ_a / ṁ_f
where, ṁ_a = mass flow rate of air
ṁ_f = mass flow rate of fuel
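For illustration, the above chain of calculations can be scripted. The following is a minimal sketch in Python (not part of the original study); the constants are those given above, while the readings passed in at the bottom are purely hypothetical example values.

```python
import math

# Rig constants quoted in the text
C_CONST      = 1000          # dynamometer constant
RHO_FUEL     = 0.72          # density of gasoline, gram/cc
D_ORIFICE    = 0.020         # orifice diameter, m
CD           = 0.62          # coefficient of discharge
G            = 9.81          # acceleration due to gravity, m/s^2
RHO_W, RHO_A = 1000.0, 1.193 # densities of water and air, kg/m^3
BORE, STROKE = 0.0685, 0.072 # m
N_CYL        = 3             # number of cylinders
CV_FUEL      = 44000.0       # calorific value of gasoline, kJ/kg

def performance(W_kg, N_rpm, X_cc, t_s, h_mm, IP_kW):
    """Performance parameters as defined in the text.
    W_kg: spring balance reading; X_cc: fuel consumed in t_s seconds;
    h_mm: manometer reading; IP_kW: indicated power (assumed measured)."""
    BP  = W_kg * N_rpm * 0.736 / C_CONST              # brake power, kW
    mfc = X_cc * RHO_FUEL * 3600 / (1000 * t_s)       # fuel consumption, kg/hr
    H_a = (h_mm / 1000) * (RHO_W / RHO_A)             # head in metres of air
    A   = math.pi * D_ORIFICE ** 2 / 4                # orifice area, m^2
    V_a = CD * A * math.sqrt(2 * G * H_a) * 3600      # air sucked in, m^3/hr
    V_s = (math.pi * BORE**2 / 4) * STROKE * (N_rpm / 2) * 60 * N_CYL
    return {"BP_kW": BP,
            "sfc_kg_per_kWhr": mfc / BP,
            "eta_vol_pct": 100 * V_a / V_s,
            "eta_bth_pct": 100 * BP * 3600 / (mfc * CV_FUEL),
            "eta_mech_pct": 100 * BP / IP_kW}

# Hypothetical readings at 2500 rpm, for illustration only
print(performance(W_kg=5, N_rpm=2500, X_cc=10, t_s=10, h_mm=200, IP_kW=12))
```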

RESULTS AND DISCUSSION

The following discussion is based on the results of the load test conducted on the three cylinder Carburetor
SI engine experimental setup. The experiment was conducted at an engine speed of 2500 rpm.
Brake specific fuel consumption
Fig. 1 shows the effect of using ethanol-gasoline blends on brake specific fuel consumption. Owing to the fact
that the heating value of ethanol is lower than that of gasoline, the SFC increases as the ethanol content in the
blend increases. At no-load the SFC curve tends to infinity, as the engine produces no useful work while still
consuming fuel. As the load is increased the curve drops and reaches a minimum. Beyond the rated load,
incomplete combustion of the charge and dissociation losses result in a drop in brake power.

Fig. 1. The effect of ethanol addition on the specific fuel consumption.
Brake thermal efficiency

The effect of using ethanol-gasoline blends on brake thermal efficiency was also examined. Brake thermal
efficiency increases as the E% increases, and the maximum brake thermal efficiency is recorded with 20%
ethanol in the fuel blend for all loads; the brake thermal efficiency curve is essentially the inverse of the specific
fuel consumption curve. Beyond this blend the brake thermal efficiency decreases with increasing ethanol
content, because the specific fuel consumption increases.

Volumetric efficiency
Fig. 2 shows an increase in the volumetric efficiency as the percentage of ethanol in the fuel blend increases.
This is due to the decrease of the charge temperature at the end of the induction process: as the E% in the fuel
blend increases, the volatility and the latent heat of the fuel blend increase. As the quantity of ethanol in the
fuel blend increases to 20%, the effect of the increasing volatility and latent heat becomes more significant,
resulting in a larger drop in charge temperature. It is clear that as the E% in the fuel blend increases from 0% to
20%, the volumetric efficiency increases because the charge temperature decreases.



Fig. 2. The effect of ethanol addition on the volumetric efficiency.
Mechanical efficiency:

Fig. 3 The effect of ethanol addition on mechanical efficiency.
Fig. 3 shows an increase in the mechanical efficiency as the percentage of ethanol in the fuel blend
increases. The mechanical efficiency increases up to the 20% ethanol blend with gasoline. Ethanol has a
better lubricating property compared to gasoline, and because of this it reduces the friction losses: the
indicated power increases while the frictional power remains almost constant. Maximum mechanical
efficiency is obtained at the 20% ethanol blend with gasoline.
Air-Fuel ratio
Fig. 4 shows a decrease in the air-fuel ratio as the percentage of ethanol in the fuel blend increases. The
air-fuel ratio decreases with increasing ethanol percentage in the blend, and the 20% ethanol blend with
gasoline gave the minimum air-fuel ratio.

Fig. 4 The effect of ethanol addition on the air-fuel ratio.



Exhaust emissions:
Figs. 5-8 show the effect of the E% in the fuel blend on the CO, NOx, HC and CO2 emissions. From the
figures, it can be seen that as the E% increases to 20%, the CO, NOx and HC concentrations decrease, and
beyond that blend they increase again for all loads.
CO is a toxic gas that is the result of incomplete combustion. When ethanol, which contains oxygen, is
mixed with gasoline, the combustion in the engine becomes better and therefore the CO emission is reduced. As
seen from Fig. 5, the 20% ethanol blend gave the minimum carbon monoxide emissions compared to the other
ethanol blends with gasoline. As the ethanol content in the blend increases, the engine-out NOx emissions
decrease. As seen from Fig. 7, HC also decreases to some extent as the ethanol added to gasoline increases.

Fig. 5 The effect of ethanol addition on CO emission.

Fig. 6 The effect of ethanol addition on NOx emission.

Fig. 7 The effect of ethanol addition on HC emission.

Fig. 8 The effect of ethanol addition on CO2 emission.


An increase in CO2 emissions is observed when ethanol is used with gasoline. Carbon dioxide is non-toxic
but contributes to the greenhouse effect. Since ethanol contains fewer carbon atoms than gasoline, it inherently
gives off less CO2 per unit of fuel burned; the observed increase in CO2 reflects the more complete combustion
of the blend.
It is concluded that the ethanol blends with gasoline gave better results. The 20% ethanol blend gave the
best engine performance compared to the other blends, and the exhaust emissions also improved with the use of
ethanol blends: the CO, NOx and HC emissions decreased, while CO2 emissions increased, with increasing
ethanol percentage in gasoline.
Performance of MPFI SI engine using different blends
Most modern automobile SI engines have multipoint port fuel injectors. Fuel injectors are nozzles that inject a
spray of fuel into the intake air. In this type of system, one or more injectors are mounted by the intake valve(s)
of each cylinder; as described earlier, they spray fuel into the region directly behind the intake valve, timed to
inject just before the intake valve opens.
The effect of ethanol addition to unleaded gasoline on MPFI type SI engine performance at constant
speed was investigated. The main objective was again to analyze the performance of the engine fuelled with
ethanol blended gasoline and to find out the best blend for all loads.
The engine was started and allowed to warm up for a period of 10-20 min. Before starting the engine,
all precautions were taken and all systems were checked: the cooling system, lubrication system, injection
system, etc. Engine tests were performed with blends of volumetric ratios from 0% to 50% ethanol in
increments of 5%. The engine was operated with each blend at constant speed with varying load. The mixture
was prepared just before the experiments to prevent the reaction of ethanol with water vapor. The desired speed
was maintained by the Electronic Control Unit (ECU), and the required engine load was obtained through the
hydraulic dynamometer control.
Before running the engine on a new fuel blend, it was allowed to run for sufficient time to consume the
remaining fuel from the previous experiment. For each experiment, two runs were performed, one loading and
the other unloading. The variables that were continuously measured include the force on the hydraulic
dynamometer (kg), the fuel consumed (kg) in 2 min, and the air-fuel ratio.
Parameters such as fuel consumption rate, brake specific fuel consumption, volumetric efficiency,
brake thermal efficiency and mechanical efficiency were estimated using the standard equations.
Calculations
A load test was done with gasoline and its blends with ethanol as fuel on the MPFI SI engine. This was
done to obtain a set of performance parameters, which can be calculated as before.
Results and Discussion
The following discussion is based on the results of the load test conducted on the three cylinder MPFI
SI engine experimental setup. The experiment was conducted at an engine speed of 2500 rpm.
Brake specific fuel consumption
Fig. 9 shows the effect of using ethanol-gasoline blends on brake specific fuel consumption. Owing to
the fact that the heating value of ethanol is lower than that of gasoline, the SFC increases as the ethanol content
in the blend increases. At no-load the SFC curve tends to infinity, as the engine produces no useful work while
still consuming fuel. As the load is increased the curve drops and reaches a minimum.



Fig. 9 The effect of ethanol addition on the specific fuel consumption



Fig. 10 The effect of ethanol addition on the brake thermal efficiency.
Brake thermal efficiency

Fig. 10 presents the effect of using ethanol-gasoline blends on brake thermal efficiency. As shown in the figure,
brake thermal efficiency increases as the E% increases, and the maximum brake thermal efficiency is recorded
with 20% ethanol in the fuel blend for all loads. Beyond this blend the brake thermal efficiency decreases with
increasing ethanol content, because the specific fuel consumption increases.

Volumetric efficiency
Fig. 11 shows an increase in the volumetric efficiency as the percentage of ethanol in the fuel blend
increases. The volumetric efficiency increases with increasing ethanol content in the fuel because ethanol
contains an oxygen atom in its original form. Volumetric efficiency increases up to 20% ethanol in the blend,
after which it decreases.

Fig. 11 The effect of ethanol addition on the volumetric efficiency.



Fig. 12 The effect of ethanol addition on the mechanical efficiency.


Mechanical efficiency
Fig. 12 shows an increase in the mechanical efficiency as the percentage of ethanol in the fuel blend increases.
The mechanical efficiency increases up to the 20% ethanol blend with gasoline. Since the speed of the engine in
this investigation is constant, the frictional power stays nearly fixed and the mechanical efficiency increases.
Maximum mechanical efficiency is obtained at the 20% ethanol blend with gasoline.
Comparison of the performance of MPFI SI Engine and carburetor SI engine

In the present investigation, the performance of SI engines was analyzed. Performance tests were
carried out on the MPFI SI engine as well as an identical Carburetor SI engine, and the performance was
investigated for all blends at different loads. From the investigation, it was concluded that the 20% ethanol
blend with gasoline gave the best result for both engines.
Specific Fuel Consumption
The comparison of specific fuel consumption for both engines is shown in Fig. 13. The specific fuel
consumption for the MPFI SI engine is lower than that of the Carburetor SI engine, because fuel is injected
close to the intake valve of each cylinder and fuel loss in the intake manifold is largely avoided.


Fig. 13 The effect of E0 and E20 on specific fuel consumption for SI engines


Fig. 14 The effect of E0 and E20 on brake thermal efficiency for SI engines
Brake Thermal Efficiency.
Fig. 14 shows the comparison of the MPFI SI engine and the Carburetor SI engine for pure gasoline and the
20% ethanol blend with gasoline. The comparison shows that the brake thermal efficiency is higher for the
MPFI SI engine than for the Carburetor SI engine, because the Electronic Control Unit of the MPFI engine
injects only the minimum necessary quantity of fuel to the cylinder as the load demands, leading to higher
thermal efficiency.
Volumetric Efficiency
Fig. 15 shows the comparison of volumetric efficiency for both engines for gasoline and the E20 blend. The
volumetric efficiency is higher in both cases for the MPFI SI engine, because there is no venturi throat to
create a pressure drop as with a carburetor. Since little or no air-fuel mixing occurs in most of the intake
manifold, high air velocity is not as important; the manifold of the MPFI engine carries only air, with the fuel
injected later, close to the cylinder. Hence the volumetric efficiency increases.

Mechanical Efficiency
As shown in Fig. 16, mechanical efficiency is lower for the MPFI SI engine than for the Carburetor SI engine
for both blends, even though the overall (brake thermal) efficiency is higher for the MPFI engine, as already
shown: for the same losses, the output from the MPFI engine is higher.
The experimental performance analysis on the MPFI SI engine and the identical Carburetor SI engine shows
that the MPFI SI engine gave a better performance than the identical Carburetor SI engine for all loads and for
all ethanol blends with gasoline. Hence the use of the 20% ethanol blend with gasoline in the MPFI engine is
recommended.


Fig. 15 The effect of E0 and E20 on volumetric efficiency for SI engines


Fig. 16 The effect of E0 and E20 on mechanical efficiency for SI engines
CONCLUSIONS
Experimental performance analysis on the MPFI SI engine and on an identical Carburetor SI engine with
different ethanol blends with gasoline gives the following conclusions:
The performance of the MPFI engine is always better than that of the Carburetor SI engine for all ethanol
blends with gasoline.
Ethanol blended with gasoline always improves the performance of the MPFI SI engine as well as the
Carburetor SI engine and reduces the exhaust emissions.
The results show that the performance parameters for the MPFI SI engine are better, and it is recommended
for automobile vehicles.
Ethanol is advisable as a fuel for SI engines, since combustion is more complete and pollution is less.
The experimental results show that the 20% ethanol blend with gasoline gives the best performance for both
the MPFI and the Carburetor SI engine.
FUTURE SCOPE
Experimentation can also be performed on CI engines, so that the performance of SI and CI engines using
ethanol-blended fuel can be compared. Other derivatives of alcohol can be blended with gasoline to obtain
optimum results, and the ethanol fraction can be varied further to check for better performance.








THERMOACOUSTIC REFRIGERATION

SUDEEP VATS

Student, Dept. of Mechanical Engineering, Lingaya's Institute of Management and Technology, Faridabad
sudeepvats90@gmail.com


ABSTRACT

From creating comfortable home environments to manufacturing ever faster and more efficient electronic
devices, air conditioning and refrigeration remain exorbitant, yet essential, services for both homes and
industries. However, in an age of impending energy and environmental crisis, current cooling technologies
continue to generate greenhouse gases with high energy costs.
Thermoacoustic refrigeration is an ingenious alternative for cooling that is both clean and inexpensive.
Through the explanation of a functional model, I will demonstrate the effectiveness of thermoacoustics for
modern cooling. Refrigeration relies on two major thermodynamic principles. First, a fluid's temperature rises
when compressed and falls when expanded. Second, when two substances are placed in direct contact, heat will
flow from the hotter substance to the cooler one.
While conventional refrigerators use pumps to transfer heat on a macroscopic scale, thermoacoustic
refrigerators rely on sound to generate waves of pressure that alternately compress and relax the gas particles
within the tube. Although the model did not achieve the original goal of refrigeration, it suggests that
thermoacoustic refrigerators could one day be a viable replacement for conventional refrigeration.

INTRODUCTION

Thermoacoustics is based on the principle that sound waves are pressure waves. These sound waves
propagate through the air via molecular collisions. The molecular collisions cause a disturbance in the
air, which in turn creates constructive and destructive interference. The constructive interference makes the
molecules compress, and the destructive interference makes the molecules expand. This principle is the basis
behind the thermoacoustic refrigerator. One method to control these pressure disturbances is with standing
waves. Standing waves are natural phenomena exhibited by any wave, such as light, sound, or water waves. In a
closed tube, columns of air demonstrate these patterns as sound waves reflect back on themselves after colliding
with the end of the tube. When the incident and reflected waves overlap, they interfere constructively, producing
a single waveform. This wave appears to cause the medium to vibrate in isolated sections as the traveling waves
are masked by the interference [1]. Therefore, these standing waves seem to vibrate in constant position and
orientation around stationary nodes. These nodes are located where the two component sound waves interfere to
create areas of zero net displacement. The areas of maximum displacement are located halfway between two
nodes and are called antinodes. The maximum compression of the air also occurs at the antinodes. Due to these
node and antinode properties, standing waves are useful because only a small input of power is needed to create
a large-amplitude wave. This large-amplitude wave then has enough energy to cause visible thermoacoustic
effects. All sound waves oscillate a specific number of times per second, called the wave's frequency, which is
measured in Hertz. For our thermoacoustic refrigerator we had to calculate the optimal resonant frequency in
order to get the maximum heat transfer rate. The equation for the frequency of a wave traveling through a
closed tube is given by:

f = v/(4l)
where f is the frequency, v is the velocity of the wave, and l is the length of the tube.
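As a quick numerical illustration of this relation (a sketch, not from the original paper), the optimal driving frequency follows directly from the tube length; the tube length and sound speed below are assumed values.

```python
def quarter_wave_frequency(tube_length_m, sound_speed_m_s=343.0):
    """Resonant frequency f = v / (4*l) of a tube closed at one end."""
    return sound_speed_m_s / (4.0 * tube_length_m)

# e.g. an assumed 0.25 m resonator filled with air at room temperature
print(f"{quarter_wave_frequency(0.25):.0f} Hz")   # -> 343 Hz
```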


Figure 1: The relationship between the phase of the wave, the pressure, and the actual arrangement of the
molecules. The black line shows the phase of the sound wave, the red line shows the pressure, and the dots
below represent the actual molecules. From Reference [2].


Thermodynamics, Heat Cycles, and Heat Pumps
The second fundamental science behind thermoacoustics is thermodynamics, the study of heat
transfer. The Ideal Gas Law states that the pressure on a gas is directly proportional to its absolute
temperature: as the pressure on a gas increases, the temperature increases. On a microscopic scale,
the gas particles in a system will collide more frequently if the temperature increases or if the
volume is reduced. The basic thermodynamic cycles rely on this relationship between temperature and
pressure. In any heat cycle, gases will expand and contract, circulating heat throughout the system.
These movements of kinetic energy can be used to do work. Depending on how the heat oscillations are
controlled, different heat cycles become more efficient, involving less loss of heat from the system.
Thermoacoustic refrigerators use variations of these cycles to pump heat.

The Carnot Cycle
The most efficient cycle of thermodynamics, the Carnot cycle, takes advantage of this principle of gas
expansion. The Carnot cycle uses gas in a closed chamber to extract work from the system; in engines,
pistons are used to output work. The cycle begins with the piston in its rest position. Heat from an
outside source is transferred to the gas in an isothermal expansion, during which the temperature does not
change. The gas then continues to expand adiabatically, with no heat transferred into or out of the cylinder,
pushing the piston to its extended position. The heated gas next rejects heat to a low-temperature container in
an isothermal compression, the surroundings doing work on the system. Finally, the gas is compressed
adiabatically, allowing the piston to fall back to its rest position. However, because it is easier to compress the
cooler gas than to add heat to the warm gas, net work is done on the surroundings. To determine the efficiency
of the cycle, the total useful work done is compared to the total heat transferred. In Figure 3, the total heat
transferred equals the red area plus the white area, and the work extracted from the system is represented by the
white area. Even the Carnot cycle, the ideal thermodynamic process where each step is reversible and involves
no change in entropy [2], transfers more heat than it does work. However, the Carnot cycle has the best work
output for the given temperature difference and entropy difference, so it is defined to be 100% efficient.




Figure 2: P-V diagram of the Carnot cycle

Figure 3: T-S diagram showing the four stages in the Carnot cycle

The Stirling Cycle
The Stirling cycle is a variation of the Carnot cycle, but unlike the Carnot cycle, an engine can
actually be constructed that effectively utilizes the Stirling method of heat transfer. In a Stirling engine,
an external heat source (often external combustion) transfers heat into the gas in the chamber. As in the
Carnot cycle, the gas expands, pushing the piston to its extended position. The chamber into which the gas
expands, however, has a heat sink, usually consisting of metal fins, through which the heat in the expanded gas
can dissipate into a cooler chamber. The gas then compresses, and the piston returns to its rest position. A
Stirling engine is useful because it can be powered by almost any external heat source, such as solar power,
nuclear power, or conventional combustion. Both the Stirling cycle and the Carnot cycle involve the following
basic thermodynamic cycle: heat enters from a hot container, work comes out of the engine (i.e. moving a
piston), and, as a result, the heat is dissipated into a cooler container. A heat pump, or refrigerator, operates on
the same basic cycle as a heat engine, only in reverse. A heat pump requires an input of work to transfer heat
from a cooler container to a hotter one.
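To make the reversed-cycle bookkeeping concrete, the short sketch below (an illustration, not from the original text) evaluates the Carnot limits for a heat engine and for the corresponding heat pump or refrigerator; the reservoir temperatures are assumed values.

```python
def carnot_engine_efficiency(T_hot_K, T_cold_K):
    """Maximum fraction of the input heat convertible into work."""
    return 1.0 - T_cold_K / T_hot_K

def carnot_refrigerator_cop(T_hot_K, T_cold_K):
    """Maximum heat lifted from the cold reservoir per unit work input."""
    return T_cold_K / (T_hot_K - T_cold_K)

# e.g. pumping heat from an assumed 275 K cold space into a 300 K room
print(carnot_engine_efficiency(300.0, 275.0))   # ~0.083
print(carnot_refrigerator_cop(300.0, 275.0))    # ~11.0
```

The asymmetry of the two numbers is the point of the reversed cycle: a small temperature lift costs little work per unit of heat moved.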

Thermoacoustics
Thermoacoustics combines the branches of acoustics and thermodynamics to move heat by using
sound. While acoustics is primarily concerned with the macroscopic effects of sound transfer, like coupled
pressure and motion oscillations, thermoacoustics focuses on the microscopic temperature oscillations that
accompany these pressure changes. Thermoacoustics takes advantage of these pressure oscillations to move
heat on a macroscopic level. This results in a large temperature difference between the hot and cold sides of the
device and causes refrigeration. The most important piece of a thermoacoustic device is the stack. The stack
consists of a large number of closely spaced surfaces that are aligned parallel to the resonator tube. The
purpose of the stack is to provide a medium for heat transfer as the sound wave oscillates through the resonator
tube. In typical standing wave devices, the temperature differences occur over too small an area to be
noticeable. In a usual resonator tube, heat transfer occurs between the walls of the cylinder and the gas. However,
since the vast majority of the molecules are far from the walls of the chamber, the gas particles cannot exchange
heat with the wall and just oscillate in place, causing no net temperature difference. In a typical column, 99% of
the air molecules are not near enough to the wall for the temperature effects to be noticeable. The purpose of the
stack is to provide a medium where the walls are close enough that each time a packet of gas moves, the
temperature differential is transferred to the wall of the stack. Most stacks consist of honeycombed plastic
spacers that do not conduct heat throughout the stack but rather absorb heat locally. With this property, the
stack can temporarily absorb the heat transferred by the sound waves. The spacing of these designs is crucial: if
the holes are too narrow, the stack will be difficult to fabricate, and the viscous properties of the air will make it
difficult to transmit sound through the stack. If the walls are too far apart, then less air will be able to transfer
heat to the walls of the stack, resulting in lower efficiency.

Thermoacoustic Cycle
The cycle by which heat transfer occurs is similar to the Stirling cycle. Figure 4 traces the basic thermoacoustic
cycle for a packet of gas, a collection of gas molecules that act and move together. Starting from point 1, the
packet of gas is compressed and moves to the left. As the packet is compressed, the sound wave does work on
the packet of gas, providing the power for the refrigerator. When the gas packet is at maximum compression, it
ejects heat back into the stack, since the temperature of the gas is now higher than the temperature of the
stack. This phase is the refrigeration part of the cycle, moving the heat farther from the bottom of the tube. In
the second phase of the cycle, the gas is returned to its initial state: as the gas packet moves back towards the
right, the sound wave expands the gas. Although some work is expended to return the gas to its initial state, the
heat released at the top of the stack is greater than this work. The process results in a net transfer of heat to the
left side of the stack. Finally, in step 4, the packets of gas reabsorb heat from the cold reservoir to repeat the
heat transfer process.


FIGURE 4: Thermoacoustic refrigerator cycle. The left end is towards the closed end of the resonator tube.

Penetration Depth
An essential variable in building a thermoacoustic refrigerator is the spacing between the walls of the stack.
If the walls of the stack are too close, the sound cannot pass through the stack efficiently, since the viscous
properties of air prevent the air from vibrating. If the walls are too far apart, the process described above cannot
occur, since gas packets are too far away from the wall to effectively transfer heat. According to G. W. Swift,
the ideal spacing in a stack is 4 thermal penetration depths. The thermal penetration depth is the distance heat
can diffuse through a gas over a certain amount of time.

For example, if a block of aluminum is at a constant low temperature and suddenly one side is exposed to a
high temperature, the distance that the heat penetrates the metal in 1 second is the thermal penetration. As time
passes, the heat penetrates farther into the material, increasing the temperature of the interior sections.
However, since sound waves are constantly oscillating between the roles of heat source and heat sink, the
thermal penetration depth is roughly constant. The thermal penetration depth for an oscillating heat source is a
function of the frequency of the standing wave, f, the thermal conductivity, k, and density, ρ, of the gas, as well
as the isobaric specific heat per unit mass of the gas, c_p, according to the following equation:

δ_k = sqrt( k / (π f ρ c_p) )
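A minimal numeric sketch of this relation (assumed gas properties for air at ambient conditions), together with the 4-penetration-depth stack-spacing rule quoted above:

```python
import math

def thermal_penetration_depth(f_hz, k=0.026, rho=1.2, c_p=1005.0):
    """delta_k = sqrt(k / (pi * f * rho * c_p)). Defaults are rough
    properties of air: k in W/(m K), rho in kg/m^3, c_p in J/(kg K)."""
    return math.sqrt(k / (math.pi * f_hz * rho * c_p))

delta = thermal_penetration_depth(343.0)   # at an assumed 343 Hz drive
print(f"delta_k ~ {delta * 1e6:.0f} um; stack spacing ~ {4 * delta * 1e3:.2f} mm")
```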

Critical Temperature
The critical temperature gradient is the temperature gradient at which no heat will be transferred through the
stack. If the temperature difference induced by the sound wave is greater than this critical value, the stack will
function as a refrigerator, transferring heat from the cold end of the tube to the warm end. If the temperature
difference is less than the critical value, the stack will function as an acoustic engine, moving heat from the
warm region to the colder region and creating sound waves. The critical longitudinal temperature gradient is
given by [5]

∇T_crit = p / (ρ c_p ξ)

where p is the acoustic pressure amplitude and ξ is the acoustic displacement amplitude. The variation in local
wall temperature over the maximum displacement of the gas molecules is ∇T_crit · 2ξ, while the maximum
temperature variation caused by the sound waves is 2p/(ρ c_p). If these two quantities are equal, the critical
temperature gradient is reached and no heat is transferred. This quantity is important in determining the
properties of a thermoacoustic device, since efficiency depends on the temperature differential caused by the
sound waves being larger than the critical value, so that a large cooling effect is created.
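A small numeric check of this criterion (again with assumed amplitudes for air); the stack refrigerates only when the imposed temperature gradient exceeds this critical value:

```python
def critical_temperature_gradient(p_acoustic_pa, xi_m, rho=1.2, c_p=1005.0):
    """grad(T)_crit = p / (rho * c_p * xi), in K per metre along the stack."""
    return p_acoustic_pa / (rho * c_p * xi_m)

# assumed amplitudes: 500 Pa acoustic pressure, 0.1 mm gas displacement
print(f"{critical_temperature_gradient(500.0, 1e-4):.0f} K/m")
```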

APPLICATIONS
Thermal management has always been a concern for computer systems and other electronics. Computational
speeds will always be limited by the amount of noise produced by computer chips. Since most noise is
generated by waste heat, computer components and other semiconductor devices operate faster and more
efficiently at lower temperatures [8]. If thermoacoustic cooling devices could be scaled for computer
applications, the electronics industry would realize longer lifetimes for microchips and increased speed and
capacity for telecommunications, as well as reduced energy costs.

CONCLUSIONS
The device worked as a proof of concept, showing that a thermoacoustic device is possible and is able to cool
air, albeit for only a short period of time. If we were able to build the device with better materials, such as a
more insulating tube, we might have been able to get better results, and in order to create a working refrigerator
we would probably have to attach a heat sink to the top of the device. Research conducted by Professor Steven
Garrett at Pennsylvania State University has yielded reliable air conditioning devices used in submarines and
space shuttles [10]. However, future applications of thermoacoustic air conditioners would not be restricted to
industrial uses but could offer inexpensive heating and cooling for homes. Additionally, since current air
conditioners use HFCs and other potentially harmful chemicals, thermoacoustic cooling systems that employ
inert gases would have long-term benefits for the environment. One thermoacoustic device could potentially
operate an entire household's air conditioner, water heater, and furnace, eliminating the need for natural gas and
oil. Ben and Jerry's Ice Cream, in collaboration with Professor Garrett's research team, has begun production of
thermoacoustic freezers to keep its ice cream cold. Investing over $600,000 in Garrett's program, Ben and
Jerry's has already placed the freezers in many of its New York stores. The ice cream company's experiment has
successfully demonstrated the viability of thermoacoustic refrigeration.

REFERENCES

[1] Standing Waves. Rod Nave, Georgia State University. Available: http://hyperphysics.phy-astr.gsu.edu/hbase/waves/standw.html. 17 July 2006.
[2] http://hyperphysics.phy-astr.gsu.edu/hbase/thermo/carnot.html
[3] http://www.howstuffworks.com/stirling-engine.htm
[4] http://en.wikipedia.org/wiki/Carnot_cycle
[5] Daniel A. Russell and Pontus Weibull, Tabletop thermoacoustic refrigerator for demonstrations, Am. J. Phys. 70 (12), December 2002.
[6] G. W. Swift, Thermoacoustic engines and refrigerators, Phys. Today 48, 22-28 (1995).
[7] http://www.rolexawards.com/laureates/laureate-36-lurie_garrett.html
[8] Thermal Management of Computer Systems Using Active Cooling of Pulse Tube Refrigerators. H. H. Jung and S. W. K. Yuan. Available: http://www.yutopian.net/Yuan/papers/Intel.PDF. 17 July 2006.
[9] Thermoacoustic Refrigeration for Electronic Devices: Project Outline. Stephen Tse, 2006 Governor's School of Engineering and Technology.
[10] Frequently Asked Questions about Thermoacoustics. Penn State Graduate Program in Acoustics. Available: http://www.acs.psu.edu/users/sinclair/thermal/tafaq.html. 17 July 2006.
[11] Chilling at Ben & Jerry's: Cleaner, Greener. Ken Brown.











COMPARISON OF BASIC IRREVERSIBLE GAS CYCLES
UNDER THE RESTRICTION OF EQUAL HEAT INPUT

N. J. Dembi

Professor, Mechanical Engineering Department, Lingaya's University,
Nachauli, Faridabad - 121 002.

njdembi@rediffmail.com


ABSTRACT
This paper compares the theoretical thermal efficiency and theoretical effectiveness of five typical irreversible
gas cycles, viz. Otto, Diesel, Atkinson, Miller and Joule, under the restriction of equal heat addition. The
irreversibilities associated with the compression and expansion processes are accounted for. It is shown that
conclusions based on ideal behavior [Kamiuto K. Applied Energy 2006; 83: 583-93] can be misleading in
real life situations. Though the Atkinson cycle has the highest ideal thermal efficiency, it is more susceptible to
irreversibilities and therefore less acceptable practically. It is shown that the Atkinson cycle has the highest
reduction in thermal efficiency when going from ideal conditions, with the compression process efficiency
(η_c) and the expansion process efficiency (η_e) each equal to 1, to these being 0.9 each. It is not the Atkinson
cycle but the Miller cycle which turns out to be the best overall and deserves further attention for practical
application on a large scale.

Keywords: Gas cycles; Irreversibility; Thermal efficiency; Effectiveness.
Nomenclature
c_p   isobaric specific heat (J/kg K)
c_v   isochoric specific heat (J/kg K)
q_1   heat added (J/kg)
q_2   heat rejected (J/kg)
Q     dimensionless heat addition (= q_1/(c_v T_1))
r     compression ratio (= v_1/v_2)
r_e   expansion ratio (= v_4/v_3)
s     specific entropy (J/kg K)
T     temperature (K)
v     specific volume (m^3/kg)

Greeks
γ     ratio of specific heats (= c_p/c_v)
ε_t   theoretical effectiveness
η_c   compression process efficiency
η_e   expansion process efficiency
η_t   theoretical thermal efficiency
θ     dimensionless temperature (= T/T_1)
ψ     dimensionless mean effective pressure

Subscripts
1, 2, 3, 4, 5   particular states of a cycle
idc   inner dead center
odc   outer dead center

Superscript
′     ideal states of a cycle



INTRODUCTION
Air standard cycles represent the idealized versions of the actual cycles of heat engines. Their analyses
provide useful information regarding the influence of important parameters on engine performance. The
conclusions reached from the analyses of ideal cycles are also applicable to actual cycles [1]. Air is assumed as
the working substance and the processes are assumed to be internally reversible.
The thermal performance of ideal cycles is expressed on the basis of theoretical thermal efficiency and
theoretical effectiveness [1,2,3]. The theoretical thermal efficiency η_t is defined by the ratio of the net work
output to the heat input, whereas the theoretical effectiveness ε_t is defined by the ratio of the net work output
obtained by a cycle to that achieved by an infinite number of Carnot engines receiving the same heat input as
the cycle examined [3]. The thermal efficiencies of ideal gas cycles (Otto, Diesel and Dual) have been compared
on the basis of the same heat addition / same heat rejection and the same maximum pressure and temperature
[4,5]. With maximum pressure and maximum temperature both fixed, the compression ratio is no longer a
variable.
Recently Kamiuto [3] compared the theoretical efficiency and theoretical effectiveness of ideal gas cycles
on the basis of equal heat addition. No irreversibilities were considered. Since the comparison under the
restriction of constant heat addition is quite interesting from the viewpoint of effective use of energy [3], it is
important to consider the internal irreversibilities of the cycles and then arrive at a conclusion. Further, it is
useful to compare the indicated mean effective pressure, a measure of the effectiveness with which the
displaced volume of the engine is used to produce work [5]. This is a good parameter for comparing engines
for design or output because it is independent of engine size and/or speed [4]. Further, mean effective pressure,
like work ratio, is an indicator of the susceptibility of a cycle to irreversibility: the lower the value, the greater
the susceptibility.
The present study takes into account the irreversibilities associated with the compression and expansion
processes in order to make the cycles approach real life situations. In addition to thermal efficiency and
effectiveness, the indicated mean effective pressure is also considered as a parameter for comparison, as
mentioned above. The cycles investigated include the more common practically used cycles: Otto, Diesel,
Atkinson, Miller, and Joule. Whereas the Otto, Diesel, Atkinson and Joule cycles were considered in [3], the
Miller cycle has been included here, replacing the Carnot and Takemura cycles (no worthwhile purpose was
served by their inclusion). The Miller cycle, named after R. H. Miller (1890-1967), is a modern modification of
the Atkinson cycle and has an expansion ratio greater than the compression ratio. This is accomplished,
however, in a much different way: whereas an engine designed to operate on the Atkinson cycle needed a
complicated mechanical linkage system of some kind, a Miller cycle engine uses unique valve timing to obtain
the same desired results [4]. The effective closing time of the inlet valve is slightly shifted forward or backward
from the bottom dead center to decrease only the effective compression ratio, making the expansion ratio
greater than the compression ratio; the compression ratio can thus be kept to a level that prevents knocking,
while keeping the expansion ratio at a high level to ensure high efficiency [6]. Recently, the Miller cycle has
also been proposed as a means of reducing harmful NOx emissions while maintaining a high engine efficiency
[7].

ANALYSIS
The cycles analyzed are: Otto, Diesel, Atkinson, Miller and Joule. The working substance is assumed to be an
ideal gas with constant specific heats. The inlet state for all the cycles is the same and known as p_1, T_1. The
ideal cycles are designated as 1-2′-3′-4′-1 and the irreversible cycles as 1-2-3-4-1, except for the Miller cycle
where the designations are 1-2′-3′-4′-5′-1 and 1-2-3-4-5-1 respectively. The T-s diagrams of these cycles are
shown in Table 1. The efficiency of the compression process 1-2, η_c, and the efficiency of the expansion
process 3-4, η_e, are defined as

η_c = (T_2′ − T_1)/(T_2 − T_1)
η_e = (T_3 − T_4)/(T_3 − T_4s)

where state 4s is directly below state 3, having the same entropy.
The heat input during 2-3 or 2′-3′, for the non-ideal or ideal cycle respectively, is given by

q_1 = c_v (T_3 − T_2) or c_p (T_3 − T_2)

depending on whether the process is at constant volume or at constant pressure. It may, however, be noted that
q_1 during 2-3 (non-ideal cycle) is equal to that during 2′-3′ (ideal cycle) because of the equal heat addition
restriction.
Similarly, the heat rejected during 4-1 is given by

q_2 = c_v (T_4 − T_1) or c_p (T_4 − T_1)

depending on the process of heat rejection, except for the Miller cycle where it is given by

q_2 = c_v (T_4 − T_5) + c_p (T_5 − T_1)

The theoretical thermal efficiency η_t and the theoretical effectiveness ε_t [3] are respectively defined as

η_t = 1 − q_2/q_1
ε_t = (q_1 − q_2) / [q_1 − T_1 (s_3 − s_2)] = η_t / [1 − (T_1/q_1)(s_3 − s_2)]

The entropy change s_3 − s_2 is given by

s_3 − s_2 = c_p ln(T_3/T_2) for a constant pressure process, and
s_3 − s_2 = c_v ln(T_3/T_2) for a constant volume process.

The indicated mean effective pressure is defined as

m.e.p. = indicated work / displacement = (q_1 − q_2) / (v_odc − v_idc)
= q_1 η_t / [v_1 (v_odc/v_1 − v_idc/v_1)] = Q η_t c_v T_1 / [v_1 (v_odc/v_1 − v_idc/v_1)]

Non-dimensionalising m.e.p. with p_1 we get

ψ = m.e.p./p_1 = Q η_t c_v T_1 / [p_1 v_1 (v_odc/v_1 − v_idc/v_1)] = Q η_t / [(γ − 1)(v_odc/v_1 − v_idc/v_1)]

The non-dimensional displacement (v_odc/v_1 − v_idc/v_1) can be evaluated for the particular cycle under
consideration.
The following parameters are introduced to non-dimensionalise the temperatures, efficiency, effectiveness and
mean effective pressure:

Q = q_1/(c_v T_1); θ = T/T_1; γ = c_p/c_v; r = v_1/v_2; r_e = v_4/v_3; ψ = m.e.p./p_1

With these, the typical non-dimensionalised representative quantities for the various cycles are derived and the
results shown in Table 1.




Table 1. Typical Gas Cycles: T-s Diagrams and Non-dimensionalised Representative Quantities
(The T-s diagrams of the five cycles are not reproduced here.)

Otto Cycle:
θ_1 = 1
θ_2 = 1 + A/η_c
θ_3 = θ_2 + Q
θ_4 = θ_3 [1 − η_e A/(A+1)]
η_t = 1 − (θ_4 − 1)/Q
ε_t = η_t / [1 − (1/Q) ln(θ_3/θ_2)]
ψ = Q r η_t / ((γ − 1)(r − 1))

Diesel Cycle:
θ_1 = 1
θ_2 = 1 + A/η_c
θ_3 = θ_2 + Q/γ
θ_4 = θ_3 (1 − η_e) + η_e [θ_3/(A+1)]^γ
η_t = 1 − (θ_4 − 1)/Q
ε_t = η_t / [1 − (γ/Q) ln(θ_3/θ_2)]
ψ = Q r^γ η_t / ((γ − 1)(r^γ − θ_2))

Atkinson Cycle:
θ_1 = 1
θ_2 = 1 + A/η_c
θ_3 = θ_2 + Q
θ_4 = θ_3 (1 − η_e) + η_e [θ_3/(A+1)]^(1/γ)
η_t = 1 − γ(θ_4 − 1)/Q
ε_t = η_t / [1 − (1/Q) ln(θ_3/θ_2)]
ψ = Q r η_t / ((γ − 1)(r θ_4 − 1))

Miller Cycle:
θ_1 = 1
θ_2 = 1 + A/η_c
θ_3 = θ_2 + Q
θ_4 = θ_3 [1 − η_e + η_e r_e^(1−γ)]
θ_5 = r_e / r
η_t = 1 − [(θ_4 − θ_5) + γ(θ_5 − 1)]/Q
ε_t = η_t / [1 − (1/Q) ln(θ_3/θ_2)]
ψ = Q r η_t / ((γ − 1)(r θ_5 − 1))

Joule Cycle:
θ_1 = 1
θ_2 = 1 + A/η_c
θ_3 = θ_2 + Q/γ
θ_4 = θ_3 (1 − η_e) + η_e θ_3/(A+1)
η_t = 1 − γ(θ_4 − 1)/Q
ε_t = η_t / [1 − (γ/Q) ln(θ_3/θ_2)]
ψ = Q r^γ η_t / ((γ − 1)(r^γ θ_4 − θ_2))

Note: A = r^(γ−1) − 1
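As a cross-check on the table, the Otto-cycle column can be evaluated directly. The sketch below (Python, not part of the original paper) implements those non-dimensional relations; with (η_c, η_e) = (0.9, 0.9) it reproduces the roughly 45.5% efficiency reduction reported for the Otto cycle at Q = 1, r = 8 in Table 2.

```python
import math

def otto_cycle(Q, r, eta_c=1.0, eta_e=1.0, gamma=1.4):
    """Non-dimensional Otto-cycle quantities from Table 1."""
    A   = r ** (gamma - 1) - 1
    th2 = 1 + A / eta_c                            # after compression
    th3 = th2 + Q                                  # constant-volume heat addition
    th4 = th3 * (1 - eta_e * A / (A + 1))          # after expansion
    eta_t = 1 - (th4 - 1) / Q                      # theoretical thermal efficiency
    eps_t = eta_t / (1 - (1 / Q) * math.log(th3 / th2))  # theoretical effectiveness
    psi   = Q * r * eta_t / ((gamma - 1) * (r - 1))      # m.e.p. / p_1
    return eta_t, eps_t, psi

print(otto_cycle(Q=1, r=8))                        # ideal: eta_t ~ 0.565
print(otto_cycle(Q=1, r=8, eta_c=0.9, eta_e=0.9))  # irreversible: eta_t ~ 0.308
```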

RESULTS AND DISCUSSION

The parameters in Table 1 are evaluated for various values of Q and (η_c, η_e). The specific heat ratio γ is
taken as 1.4. For the Miller cycle, the calculations are done for values of r_e/r = 1.2 and 1.4, correspondingly
designating the cycles as Miller-1.2 and Miller-1.4 respectively. The representative results are shown
graphically in Figs. 1-6. The efficiency, effectiveness and mean effective pressure all decrease with any
decrease in η_c or η_e. However, it is found that a decrease in η_e is more detrimental for efficiency,
effectiveness as well as mean effective pressure than a corresponding decrease in η_c. This is also borne out in
[8]. For a given value of η_e the efficiency almost becomes independent of η_c for higher values of Q. The
efficiency normally increases with r, but at lower values of Q it shows a decrease at lower values of η_c and
η_e, especially η_e. The effectiveness decreases with increasing Q for (1, 1) since no compensation is needed.
However, with irreversibilities a higher Q shows better effectiveness since it is able to compensate for the
irreversibilities. The mean effective pressure (Figs. 5-6) increases with r for (1, 1) for lower as well as higher
heat loads. With increase in irreversibilities the m.e.p. decreases, with η_e being more dominant. A peak is
exhibited which occurs at higher r with higher Q. The results show that the Otto cycle and the Diesel cycle turn
out to be the best in the region of their operability.

Figures 1 and 3 are similar to those given in [3] except for the inclusion of the Miller cycle. Figure 1 reveals
that for Q = 1 and (η_c, η_e) = (1, 1) the Atkinson cycle has the highest efficiency, with the Miller cycle almost
coinciding. For higher Q values and (η_c, η_e) = (1, 1) also, Atkinson has the highest efficiency with the Miller
cycle closely behind. This leads to the illusory conclusion [3] of the Atkinson cycle being the best. However,
when irreversibilities are considered, which take a thermodynamic system closer to reality, it is found that the
Atkinson and Joule cycles are more susceptible than the other cycles. This is revealed in Fig. 2 for theoretical
efficiency and in Fig. 4 for theoretical effectiveness. The irreversibilities have a more detrimental effect on the
Atkinson and Joule cycles, especially through η_e rather than η_c. It is only at higher heat inputs that the
situation gets better. Again, referring to Figs. 5-6 for comparing mean effective pressures, it is found that the
Atkinson and Joule cycles are very poorly placed. Their low mean effective pressure is the reason for them
being more susceptible to irreversibilities. Of course, the comments on the Joule cycle are not that relevant
since it is not used for reciprocating engines.

The percentage reduction in theoretical efficiency from ideal conditions (1, 1) to (0.9, 0.9) is shown in
Table 2 for Q = 1, 10 for representative compression ratios of 8 and 18. The results show that the Atkinson and
Joule cycles turn out to be the worst. At higher heat loads and lower process efficiencies it is the Diesel cycle
which turns out to be better. However, it is the Miller cycle which turns out to be the best overall. Calculations
show that at lower heat inputs r_e/r = 1.2 gives higher efficiency than r_e/r = 1.4, and at higher heat inputs it is
vice versa. The percentage reductions in effectiveness and mean effective pressure, shown in Table 3 and Table
4 respectively, follow almost the same trend. It thus transpires that complete expansion is not that beneficial for
reciprocating engines.




Fig. 1. Efficiency curves (η_t vs. r) for Q = 1; η_c = 1; η_e = 1. Note that for (1, 1) the Otto cycle and the
Joule cycle have the same expressions, and the Atkinson cycle and the Miller cycle are also almost the same
because Q = 1. (Curves: Otto, Diesel, Atkinson, Miller-1.2, Miller-1.4, Joule; γ = 1.4.)

Fig. 2. Efficiency curves (η_t vs. r) for Q = 1; η_c = 0.9; η_e = 0.9. (Curves: Otto, Diesel, Atkinson,
Miller-1.2, Miller-1.4, Joule; γ = 1.4.)


Fig. 3. Effectiveness curves for Q = 1 ;
c
= 1 ;
e
= 1 .



Fig. 4. Effectiveness curves for Q = 1 ;
c
= 0.9 ;
e
= 0.9 .

Q = 1; (1 , 1) ; = 1.4;
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
0 10 20 30
r

t
Otto Diesel
Atkinson Miller-1.2
Miller-1.4 Joule
Q = 1; = 1.4; ( 0.9,0.9 )
0.1
0.2
0.3
0.4
0.5
0.6
0 10 20 30
r

t
Otto Diesel
Atkinson Miller-1.2
Miller-1.4 Joule


Fig. 5. Non-dimensionalised m.e.p. for Q = 1 ;
c
= 1 ;
e
= 1



Fig. 6. Non-dimensionalised m.e.p. for Q = 1;
c
= 0.9;
e
= 0.9








Q = 1 ; (1 , 1) ;
0.5
0.7
0.9
1.1
1.3
1.5
1.7
1.9
2.1
0 10 20 30
r
m

e

p
/
p
1
Otto Diesel
Atkinson Miller-1.2
Miller-1.4 Joule
Q=1 ; (0.9,0.9)
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1
0 10 20 30
r
m
e
p
/
p
1
Otto Diesel
Atkinson Miller-1.2
Miller-1.4 Joule

Table 2. Percentage reduction in efficiency from ideal conditions (1, 1) to (0.9, 0.9)

Q  | r  | Otto  | Diesel | Atkinson | Miller-1.2 | Miller-1.4 | Joule
1  | 8  | 45.52 | 47.93  | 59.76    | 44.94      | 45.78      | 59.74
1  | 18 | 55.31 | 57.45  | 73.28    | 54.95      | 56.01      | 73.43
10 | 8  | 13.55 | 9.95   | 19.42    | 13.44      | 13.40      | 14.97
10 | 18 | 14.53 | 11.91  | 20.41    | 14.44      | 14.40      | 16.34

Table 3. Percentage reduction in effectiveness from ideal conditions (1, 1) to (0.9, 0.9)

Q  | r  | Otto  | Diesel | Atkinson | Miller-1.2 | Miller-1.4 | Joule
1  | 8  | 47.02 | 49.54  | 60.88    | 46.46      | 47.28      | 60.98
1  | 18 | 56.33 | 58.51  | 73.90    | 55.99      | 57.02      | 74.09
10 | 8  | 14.06 | 10.66  | 19.89    | 13.95      | 13.91      | 15.65
10 | 18 | 15.08 | 12.64  | 20.92    | 14.99      | 14.95      | 17.04

Table 4. Percentage reduction in non-dimensionalised mean effective pressure from ideal conditions (1, 1) to
(0.9, 0.9)

Q  | r  | Otto  | Diesel | Atkinson | Miller-1.2 | Miller-1.4 | Joule
1  | 8  | 45.53 | 47.46  | 66.86    | 44.94      | 45.78      | 66.35
1  | 18 | 55.31 | 57.25  | 79.68    | 54.95      | 56.01      | 79.62
10 | 8  | 13.55 | 9.13   | 37.73    | 13.44      | 13.40      | 26.04
10 | 18 | 14.53 | 11.51  | 43.38    | 14.44      | 14.40      | 33.03
CONCLUSIONS

This study reveals that misleading conclusions are liable to be drawn from ideal thermodynamic cycles if irreversibilities are not taken into consideration. It is shown that although the Atkinson cycle has the highest theoretical thermal efficiency and theoretical effectiveness [3], it is more susceptible to irreversibilities than the other cycles, which makes it much less competitive in practice. These conclusions are further strengthened by the study of mean effective pressure: a low mean effective pressure, like a low work ratio, indicates susceptibility to irreversibility. The percentage reductions in efficiency, effectiveness and mean effective pressure are the highest for the Atkinson cycle for a given change from ideal conditions. Added to this is its lower mechanical efficiency because of the additional linkages needed. It is therefore no surprise that these engines have never been marketed in large numbers [4]. It is shown that the Miller cycle is the best overall even under adverse conditions and needs to be further investigated and optimized. Calculations show that at lower heat inputs Miller-1.2 is better than Miller-1.4, and at higher heat inputs Miller-1.4 is better than Miller-1.2. The study shows that, along with efficiency, mean effective pressure should also be included in optimizing the Miller cycle. Further, complete expansion is not that beneficial for reciprocating engines.

REFERENCES
[1] Cengel Y A and Boles M A. Thermodynamics: An Engineering Approach. New York: McGraw-Hill; 2002.
[2] Bejan A. Advanced Engineering Thermodynamics. New York: Wiley; 1988.
[3] Kamiuto K. Comparison of basic gas cycles under the restriction of constant heat addition. Applied Energy 2006; 83: 583-93.
[4] Pulkrabek W W. Engineering Fundamentals of the Internal Combustion Engine. New Jersey: Prentice Hall; 1997.
[5] Heywood J B. Internal Combustion Engine Fundamentals. New York: McGraw-Hill; 1988.
[6] Fukuzawa Y, Shimoda H, Kakuhama Y, Endo H, Tanaka K. Development of high efficiency Miller cycle gas engine. Mitsubishi Heavy Industries, Ltd., Technical Review 2001; 38(3): 146-150.
[7] Mikalsen R, Wang YD, Roskilly AP. A comparison of Miller and Otto cycle natural gas engines for small scale CHP applications. Applied Energy 2009; 86: 922-27.
[8] Chen J, Zhao Y, He J. Optimization criteria for the important parameters of an irreversible Otto heat-engine. Applied Energy 2006; 83: 228-38.





HELIOSTAT POWER SYSTEM

1Vinod Kumar Yadav and 2Ravindra Kumar
Department of Mechanical Engineering, Gurgaon Institute of Technology & Management, Gurgaon, India
1vinodyadav79@rediffmail.com, 2ravindranit@ymail.com


ABSTRACT
Developing efficient and inexpensive energy storage devices is as important as developing new sources of energy. An energy storage system can bridge the gap between energy supply and energy demand, thereby playing a vital role in our life.
Solar power towers generate electric power from sunlight by focusing concentrated solar radiation on a tower-mounted heat exchanger (receiver). These plants are best suited for utility-scale applications in the 30 to 400 MWe range. This paper deals with the heliostat field that surrounds the tower, which is laid out to optimize the annual performance of the plant. The land required is typically much less than that for hydro power and is generally less than that for fossil fuels (e.g., oil, coal, natural gas) when the mining and exploration of land are included.

INTRODUCTION

A heliostat power system (also known as a 'central tower' power plant) is a type of solar furnace using a tower to receive the focused sunlight. It uses an array of flat, moveable mirrors (called heliostats) to focus the sun's rays upon a collector tower (the target). The high energy at this point of concentrated sunlight is transferred to a substance that can store the heat for later use.

A more recent heat transfer material that has been successfully demonstrated is liquid sodium. Sodium is a metal with a high heat capacity, allowing energy to be stored and drawn off throughout the evening. That energy can, in turn, be used to boil water for use in steam turbines. Water had originally been used as a heat transfer medium in earlier power tower versions (where the resultant steam was used to power a turbine), but this system did not allow for power generation during the evening.

New peaking and intermediate power sources are needed today in many areas of the developing world. India, Egypt, and South Africa are locations that appear to be ideally suited for power tower development. As the technology matures, plants with ratings of up to 400 MW appear feasible. The power is worth more because a power tower plant can deliver energy during peak load times, when it is more valuable. Energy storage also allows power tower plants to be designed and built with a range of annual capacity factors (20 to 65%). Combining high capacity factors with the fact that energy storage allows power to be brought onto the grid in a controlled manner (i.e., by reducing electrical transients and thus increasing the stability of the overall utility grid), total market penetration should be much higher than for an intermittent solar technology without storage. Plants are typically deployed within desert areas that often lack water and have fragile landscapes. Water usage at power towers is comparable to other Rankine-cycle power technologies of similar size and annual performance.

HELIOSTAT

The name comes from 'helios', the Greek word for sun, and 'stat', as in stationary. A heliostat is a device that tracks the movement of the sun. It is typically used to orient a mirror, throughout the day, to reflect sunlight in a consistent direction. When coupled together in sufficient quantities, the reflected sunlight from the heliostats can generate an enormous amount of heat if all are oriented towards the same target. Heliostats have been used for sunlight-powered interior lighting, solar observatories, and solar power generation. Mirrors and reflective surfaces used in solar power that do not track the sun are not heliostats.
The simplest heliostat devices use a clockwork mechanism to turn the mirror in synchronization with the rotation of the Earth. More complex devices need to compensate for the changing elevation of the Sun throughout a solar year. The heliostat reflects the sunlight onto the transmission system. This is typically a set of mirrors that direct the reflected sunlight into the building or, alternatively, a light tube. Fiber optic cabling has also been used as a transfer mechanism. Various forms of commercial products have been designed for the point of termination (the "light bulb").


FUNCTIONS OF SOLAR TOWER

Solar towers consist of a large field of sun-tracking mirrors, called heliostats, which focus solar energy on a receiver atop a centrally located tower. The enormous amount of energy from the sun's rays, concentrated at one point (the tower in the middle), produces temperatures of approximately 550°C to 1500°C. The thermal energy gained can be used for heating water or molten salt, which stores the energy for later use. The heated water turns to steam, which drives the turbine-generator. In this way thermal energy is converted into electricity.

MAIN FLUIDS

There are two main fluids used for heat transfer: water and molten salt. Water is the oldest and simplest medium for heat transfer, but the molten-salt method allows the heat to be stored for periods when the sun is behind clouds, or even at night. The heat of the molten salt can be used until the next dawn, when the sun is back to reheat the cooled-down salt.
The molten salt consists of 60% sodium nitrate and 40% potassium nitrate (saltpeter). The salt melts at about 220°C and remains liquid over the plant's operating range; it is kept in an insulated storage tank until it is needed for heating up the water in the steam generator. This way of storing energy has an efficiency of approx. 99%, i.e., due to imperfect insulation about 1% of the stored energy is lost.

WORKING OF THE POWER SYSTEM

A heliostat power system generates electric power from sunlight by focusing concentrated solar radiation on a tower-mounted heat exchanger. The system uses hundreds to thousands of sun-tracking mirrors called heliostats to reflect the incident sunlight onto the receiver. In a molten-salt solar power tower, liquid salt at 290°C is pumped from a 'cold' storage tank through the receiver, where it is heated to 565°C, and then on to a 'hot' tank for storage. When power is needed from the plant, hot salt is pumped to a steam generating system that produces superheated steam for a conventional Rankine-cycle turbine/generator system.

From the steam generator, the salt is returned to the cold tank, where it is stored and eventually reheated in the receiver. Figure 1 is a schematic diagram of the primary flow paths in a molten-salt solar power plant.
Determining the optimum storage size to meet power-dispatch requirements is an important part of the system
design process. Storage tanks can be designed with sufficient capacity to power a turbine at full output for up to
13 hours.
The heliostat field that surrounds the tower is laid out to optimize the annual performance of the plant. The field
and the receiver are also sized depending on the needs of the utility. In a typical installation, solar energy
collection occurs at a rate that exceeds the maximum required to provide steam to the turbine. Consequently, the
thermal storage system can be charged at the same time that the plant is producing power at full capacity. The
ratio of the thermal power provided by the collector system (the heliostat field and receiver) to the peak thermal
power required by the turbine generator is called the solar multiple.
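A minimal sizing sketch of these two ideas (storage capacity and solar multiple) is given below. All numbers are illustrative assumptions rather than plant data: the 100 MW rating and solar multiple of 2 are arbitrary, the 42% gross power-block efficiency is the figure cited later in this paper, and the salt specific heat of about 1.5 kJ/kg.K is an assumed value.

```python
# Illustrative sizing sketch for a molten-salt power tower (assumed numbers).
P_elec = 100e6                 # assumed turbine electrical rating, W
eta_block = 0.42               # gross power-block efficiency (value cited later in this paper)
Q_turbine = P_elec / eta_block # peak thermal power required by the turbine, W

Q_collector = 2.0 * Q_turbine  # assume the field/receiver deliver twice the turbine demand
solar_multiple = Q_collector / Q_turbine
print(f"solar multiple = {solar_multiple:.1f}")

# Salt inventory for 13 hours of full-output dispatch from storage:
hours = 13.0
E_th = Q_turbine * hours * 3600.0   # thermal energy to be stored, J
cp_salt = 1500.0                    # assumed salt specific heat, J/(kg K)
dT = 565.0 - 290.0                  # hot/cold tank temperature swing, K
m_salt = E_th / (cp_salt * dT)      # required salt mass, kg
print(f"salt inventory ~ {m_salt/1e6:.0f} kt")   # roughly 27 kt for these assumptions
```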


Figure 1: System layout


RANKINE CYCLE
There are four processes in the Rankine cycle, each changing the state of the working fluid; the states are identified by number in the description below.
Process 4-1: First, the working fluid is pumped (ideally isentropically) from low to high pressure by a pump. Pumping requires a power input (for example mechanical or electrical).
Process 1-2: The high pressure liquid enters a boiler, where it is heated at constant pressure by an external heat source to become a superheated vapor. Common heat sources for power plant systems are coal, natural gas, or nuclear power; in the plant described here, the heat source is the hot molten salt.
Process 2-3: The superheated vapor expands through a turbine to generate power output. Ideally, this expansion is isentropic. This decreases the temperature and pressure of the vapor.
Process 3-4: The vapor then enters a condenser, where it is cooled to become a saturated liquid. This liquid then re-enters the pump and the cycle repeats.
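In symbols (a standard textbook result, included here for reference; h_i denotes the specific enthalpy at state i in the numbering above), the net work and thermal efficiency follow from energy balances on the four devices:

\[
w_{pump} = h_1 - h_4, \qquad q_{in} = h_2 - h_1, \qquad w_{turb} = h_2 - h_3, \qquad
\eta_{th} = \frac{w_{turb} - w_{pump}}{q_{in}} = \frac{(h_2 - h_3) - (h_1 - h_4)}{h_2 - h_1}
\]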
REQUIREMENTS
The land and water use values provided in Table 4 apply to the solar portion of the power plant. Land use in 1997 is taken from Solar Two design documents. Land use for years 2000 and beyond is based on systems studies. The proper way to express land use for systems with storage is ha/MWh/yr. Expressing land use in units of ha/MW is meaningless for a solar plant with energy storage because the effect of plant capacity factor is lost. Water use measured at the SEGS VI and VII [20] trough plants forms the basis of these estimates. Wet cooling towers are assumed. Water usage at Solar Two should be somewhat higher than at SEGS VI and VII due to the lower power block efficiency at Solar Two (33% gross). However, starting in the year 2000, water usage in a commercial power tower plant with a high efficiency power block (42% gross) should be about 20% less than at SEGS VI and VII. If adequate water is not available at the power plant site, a dry condenser-cooling system could possibly be used. Dry cooling can reduce water needs by as much as 90%. However, if dry cooling is employed, cost and performance penalties are expected to raise levelized energy costs by at least 10%.

SOLAR POWER APPLICATIONS

Several kinds of very practical solar energy systems are in use today. The two most common are passive solar
heated homes (or small buildings), and small stand-alone photovoltaic (solar electric) systems. These two
applications of solar energy have proven themselves popular over a decade of use. They also illustrate the two
basic methods of harnessing solar energy: solar thermal systems, and solar electric systems. The solar thermal
systems convert the radiant energy of the sun into heat, and then use that heat energy as desired. The solar
electric systems convert the radiant energy of the sun directly into electrical energy, which can then be used as
most electrical energy is used today.

EXAMPLES OF HELIOSTAT POWER PLANTS

The 10 MWe Solar One and Solar Two heliostat demonstration projects in the Mojave Desert have now been decommissioned. The 15 MW Solar Tres power tower in Spain builds on these projects. In Spain the 11 MW PS10 solar power tower was recently completed. In South Africa, a solar power plant is planned with 4000 to 5000 heliostat mirrors, each having an area of 140 m². A site near Upington has been selected.
BrightSource Energy entered into a series of power purchase agreements with Pacific Gas and Electric Company in March 2008 for up to 900 MW of electricity, the largest solar power commitment ever made by a utility. BrightSource is currently developing a number of solar power plants in Southern California, with construction of the first plant planned to start in 2009.
In June 2008, BrightSource Energy dedicated its Solar Energy Development Center (SEDC) in Israel's Negev Desert. The site, located in the Rotem Industrial Park, features more than 1,600 heliostats that track the sun and reflect light onto a 60-meter-high tower. The concentrated energy is then used to heat a boiler atop the tower to 550 degrees Celsius, generating steam that is piped to a turbine, where electricity is produced.





CONCLUSIONS

As non-polluting energy sources become more favored, molten-salt power towers will have high value because the thermal energy storage allows the plant to be dispatchable. One possible concern with the technology is the relatively high amount of land and water usage. This may become an important issue from a practical and environmental viewpoint, since these plants are typically deployed within desert areas that often lack water and have fragile landscapes.

REFERENCES

[1] www.wikepedia.org
[2] www.solar.org
[3] Seminar topic from www.edufive.com/seminartopics.html
[4] Magal B S. Solar Power Engineering.
[5] Garg H P and Prakash J. Solar Energy: Fundamentals and Applications.
[6] Abbasi S A and Abbasi N. Renewable Energy Sources and Their Environmental Impact.

SIX STROKE ENGINE

Shubham Aggarwal1, Rohit Moza2, Rakesh Kumar3, Rajat Mathur4
Student, Dept. of Mechanical Engg., Lingayas Institute of Management and Technology, Faridabad
shubham.limat@gmail.com1, rohit_moza15@yahoo.co.in2, godara29undefined@gmail.com3, rajatmathur.18@gmail.com4

ABSTRACT

The quest for an engine that delivers the same or more power with higher fuel efficiency than existing ones began many years ago. As a result of this research a new engine concept has been formed: the six stroke engine.
During every cycle in a typical four stroke engine, the piston moves up and down twice in the chamber, resulting in four total strokes, one of which is the power stroke that provides the torque to move the vehicle. In a six stroke engine there are six strokes, of which two are power strokes. The six-stroke engine is a type of internal combustion engine based on the four-stroke engine, but with additional complexity intended to make it more efficient and reduce emissions. Two different types of six-stroke engine have been developed since the 1990s.
In the first approach, the engine captures the heat lost from the four-stroke Otto cycle or Diesel cycle and uses it to power an additional power and exhaust stroke of the piston in the same cylinder. Designs use either steam or air as the working fluid for the additional power stroke. The pistons in this type of six-stroke engine go up and down three times for each injection of fuel. There are two power strokes: one with fuel, the other with steam or air. The currently notable designs in this class are the Crower six-stroke engine, invented by Bruce Crower of the U.S.; the Bajulaz engine, by the Bajulaz S.A. company of Switzerland; and the Velozeta six-stroke engine, built by the College of Engineering, Trivandrum, India.
The second approach to the six-stroke engine uses a second opposed piston in each cylinder that moves at half the cyclical rate of the main piston, thus giving six piston movements per cycle. Functionally, the second piston replaces the valve mechanism of a conventional engine but also increases the compression ratio. The currently notable designs in this class include two designs developed independently: the Beare Head engine, invented by Australian Malcolm Beare, and the German charge-pump engine, invented by Helmut Kottmann.

INTRODUCTION
The majority of actual internal combustion engines, operating on different cycles, have one common feature: combustion occurring in the cylinder after each compression, resulting in gas expansion that acts directly on the piston (work) and is limited to 180 degrees of crankshaft angle.
According to its mechanical design, the six-stroke engine with external and internal combustion and double flow is similar to the actual internal reciprocating combustion engine. However, it differentiates itself entirely by its thermodynamic cycle and a modified cylinder head with two supplementary chambers: a combustion chamber and an air heating chamber, both independent of the cylinder. Because the cylinder and the combustion chamber are separated, there is more freedom for design analysis. Several advantages result from this, one very important being the increase in thermal efficiency. The engine consists of two cycles of operation, namely an external combustion cycle and an internal combustion cycle, each having four events. In addition to the two valves of the four stroke engine, two more valves are incorporated, operated by a piston arrangement. The six stroke engine is thermodynamically more efficient because the change in volume of the power stroke is greater than that of the intake stroke and the compression stroke. The main advantages of the six stroke engine include a reduction in fuel consumption by 40%, two power strokes in the six stroke cycle, a dramatic reduction in pollution, and adaptability to multi-fuel operation. Adoption of six stroke engines by the automobile industry would have a tremendous impact on the environment and world economy.
The name 'six stroke' itself indicates a cycle of six strokes, of which two are useful power strokes.

ENGINE TYPES
Bajulaz six-stroke engine
The Bajulaz six-stroke engine is similar to a regular combustion engine in design. There are, however, modifications to the cylinder head, with two supplementary fixed-capacity chambers above each cylinder: a combustion chamber and an air preheating chamber. The combustion chamber receives a charge of heated air from the cylinder; the injection of fuel begins an isochoric burn, which increases the thermal efficiency compared to a burn in the cylinder. The high pressure achieved is then released into the cylinder to work the power or expansion stroke. Meanwhile a second chamber, which blankets the combustion chamber, has its air content heated to a high degree by heat passing through the cylinder wall. This heated and pressurized air is then used to power an additional stroke of the piston.

The Bajulaz six-stroke engine features:

Reduction in fuel consumption by at least 40%
Two expansion (work) strokes in six strokes
Multifuel, including liquefied petroleum gas
Dramatic reduction in air pollution
Costs comparable to those of a four-stroke engine

Velozeta six-stroke engine
In a Velozeta engine, during the exhaust stroke, fresh air is injected into the cylinder; it expands by heat and therefore forces the piston down for an additional stroke. The valve overlaps have been removed, and the two additional strokes using air injection provide for better gas scavenging. The engine shows about a 40% reduction in fuel consumption and a dramatic reduction in air pollution. Its specific power is not much less than that of a four-stroke petrol engine. The engine can run on a variety of fuels, ranging from petrol and diesel to LPG. An altered engine showed a 65% reduction in carbon monoxide pollution when compared with the four-stroke engine from which it was developed.

Crower six-stroke engine
In this six-stroke engine, power is obtained in the third and sixth strokes. The first four strokes are similar to those of a normal four stroke engine, and power is delivered in the third stroke. Just prior to the fifth stroke, water is injected directly into the heated cylinder via the converted diesel engine's fuel injector pump. The injected water absorbs the heat produced in the cylinder and converts into superheated steam, expanding to 1600 times the water's original volume and forcing the piston down for an additional stroke, i.e., the second power stroke. The phase change from liquid to steam removes the excess heat of the engine.
As a substantial portion of the engine heat now leaves the cylinder in the form of steam, no cooling-system radiator is required. Energy that is dissipated in conventional arrangements by the radiator cooling system is converted into additional power strokes. In Crower's prototype, the water for the steam cycle is consumed at a rate approximately equal to that of the fuel, but in production models the steam would be recaptured in a condenser for re-use. Crower estimated that his design would reduce fuel consumption by 40% by generating the same power output at a lower RPM. The weight associated with a cooling system could be eliminated, but that would be balanced by the need for a water tank in addition to the normal fuel tank.

Beare Head
The term "Six Stroke" was coined by the inventor of the Beare Head, Malcolm Beare. The technology combines
a four stroke engine bottom end with an opposed piston in the cylinder head working at half the cyclical rate of
the bottom piston. Functionally, the second piston replaces the valve mechanism of a conventional
engine.Below the cylinder head gasket, everything is conventional, in his design. So one main advantage is that
the Beare concept can be transplanted to existing engines without any redesigning or retooling the bottom end
and cylinder. But the cylinder head and its poppet valves get thrown away in this design. To replace the
camshaft and valves, Beare used a short-stroke upper crankshaft complete with piston, which is driven at half
engine speed through the chain drive from the engine. This piston moves against the main piston in the cylinder
and if the bottom piston comes four times upwards, upper piston will come downwards twice. The compression
of charge takes place in between these two pistons. Much higher compression ratios can be obtained in this
engine. Malcolm used on his first six-stroke, based on a Honda XL125 farm bike. Malcolm Beare claims his
engine is 35% more economical at low revs/throttle openings than an equivalent conventional engine and 13%
less thirsty at high rpm/full throttle.








M4+2 ENGINE

The M4+2 engine working cycle
The M4+2 engine has much in common with the Beare Head engines, combining two opposed pistons in the same cylinder, one piston working at half the cyclical rate of the other. But while the main function of the second piston in a Beare Head engine is to replace the valve mechanism of a conventional four stroke engine, the M4+2 takes the principle one step further.

Piston charger engine
In this engine, similar in design to the Beare Head, a "piston charger" replaces the valve system. The piston charger charges the main cylinder and simultaneously regulates the inlet and outlet apertures, leading to no loss of air and fuel in the exhaust. In the main cylinder, combustion takes place every turn, as in a two-stroke engine, while lubrication is as in a four-stroke engine. Fuel injection can take place in the piston charger, in the gas transfer channel or in the combustion chamber. It is also possible to charge two working cylinders with one piston charger. The combination of a compact combustion chamber design and no loss of air and fuel is claimed to give the engine more torque, more power and better fuel consumption. The benefit of fewer moving parts and a simpler design is claimed to lead to lower manufacturing costs, making the engine well suited to hybrid technology and stationary applications. The engine is also claimed to be suited to alternative fuels, since there is no corrosion and no deposits are left on valves. The six strokes are: aspiration, precompression, gas transfer, compression, ignition and ejection. This is an invention of Helmut Kottmann of Germany.

THEORY
1) Camshaft / Crankshaft Sprockets
In the six stroke engine the crankshaft rotates through 1080 degrees for each 360 degree rotation of the camshaft per cycle. Hence the corresponding sprockets have teeth in the ratio 3:1 (camshaft:crankshaft). In the original four stroke engine the teeth of the camshaft and crankshaft sprockets were in a 2:1 ratio. The 34-tooth camshaft sprocket of the four stroke engine was replaced by a 42-tooth sprocket, and the 17-tooth crankshaft sprocket was replaced by a 14-tooth one, to convert the four stroke engine into a six stroke engine.
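A quick arithmetic check of these tooth counts (a minimal sketch; the tooth numbers are those quoted above) confirms the 2:1 and 3:1 speed ratios:

```python
# Check crank:cam speed ratios implied by the quoted sprocket tooth counts.
configs = {
    "four-stroke": {"cam_teeth": 34, "crank_teeth": 17},
    "six-stroke":  {"cam_teeth": 42, "crank_teeth": 14},
}

for name, s in configs.items():
    # With a chain drive, the speed ratio equals the tooth ratio:
    # crank revolutions per camshaft revolution.
    ratio = s["cam_teeth"] / s["crank_teeth"]
    print(f"{name}: crank turns {ratio:.0f}x per cam revolution "
          f"= {ratio * 360:.0f} degrees of crank per cycle")
# four-stroke: 2x (720 deg); six-stroke: 3x (1080 deg)
```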

2) Cam lobes
In the six stroke engine the 360 degrees of the cam are divided into 60 degrees per stroke among the six strokes. The valve provided at the exhaust has to be kept open during the fourth, fifth and sixth strokes. The cam has been made double-lobed in order to avoid the exhaust valve hitting the piston head. The profiles of the exhaust and inlet cams are shown in the figure.

3) Valve Timing
The valve timing of the four stroke Honda engine has been changed. The inlet valve opens (IVO) at 0° at TDC, the same as in the four stroke Honda Activa engine. The inlet valve closes (IVC) at 25° after BDC, the same as in the four stroke engine. The exhaust valve opens (EVO) at 0° at BDC, which in the original engine was 25° before BDC; Velozeta removed this 25° advanced opening of the exhaust valve to extract maximum work per cycle. The exhaust valve closes 10° before TDC in order to prevent the loss of air-fuel mixture through the exhaust valve. Two reed valves have been provided for the proper working of the engine.

4) Secondary Air Induction System
The secondary air induction system supplies the air which is used during the fifth and sixth strokes. During the fifth stroke, air from the air filter is sucked into the cylinder through the secondary air induction line, and the reed valve opens to permit the air flow. During the sixth stroke, the air is expelled through the exhaust manifold; one reed valve opens and the other closes during this stroke. The inlet valve remains closed during these strokes.


Advantages of the Engine

- Reduction in fuel consumption

- Dramatic reduction in pollution normally up to 65%

- Better scavenging and more extraction of work per cycle

- Lower engine temperature, making it easy to maintain the optimum engine temperature level for better performance

- Less friction hence less wear and tear

- The six-stroke engine does not require any basic modification to the existing engines. All technological
experience and production methods remain unaltered

- Higher overall efficiency


CONCLUSIONS

The six stroke engine modification promises dramatic reduction of pollution and fuel consumption of an internal
combustion engine. The fuel efficiency of the engine can be increased and also the valve timing can be
effectively arranged to extract more work per cycle. Better scavenging is possible as air intake occurs during the
fifth stroke and the exhaust during the sixth stroke. Due to more air intake, the cooling system is improved. This enables lower engine temperatures and therefore an increase in overall efficiency.

REFERENCES

[1] http://www.autoweek.com
[2] http://www.velozeta.com/
[3] http://www.newindpress.com/NewsItems.asp...ram&Page=O
[4] http://www.autocarindia.com/new/Information.aspid=1263
[5] http://en.wikipedia.org/wiki/Six_stroke_engine
[6] http://en.wikipedia.org/wiki/Crower_six_stroke
[7] Bajulaz Six-Stroke Engine. Accessed June 2007.
[8] Bajulaz Animation. Accessed June 2007.
[9] Lyons, Pete (February 23, 2006). "Inside Bruce Crower's Six-Stroke Engine". AutoWeek. http://www.autoweek.com/apps/pbcs.dll/article?AID=/20060227/FREE/302270007/1023/THISWEEKSISSUE. Retrieved 2007-06-22.
[10] sixstroke.com


DESIGN OF POLLUTION CONTROL SYSTEM

Rohit Moza, Shubham Aggarwal, Rakesh Bishnoi, Rajat Mathur

Student, Lingayas Institute of Management and Technology, Faridabad.
rohit_moza15@yahoo.co.in, shubham.limat@gmail.com, godara29undefined@gmail.com,
rajatmathur.18@gmail.com
ABSTRACT
C4 carbon fixation is one of three biochemical mechanisms, along with C3 and CAM photosynthesis, used in carbon fixation. It is named for the 4-carbon molecule present in the first product of carbon fixation in these plants, in contrast to the 3-carbon molecule products in C3 plants. C4 and CAM overcome the tendency of the enzyme RuBisCO to wastefully fix oxygen rather than carbon dioxide in what is called photorespiration. This is achieved by using a more efficient enzyme to fix CO2 in mesophyll cells and shuttling this fixed carbon via malate or aspartate to bundle-sheath cells.
Spathiphyllum is a genus of about 40 species of monocotyledonous flowering plants in the family Araceae, native to tropical regions of the Americas and southeastern Asia. Certain species of Spathiphyllum are commonly known as Spath or Peace Lilies.
They are evergreen herbaceous perennial plants with large leaves 12-65 cm long and 3-25 cm broad. The flowers are produced in a spadix, surrounded by a 10-30 cm long, white, yellowish, or greenish spathe.

INTRODUCTION
Air pollution is the introduction into the atmosphere of chemicals, particulate matter, or biological materials that cause harm or discomfort to humans or other living organisms, or damage the natural or built environment.
The atmosphere is a complex dynamic natural gaseous system that is essential to support life on planet Earth. Stratospheric ozone depletion due to air pollution has long been recognized as a threat to human health as well as to the Earth's ecosystems.
CRASSULACEAN ACID METABOLISM
Crassulacean acid metabolism, also known as CAM photosynthesis, is a carbon fixation pathway present in some plants. These plants fix carbon dioxide (CO2) during the night, storing it as the four-carbon acid malate. The CO2 is released during the day, where it is concentrated around the enzyme RuBisCO, increasing the efficiency of photosynthesis. The CAM pathway allows stomata to remain shut during the day, reducing evapotranspiration; therefore, it is especially common in plants adapted to arid conditions.
The majority of plants possessing CAM are either epiphytes (e.g., orchids, bromeliads) or succulent xerophytes (e.g., cacti, cactoid Euphorbias), but CAM is also found in hemiepiphytes (e.g., Clusia), lithophytes (e.g., Sedum, Sempervivum), terrestrial bromeliads, and hydrophytes (e.g., Isoetes, Crassula (Tillaea)).
TOTAL CO2 EMISSIONS: List of countries by carbon dioxide emissions


Countries with the highest CO2 emissions

Country          CO2 emissions per year (10^6 tons, 2006)   Percentage of global total   Avg. emission per km2 of land (tons)
China            6,103                                      21.5%                        636
United States    5,752                                      20.2%                        597
Russia           1,564                                      5.5%                         91
India            1,510                                      5.3%                         459
Japan            1,293                                      4.6%                         3,421
Germany          805                                        2.8%                         2,254
United Kingdom   568                                        2.0%                         2,338


REDUCTION EFFORTS
There are various air pollution control technologies and land use planning strategies available to reduce air
pollution. At its most basic level land use planning is likely to involve zoning and transport infrastructure
planning. In most developed countries, land use planning is an important part of social policy, ensuring that land
is used efficiently for the benefit of the wider economy and population as well as to protect the environment.

The demand of today's world is to minimize pollution as far as possible with the help of natural and non-conventional resources. Spathiphyllum (peace lilies), as mentioned, undergoes C4-type fixation and hence can be highly beneficial in reducing pollution. It removes acetone, benzene, ammonia and other bio-effluents, which include carcinogens. Various other plants, such as spider plants and begonias, absorb all kinds of aldehydes as well as carbon monoxide, which is highly poisonous.

CAM plants are capable of absorbing carbon dioxide during the night while remaining dormant during the day. In effect, CAM plants spin their wheels, using a lot of energy to grow very little. Xerophytes are indeed slow growing but are very competitive in hot environments. The water saving achieved by having the guard cells open at night and closed in the daytime should not be underestimated; it is apparently worth the slow growth rate.

These plants can be grown easily in polluted cities such as Delhi and New York, and in countries such as Malaysia. Since they can be grown easily, CAM and C4 plants together form a wonderful combination for combating pollution. They are highly compact and do not need much space or care. They can be grown easily on billboards along roadsides, hence taking almost no ground space while preventing pollution at the same time. Similar schemes have already been carried out successfully in Malaysia in a different manner.
MECHANISMS FOLLOWED
The main mechanisms followed are known as CAM (crassulacean acid metabolism) and the C4 mechanism. The mechanisms are explained as under:
CAM (crassulacean acid metabolism) mechanism


OVERVIEW OF CAM: A TWO-PART CYCLE
CAM plants are adapted to life in arid conditions by conserving water.
2) DURING THE NIGHT
During the night, the CAM plant's stomata are open, allowing CO2 to enter and be fixed as organic acids that are stored in vacuoles. During the day the stomata are closed (thus preventing water loss), and the carbon is released to the Calvin cycle so that photosynthesis may take place.
The carbon dioxide is fixed in the mesophyll cell's cytoplasm by a PEP reaction similar to that of C4 plants. But, unlike in C4 plants, the resulting organic acids are stored in vacuoles for later use; that is, they are not immediately passed on to the Calvin cycle. Of course, the latter cannot operate during the night because the light reactions that provide it with ATP and NADPH cannot take place without light.
3) DURING THE DAY
The carbon in the organic acids is freed from the mesophyll cell's vacuoles and enters the chloroplast's stroma, and thus the Calvin cycle.
THE BENEFITS OF CAM
The most important benefit to the plant is the ability to leave most leaf stomata closed during the day. CAM plants are most common in arid environments, where water comes at a premium. Being able to keep stomata closed during the hottest and driest part of the day reduces the loss of water through evapotranspiration, allowing CAM plants to grow in environments that would otherwise be far too dry. C3 plants, for example, lose 97% of the water they take up through the roots to transpiration - a high cost avoided by CAM plants.
The C4 mechanism is as follows:
C4 carbon fixation is one of three biochemical mechanisms, along with C3 and CAM photosynthesis, used in carbon fixation. It is named for the 4-carbon molecule present in the first product of carbon fixation in these plants, in contrast to the 3-carbon molecule products in C3 plants.
C4 fixation is an elaboration of the more common C3 carbon fixation and is believed to have evolved more recently. C4 and CAM overcome the tendency of the enzyme RuBisCO to wastefully fix oxygen rather than carbon dioxide in what is called photorespiration. This is achieved by using a more efficient enzyme to fix CO2 in mesophyll cells and shuttling this fixed carbon via malate or aspartate to bundle-sheath cells. In these bundle-sheath cells, RuBisCO is isolated from atmospheric oxygen and saturated with the CO2 released by decarboxylation of the malate or oxaloacetate. These additional steps, however, require more energy in the form of ATP.
CONCLUSION
With the help of the CAM and C4 mechanisms, it would be feasible to reduce pollution to a considerable extent, and the results would be seen in a short time. Moreover, the initial cost required is low, and no conventional sources of energy are used for the control of pollution. This is a wonderful method to help save planet Earth, which is continuously being harmed by man and machines.

REFERENCES
[1] Slack C R and Hatch M D (1967). The Biochemical Journal 103(3): 660. PMC 1270465. PMID 4292834. http://www.biochemj.org/bj/103/0660/1030660.pdf. Retrieved 2010-04-08.
[2] Laetsch (1971). In: Photosynthesis and Photorespiration, eds Hatch, Osmond and Slatyer.
[3] Wang L, Huang Z, Baskin C C, Baskin J M and Dong M (2008). "Germination of Dimorphic Seeds of the Desert Annual Halophyte Suaeda aralocaspica (Chenopodiaceae), a C4 Plant without Kranz Anatomy". Annals of Botany 102(5): 757-69. doi:10.1093/aob/mcn158. PMC 2712381. PMID 18772148.
[4] Voznesenskaya E, Franceschi V R, Kiirats O, Artyusheva E G, Freitag H and Edwards G E (2002). "Proof of C4 photosynthesis without Kranz anatomy in Bienertia cycloptera (Chenopodiaceae)". The Plant Journal 31(5): 649-662. doi:10.1046/j.1365-313X.2002.01385.x. PMID 12207654.
[5] Akhani H, Barroca J, Koteeva N, Voznesenskaya E, Franceschi V, Edwards G, Ghaffari S M and Ziegler H (2005). "Bienertia sinuspersici (Chenopodiaceae): A New Species from Southwest Asia and Discovery of a Third Terrestrial C4 Plant Without Kranz Anatomy". Systematic Botany 30: 290. doi:10.1600/0363644054223684.
[6] Holaday A S and Bowes G (1980). "C4 Acid Metabolism and Dark CO2 Fixation in a Submersed Aquatic Macrophyte (Hydrilla verticillata)". Plant Physiology 65(2): 331-5. doi:10.1104/pp.65.2.331. PMC 440321. PMID 16661184.
[7] Sage R and Monson R (1999). C4 Plant Biology, ch. 7, pp. 228-229. ISBN 0126144400. http://books.google.com/?id=H7Wv9ZImW-QC&pg=PA228.


















EXPANSION OF AN IDEAL GAS AS A CLASSROOM EXAMPLE OF
REVERSIBLE AND IRREVERSIBLE PROCESSES
Dr. Vinay Chandra Jha1, Rupak Kumar Deb1, Dr. Iqbal Ahmed Khan2
1Lingayas University, Faridabad; 2Greater Noida Inst. of Tech., Noida, UP

deb.rupak@gmail.com


ABSTRACT
The difference between reversible and irreversible expansion of an ideal gas in a cylinder with a frictionless piston is analyzed quantitatively. Reversible expansion is achieved by removing infinitely small masses, properly distributed vertically, from the piston as it ascends and the gas expands. In the past, this example has been used pedagogically in qualitative form only. Reversibility is demonstrated in several ways, most notably by showing the equality of the work done by the system and the work done on the surroundings.
INTRODUCTION
Almost all concepts of thermodynamics are based on a macroscopic approach, but the important concept of reversibility, which has a microscopic basis, continues to cause students the most difficulty in university courses on the subject; irreversibility is the counterpart of reversibility. In the past, various researchers and authors dealing with engineering thermodynamics have treated this topic qualitatively, against the background of the common sources of irreversibility (friction, unrestricted gas expansion, heat transfer over a non-zero temperature difference, etc.). The criteria of reversibility are usually stated as:
1. The process takes place infinitely slowly, so that the system maintains internal thermodynamic equilibrium at all times.
2. Returning the system from its final state to its initial state by the same path must leave both the system and the surroundings unchanged.
3. Work done by (or on) the system must be equal to the work done on (or by) the surroundings.
4. There must be no friction at all.

Other texts describe irreversibility as a feature of a process that increases the entropy of the system and its surroundings, with the equilibration of an initially non-equilibrium vapour/liquid mixture in an isolated vessel and the free expansion of an ideal gas into a vacuum as common examples. A third method, favoured by texts intended for mechanical or chemical engineers, defines irreversibility as the difference between the reversible work and the actual work produced. The portion of the entropy change produced by irreversibilities is replaced by reversible entropy added to the system as reject heat from an imagined ideal heat engine receiving heat from a reservoir at some arbitrary temperature T. The work done by the imaginary heat engine represents the irreversibility of the actual process.
The purpose of this paper is to make a proper analysis of a simple process used in many textbooks to illustrate reversible and irreversible paths between fixed initial and final states. Here we develop a quantitative treatment, rather than the qualitative approach of the past. The quantitative analysis gives a clear-cut picture of the process (reversible and irreversible).
Wang's extension of the classic Carnot theory of heat engines (1) is a different example of an advanced thermodynamic analysis intended for pedagogical use.

ISOTHERMAL EXPANSION OF AN IDEAL GAS
The process referred to above takes place in a cylinder with a frictionless piston. A weight is put on the piston. The system contained in the cylinder consists of a simple substance such as water or an ideal gas. The weight put on the piston is balanced by the upward force exerted on the piston by the gas inside the cylinder. This is a fixed, stable thermodynamic initial state. The surroundings provide a fixed external pressure and a means of changing the force on the piston, usually by adding or removing weight. The thermal surroundings may be either a temperature reservoir, for an isothermal process, or thermal insulation, for an adiabatic process.

Several authors have used this device as a means of demonstrating the distinction between reversible and irreversible processes with common initial and final states (2-6). Figure 1 represents the reversible and irreversible methods of expanding an ideal gas between specified initial and final volumes. In (2-6), the discussions of these two versions were qualitative.
The objective of the present paper is to demonstrate quantitatively the following features of the process:
1. In the reversible version, the work done by the system is identical to the work done on the surroundings.
2. The work done by the system in the irreversible process is numerically compared to the work performed in the reversible process.

REVERSIBLE PROCESS



FIGURE 1. ISOTHERMAL EXPANSION OF AN IDEAL GAS
In both cases shown in Figure 1, a total mass M is removed from the top of the piston. The only difference between the two methods is the way the mass is removed from the piston pedestal. In the reversible process the mass is divided into a large number N of small masses m such that M = Nm. The individual small masses are removed from the piston at different elevations. In the irreversible version, the entire mass M is removed at once.

In both cases, the cylinder contains n moles of an ideal gas and is immersed in a constant temperature bath. The thermal reservoir adds heat as the small weights are removed, to maintain the gas at temperature T during the entire reversible process. In the irreversible process only the initial and final states are at temperature T; these are equilibrium states of the gas, whereas the state depicted in the upper middle panel of Fig. 1 is not an equilibrium state, i.e., all intermediate states are non-equilibrium states.
REVERSIBLE EXPANSION
Removing a small mass from the pedestal causes the piston to rise and the gas in the cylinder to expand slightly. Each removal of a small mass approximates an infinitesimal equilibrium stage, so that the overall process is quasistatic, i.e., reversible. At all points in the expansion process the conditions of external equilibrium, namely

T = T_surr and P = P_surr - kmg/A ---------- (1)

are satisfied. In equation (1), T_surr and P_surr are respectively the constant temperature and constant pressure of the surroundings, the number of small masses removed from the piston is denoted by k (0 <= k <= N), g is the acceleration due to gravity, m is the mass of one individual small mass, and A is the cross-sectional area of the piston. The P-V work done by the ideal gas during the reversible, isothermal expansion from initial volume V_i to final volume V_f is

W_rev = ∫ P dV = nRT ln(V_f/V_i) ---------- (2)

Because external equilibrium (equation (1)) holds at every stage of this constant temperature process, the work done on the surroundings is

W_surr = ∫ (P_surr - kmg/A) dV = nRT ln(V_f/V_i) ---------- (3)

Equations (2) and (3) show that W_rev = W_surr, so a criterion of reversibility is satisfied for this process. In addition, the process can be reversed by sequentially putting the small masses back on the pedestal at their original elevations; when the initial state is recovered, exactly the same amount of heat that was added during the expansion process will have been removed during the compression process. Both system and surroundings will have been restored to their original states.
IRREVERSIBLE EXPANSION
When the entire mass M is removed from the piston at once, as shown in Fig. 1, the system is not in equilibrium with the surroundings. The piston rapidly ascends, oscillates as the gas acts as a spring, and eventually settles at the final elevation. In this case the potential energy gain ΔEp is easily determined from the elevation change of the large mass, which is directly related to the volume change of the gas by h = (V_f - V_i)/A.
The work done on the surroundings in this irreversible process is

W_irr = P_surr(V_f - V_i) - Mg(V_f - V_i)/A

This is a reasonable representation of the work done to the extent that P_f < P_surr. At the final equilibrium state, the force balance on the piston gives P_surr - P_f = Mg/A. Combining these two equations and using the ideal gas law yields

W_irr = P_f(V_f - V_i) = nRT(1 - V_i/V_f) ---------- (4)

Comparing this equation with equation (2) for V_f/V_i = 3 (as an example) shows that the ratio of the reversible work of expansion to the irreversible work is






W_rev / W_irr = ln(V_f/V_i) / (1 - V_i/V_f) = ln 3 / (2/3) ≈ 1.65

That is, the work done in the reversible process is 1.65 times the work done in the irreversible process.
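Both the 1.65 ratio and the approach to the reversible limit as the masses become finer can be verified with a short numerical sketch (the uniform pressure-decrement scheme below is an illustration, not the paper's derivation):

```python
import math

def staircase_work(n_steps, nRT=1.0, Vi=1.0, Vf=3.0):
    """Isothermal expansion in n_steps: after each mass removal the gas
    expands against the new, constant external pressure to equilibrium."""
    Pi, Pf = nRT / Vi, nRT / Vf
    W, V = 0.0, Vi
    for k in range(1, n_steps + 1):
        P_ext = Pi - k * (Pi - Pf) / n_steps   # external pressure after k removals
        V_new = nRT / P_ext                    # new isothermal equilibrium volume
        W += P_ext * (V_new - V)               # work against the constant external pressure
        V = V_new
    return W

W_rev = math.log(3.0)                  # nRT ln(Vf/Vi), with nRT = 1
print(W_rev / staircase_work(1))       # one-shot removal: ratio ~ 1.65
print(W_rev / staircase_work(10000))   # many small masses: ratio -> 1 (reversible limit)
```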
CONCLUSIONS
Reversible expansion of an ideal gas in a piston-cylinder apparatus by the distributed-weight method has been analysed quantitatively. The objective was to add an analytical structure to this often-used pedagogical tool, to help students in thermodynamics courses better grasp the concepts of reversible and irreversible processes.
For the reversible expansion process, the equality of the work done by the system with that done on the surroundings is demonstrated, and the ratio of the reversible and irreversible work for the isothermal version of the process is determined as a function of the expansion ratio.
REFERENCES
[1] Liqiu Wang. Carnot theory: derivation and extension. Int. J. Eng. Educ. 14, 6 (1998) p 426.
[2] H. C. Van Ness. Understanding Thermodynamics. Dover (1984) p 19-21.
[3] M. M. Abbott and H. C. Van Ness. Thermodynamics with Chemical Applications, 2nd edn, McGraw-Hill (1989) p 6-9.
[4] G. Van Wylen, R. E. Sonntag and Claus Borgnakke. Fundamentals of Classical Thermodynamics, 4th edn, Wiley (1994) p 200-201.
[5] A. Shavit and C. Gutfinger. Thermodynamics. Prentice-Hall (1995) p 23-25.
[6] P. K. Nag. Thermodynamics, 4th edn, Tata McGraw-Hill.




STRATEGIES FOR USING PIEZOELECTRICITY AS A MODERN
MEANS FOR CLEAN ENERGY GENERATION

Rakesh R. Nair1, Subodh2
1Student, Lingayas Institute of Mgt. & Technology, Faridabad, Lingayas University

Rakeshnair90@yahoo.com


INTRODUCTION

Piezoelectricity was first discovered by Jacques and Pierre Curie in 1880. It was seen that pressure applied to a quartz crystal generates an electrical charge in it; they called this phenomenon the piezo effect. This paper deals with the concept of piezoelectricity as a viable and practical means to generate electricity in large quantities by using power lost in day-to-day phenomena. It is seen that the current produced by piezoelectric materials is not high enough to be used practically, so this paper also suggests viable means by which sufficient energy could be generated. Quartz crystals are preferred as piezoelectric materials due to their sturdiness and immunity to water absorption; they are also immune to temperature changes. After quartz, the next most important and widely used piezoelectric materials are ceramics such as barium titanate, lead zirconate and lead titanate.
The notable aspect of piezoelectric materials is that they generate an electric potential when stressed, provided the output is not short-circuited (in which case the energy would be dissipated quickly). Hence, in order to run an electric load using the voltage generated by one or more piezoelectric elements, the applied stress on the element should oscillate continuously. This is a limiting factor for power generation using a piezoelectric transducer, since it can only be used in practical cases where the work done is of an oscillating nature.
The piezoelectric effect is postulated to occur because of electric dipole moments arising inside piezoelectric crystals on the application of a deforming force. This moment may be created in crystals with ions occupying crystal lattice sites in asymmetric charge surroundings (for example, BaTiO3), or the property may be carried directly by molecular groups, as in cane sugar. The polarization of a piezoelectric crystal may be calculated using dipole moment analysis of the crystallographic unit cell.

The appearance of an electric field upon the application of an external deforming force may be explained as a change in the polarization of the crystal lattice, causing a reconfiguration of the dipole-inducing surroundings. The force applied on the crystal surface changes the pre-existing polarization of the crystal structure, which manifests itself as a change in the surface charge density, causing an electric field to be generated around the crystal surface; however, the majority of the electric field generated by the crystal, and the resulting voltage, is mainly attributed to the dipole density inside the crystal bulk. For example, 1 sq. cm of quartz with 2 kN of correctly applied force can produce a voltage of 12,500 volts. This is enough to produce sparks between two closely spaced electrodes. This is the principle used in the piezoelectric igniter, which generates sparks for small-engine ignition systems (spark plugs) and gas-grill lighters by applying sharp impulses to piezoelectric ceramic discs.
Keywords: piezoelectricity, oscillating load for voltage generation, spark generation, lighters, spark plugs, energy for nanotechnology, micro-electromechanical systems.

MATHEMATICAL ANALYSIS
The application of a force P causes a deformation X, producing a charge Q in the crystal structure due to the dipole fluctuations discussed previously. Mathematically,

Q = K*X

Here K is called the charge sensitivity constant. The crystal behaves as if it were a capacitor, carrying a charge across it. The voltage e across the crystal is given by

e = Q/C = (K*X)/C = Kv*X

Here C is the capacitance of the crystal (pF), and Kv is the voltage sensitivity constant, equal to K/C. It is known that

C = ε*A / (3.6π*t)

where ε is the dielectric constant of the crystal material, A is the area in cm^2, and t is the thickness (cm). If A is in square metres (m^2) and C is in farads (F), then

C = ε*A / (1.13*10^11 * t)

The relation between force P and deformation X is

P = (E*A*X) / t

where E is the Young's modulus of the crystal material. The above equations give the precise mathematical relations required for the efficient usage of a piezoelectric device.
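As a numerical illustration of these relations, the sketch below plugs in assumed quartz-like values (the dielectric constant, Young's modulus, geometry and charge sensitivity constant are all assumptions for demonstration, not values from this paper); the resulting open-circuit voltage comes out in the kilovolt range, with the exact figure depending strongly on the crystal cut and constants used:

```python
# Minimal sketch of the relations above with assumed (quartz-like) values.
eps = 4.5        # assumed dielectric constant
E = 76.5e9       # assumed Young's modulus, Pa (approx. quartz)
A = 1e-4         # crystal area, m^2 (1 cm^2)
t = 1e-3         # crystal thickness, m (0.1 cm)
K = 1.8e-2       # assumed charge sensitivity constant, C/m

F = 2000.0                       # applied force, N (2 kN)
X = F * t / (E * A)              # deformation, from P = E*A*X/t
Q = K * X                        # generated charge, Q = K*X
C = eps * A / (1.13e11 * t)      # capacitance in farads (SI form of the formula above)
e = Q / C                        # open-circuit voltage, e = Q/C
print(f"X = {X:.2e} m, Q = {Q:.2e} C, C = {C:.2e} F, e = {e:.0f} V")
```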

APPLICATIONS

As mentioned previously, this paper deals with the various techniques that can be employed for the clean generation of electricity by the proper use of piezoelectric materials. The methodologies are discussed henceforth.
A square centimetre of a piezoelectric crystal could produce about 12,500 V if a force of 2 kN is properly applied to it. This principle can be used for the generation of high voltages not only in engine ignition systems but also in areas where periodic or non-periodic oscillating forces are continuously applied. One application of this principle is in the common shoe. If a suitable piezoelectric crystal is embedded into the sole of a shoe, then the energy produced can easily be used for varied purposes or stored in a portable battery for later use. The practical problem with this solution is that the applied force must be an impulse of the order of 2 kN, or about 204 kg; this will not always be attainable in normal practical life, though it may be noted that as the surface area decreases, the force applied perpendicularly per unit area increases; hence a woman wearing high heels can easily exert more pressure per square centimetre than an elephant.

The practical problem of having to apply a force far greater than what is normally generated is easily
solved by using a high impulse generating compressed spring mechanism. This mechanism will employ a
trigger to actually cause a piston to strike the piezoelectric crystal at the appropriate angle so as to generate the
required amount of voltage. This piston will be accelerated by a compressed spring being released when the
trigger is tripped.

The trigger has to be compressed with a relatively small force as its only function is to cause the piston
to be released which will be accelerated because of the restoring force of a heavy duty coiled spring.
Hence this mechanism can be suitably affixed onto the sole of any sufficiently thick footwear, and the pressure
generated while walking will cause the trigger to get pressed, in turn causing the piston to hit the piezoelectric
crystal or ceramic with enough force to generate a high voltage. The double spring mechanism will allow the
trigger to return to its original position before the next step is taken. This mechanism would have to be
made sufficiently sturdy and suitably inaudible in order to have practical and aesthetic value and application.
The voltage generated may be used further for varied purposes once it is stepped up or stepped down in
accordance with the requirement. It is to be noted that the voltage generated from piezoelectric crystals and
ceramics is quite high (10,000-20,000 V) but the current produced is quite low. Hence the practicability of this
instrument in direct current generation is low, but the voltage can be conveniently and suitably used.
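
To get a feel for why the deliverable power is small even though the voltage is high, the energy stored per impulse can be estimated as that of a charged capacitor, (1/2)CV^2. The sketch below uses assumed orders of magnitude only, not measurements from the paper.

# Energy available per voltage spike, estimated as that of a charged capacitor.
# Both numbers are assumed orders of magnitude, not measurements from the paper.
C = 4.0e-12        # crystal capacitance (F), of the order of a few pF
V = 12000.0        # generated spike (V), within the 10-20 kV range quoted above

E_per_strike = 0.5 * C * V**2        # joules per impulse
steps_per_km = 1250                  # assumed walking steps per kilometre
print(E_per_strike, "J per strike;", E_per_strike * steps_per_km, "J per km")

With these assumptions, each strike stores only a few tenths of a millijoule, which is consistent with the low-current behaviour described above.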

One solution to this problem is to use an array of piezoelectric wafers which individually produce current
in very small quantities; if the circuit is suitably designed, the currents produced can be
added up to a more practical value (though still low). The charge generated may be stored in a
small battery or capacitor within the shoe itself and used as and when needed, for example to charge small
transmitters for military purposes of tracking, espionage, infiltration etc.

This can also be used by athletes to give out the number of kilometers they ran or the number of steps
they took, which will be visible on digital readouts on their shoes. It is also applicable in cases where people's
physical condition does not allow them to run or walk too far; with this device such people can
ensure that they do not over-exert themselves by presetting the value of the number of kilometers they plan to run,
at the end of which the charge that gets stored in the capacitor due to the continuously discharging piezoelectric
material will be used to sound a buzzer or beeper.

More than one such device can also be incorporated into footwear for higher charge generation. The
aesthetic aspect of such an inclusion would be an easily surmountable challenge for the designers.
The next application of such devices is in large scale power generation. Countries like India, China and Pakistan
have some of the highest volumes of vehicular traffic seen anywhere in the world. The said technology may
be used effectively in the power generation sectors, which in these countries use coal as the major source of
energy.
We can envision mating these two issues, i.e. coal (thermal) power generation and
vehicular volume, to come up with a unified solution for both. The roads in these countries see the most
vehicles in the world (1 new car enters the Indian roads every minute); hence if we can embed the roads in
these countries with piezoelectric crystals or ceramics, then the non-stop vehicular traffic can be exploited for
power generation purposes.

Busy roads are continuously improved by laying a sheet of rubber on top of them in order to increase
the life of the roads and also to improve the friction developed between the roads and the wheels of the vehicles. If,
instead, these layers are embedded with piezoelectric impulse generating spring loaded mechanisms which
are placed at equal distances throughout the road surface, then the power requirement in these countries can be
managed efficiently.

Every time a vehicle passes over the instrument, it will trigger the device, which will cause the piston to
hit the piezoelectric transducer with enough force to generate 12,000-20,000 V. If this occurs twice a minute for
every square meter, then using a rough estimate we can calculate the voltage that will be generated per minute
for a distance of 1 kilometer.

If we assume that on average 1 vehicle passes a given point on the road every 2 seconds with an
average speed of 30 km/hr, and a piezoelectric device is embedded in every square meter of road surface, then on
average 30 cars will cover a distance of 1 kilometer every minute on major roads. This calculates out to 3600
kV generated per kilometer per minute if the piezoelectric devices are designed to generate
approximately 12,000 volts when triggered.
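
A hedged re-run of this back-of-the-envelope estimate, under the stated assumptions (one vehicle every 2 seconds, roughly 12,000 V per trigger), might look like the sketch below; the number of triggers per passing vehicle is an additional assumption needed to close the arithmetic.

# Back-of-the-envelope nominal voltage tally per km of road per minute.
vehicles_per_min = 60 / 2          # one vehicle every 2 seconds, as assumed above
volts_per_trigger = 12000.0        # nominal output of one triggered device
triggers_per_vehicle = 10          # assumed; depends on device spacing and wheel paths

total_kV = vehicles_per_min * triggers_per_vehicle * volts_per_trigger / 1000.0
print(total_kV, "kV (nominal) per km per minute")   # 3600 kV with these assumptions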

Though this was just a rough estimate given only for elucidation purposes, the implied application is
self evident. Even if all the devices are not triggered, a suitably large fraction of the above mentioned value will
be enough to charge many generators or transformers which can thus be easily switched on during power
outages.
These high voltages generated may also be used directly as inputs for voltage-to-current amplifiers
attached at the base of every transducer embedded in the road (subsequent step-down pulse transformers would
have to be used, as elaborated later). If this is the case then the grid may only be used for providing external
voltages for the amplifiers, i.e. the Vcc and Vee inputs; the current generated from the sum total of all the devices in
a 1 km stretch of a busy road will be the equivalent of an approximately 3600 kV supply. Another application of
the piezoelectric effect is in the field of automation. Countries like India, China, Pakistan, Japan, Bangladesh etc.
have extensive rail and metro systems already built or in the process of being installed. India has one of the largest
networks of railway tracks in the world and a very high frequency of trains. There is also a famous means of
public transport, namely cycle rickshaws. In many European countries there is also a trend toward the
usage of electric scooters and cycles. The rickshaws being driven are manual and are responsible for the income
of many people. Automation of these vehicles through the usage of battery powered motors has been shown to be
impractical, as the economic burden on the owners would be more than they could afford.
One solution to the above problem is to combine the railway/metro system with the rickshaw automation
problem. The major reason why the rickshaws are not automated is the cost of recharging the
batteries once they get discharged. Recharging regularly from a special station would be costly and
uneconomical, and the added load on the grid is unwanted. This problem can be solved by taking the recharge
stations off the electrical grid system.

If the rail and metro tracks are coated with a piezoelectric material or embedded with the
aforementioned piezoelectric device then the high frequency of the trains can be exploited to advantage. The
energy which is now lost can be actually manifested into something constructive which will be low cost,
effective, efficient and eco-friendly.


This way the recharging stations could be set up off the grid and the rickshaws may be automated. This
will pave the way for a self-dependent system which will be low cost and will cause no pollution whatsoever.
Experiments conducted with a common igniter have shown that the voltages obtained using the piezoelectric,
spring loaded impulse generating mechanisms would be large, but of very short time duration. This low
frequency, high amplitude non-periodic pulse or spike is unsuitable to be used directly. But if the voltage spikes
are fed into the input terminals of a step-down pulse transformer with a fast-enough response time, the signal
can be brought to an acceptable level of practicality. These voltage spikes will pass undeterred through standard
capacitors; hence capacitors with an extremely low capacitance rating coupled with a high voltage rating would
have to be used. The capacitance of the subsequent capacitors will be increased suitably. Various response and
feedback measures may also be used to increase the efficiency of the system.


PIEZOELECTRICITY ALSO HAS A WIDE RANGE OF APPLICATIONS IN THE
FIELD OF NANOTECHNOLOGY AND MICRO-ELECTROMECHANICAL
SYSTEMS

It can be used to power small micro-motors and engines by the application of a small oscillating
voltage on a minute piezoelectric crystal, causing it to change shape and hence actuate minute transducers which
can then be used to run pulsed micro motors or engines. In the field of nanotechnology these crystals can derive
power from the minute voltage differences produced by the proteins covering the cell walls of living tissue.
The interior of a living cell is at a negative potential compared to its environment. This is
because of the receptor and acceptor proteins embedded in the cell walls, which change orientation in
accordance with the proximity of a particular entity, which may be nutrition (in which case the proteins move,
depending on the charge, to allow entry into the cell through the cell wall) or a pathogen (in which case entry is
prohibited). Every time the protein molecules reorient in order to let nutrition in, or waste materials out, 3
ions of Na+ are released and 2 ions of K+ are absorbed. This continuous emission and absorption of
ions ensures that a cell's interior is always at a negative potential compared to its surroundings. This causes a
cell to act as a self-charging battery, which enables its usage in biological computing as an equivalent
to a programmable microprocessor.

A nanotechnology based device can exploit this charge variance by manipulating the protein molecules
to physically change orientation at a suitable frequency, causing the electric field around the device to change or
oscillate as required. This minute field variation will be enough to cause a highly sensitive
piezoelectric transducer to change shape, causing another transducer to oscillate and in turn generate
mechanical energy as required.

Hence a nanosensor can be permanently parked on any cell and various functions can be directly
performed on every cell individually. Procedures ranging from wrinkle removal and eye surgery to major life-saving
operations like cancer removal, DNA manipulation (to fight debilitating genetic diseases), gene therapy,
cryogenics etc. will all be possible and practical soon using this technology. Hence piezoelectricity can be
suitably developed into a viable means of clean energy generation through a relatively short period of research
and development. This technology can change the way energy passes through the human societal system,
paving the way for clean energy generation through a feedback cycle of usage culminating in an equivalent
production of clean energy.











SECTION -2





















FINITE ELEMENT SIMULATION OF TAYLOR'S TEST FOR HIGH
STRAIN RATE BEHAVIOUR OF AUTOMOBILE STEEL AISI 4340

G. Venkata Rao^1 and S. Mallikarjun^2
^1 Professor and ^2 Post-Graduate Student
Department of Mechanical Engineering, Vasavi College of Engineering, Hyderabad-500031
gvrao10@yahoo.com


ABSTRACT

High strain rate behaviour of materials is of utmost importance in the areas of automobile crash analysis, high
speed machining, explosive forming and armoured vehicle penetration studies. In this area, the material
behaviour was earlier studied based on metallurgical condition; since it is difficult to carry out
experiments on expensive automobiles, forming components and other products, the high strain rate behaviour is
comprehensively represented by constitutive relationships using results of tests carried out on a large number of
specimens. Such constitutive material models are incorporated in a few advanced finite element codes for
use in design studies.

The objective of this work is to simulate the behaviour of AISI 4340 steel, used widely in the automobile industry,
under high strain rates of loading. This is carried out through the simulation of Taylor's impact test, which
yields the plastic flow stress. Taylor's test is carried out in practice by firing a small cylindrical specimen of the
material at high velocity against a rigid surface to cause the material to deform plastically.

Taylor's test of a cylindrical bar is simulated in the ANSYS finite element code using a nonlinear analysis. This
simulation enables one to examine the validity of various mathematical models used for the characterization of
materials under high strain rates. The large strain deformation response of AISI 4340 steel has been evaluated
over a range of strain rates of the order of 10^4/s. The results have been used to critically evaluate the flow
stress using the Johnson-Cook (J-C) material model.
Keywords: Taylor's test, flow stress, high strain rate
INTRODUCTION:
The phenomenon of high strain rates of loading plays a vital role in improving or enhancing the performance of
automobile body sections, the integrity of formed components, ballistic armour in defence, and other areas. A
number of studies have revealed that hardness has a direct effect on the performance of materials, and an
increase in the hardness of a material can be caused by the rate of loading too.
Manufacturing processes such as forging, rolling, high speed machining and forming are used to fabricate
metal products by deformation or metal removal methods. In order to get the final product shape the material
has to undergo either deformation or metal removal operations. During these operations the workpiece is
subjected to loads which are static and dynamic, but the workpiece material responds differently to static and
dynamic loads. For static loading, the material's stress-strain curve can be utilized, whereas under dynamic
loading the material is observed experimentally to have different stress-strain curves at different strain rates.
When the material undergoes high strain rates, it becomes more rigid, or strengthens, and therefore offers
more resistance to deformation. Also, in the case of high speed machining, this sudden
change in the material's mechanical behaviour generates impulsive loads on the tool which cause tool failure,
and also material fragmentation in forming operations like drawing, extrusion, and explosive forming. Hence the
material behaviour under high strain rates of loading plays a predominant role in designing materials for
various engineering applications.

Therefore it is evident that the high strain rate phenomenon is of paramount importance. The material behaviour
at high strain rates of loading is evaluated using the split Hopkinson pressure bar test and Taylor's cylinder
impact test, among others.


In this work, the finite element method is utilized for the simulation of Taylor's cylinder test at high velocities to
find the plastic flow stress, and the results are verified using the J-C constitutive model. The ability of the FEM
technique to solve for the large strain deformation behaviour of the important automobile structural material
4340 steel is also examined.

PREVIOUS WORK:
Many previous researchers have worked on the high strain rate characterization of materials by both
experimental and numerical methods.
In the late 1930s Taylor proposed a relatively simple method to find the dynamic compressive strength of a
material, which consists of firing a solid cylinder against a massive rigid target. The dynamic flow stress of the
cylinder material is estimated from the deformed shape of the cylinder.
Taylor and Whiffin's method of solving these equations consisted of numerically determining the plastic wave
speed consistent with both the measured deformation and their theory. In order to use this more exact method
routinely, Taylor presented it as a set of graphs yielding multiplicative correction factors to equation (1).
Methods like servo-hydraulic machines and drop weight towers for low strain rates of loading, and the split
Hopkinson pressure bar for higher strain rates, require expensive equipment and instrumentation for obtaining
dynamic stress-strain curves.
As compared with these techniques, Taylor's test does not require any instrumentation, as can be seen from
Eqn. 1. Several experimental studies by Johnson [3], Hutchings, Paprino et al. have shown that the Taylor model
provides values of the dynamic yield strength of the cylindrical rod material.
Wilkins and Guinan, Batra and Stevens, Stevens and Batra, and Celentano, amongst others, have used numerical
methods to analyse thermo-mechanical deformations of a cylindrical rod impacting a smooth flat rigid target at
normal incidence. Whereas Celentano accounted for heat conduction, the other three works did not.
Stevens and Batra found that adiabatic shear bands (ASBs) form in materials that exhibit enhanced thermal
softening, and that their location generally agrees with that observed by Dick et al.

TAYLOR'S CYLINDER TESTS:

Apparatus used for dynamic testing includes screw or hydraulic loading systems, the split-Hopkinson pressure bar,
pendulum impact machines, and light-gas gun or explosively driven flyer-plate impact.
The Taylor method consists of firing a solid cylinder of the material against a massive and rigid target.
The dynamic flow stress of the cylinder material can be estimated by measuring the overall length of the
impacted cylinder and the length of the undeformed (rear) section of the projectile:

σ = ρ V^2 (L - X) / [2 (L - L1) ln(L/X)]        ( Eqn.1 )

where σ is the dynamic yield stress of the material of the projectile, ρ its density, V its impact velocity, L the
undeformed original length, L1 the overall length after impact, and X the undeformed length in the final shape.
It can be seen that only accurate measurements of the different lengths are needed, and no other instrumentation,
for obtaining the flow stress.
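
A small Python sketch evaluating the Taylor relation as reconstructed in Eqn. 1 above; the density and impact velocity match the specimen data given later in this paper, but the final lengths L1 and X used here are illustrative placeholders, not measured values.

import math

def taylor_dynamic_yield(rho, V, L, L1, X):
    """Dynamic yield stress from the classical Taylor relation (Eqn. 1).
    rho in kg/m^3 and V in m/s give the result in Pa; the three lengths may be
    in any one consistent unit, since only their ratios enter."""
    return rho * V**2 * (L - X) / (2.0 * (L - L1) * math.log(L / X))

# L1 and X below are illustrative placeholders, not experimental data
sigma = taylor_dynamic_yield(rho=7830.0, V=208.0, L=25.4, L1=19.0, X=12.0)
print(sigma / 1e6, "MPa")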

Fig.1. Undeformed and deformed states of the cylindrical specimen in Taylor's Test





















Fig.2. Stress-strain curves for 4340 steel at different strain rates

CONSTITUTIVE RELATIONSHIPS

Most metallic materials exhibit variation in the stress-strain curves at different rates of loading. For example, 4340 steel,
which is widely used in automobile and other industries, has stress-strain curves as shown in Fig.2 at different
rates of strain in the range of 0.0002/s to 200/s. The loading rate keeps changing, as in an automobile crash with the
maximum rate at the start of the crash and zero loading rate at the end of the impact; to overcome the difficulty of
deciding which curve to use, constitutive relations have been proposed by pioneering researchers like Johnson
and Cook, Zerilli and Armstrong, and Cowper and Symonds, among others. These equations mostly incorporate
terms of strain and strain rate, and also thermal states, since materials get heated due to the high
rates of loading.
The formulation of the J-C model is empirically based. The J-C model represents the flow stress with an
equation of the form

σ = (A + B ε^n)(1 + C ln ε̇*)(1 - T*^m)        ( Eqn.2 )

where σ is the effective stress, ε is the effective plastic strain, ε̇* is the effective plastic strain rate
normalized to a strain rate of 1.0 s^-1, n is the work hardening exponent and A, B, C, and m are constants. The
quantity T* is defined as

T* = (T - 298)/(T_melt - 298)        ( Eqn.3 )

where T_melt is the melting temperature, typically taken as the solidus temperature for an alloy. The strength
of the material is thus a function of strain, strain rate, and temperature. The model assumes that the strength is
isotropic and independent of mean stress.
The values of A, B, C, n, and m are determined from an empirical fit of flow stress data (as a function of strain,
strain rate, and temperature) to Eqn.2. For high rate deformation problems, we can assume that an arbitrary
percentage of the plastic work done during deformation produces heat in the deforming material. For many
materials, 90-100 per cent of the plastic work is dissipated as heat in the material.



FINITE ELEMENT ANALYSIS OF TAYLOR'S TEST:
The following data are used in the analysis of AISI 4340 steel; the dimensions and physical properties of the
specimen and the constants of the J-C equation are given below.

SPECIMEN:
Length = 25.4 mm
Diameter = 7.62 mm
Density = 7.83x10^-6 kg/mm^3
Young's Modulus = 21x10^4 N/mm^2
Poisson's Ratio = 0.3
Yield Stress = 750 N/mm^2
Tangent Modulus = 400 N/mm^2

Johnson-Cook Model Constants for 4340 Steel:
A = 792, B = 510, C = 0.014, n = 0.26, m = 1.03
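
A minimal sketch evaluating the J-C flow stress of Eqn. 2 with the constants listed above. It assumes A and B are in N/mm^2 (MPa), and it takes T_melt = 1793 K, a value commonly used for 4340 in the J-C literature but not stated in this paper.

import numpy as np

def jc_flow_stress(eps, eps_dot, T=298.0,
                   A=792.0, B=510.0, C=0.014, n=0.26, m=1.03,
                   T_melt=1793.0, eps_dot_ref=1.0):
    """Johnson-Cook flow stress, Eqn. 2; A and B taken as N/mm^2 (MPa).
    T_melt = 1793 K is an assumed value commonly used for 4340."""
    T_star = (T - 298.0) / (T_melt - 298.0)          # Eqn. 3
    return ((A + B * eps**n)
            * (1.0 + C * np.log(eps_dot / eps_dot_ref))
            * (1.0 - T_star**m))

# flow stress at 10% plastic strain and a strain rate of 1e4 /s, room temperature
print(jc_flow_stress(eps=0.10, eps_dot=1.0e4), "N/mm^2")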
The three-dimensional analysis is carried out and the results obtained, shown in the figures below, are applied in
checking the flow stress for the 4340 steel at different rates of loading using the Johnson-Cook constitutive model
parameters of the earlier authors [3]. The strain rate is obtained from the transient analysis, and the points on the
stress-strain graph of Fig.6 are obtained by using the strain from the finite element analysis in Eqn.2 along with
the constants. The results, when compared, yielded good agreement, as seen from Fig.6.

Fig.3. Deformed shape of the cylinder after impact.   Fig.4. Von Mises plastic stress of the cylinder.
Fig.5. Plastic strain distribution at velocity of 208 m/sec.

Fig.5(b). FEM results comparison with J-C results [3].   Fig.6. A comparison of FE analysis + J-C with experiment.


REFERENCES

[1] Taylor, G.I., The testing of materials at high rates of loading, J. Inst. Civil Eng. 26, 486-519, 1946.
[2] Taylor, G.I., The use of flat ended projectiles for determining yield stress. I: Theoretical considerations, Proc.
    R. Soc. Lond. A194, 289-299, 1948.
[3] Johnson, G.R. and W.H. Cook, A constitutive model and data for metals subjected to large strains, high strain
    rates and high temperatures, Proceedings of the 7th International Symposium on Ballistics, The Hague, The
    Netherlands, 1983, pp. 541-547.
[4] Zerilli, F.J. and R.W. Armstrong, Dislocation mechanics based constitutive relations for material dynamics
    calculations, Journal of Applied Physics, Vol. 61/5, 1987, pp. 1816-1825.
[5] Zocher, M.A., Maudlin, P.J., Chen, S.R., and Flower-Maudlin, E.C., An evaluation of several hardening
    models using Taylor cylinder impact data, in Proc. European Congress on Computational Methods in Applied
    Sciences and Engineering (ECCOMAS), Barcelona, Spain, 2000.
[6] Klepaczko, J.R., Constitutive relations in dynamic plasticity, pure metals and alloys. Advances in constitutive
    relations applied in computer codes. CISM, Udine, Italy, July 23-27, 2007.
[7] Horita, Z., Superplastic forming at high strain rates after severe plastic deformation, 31 May 2000.


DESIGN OF PIEZOELECTRIC SMART STRUCTURES WITH MULTI
INPUT MULTI OUTPUT (MIMO) SYSTEM FOR ACTIVE
VIBRATION CONTROL
Deepak Chhabra^1*, Gourve Goyal^2, Sukesh Babu^3
^1 Department of Mechanical Engineering, NIT Kurukshetra, Haryana, INDIA
^2 Department of Mechanical Engineering, University Institute of Engineering & Technology,
Maharshi Dayanand University, Rohtak, Haryana, INDIA
^3 Department of Mechanical Engineering, Lingaya's University, Faridabad, Haryana, INDIA
^1* deepaknit10@gmail.com


ABSTRACT
This paper deals with the active vibration control of beam-like structures with distributed piezoelectric sensor
and actuator layers bonded on the top and bottom surfaces of the beam. The contribution of the piezoelectric sensor
and actuator layers to the mass and stiffness of the beam is considered. The patches are located at three
regions, i.e. fixed end & middle, middle & free end, and free & fixed end. The study is demonstrated through
simulation in MATLAB for various controllers, like a proportional controller by output feedback and the linear
quadratic regulator (LQR) by state feedback. A smart cantilever beam is modeled as a MIMO system. The
entire structure is modeled in state space form using the concepts of piezoelectric theory, Euler-Bernoulli beam
theory, the Finite Element Method (FEM) and state space techniques. The numerical simulation shows that
sufficient vibration attenuation can be achieved by the proposed method.

Keywords: Smart structure, finite element model, state space model, proportional output feedback, LQR,
vibration control.

INTRODUCTION
The development of high strength-to-weight ratio mechanical structures is attracting engineers for
applications in lightweight aerospace structures. However, vibration problems of structures have become more
complicated with the increase in strength-to-weight ratio. Until the last decade, passive techniques were among the
most widely used approaches. Passive vibration reduction can be achieved by adding mass, damping and stiffness
at appropriate locations. However, the major drawback of passive techniques is their low response with the increase
in weight of the structure. Hence, vibration control of high strength-to-weight ratio mechanical structures can be
achieved using smart structures. A smart structure can be defined as:
"A structure that can sense external disturbance and respond to it with active control in real time to
maintain the mission requirements."
The present work considers the application of piezoelectric patches to smart beam-like structures for the purpose
of active vibration control with a multi input multi output (MIMO) system. The finite element method is a powerful
tool for designing and analyzing smart structures. Both structural dynamics and control engineering need to be
addressed to demonstrate smart structures. A design method is proposed by incorporating control laws such as
Proportional Output Feedback (POF) and the Linear Quadratic Regulator (LQR) to suppress the vibration. Brij N
Agrawal and Kirk E Treanor [1] presented analytical and experimental results on optimal placement of
piezoceramic actuators for shape control of beam structures. Halim and Moheimani [2] aimed to develop a
feedback controller that suppresses vibration of flexible structures; the controller is applied to a simply
supported PZT laminate beam and validated experimentally. K. M. Liew and S. Sivashanker [3] derived finite
element formulations for the static and dynamic analysis and control of functionally graded material (FGM)
plates under environments subjected to a temperature gradient, using linear piezoelectricity theory and first-
order shear deformation theory. S. Narayanan and V. Balamurugan [7] presented finite element modeling of laminated
structures with distributed piezoelectric sensor and actuator layers; beam, plate and shell type elements have
been developed incorporating the stiffness, mass and electromechanical coupling effects of the piezoelectric
laminates. Xu and Koko [9] reported results using the commercial FE package ANSYS; the optimal control
design is carried out in the state space form established on the FE modal analysis and applied to cantilever smart
beam and clamped smart plate structures. T. C. Manjunath and B. Bandyopadhyay [8] presented the modeling and
design of a multiple output feedback based discrete sliding mode control scheme for the vibration
control of a smart flexible structure. Malgaca Levent (2010) integrated the control methods into the finite element
solutions (ICFES) with ANSYS; the author analyzed the active control of free and forced vibrations of a smart
laminated composite structure (SLCS) using ICFES simulation and compared with the experimental results.

In most present research, the FEM formulation of a smart cantilever beam is usually done in ANSYS and the design of
control laws is carried out in the MATLAB control system toolbox. Hence, for designing piezoelectric smart
structures with control laws, it is necessary to develop a general design scheme of actively controlled
piezoelectric smart structures. The objective of this work is to address a general design and analysis scheme of
piezoelectric smart structures with control laws. The LQR optimal control approach using state feedback, and a
proportional value of gain by output feedback, have been analyzed to achieve the desired control. Numerical examples
are presented to demonstrate the validity of the proposed design scheme. This paper is organized into three
parts: FEM formulation of the piezoelectric smart structure with design of control laws, numerical simulation and
conclusions.
MODELING OF SMART CANTILEVER BEAM WITH CONTROL LAWS

Finite Element Formulation of beam element
A beam element is considered with two nodes at its ends. Each node has two degrees of freedom (DOF).
The shape functions of the element are derived by considering an approximate solution and applying
boundary conditions. The mass and stiffness matrices are derived using the shape functions for the beam element.
The mass and stiffness matrices of the piezoelectric (sensor/actuator) element are similar to those of the beam
element. To obtain the mass and stiffness matrices of the smart beam element, which consists of two piezoelectric
layers and a beam element, all three matrices are added. The cantilever beam is modeled by FEM assembly of beam
elements and smart beam elements: the last two rows and columns of each element matrix are overlapped with the
first two rows and columns of the next element matrix, and the global mass and stiffness matrices are formed. The
boundary conditions are then applied on the global matrices for the cantilever beam: the first two rows and two
columns are deleted, as one end of the cantilever beam is fixed. The actual response of the system, i.e., the tip
displacement u(x, t), is obtained for all the various models of the cantilever beam with and without the controllers
by considering the first two dominant vibratory modes.
A beam element of length l_b with two DOFs at each node, i.e. translation and rotation, is considered.

Fig 1.
The displacement u is given by

u(x) = [N]^T {p}        (1)

{p} = [u1, θ1, u2, θ2]^T        (2)

N1(x), N2(x), N3(x), N4(x) are the shape functions and u1, θ1 and u2, θ2 are the DOFs at node 1 and node 2
respectively, where (with ξ = x/l_b)

N1(x) = 1 - 3ξ^2 + 2ξ^3        (3)

N2(x) = l_b (ξ - 2ξ^2 + ξ^3)        (4)

N3(x) = 3ξ^2 - 2ξ^3        (5)

N4(x) = l_b (ξ^3 - ξ^2)        (6)
The kinetic energy and bending strain energy of the element can be expressed as

T = (1/2) ∫ ρA (∂u/∂t)^2 dx,  integrated over 0 to l_b        (7)

V = (1/2) ∫ EI (∂^2u/∂x^2)^2 dx,  integrated over 0 to l_b        (8)

where ρ is the density of the beam, E is the Young's modulus, I is the moment of inertia of the cross-section, and A
is the area of the cross-section.
The governing differential equation of motion for the beam element can be represented as

M_b (d^2p/dt^2) + C (dp/dt) + K_b p = q        (9)

where M_b, C, K_b are the mass, damping and stiffness matrices and q is the force co-efficient vector of the beam
element.
The consistent mass and stiffness matrices are obtained as

[M_b] = (ρ A l_b / 420) x
[ 156      22l_b     54       -13l_b  ]
[ 22l_b    4l_b^2    13l_b    -3l_b^2 ]        (10)
[ 54       13l_b     156      -22l_b  ]
[ -13l_b   -3l_b^2   -22l_b    4l_b^2 ]

[K_b] = (E I / l_b^3) x
[ 12      6l_b      -12      6l_b   ]
[ 6l_b    4l_b^2    -6l_b    2l_b^2 ]        (11)
[ -12     -6l_b      12      -6l_b  ]
[ 6l_b    2l_b^2    -6l_b    4l_b^2 ]
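
For reference, a short Python sketch that builds these element matrices and assembles a cantilever of equal-length elements, applying the fixed-end boundary condition by deleting the first two rows and columns as described above. This is a minimal illustration, not the authors' code.

import numpy as np

def beam_element(rho, A, E, I, l):
    """Consistent mass and stiffness matrices of a 2-node Euler-Bernoulli beam
    element (Eqns. 10-11), DOF order [u1, th1, u2, th2]."""
    Mb = (rho * A * l / 420.0) * np.array(
        [[156, 22*l, 54, -13*l],
         [22*l, 4*l**2, 13*l, -3*l**2],
         [54, 13*l, 156, -22*l],
         [-13*l, -3*l**2, -22*l, 4*l**2]])
    Kb = (E * I / l**3) * np.array(
        [[12, 6*l, -12, 6*l],
         [6*l, 4*l**2, -6*l, 2*l**2],
         [-12, -6*l, 12, -6*l],
         [6*l, 2*l**2, -6*l, 4*l**2]])
    return Mb, Kb

def assemble_cantilever(n_elem, rho, A, E, I, l):
    """Overlap-assemble n_elem equal elements, then delete the first two rows
    and columns to impose the fixed end of the cantilever."""
    ndof = 2 * (n_elem + 1)
    M = np.zeros((ndof, ndof)); K = np.zeros((ndof, ndof))
    Me, Ke = beam_element(rho, A, E, I, l)
    for e in range(n_elem):
        i = 2 * e
        M[i:i+4, i:i+4] += Me
        K[i:i+4, i:i+4] += Ke
    return M[2:, 2:], K[2:, 2:]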


2.2 Finite Element Formulation of Smart Beam Element
The equation of motion for the smart beam element, with piezoelectric patches placed at the top and
bottom surfaces as a collocated pair, is given by

M_p (d^2p/dt^2) + C (dp/dt) + K_p p = f_ctrl        (16)

where M_p, C, K_p are the mass, damping and stiffness matrices, and f_ctrl is the force co-efficient vector which
maps the applied actuator voltage to the induced displacements of the smart beam element; the voltage applied to
the actuator develops effective control forces and moments.
The mass matrix of the smart beam element is given by

[M_p] = [M_b] + 2 [M_a]        (17)

where [M_a] is the consistent mass matrix contribution of one piezoelectric layer, and the stiffness matrix [K_p]
takes the same form as Eqn. 11 with EI replaced by the equivalent flexural rigidity EI_eq of the three-layer
cross-section:

[K_p] = (EI_eq / l_p^3) x [matrix of Eqn. 11]        (18)

EI_eq = E_b I_b + 2 E_p I_p        (19)

I_p = b t_a^3/12 + b t_a ((t_b + t_a)/2)^2        (20)


2.3 Control Laws
Two control laws are explained here: one based on output feedback, assuming an arbitrary value of gain, and one
optimal control law, the linear quadratic regulator (LQR), based on state feedback.
2.3.1 LQR optimal control by state feedback
LQR optimal control theory is used to determine the active control gain. The following quadratic cost function
is minimized:

J = ∫ (x^T Q x + u^T R u) dt,  integrated over 0 to infinity        (21)

Q and R represent weights on the different states and control channels and their elements are selected to provide
suitable performance. They are the main design parameters. J represents the weighted sum of energy of the state
and control. Assuming full state feedback, the control law is given by

u = -Kx        (22)

with constant control gain

K = R^-1 B^T S        (23)

Matrix S can be obtained by the solution of the Riccati equation, given by

A^T S + S A - S B R^-1 B^T S + Q = 0        (24)

The closed loop system dynamics with state feedback control is given by

ẋ = (A - BK) x        (25)

2.3.2 Control by output feedback
Output feedback control provides a more meaningful design approach in practice. Measured outputs (y) from
sensors are directly fed back to the actuators through

u = -Ky        (26)

The closed loop system dynamics with output feedback control is given by

ẋ = (A - BKC) x        (27)
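
A minimal sketch of the LQR gain computation of Eqns. 22-24, using SciPy's continuous-time algebraic Riccati solver on a toy single-mode system; the numerical values are illustrative only, not the beam model of this paper.

import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the algebraic Riccati equation (Eqn. 24) for S and
    return the state feedback gain K = R^-1 B^T S (Eqn. 23)."""
    S = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ S)

# Toy single-mode oscillator: x = [z, zdot], one actuator input.
A = np.array([[0.0, 1.0], [-100.0, -0.2]])
B = np.array([[0.0], [1.0]])
Q = np.diag([1e4, 1.0]); R = np.array([[1.0]])

K = lqr_gain(A, B, Q, R)
print("K =", K, " closed-loop eigs:", np.linalg.eigvals(A - B @ K))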


2.4 Laminar Sensor Equation
The sensor voltage of the piezoelectric element is given by (Manjunath T.C., Bandyopadhyay B., 2009)

V_s(t) = K_c G_c d31 E_p b [0 -1 0 1 0 1] ṗ        (28)

where G_c is the signal conditioning gain, t_b and t_a are the thicknesses of the beam and actuator, and K_c is the
controller gain. Compactly,

V_s(t) = g^T ṗ

2.5 Controlling Force from Actuator
Similar to the sensor, the piezoelectric layer which acts as the actuator is bonded to the structure. The geometrical
arrangement is such that the useful direction of expansion is normal to that of the electric field. Thus, the
actuation capability is governed by the piezoelectric constant d31. With standard engineering notation, the
equation of stress for a piezoelectric material, given by Preumont, is

σ = E_p ε - e31 (V/t_a)        (29)

where V is the voltage applied to the piezoelectric material.
The controlling force is given by

f_ctrl = E_p d31 b z̄ w(t)        (30)

where z̄ is the distance measured from the neutral axis of the beam to the mid-plane of the actuator layer, E_p is
the Young's modulus of the piezoelectric material, d31 is the piezo strain constant, and b is the width of the
material. This can be written as

f_ctrl = h w(t)        (31)

where h = E_p d31 b z̄.

2.6 Model Reduction
After assembly of each element of the beam, the final equation for the smart cantilever beam with piezoelectric
patches placed at the top and bottom surfaces as a collocated pair is given by

M (d^2p/dt^2) + C (dp/dt) + K p = q + f_ctrl        (32)

The external force is taken as a unit impulse force.
Here M, C, K are the global mass, Rayleigh damping and stiffness matrices, with the Rayleigh damping given by

C = α[M] + β[K]        (33)

where α and β are the damping constants.
In active vibration control of flexible structures, the use of a smaller order model has computational advantages.
Therefore, it is necessary to apply a model reduction technique to the state space representation. Reduced
order model extraction techniques solve the problem by keeping only the vital properties of the full model.
The frequency range is selected to span the first two frequencies of the smart beam in order to find the reduced
order model of the system.
Consider a generalized co-ordinate transformation for reduction,

p = V z        (34)

where V contains the modal vectors corresponding to the first two eigenvalues. After reduction, Eqn. (32) becomes

M_red (d^2z/dt^2) + C_red (dz/dt) + K_red z = f_ext + f_red        (35)
2.7 State space formulation
In the state space formulation, the second order differential equations are converted to first order differential
equations. The first order dynamical system is

ẋ(t) = A x(t) + B w(t) + E r(t)        (36-37)

where A is known as the system matrix, x(t) is the state vector, matrix B is the input matrix, w(t) is a column
vector formed by the voltages applied to the actuators and acting as a control force, and E maps the external force
r(t) acting on the beam, and

Y = C x(t) + D w(t)        (38)

where C is the output matrix and D is the direct transmission matrix. The sensor output is

Y = V_s(t) = g^T ṗ = g^T V ż        (39)

x(t) = [z; ż]        (40)

Here

A = [ 0                   I                 ]        (41)
    [ -M_red^-1 K_red     -M_red^-1 C_red   ]

B = [ 0               ]        (42)
    [ M_red^-1 f_red  ]

C = [ 0   g^T V ]        (43)

D = null matrix        (44)

E = [ 0               ]        (45)
    [ M_red^-1 f_ext  ]
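
A Python sketch assembling the state-space matrices of Eqns. 41-45 from the reduced modal matrices; the argument names (M_red, f_red, gTV, etc.) are placeholders standing in for the quantities defined above.

import numpy as np

def modal_state_space(M_red, C_red, K_red, f_red, f_ext, gTV):
    """Assemble the matrices of Eqns. 41-45 from the reduced modal model.
    gTV plays the role of g^T V in the output equation (Eqn. 43)."""
    n = M_red.shape[0]
    Minv = np.linalg.inv(M_red)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K_red, -Minv @ C_red]])
    B = np.vstack([np.zeros((n, 1)), Minv @ f_red.reshape(n, 1)])
    E = np.vstack([np.zeros((n, 1)), Minv @ f_ext.reshape(n, 1)])
    C = np.hstack([np.zeros((1, n)), gTV.reshape(1, n)])
    D = np.zeros((1, 1))   # null direct-transmission matrix (Eqn. 44)
    return A, B, C, D, E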

3. Numerical Simulation

Table 1 Material properties and dimensions of the smart beam

Physical Parameters            Beam Element       Piezoelectric sensor/actuator
Length (m)                     l_b = 0.226        l_p = 0.075
Breadth (m)                    b = 0.025          b = 0.025
Thickness (m)                  t_b = 0.965e-3     t_a = 0.75e-3
Elastic Modulus (GPa)          E_b = 68           E_p = 61
Density (kg/m^3)               ρ_b = 2800         ρ_p = 7500
Piezo strain constant (m/V)    -                  d31 = 274e-12
Piezo stress constant (Vm/N)   -                  g31 = 10.5e-3
Damping constants              α = 0.001, β = 0.0001

























Fig 2. Position of sensor/actuator on the cantilever beam

A cantilever beam with three elements of equal length is considered here. The piezoelectric sensor and actuator
are placed at three different positions, i.e. at fixed end & middle, middle & free end, and free & fixed end. The
structure consists of an aluminum beam with PZT-5H sensor and actuator patches. The material properties and
dimensions of the beam and piezo patches are similar to the experiment performed by Xu and Koko (2004). For
the analysis, only collocated positions are considered. The physical properties of the sensor and actuator are
given in Table 1.
Case 1.
In the first case, the responses are taken by giving an impulse input. The output feedback controllers are designed
by taking an arbitrary value of gain. In practical design problems, all the states are not always known for
feedback; on the other hand, output feedback control provides a more practical design. The responses are
also plotted by changing the position of the sensor and actuator on the beam, i.e. fixed end & middle, middle &
free end, and free & fixed end.

[Fig 3: impulse response plot, Amplitude vs. Time (sec); curves for piezoelectric material at 1st and 2nd
position and at 2nd and 3rd position]

Fig 4 Tip displacement of cantilever beam when piezoelectric materials at 1st and 3rd position (POF controller)

Fig 5 Tip displacement of cantilever beam when piezoelectric materials at 2nd and 3rd position (POF controller)

Fig 6 Tip displacement of cantilever beam when piezoelectric materials at 1st and 2nd position (POF controller)

Case 2.
In the second case, an optimal controller is designed to minimize the cost function J, with

Q = 1e8 * [20 10 0 0; 0 10 0 0; 0 0 20 0; 0 0 0 10],   R = 100

For this, the optimal value of gain is found by solving the Riccati equation.




Fig 7 Tip displacement of cantilever beam with and without LQR controller when piezoelectric materials at
1st and 3rd position

Fig 8 Tip displacement of cantilever beam with and without LQR controller when piezoelectric materials at
2nd and 3rd position

Fig 9 Tip displacement of cantilever beam with and without LQR controller when piezoelectric materials at
1st and 2nd position

CONCLUSIONS

The present work deals with the mathematical formulation and the computational model for the active vibration
control of a piezoelectric smart structure. A general scheme of analyzing and designing piezoelectric smart
structures with control laws is successfully developed in this study. The present scheme has the flexibility of
designing the system as collocated or non-collocated, with a user-selected feedback control law. The active
vibration control performance of the piezoelectric cantilever structure is studied by taking an arbitrary value of
gain with output feedback, and with the linear quadratic regulator (LQR) scheme, which is an optimal control
theory based on full state feedback. It has been observed that without control the transient response is predominant and

with control laws, sufficient vibration attenuation can be achieved. The study revealed that the LQR control
scheme is very effective in controlling the vibration, as the optimal gain is obtained by minimizing the cost
function. Numerical simulation showed that modeling a smart structure by including the sensor/actuator mass
and stiffness, and varying its location on the beam from the free end to the fixed end, introduces a
considerable change in the system's structural vibration characteristics. From the responses for the various
locations of the sensor/actuator on the beam, it has been observed that the best control performance is obtained
when the piezoelectric elements are placed at the 1st and 2nd positions.

Table 2. Responses of controlled and uncontrolled loop

                           1st and 3rd position      1st and 2nd position      2nd and 3rd position
Controller type            Settling      Peak        Settling      Peak        Settling      Peak
                           time (sec)    response    time (sec)    response    time (sec)    response
POF, controlled loop       0.12          0.039       0.042         0.135       0.22          0.04
Uncontrolled loop          0.27          0.039       0.07          0.135       0.62          0.04
LQR, controlled loop       0.11          0.039       0.02          0.13        0.42          0.04
Uncontrolled loop          0.27          0.039       0.07          0.13        0.6           0.03

REFERENCES
[1] Brij N. Agrawal and Kirk E. Treanor, 1999. Shape control of a beam using piezoelectric actuators. Smart
    Mater. Struct. 8, 729-740.
[2] Halim, D. and Moheimani, R. O., 2002. Experimental implementation of spatial H-infinity control on a
    piezoelectric laminate beam. IEEE/ASME Transactions on Mechatronics, 7, 346-356.
[3] K. M. Liew, S. Sivashanker, 2003. The modelling and design of smart structures using functionally graded
    materials and piezoelectrical sensor/actuator patches. Smart Mater. Struct. 12, 647-655.
[4] K. B. Waghulde, Bimleshkumar Sinha, 2010. Vibration control of cantilever smart beam by using
    piezoelectric actuators and sensors. Journal of Engineering and Technology, Vol. 2(4), 259-262.
[5] Levent Malgaca, 2010. Integration of active vibration control methods with finite element models of smart
    laminated composite structures. Composite Structures 92, 1651-1663.
[6] N. S. Viliani, S. M. R. Khalili, 2009. Buckling analysis of FG plate with smart sensor/actuator. Journal of
    Solid Mechanics, Vol. 1, No. 3, pp. 201-212.
[7] S. Narayanan, V. Balamurugan, 2003. Finite element modelling of piezolaminated smart structures for
    active vibration control with distributed sensors and actuators. Journal of Sound and Vibration 262, 529-562.
[8] T. C. Manjunath, B. Bandyopadhyay, 2007. Control of vibration in smart structure using fast output
    sampling feedback technique. World Academy of Science, Engineering and Technology 34.
[9] Xu, S. X. and Koko, T. S., 2004. Finite element analysis and design of actively controlled piezoelectric
    smart structures. Finite Elements in Analysis and Design, 40, 241-262.
[10] Y. Yu, X. N. Zhang and S. L. Xie, 2009. Optimal shape control of a beam using piezoelectric actuators
    with low control voltage. Smart Mater. Struct. 18, 095006 (15pp).

FAULT DIAGNOSIS OF BEARING USING ACOUSTIC AND VIBRATIONAL SIGNAL

Mohit Bansal^1, Lalit Rathi^2
^1,2 M.Tech Students, Department of Mechanical Engineering, YMCA University, Faridabad, India
^1 bansalmohit01@gmail.com, ^2 rathee_lalit2007@rediffmail.com



ABSTRACT

In the present study, a bearing and overhung shaft model capable of describing the theoretical
dynamic behavior resulting from shaft misalignment is developed during run-up motion. A comparison
between experimental and numerical results clearly indicates that the validity of the theoretical model
was successfully verified for the misalignment fault. The results show that the evolution of
the misalignment fault can be monitored and detected during the machine run-up without passing the
critical speed. Extensive experimental results show that the application of wavelet analysis in the
diagnosis of faults inserted in the experimental set-up is feasible and very suitable for non-stationary
signal analysis. Results show that the sensitivity and efficiency of fault diagnosis using the transient
response during start-up is higher than with the steady state response of rotating machinery.

Keywords: FFT, acoustic signal, frequency


INTRODUCTION:

Vibration analysis is widely used in machinery diagnostics. There are many analytical techniques, which have
been fully developed and established over the years, for processing vibration signals to obtain diagnostic
information about progressing bearing faults. Time-frequency analysis is one of these methods. The fault detection
procedure for time-frequency methods is usually based on visual observation of the contour plots; the
propagation of a fault can be monitored by observing changes in the features of the distribution in the contour
plots. In our study the acoustic signal is recorded with the help of a mike and then stored in the computer.
The recorded signal is processed in the Matlab environment. Results suggest that vibration analysis using
acoustic signals is very effective for the early detection of faults and may provide a powerful tool to indicate
the various types of progressing faults in bearings and in predictive maintenance.

FAULT DIAGNOSIS OBJECTIVES:
In this work we analyze the effect of misalignment of the shaft mounted on the bearing on the spectrum of the
acoustic and vibration signals, and develop a method to identify such a defect. The outcome of this work can
be used in predictive maintenance; to proceed further, some background is needed, which is basically signal
processing.

An accelerometer is a device that measures the vibration, or acceleration of motion of a structure. The force
caused by vibration or a change in motion (acceleration) causes the mass to "squeeze" the piezoelectric material
which produces an electrical charge that is proportional to the force exerted upon it. Since the charge is
proportional to the force, and the mass is a constant, then the charge is also proportional to the acceleration.

In our work we have used a technique to diagnose the faults in rotating components (such as bearings) of a
machine. An experimental setup has been developed to acquire the acoustic signal of rotating components.
Misalignment is introduced in the shaft, and the acoustic signal corresponding to the fault is recorded. These
signals are processed using signal processing techniques in the Matlab environment. Depending on the
characteristics of the raw acoustic signal obtained from the experiment, the Fourier transform is used to extract
the useful information. One of the great advantages of using the wavelet transform is that the time information is
not lost. The problem undertaken has practical importance in operation, on-line inspection, failure prediction and
maintenance of rotating components.

ACOUSTIC SIGNAL

Acoustics is the interdisciplinary science that deals with the study of sound, ultrasound and infrasound (all
mechanical waves in gases, liquids, and solids). The science of acoustics spreads across many facets of our
society: music, medicine, architecture, industrial production, warfare and more. Art, craft, science and technology
have provoked one another to advance the whole, as in many other fields of knowledge. As acousticians extended
their studies to frequencies above and below the audible range, it became conventional to identify these frequency
ranges as "ultrasonic" and "infrasonic" respectively, while letting the word "acoustic" refer to the entire frequency
range without limit.

3.1. Fast Fourier Transform
Signal analysts already have at their disposal an impressive arsenal of tools. Perhaps the best
known of these is Fourier analysis, which breaks down a signal into constituent sinusoids of different
frequencies. Another way to think of Fourier analysis is as a mathematical technique for transforming
our view of the signal from time-based to frequency-based. Fourier analysis is extremely useful
because the signal's frequency content is of great importance. However, Fourier analysis has a serious
drawback: in transforming to the frequency domain, time information is lost. When looking at the
Fourier transform of a signal, it is impossible to tell when a particular event took place. If the signal
properties do not change much over time, that is, if it is what is called a stationary signal, this drawback
isn't very important. However, most interesting signals contain numerous non-stationary or transitory
characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics
are often the most important part of the signal, and Fourier analysis is not suited to detecting them.


Fig. 3.1, FFT of Signal
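
As an illustration of how such a spectrum is computed, the Python sketch below builds a synthetic signal containing a 24 Hz rotation tone and its 4th harmonic at 96 Hz (the frequencies discussed in the results section) and takes its FFT. The sampling rate is an assumption for illustration, not the rate used in the experiment.

import numpy as np

fs = 8000                          # sampling rate (Hz), assumed for illustration
t = np.arange(0, 1.0, 1.0 / fs)
# synthetic signal: 24 Hz rotation tone, its 4th harmonic at 96 Hz, plus noise
x = np.sin(2*np.pi*24*t) + 0.5*np.sin(2*np.pi*96*t) + 0.1*np.random.randn(t.size)

X = np.fft.rfft(x)
f = np.fft.rfftfreq(t.size, 1.0 / fs)
energy = np.abs(X)**2              # energy vs. frequency, as in the spectra shown
print("dominant frequency:", f[np.argmax(energy)], "Hz")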


METHODOLOGY
4.1. Design of the system
For acquisition of the acoustic signal, a system has to be developed to record the audio signal in the frequency
range of 20 Hz to 20 kHz. A mike acts as the sensor and is interfaced with the computer. The acoustic signal
is recorded and stored in the computer for the different conditions.
Vibration data signals are recorded on the digital storage oscilloscope with the aid of an accelerometer.
4.2. Processing of the acquired acoustic signal
The acoustic signal is processed in the MATLAB environment in order to improve the signal-to-noise ratio.
Information on the fault, for the bearing having no defect and with increasing misalignment, is extracted from
the signal by plotting its FFT.
4.3. Analysis of the processed signal
The processed signal is analyzed for fault detection in rotating components.




EXPERIMENTAL SET UP

In our project we deal with different conditions, that is, without load and with load over the overhung
portion of the shaft assembly. The arrangement is also tested at constant speed conditions. The bearing
mounted over the motor shaft is our main area of concern, because from here we have to record our
signal. A 200 watt motor drives the bearing arrangement. A mike (Logitech make) is used to record the acoustic
signal generated by the bearing assembly. The complete experimental setup for fault detection is shown in
figures 5.1 and 5.2.

Fig 5.1, Experimental Setup

Fig 5.2, Experimental Setup showing mounted accelerometer

RESULT
After recording the signal in the computer, a low pass equiripple FIR filter is applied up to a frequency of 500 Hz.
The following results have been obtained from the experiments. These results are processed entirely in the Matlab
environment. They are mainly the graphs and scalograms of raw signals and processed signals respectively. The
Fast Fourier Transform (FFT) for the experimental arrangement is shown in the figures below. In the FFT
spectrum, energy vs. frequency is plotted. We can see that the energy at 96 Hz starts increasing as we increase
the misalignment; 96 Hz represents the 4th harmonic of the rotational frequency.
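
A sketch of this processing chain (equiripple low-pass FIR passing up to 500 Hz, then spectrum inspection) using SciPy; the sampling rate and the 700 Hz stopband edge are assumptions, and the random array merely stands in for the recorded signal.

import numpy as np
from scipy.signal import remez, filtfilt

fs = 8000                       # sampling rate (Hz), assumed
# low-pass equiripple FIR passing up to 500 Hz; 700 Hz stopband edge is assumed
taps = remez(101, [0, 500, 700, fs / 2], [1, 0], fs=fs)

x = np.random.randn(fs)         # stand-in for one second of recorded signal
y = filtfilt(taps, [1.0], x)    # zero-phase low-pass filtering
spectrum = np.abs(np.fft.rfft(y))**2   # energy spectrum of the filtered signal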

(a ) (b)
Fig 6.1, fft graph for (a) no load (b) 1 kg load


(c) (d)
Fig 6.2, fft graph for (c) 2.5kg load (d) 4 kg load

6.1. RESULTS FOR THE VIBRATION DATA
The results obtained from the vibration signals are in the form of graphs obtained on the digital
storage oscilloscope, and we have justified results for the conditions of no load, 1 kg load and 2 kg load
over the shaft. These are graphs of voltage versus time, shown below:

Fig. 6.3 Voltage v/s time graph showing distribution of peaks for the no load condition

Fig. 6.4 Voltage v/s time graph showing distribution of peaks for the 4 kg load condition

Observing these graphs obtained from the DSO, we can say that as the load on the shaft increases, the voltage
peaks in the time plot become staggered and there is also a significant rise in the frequency obtained
in each vibration signal.

CONCLUSIONS

In this work we have analyzed the effect of a bearing fault on the spectrum of the acoustic and vibration signals
and developed a method to identify such defects, with the help of FFT diagrams. From the experiments the
following conclusions are drawn.

7.1 It is demonstrated that, although the environment influences the acoustic signal used for condition
monitoring, it does not significantly reduce the extraction of useful diagnostic information. It has
been demonstrated that acoustic condition monitoring can effectively be used for fault detection in
bearing operation.
7.2 It is clear that the wavelet representation of the acoustic signals reveals the faults in the bearing more
precisely.
7.3 The results obtained from the graphs of vibration data are justified for the two conditions, over which
we can also say that the significant rise in the value of frequency is due to the increase in the
vibrations; but still we are not in a position to say that vibration data is better, because more
significant results are available from the acoustic data.
7.4 Vibration monitoring using the acoustic signal has certain advantages over conventional
vibration measuring techniques. Firstly, the sensor does not alter the behavior of the machine, due
to its non-contact nature. Also, time-based information is not lost in the wavelet based method of
diagnosis.




DESIGN OF HIGH Q DIELECTRIC RESONANCE BAND PASS
FILTER BY USING DGS AND MICROSTRIP LINE
Kavita^1, Sushil Kumar^2, Md. Rashid^3

^1,2 AL-Falah School of Engg. & Tech, Faridabad, Dauj, India. ^3 IILM College, Greater Noida, U.P., India.
^1 kavitadagar@rediffmail.com, ^2 ersushil007@gmail.com, ^3 mdrashidmahmood@yahoo.com
ABSTRACT
ABSTRACT
A high performance resonator and the DGS (defected ground structure) are important elements in many
microwave circuits such as filters, amplifiers, couplers and antennas for electronic, wireless and microwave
communication systems. This paper presents the design of a bandpass filter using a combination of a simple
transmission line, cylindrical dielectric resonators and a defected ground structure. Three dielectric resonators of
the same high-permittivity material (FR4 epoxy or duroid), with a diameter of 0.72 mm, are identified as
contributing to the ultra-wideband bandwidth of the filter. These band limited bandpass filters can also be
used in the TDMA technique in wireless communication. This new approach increases the coupling effect as
well as minimizing the insertion loss in the passband. In order to prove that the new approach contributes more
advantages and is viable in the desired application band, the return and insertion losses of the filter are
analyzed. The availability of high-Q tunable filters may also have a significant impact on the utilization of banks
of filters, production cost and delivery schedule in some communication systems. Such systems use multiple
filters that are usually identical with the exception of center frequency and bandwidth. The production cost can be
significantly reduced by building standard filter units that can be easily reconfigured during the production phase
to fit the required frequency plan.

Keywords: Microwave strip lines, DGS, Band pass filter and Dielectric resonator

1. INTRODUCTION
A defected ground structure on the back of the filter improves the harmonic suppression characteristics of a
band pass filter [6]. The extraction method of [2] shows how to design a microstrip high-low pass filter by
combining an arrow-head shaped defected ground structure with multilayer circuit fabrication techniques. DGS
elements have been shown to provide a means of shrinking the size of passive circuits such as low pass filters.
The key is determining the size of a selected DGS shape by correlating its area to the equivalent-circuit
inductance and capacitance required for a particular filter design, as might otherwise be realized by a
conventional microstrip line. In addition to the smaller size, DGS elements deliver an even sharper filter cutoff
than conventional microstrip and stripline technology, and they can be used creatively in multilayer filter
architectures for further savings in circuit real estate.

The dielectric resonator (DR) offers many advantages in increasing the performance of RF and microwave
devices, which makes it ideal for an ultra-wideband dielectric resonator bandpass filter [4] for wireless
applications, with a low design profile and wide bandwidth. Dielectric resonators are mainly designed to
replace resonant cavities in microwave circuits such as filters and oscillators. Like resonant cavities, they
present resonant modes at frequencies determined by their dimensions, with high Q-factors. The advantages of
dielectric resonators are compactness, higher temperature stability and ease of use. The dielectric resonators
are also usually shielded to prevent radiation as well as to maintain the high Q required by filter and oscillator
circuits. As the dielectric constant increases, the Q-factor decreases.

The DR filters are good for mobile and satellite communications. A typical DR filter consists of a number of
dielectric resonators that are mounted in a planar configuration to obtain a good resonant frequency [5]. The
relative dielectric constant of the materials for constructing the DRs in microwave filters is generally chosen to
be higher than that of the base substrate. The primary advantage of using a high dielectric constant is to
miniaturize the filter size. The size of a DR filter is smaller than the dimensions of waveguide filters operating
at the same frequency. Furthermore, DR filters are employed to replace waveguide filters in applications such
as satellite communication systems, where planar filters cannot be used because of their inherently high loss.
In this paper, a novel bandpass filter is presented in which three dielectric resonators excited by a microstrip
line are used to increase the bandwidth of the filter. The idea of using the three dielectric resonators is to
generate additional frequencies which can merge together to produce a wideband device, increase the
transmitted power and reduce the insertion loss. The optimum coupling effect in the filter was obtained from
the matching position of the resonators on the microstrip line.

2. DESIGN METHODOLOGY
The dielectric resonator can increase the Q-factor of a circuit. The size, location and shape of the dielectric
influence the matching of the circuit. In this project, three dielectric resonators were excited with a microstrip
line in order to obtain the optimum coupling effect. The dielectric resonators offer advantages in increasing the
signal transmission performance of RF and microwave devices [3,4]. The matched combination of dielectric
resonators and microwave circuit is capable of generating additional coupling effects that can be merged
together to produce a wideband device, as well as increasing the transmitted power and reducing the insertion
loss. This combination efficiently produces a low design profile. The dielectric constant is a parameter that
reflects the capability of a material to confine a microwave; the higher this parameter, the better the microwave
confinement in the substrate. There is an inversely proportional relation between size and dielectric constant,
so a high dielectric constant is required to reduce the circuit size of a device. A significant miniaturization can
thus be achieved, and high quality filters can be realized. The main difference lies in the fact that the
wavelength in a dielectric material is divided by the square root of the dielectric constant εr, so the guided
wavelength is

λg = λ0 / √εr

where λ0 is the free space wavelength at the resonant frequency. Moreover, unlike resonant cavities, the
reactive power stored during resonance is not strictly confined inside the resonator. The leakage fields from the
resonator can be used for coupling or for adjusting the frequency. The wavelength inside the DR, λg, is thus
inversely proportional to the square root of the dielectric constant. The resonant frequency and radiation
Q-factor can be varied even when the dielectric constant of the material is fixed, because dielectric resonators
offer flexibility in dimensions. The DR is amenable to integration with existing technologies by excitation
using probes, slots, microstrip lines, dielectric image guides or co-planar waveguides and DGS.
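For illustration, the guided-wavelength relation above can be evaluated directly; the permittivity values in this Python sketch are assumed examples (FR-4, a duroid, and a typical DR ceramic), not measurements from the paper.

```python
# Illustrative sketch of the relation quoted above: lambda_g = lambda_0 / sqrt(eps_r).
import math

c0 = 299_792_458.0          # free-space speed of light, m/s

def guided_wavelength(f_hz: float, eps_r: float) -> float:
    """Wavelength inside a dielectric of relative permittivity eps_r."""
    return c0 / (f_hz * math.sqrt(eps_r))

f0 = 10e9                   # the 10 GHz frequency used in the tuned case
for eps_r in (4.4, 10.2, 38.0):   # assumed example permittivities
    print(f"eps_r = {eps_r:5.1f}:  lambda_g = {guided_wavelength(f0, eps_r)*1e3:6.2f} mm")
```

The shrinking wavelength with rising permittivity is exactly the miniaturization argument made in the paragraph above.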











Fig 4, 5, 6: VSWR, group delay and lumped-port S-parameters vs. frequency, without tuning of the different
dielectric resonators.

Fig 7: Lumped-port S-parameters at the 10 GHz cut-off frequency, with tuning of the different dielectric
resonators.



(Traces simulated in Ansoft HFSS: VSWR(LumpPort1), GroupDelay(LumpPort2, LumpPort2) and
GroupDelay(LumpPort2, LumpPort1) versus frequency over 3.00-10.00 GHz, and the S-parameter magnitudes
dB(S(LumpPort1, LumpPort1)), dB(S(LumpPort1, LumpPort2)), dB(S(LumpPort2, LumpPort1)) and
dB(S(LumpPort2, LumpPort2)) versus frequency, over 3.00-10.00 GHz without tuning and over 9.00-11.00 GHz
with tuning.)
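Since the plots above report both VSWR and S-parameter magnitudes, the small Python sketch below shows the textbook conversion between them; the relations are standard, and the sample dB values are assumed for illustration only.

```python
# Standard conversion between return loss in dB, reflection coefficient
# magnitude, and VSWR, as plotted for LumpPort1 above.
def s11_db_to_vswr(s11_db: float) -> float:
    gamma = 10 ** (s11_db / 20.0)        # |S11| from its dB value (s11_db is negative)
    return (1 + gamma) / (1 - gamma)

for s11_db in (-5.0, -10.0, -15.0, -20.0, -25.0):   # assumed sample values
    print(f"S11 = {s11_db:6.1f} dB  ->  VSWR = {s11_db_to_vswr(s11_db):5.2f}")
```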

3. CONCLUSIONS
A bandpass filter was designed to operate from a starting frequency of 3 GHz without tuning of the dielectric
resonators. If the dielectric resonators are tuned, the filter gives a more accurate passband response by utilizing
the DGS structure along with them, with very small ripple in the passband insertion loss, and it is able to
operate over a wide bandwidth up to 11 GHz or more. The structure of the filter is simple for an easy fabrication
process. The measured values agree closely with the simulation results obtained with HFSS.

4. REFERENCES
[1] S. R. Chandler, I. C. Hunter, and J. G. Gardiner, Active varactor tunable bandpass filter, IEEE
Microwave Guided Wave Lett., vol. 3, no. 3, pp. 70-71, Mar. 1993.
[2] Bal S. Virdee, Christos Grassopoulos, Folded microstrip resonator, IEEE MTT-S Int. Microwave
Symp. Dig., vol. 3, pp. 2126-2164, June 2003.
[3] R. J. Cameron, C. M. Kudsia, and R. R. Mansour, Microwave Filters for Communication Systems:
Fundamentals, Design and Applications. New York: Wiley, 2007.
[4] Raafat R. Mansour, High-Q tunable resonator filters, IEEE Microwave Magazine, Canada, 2009.
[5] Mohd. F. Ain et al., Ultra-wideband dielectric resonator bandpass filter, Malaysia, 2010.
[6] S. S. Karthikeyan and R. S. Kshetrimayum, Department of Electronics and Communication Engineering,
Indian Institute of Technology Guwahati, Assam 781039, India, Compact wideband bandpass filter using
open slot split ring resonator and CMRC, Progress In Electromagnetics Research Letters, Vol. 10,
pp. 39-48, 2009.



DESIGN & STATIC ANALYSIS OF REAR AXLE

Lalit Bhardwaj¹, Amit Yadav², Deepak Chhabra³

¹Department of Mechanical Engineering, Shri Baba Mastnath College of Engineering,
Maharshi Dayanand University, Rohtak, Haryana, INDIA
²Department of Mechanical Engineering, University Institute of Engineering & Technology,
Maharshi Dayanand University, Rohtak, Haryana, INDIA
³Department of Mechanical Engineering, NIT Kurukshetra, Haryana, INDIA
³deepaknit10@gmail.com


ABSTRACT
This work considers the static analysis of a rear dead axle with the help of a computer aided engineering (CAE)
tool. Finite element analysis of the rear dead axle is carried out with actual design considerations and loading
conditions. Creo Elements software is used to design the different parts and the assembly of the rear axle.
HyperMesh software is used to make the finite element model of the axle. Real-life domains are continuous, so
the basic concept of FEA is discretization, which reduces the infinite points to finite nodes and elements
(meshing); the result is then interpolated over the whole domain. The RADIOSS Linear solver is used to
perform the static analysis of the dead axle, and HyperView is used for post-processing. Von Mises stress
distributions and displacement contours on the rear axle are plotted at different load conditions from the static
analysis, and the safe load is determined.

1. INTRODUCTION :

An axle is a central shaft for a rotating wheel. The wheel may be fixed to the axle, with bearings or bushings
provided at the mounting points where the axle is supported. The axles maintain the position of the wheels
relative to each other and to the vehicle body. A dead axle does not transmit power; for example, the front axle
in a rear wheel drive vehicle is a dead axle. The suspension system is mounted on the dead axle, so it is also
called a suspension axle. The critical parts of the rear axle are the pipe/tube, seat spring, shock absorber
bracket, pin lateral rod, flange, spindle, flexible hose bracket and trailing arm brackets. This work deals with
the design of a rear dead axle and its finite element modeling for static analysis. Jin Yi-Min [2000]
implemented finite element methods to analyze and evaluate a minivan body structure; the analysis included
static, dynamic, fatigue, crashworthiness, optimization and design sensitivity analysis. Asheber Techane [2007]
conducted dynamic analysis of a locally manufactured bus body structure; the solid model of the structure was
developed in the AutoCAD Classic environment, and the generation of the FE model and the dynamic analysis
were performed using ANSYS. It was concluded that the roof of the bus structure is prone to higher deflection,
and it was recommended that a complete evaluation of the vehicle be carried out to prove the robustness of the
structure in terms of its mechanical properties. Kassahun Mekonnen [2008] used finite element modeling and
analysis for the evaluation and assessment of the responses of a vehicle to different loads. A vehicle and its
structural components are subjected to loads which cause stresses, strains, deflections, vibration and noise in
the components. To achieve a quality vehicle, i.e. one having longer fatigue life, reduced weight, reduced cost,
and so on, it becomes necessary to use materials of appropriate strength and stiffness with the most appropriate
geometry (form). Devender Kumar & Amit Kumar [2008] carried out finite element analysis of a Rear Engine
Semi Low Floor (RESLF) city bus body structure with actual design considerations and loading conditions.
The CAD model of the bus body structure was generated and exported to HyperMesh for preprocessing; the FE
model was solved using RADIOSS Linear, and the von Mises stress and displacement contours were generated.
It was observed that the stresses and displacements were within the prescribed limits and the structure could
withstand the load under the given conditions. Mehmet Firat [2011] presented a simulation of the bending
fatigue test of a rear axle assembly using an FE-integrated fatigue analysis methodology, where linear elastic
FE stress analyses are used in the calculation of the local fatigue loading.

As can be seen, all the cited research focuses on static, dynamic, fatigue, crash and optimization analysis of
vehicles and their different components. In this work, the CAD model of a rear axle is made in Creo Elements
software and the static analysis of the rear dead axle is performed with the help of CAE software (HyperMesh).








Fig1. Parts of Rear dead axle
2. DESIGN OF DEAD AXLE USING CAD TOOL :
All the dimensions and geometric properties were retrieved with the help of reverse engineering and
measurement of the axle of a car. Creo Elements is used to design the different parts and the assembly of the
rear axle. The parts and the assembly are made with the solid and sheet metal modules. To create the parts in
the Creo software, the following commands are used: Extrude, Revolve, Blend, Sweep, Variable section sweep,
Chamfer, Bend, Flange wall, Punch and Round. The CAD model of the assembly of the rear axle is shown in
Fig 2.

Fig2. Assembly of axle with Creo Elements

3. MESHING OF DEAD AXLE USING CAE TOOL :

Spindle
TUBE
Flange

Flexible hose bracket
Seat spring

Bracket trailing
arms

Shock absorber bracket

Pin lateral rod


The CAD model in IGES format is imported into HyperMesh. Mid-surface extraction is carried out, and
geometric clean-up is performed to get a better mesh quality. Meshing of the mid-surfaces of all the
different parts is then carried out. The spindle and the pin lateral rod are meshed with tetra elements (3D
meshing) and the remaining parts are meshed with shell elements (2D meshing).


Fig3.Meshing of right hand side of rear axle
Fig4.Meshing of left hand side of rear axle
Various quality checks like skew, aspect ratio, Jacobian and warpage are performed to measure the quality
index. Penetration and intersection tests are also performed, to check for overlap of the material thickness of
the shell elements and for elements passing completely through one another, respectively.
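For illustration, hypothetical simplified versions of two of these checks for a single triangular shell element are sketched below in Python; production preprocessors such as HyperMesh use richer, solver-specific definitions, so this is only a sketch of the idea.

```python
# Simplified element quality checks for one triangular shell element.
import math

def edge_lengths(p1, p2, p3):
    d = lambda a, b: math.dist(a, b)
    return d(p1, p2), d(p2, p3), d(p3, p1)

def aspect_ratio(p1, p2, p3) -> float:
    """Longest edge over shortest edge; 1.0 is the ideal equilateral triangle."""
    e = edge_lengths(p1, p2, p3)
    return max(e) / min(e)

def skew_deg(p1, p2, p3) -> float:
    """Largest deviation of an interior angle from the ideal 60 degrees."""
    a, b, c = edge_lengths(p1, p2, p3)
    angles = []
    for x, y, z in ((a, b, c), (b, c, a), (c, a, b)):
        # law of cosines: angle opposite edge x
        angles.append(math.degrees(math.acos((y*y + z*z - x*x) / (2*y*z))))
    return max(abs(ang - 60.0) for ang in angles)

tri = [(0.0, 0.0), (10.0, 0.0), (2.0, 3.0)]   # a deliberately poor element
print("aspect ratio:", round(aspect_ratio(*tri), 2), " skew:", round(skew_deg(*tri), 1), "deg")
```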
4. LOADING CONDITIONS :

Loads are created on the geometry as well as on FE entities. A load can be applied to the geometry or directly
to the mesh, and may be applied to a point, a line or a surface.

4.1 Curb Weight:
Curb weight (US English) or kerb weight is the total weight of a vehicle with standard equipment, all necessary
operating consumables (e.g. motor oil and coolant) and a full tank of fuel, while not loaded with either
passengers or cargo.

4.2 Gross Weight:
A gross vehicle weight rating (GVWR) is the maximum allowable total weight of a road vehicle or trailer when
loaded, including the weight of the vehicle itself plus fuel, passengers, cargo, and trailer tongue weight.

Table 1. Load conditions on the car
Curb weight: 6.5 kN
Gross weight: 10 kN

Table 2. Load conditions on the rear axle (curb weight)
Weight on dead axle due to curb weight: 3 kN
Weight on front axle due to curb weight: 3.5 kN

Table 3. Load conditions on the rear axle (gross weight)
Weight on dead axle due to gross weight: 5 kN
Weight on front axle due to gross weight: 5 kN

The load is divided between the seat spring and the shock absorber bracket in a 6:4 ratio, respectively.
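The load bookkeeping of Tables 1-3 and the 6:4 split can be summarized in a few lines of Python; the symmetric left/right division between the two sides of the axle is our assumption, not a value stated in the paper.

```python
# Rear-axle load shares from Tables 1-3 and the stated 6:4 split.
curb_on_rear, gross_on_rear = 3.0, 5.0          # kN on the dead axle
for label, rear_load in (("curb", curb_on_rear), ("gross", gross_on_rear)):
    seat_spring = 0.6 * rear_load               # 6 parts of the 6:4 split
    shock_bracket = 0.4 * rear_load             # 4 parts of the 6:4 split
    # assumed: each side of the axle carries half of its share
    print(f"{label}: seat spring {seat_spring/2:.2f} kN/side, "
          f"shock bracket {shock_bracket/2:.2f} kN/side")
```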
5. RESULTS & DISCUSSION :
5.1 Static analysis of the dead axle at different loads
With the help of the static analysis, von Mises stress distributions on the rear axle are plotted at different load
conditions and the safe load is determined.

Case 1. When the load on the axle is considered as 3 kN, the von Mises stress plot is shown as:

Fig5. Von Mises stress at load 3 kN

When the load on the axle is considered as 3 kN, the displacement plot is shown as:

Fig6. Displacement at load 3 kN





Case 2. When the load on the axle is considered as 30 kN, the von Mises stress plot is shown as:

Fig7. Von Mises stress at load 30 kN
When the load on the axle is considered as 30 kN, the displacement plot is shown as:

Fig8. Displacement at load 30 kN






Case 3. When the load on the axle is considered as 60 kN, the von Mises stress plot is shown as:

Fig9. Von Mises stress at load 60 kN
When the load on the axle is considered as 60 kN, the displacement plot is shown as:

Fig10. Displacement at load 60 kN





Case 4. When the load on the axle is considered as 120 kN, the von Mises stress plot is shown as:

Fig11. Von Mises stress at load 120 kN
When the load on the axle is considered as 120 kN, the displacement plot is shown as:

Fig12. Displacement at load 120 kN






Table 4. Load vs. stress
Minimum and maximum stress values at the different load conditions are shown below.

Load                         3 kN      30 kN     60 kN     120 kN
Minimum stress (kN/m²)       0.00      0.00      0.00      0.00
Maximum stress (kN/m²)       38.91     389.1     7782      15560

Table 5. Load vs. displacement
Minimum and maximum displacement values at the different load conditions are shown below.

Load                         3 kN      30 kN     60 kN     120 kN
Minimum displacement (mm)    0.00      0.00      0.00      0.00
Maximum displacement (mm)    0.02959   0.2959    5.918     11.84

6. CONCLUSIONS :
The CAD model of the rear axle was successfully made in the Creo Elements software. Finite element analysis
of the rear dead axle was carried out with actual design considerations and loading conditions. The finite
element model of the axle was successfully made in HyperMesh software, and the RADIOSS Linear solver
performed the static analysis of the dead axle effectively. The curb weight of the car is 6.5 kN and the gross
weight is 10 kN, so the load conditions on the rear axle are designed accordingly. The static load on the rear
axle was varied from 3 kN to 120 kN, and the von Mises stress and displacement contours were plotted at each
load. The maximum and minimum values of stress at each load are shown in Table 4. The elastic failure of the
rear axle occurred at 60 kN and plastic failure happened at 120 kN. So, by considering a factor of safety of 3,
the safe design load for the rear axle is 20 kN under static conditions.
REFERENCES
[1] Asheber Techane, Dynamics and Vibration Analysis of Bus Body Structures, M.Sc. Thesis, Addis
Ababa University, Addis Ababa, 2007.
[2] Devender Kumar & Amit Kumar, Finite element analysis of a bus body structure using CAE tools,
HTC 2008.
[3] Jin Yi-Min, Analysis and Evaluation of Minivan Body Structure, Proceedings of 2nd MSC
Worldwide Automotive Conference, MSC, 2000.
[4] Kassahun Mekonnen, Static and Dynamic Analysis of a Commercial Vehicle with Van Body, M.Sc.
Thesis, Addis Ababa University, Addis Ababa, 2008.
[5] Kim, H. S., Hwang, Y. S., Yoon, H. S., Dynamic Stress Analysis of a Bus System, Proceedings of
2nd MSC Worldwide Automotive Conference, MSC, 2000.
[6] M. Fermer, McInally, G., Sandin, G., Fatigue Life Analysis of Volvo S80 Bi-fuel, Proceedings of 1st
MSC Worldwide Automotive Conference, MSC, 1999.
[7] Mehmet Firat, A computer simulation of four-point bending fatigue of a rear axle assembly,
Engineering Failure Analysis, 2011.
[8] M. Khalid and J. L. Smith, Axle torque distribution in 4WD tractors, Journal of Terramechanics,
Volume 18, Issue 3, September 1981, Pages 157-167.
[9] Mikell P. Groover, Emory W. Zimmers, CAD/CAM, Pearson Prentice Hall.
[10] Nitin S. Gokhale, Sanjay S. Deshpande, Practical Finite Element Analysis, Finite to Infinite.
[11] Robert L. Norton, Machine Design, Pearson Prentice Hall.
[12] Shen Rong Wu and James Cheng, Advanced development of explicit FEA in automotive
applications, Computer Methods in Applied Mechanics and Engineering, Volume 149, Issues 1-4,
October 1997, Pages 189-199.
[13] Shiang-Woei Chyuan, Finite element simulation of a twin-cam 16-valve cylinder structure, Finite
Elements in Analysis and Design, Volume 35, Issue 3, 1 June 2000, Pages 199-212.
[14] Tirupathi R. Chandrupatla, Ashok D. Belegundu, Finite Elements in Engineering, Pearson Prentice
Hall.
[15] William H. Crouse, Automobile Engineering.

IMPROVING THE DURABILITY OF THE E.O.T. CRANE
STRUCTURE BY FINITE ELEMENT ANALYSIS, AND OPTIMIZING
THE HOOK MATERIAL TO IMPROVE ITS SOLIDITY

Pragnesh D. Panchal¹, S.M. Sheth²

¹Birla Vishwakarma Mahavidyalaya Engineering College, Vallabh Vidhyanagar-388120, Gujarat, India
²G.H. Patel College of Engineering & Technology, Vallabh Vidhyanagar-388120, Gujarat, India
¹pdp_111@yahoo.co.in


ABSTRACT :
The modern technological era cannot be imagined without various material handling equipment. Cranes are
among the material handling equipment which find wide applications in different fields of engineering. Due to
the incredible all-around economic development, the operation rate of cranes has increased astonishingly year
by year, and many cranes in India are often used beyond their capacity, so analysis of the main structure is very
important and essential. The purpose of this paper is to analyze the stress and strain conditions in the power
structure of an overhead crane, presenting a fast computer-aided solving method for complex statically
indeterminate structures. The analysis of the stress and strain state of the power structure of the overhead
crane bridge, for increasing its toughness, is made using NX NASTRAN. The research performed allows the
evaluation of the stress state, pointing out the critical areas and the measures which are imposed in order to
increase the toughness of the power structure of the overhead crane. The results obtained allowed us to make a
study of the dimensional optimization of the power structure in order to design the crane hook. Thus, material
use could be reduced without exceeding the permissible limits.
Keywords: model, analysis, power structure, stress-strain, crane hook.

1. INTRODUCTION :

Cranes are among the material handling equipment which find wide applications in different fields of
engineering. Cranes are industrial machines that are mainly used for material movement in construction sites,
production halls, assembly lines, storage areas, power stations and similar places. Their design features vary
broadly according to their major operational specifications, such as: type of motion of the crane structure,
weight and type of the load, location of the crane, geometric features, operating regime and environmental
conditions. However, an appraisal of the available literature reveals that routine design of cranes is highly
saturated and standardized in various industrial companies and organizations, independent of the crane type.
Consideration of the available technology, which is mainly based on accumulated previous experience, is
important for better performance, higher safety and more reliable designs. It is well known that the generic
features of crane components are similar for various different types of cranes. Since the crane design
procedures are highly standardized around these components, the main effort and time spent in crane design
projects is mostly on interpretation and implementation of the available design standards. These offer design
methods, practical approaches and formulae that are based on previous design experience and extensively
accepted design procedures. It is believed that computer-automated access to these standards, with pre-loaded
interpretation and guidance rules, increases the speed and reliability of the design procedures and increases the
efficiency of the crane designers.

Material handling equipment has traditionally been designed using standards with a factor of safety included
in the design. This can lead to over-design of the components. This paper presents an optimized model of the
crane hook, determined after the analysis of the whole overhead crane, that is also safe with respect to the
available standards as well as easy to manufacture.
The structural analysis was carried out using NX NASTRAN for a 100 T lifting beam served by an Electric
Overhead Travelling (EOT) crane of 100 T capacity lifting loads in tandem.

The analysis consists of three major components of the crane:
Main longitudinal girder,
Cross girder, and
Crane (ramshorn) hook assembly.


The analysis for each of the above three components has been carried out for the loads specified by the relevant
IS code. The analysis also involved redesign of the structure wherever needed to meet the requirements on
stresses and displacements. Online change in design is an advantage not otherwise available to this particular
industry. The designers were also required to reduce weight at locations where material saving was possible.

1.1 FINITE ELEMENT MODELING OF THE MAIN STRUCTURE :

The analysis of a complex structure such as an overhead travelling crane with a lifting capacity of 100 KN, a
lifting height of 28 m and a span of 22 m is a very difficult problem. The difficulties arise from the geometrical
configuration of the power structure and its dimensions. Under these conditions, in order to perform a complete
analysis which is able to point out all the aspects regarding the stress-strain state distribution in the structure,
the use of a carefully elaborated calculation model is required, able to eliminate the approximations which
appear in the elaboration of the geometric model and to allow the use of finite elements suitable in type, number
and, implicitly, size.

A finite element analysis of the stress and strain state of such a power structure can be performed using finite
elements of beam type with rigid nodes. Such an approach allows a general study of the structure's behavior,
but without making evident the phenomena of stress concentration or a detailed study of the stress-strain
distribution. Taking into account the constructive requirements and the solutions chosen for the design of
overhead travelling cranes, the use of shell-type finite elements according to the theory with moments was
required.

The model was prepared in the Solid Edge ST software and the assembly was extracted into NX NASTRAN
for the analysis. After completing the analysis of all the parts of the main longitudinal girder, cross girder and
hook assembly, the maximum stress was observed on the crane hook, and the crane hook was therefore taken
up for further analysis.

The crane hook analysis was performed for three different materials and three different sections, for the
analysis and optimization of the material.

2. RESULTS :
The analysis calculated the stresses and displacement results which occur in the main longitudinal girder and
the cross girder. These results show that the resulting stresses are well under the permissible stress limits. So
the top plate thickness could be reduced from 22 mm to 20 mm, and the side plate thickness could also be
reduced from 12 mm to 10 mm. The analysis thus shows that material saving was possible, as shown in
figure 1.



Figure 1: Box-Section for main longitudinal girder



Figure 2: Displacement result of main longitudinal girder


Figure 3: Stress result of main longitudinal girder

But the maximum stress has been observed on the crane hook. So, the analysis of the hook is performed with
three different materials.



Figure 4: Displacement result of ramshorn hook


Figure 5: Stress result of ramshorn hook

After this, the ramshorn hook analysis is performed for three different materials: En-22, Steel-20 and forged
steel. The result of the trapezoidal section with the En-22 material is presented in figure 4 and figure 5.

According to figures 4 and 5:

Von Mises stress is 87.66 N/mm²,
Maximum shear stress is 45.66 N/mm², and
Total deformation generated on the hook is 0.3569 mm.
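As a hedged analytical companion to the FE results (not the method used in the paper), the classical Winkler curved-beam formula from design data books gives the inner and outer fibre stresses of a trapezoidal hook section. Every dimension in the Python sketch below is an assumed placeholder, since the hook geometry is not listed in the paper.

```python
# Winkler curved-beam stress for a trapezoidal hook section (assumed geometry).
import math

def trapezoid_curved_beam_stress(P, ri, ro, bi, bo):
    """Inner/outer fibre stress of a curved bar with trapezoidal section.

    P  : hook load (N), acting through the centre of curvature
    ri : inner radius, ro : outer radius (mm)
    bi : section width at the inner radius, bo at the outer radius (mm)
    """
    h = ro - ri
    A = 0.5 * (bi + bo) * h                                  # section area
    r_c = ri + h * (bi + 2 * bo) / (3 * (bi + bo))           # centroid radius
    dA_over_r = ((bi * ro - bo * ri) / h) * math.log(ro / ri) - (bi - bo)
    r_n = A / dA_over_r                                      # neutral-axis radius
    e = r_c - r_n
    M = P * r_c                                              # moment about the centroid
    sigma_in = M * (r_n - ri) / (A * e * ri) + P / A         # bending + direct stress
    sigma_out = -M * (ro - r_n) / (A * e * ro) + P / A
    return sigma_in, sigma_out

# Placeholder dimensions, chosen only to exercise the formula.
s_in, s_out = trapezoid_curved_beam_stress(P=100e3, ri=60, ro=160, bi=90, bo=40)
print(f"inner fibre: {s_in:7.1f} N/mm^2, outer fibre: {s_out:7.1f} N/mm^2")
```

With these placeholder numbers the inner fibre carries the highest tension, which is consistent with preferring a trapezoidal section that is wider at the inner radius.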

3. CONCLUSIONS

After completing all the exercises with numerical and analytical work, the following results are obtained.

Table 1: Analysis of the ramshorn hook with three different materials.

                            Steel-20   Forged Steel   EN-22
Von Mises stress (N/mm²)    125.15     159.47         87.66
Shear stress (N/mm²)        60.745     81.93          45.66
Displacement (mm)           0.27722    0.5803         0.3569


Forged steel is used in the current application, and the hook is manufactured from the trapezoidal section. For
reducing the stress generated in the ramshorn hook, the analysis was performed with three different materials.

Of the three materials listed in table 1, the stresses and deflection are lower in EN-22 than in forged steel and
Steel-20.

The crane hook was then checked with three different sections. From the study of the different sections, the
trapezoidal section is better than the other two, round and rectangular, sections.

According to table 1, the EN-22 material is the best of the three materials, and the trapezoidal section is better
than the round and rectangular sections.


REFERENCES :

[1] Camellia Bretotean Pinca, Gelu Ovidiu Tirian, The analysis of the stresses and strains state of the strength
structure of a rolling bridge for increasing its solidity, The 2nd WSEAS International Conference on
Engineering Mechanics, Structures and Engineering Geology.
[2] Henry C. Huang and Lee Marsh, Slack rope analysis for moving crane system, 13th World Conference on
Earthquake Engineering, Vancouver, B.C., Canada, August 1-6, 2004, Paper No. 3190.
[3] Dilip K. Mahanty, Satish Iyer, Vikas Manohar, Dinesh Chaterjee, Design evaluation of the 375 T Electric
Overhead Travelling crane.
[4] Takuma Nishimura, Takao Muromaki, Kazuyuki Hanahara, Yukio Tada, Shigeyuki Kuroda, and Tadahisa
Fukui, Damage factor estimation of crane-hook (a database approach with image, knowledge and
simulation), 4th International Workshop on Reliable Engineering Computing (REC 2010).
[5] Design Data, PSG College of Technology, Coimbatore.
[6] NX Nastran theory manual.






NEW OUTLOOK OF DESIGN PROCESS WITH APPLICATION OF
3D SOFTWARE PACKAGE
Manish Saini
Assistant Professor, Sobhasariya Engineering College,
Sikar-121004 (INDIA)

sainimech08@gmail.com


ABSTRACT
The present design scenario requires the implementation of CAD/CAM/CAE tools (Pro-E, UG, CATIA). The
entire design process, from idea to prototype, is being given a new outlook with the introduction of new 3D
design software, for example PRO-E, CATIA and UNIGRAPHICS, available with a large number of
specialized modules. This paper presents the new basic features of the Pro-E program package.


1. INTRODUCTION
CAD (computer aided design) can be defined as the use of computers to aid the design process. CAD is
essentially based on the powerful technique of interactive computer graphics, which is a practical tool for the
creation and modification of pictures on a display device with the help of a computer. CAM is a technology
provided by computers which play a direct or indirect role in the management and control of the manufacturing
of a given product. CAE (computer aided engineering) is a technology which covers the use of computer
systems for the analysis of CAD geometry, allowing the model to simulate the functioning of the product. CAE
activities entail a methodological approach to the use of information technologies in simulating the research
already at the stage of product development. The simulation is performed with the aim of early discovery of
possible errors and optimization of the procedure. There are stages which must be followed in due order for a
simulation to be done: it is necessary to create the model of the system or a part of it, then carry out the
simulation of the modeled system or part of the system, and, following that, perform an analysis of the results
obtained from the simulation. The most frequently used method of computer analysis of the functioning of
various constructions is certainly the Finite Element Method. FEM is used for the determination of tension
states, deformations, heat transfer, fluid flow, acoustic field determination, etc. Each result achieved by this
approach makes possible the prolongation of the exploitation lifespan of the construction and the increase of
its reliability. The numeric program enables the calculation of the tension for all kinds of finite elements and
global nodes following the Huber-Hencky-Mises hypothesis. The load, tension, deformation, and kinetic and
potential energy distributions enable very efficient analysis of the state and diagnostics of the hardness of the
designed or produced constructions. CAE offers analysis and testing of the static, dynamic and thermal
functioning of the designed part, as well as structure optimization in accordance with the goal function. The
engineering analysis within an integrated CAD/CAM system requires feedback about the quality, all with the
aim of advancing the construction and the technology.

2. CAD/CAM/CAE SOFTWARE

The intensive development of personal computers, as well as of various available hardware supplements, has
made programs based on PC computers take primacy over the expensive programs in recent years.
Pro-Engineer is listed among the top leading integrated CAD/CAM/CAE program packages. The Pro-Engineer
program also covers other domains related to production processes, i.e. monitoring of the production cycle
within a company, besides the CAD/CAM field. This software has achieved great success in the aviation and
automotive industries, and it reduces the need for making physical models of prototypes, which shortens the
production cycle significantly. With its version for the PC/Windows medium, Pro-Engineer becomes even
more accessible to the widest circle of users.

The basic feature of this program is the openness of the system. The open architecture supports the search for
solutions: from the constructor's idea, through the 3D design process, completion of technical documents and
drawings, to analysis by the finite element method and procedures of code generation for NC processing.

Overview of CAD/CAM/CAE software

Field of Application   Software                                          Integration System
CAD - 2D drawing       CADAM, AutoCAD, MicroCADAM & VersaCAD             Pro-Engineer
CAD Modelling          Solid Edge, SolidWorks, Solid Designer,           CATIA, I-DEAS
                       Mechanical Desktop
CAM                    BravoNCG, VERICUT, DUCT, Camand,                  I/EMS, EUCLID-IS
                       MasterCAM, PowerMILL

Also, the program is open to other CAD/CAM tools through many direct translators and neutral format
records, such as DXF, IGES, STEP, SET, etc. Pro-Engineer is a program package for parametric variational
modeling which enables the designer's idea to be presented by adding parameters, or dimensions, which drive
the creation of models and the alterations made to them. The parameterization adds intelligence to the part,
presenting and maintaining the designer's idea with the help of the definition of interdependence among
elements, dimensions and parameters of models. One of the most important concepts in Pro-Engineer is the
data organization within the specification tree, the characteristic of which is that it contains all the information
related to the model and presents a scheme which explains the manner in which the model has been created.
All the mentioned products of the founder company Dassault Aviation, together with IBM technical support
and implementation services, have nowadays been classified under one joint group described as PLM (Product
Life Cycle Management), an expression which entails monitoring and support of the complete cycle of origin
and product development, and which consists of three primary groups:

CATIA - CAD/CAM/CAE system
ENOVIA/SMARTEAM - VPM/PDM system
DELMIA - system for production process simulation

The basic custom module sets which may be found in the Pro-E program are machine element designing,
modeling and styling, product synthesis, equipment and system designing, analysis, NC production
infrastructure, etc. The philosophy of this program is based on the concept of the integration of digital products
into product development over the course of the lifecycle. One of the basic prerequisites for successful
modeling with the implementation of feature-based modeling is mastering the media for making sketches. The
greatest number of basic technical elements in the Pro-E program are developed on the basis of sketches.
There are two main characteristics related to the development of a sketched profile: the use of constraints (e.g.
parallelism) and the geometric configuration of profiles. The natural course of the design will dictate the
necessity for setting additional constraints and inserting additional possibilities for model control. We can
revert to any stage of model development and set the constraints, considering that the program is flexible. The
set of tool palettes for sketching makes possible the creation of 2D elements, which are used for creating 3D
elements. Most 3D solid features are made based on a previously made profile, which is a sketch. The base
which enables the beginning of 3D model creation is the sketcher (in which a two-dimensional base of the part
is given). This part of the program constitutes the core of the Pro-E program and can be found in each module.
The idea of the designer is materialized in such a manner whereby parameters and constraints are given over
the geometry.
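A toy illustration of this parametric idea in plain Python (not Pro-E relation syntax): dependent dimensions are stored as rules, so changing one driving parameter regenerates the rest of the model, as the text above describes. The class, parameter names and relations are all invented for illustration.

```python
# Toy parametric model: relations are captured once and re-evaluated on
# every regeneration, mimicking feature-based parametric modeling.
class ParametricPlate:
    def __init__(self, length: float):
        self.params = {"length": length}
        self.relations = {
            "width":  lambda p: 0.5 * p["length"],   # width follows length
            "hole_d": lambda p: 0.1 * p["length"],   # hole diameter follows length
        }
        self.regenerate()

    def regenerate(self):
        for name, rule in self.relations.items():
            self.params[name] = rule(self.params)

    def set_driving(self, name: str, value: float):
        self.params[name] = value
        self.regenerate()     # the change propagates through all relations

plate = ParametricPlate(length=100.0)
plate.set_driving("length", 140.0)
print(plate.params)           # width and hole_d follow the new length
```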

The associative nature of the modules for making technical drawings enables automatic updating of drawings
based on changes to the 3D models of parts and assemblies. Full associativity entails a common database of
parts and assemblies for all processes, from design to production. It is thus possible to apply the principle of
designing through production, which entails that changes to the parts resulting from the needs and constraints
of the production process are reflected automatically on the 3D model and the complete documents (and vice
versa).

Engineers nowadays use the methodology of integrated development, supported by computer technologies and
simulation, in the development of new products.

Software packages make possible part, assembly and product modeling; the models are then analyzed and
tested in various virtual ways before they are made and put into production. Although the research is complex,
most development problems are resolved in the earliest stages and the risk of errors is reduced. The accent is
on the product and its quality, with a reduction of the time required for mastering the part and of the price. In
the modeling process, care is taken of the materials which will be used to manufacture the parts and of their
interrelation. Thus, assemblies are obtained from parts which have been modeled in detail and precisely
dimensioned, so as to analyze whether the new solution is feasible. Once the complete digital 3D model has
been made, the next step is to make its photo-realistic visualization so as to analyze and check the appearance
of the new solution in the actual surroundings. If it is determined that the solution does not satisfy in some
aspect, corrections are made to the digital 3D model until a satisfactory solution is obtained.

The current CAD programs have the possibility of product visualization. The development process is speeded
up significantly using software tools for designing and virtual product development. A large number of
development program packages of a general character, or specialized applications intended for specific
products, are used for the development of virtual models. For the development of a virtual prototype it is
necessary that the development medium provides visualization, i.e. the possibility of realistic presentation of
the geometry of the part.

3. CONCLUSIONS

Part design with CAD software packages gives results which are reflected in the fast generation of models of
elements and the related workshop drawings, as well as in fast changes to the models thanks to the
implementation of parametric modeling, where all changes to the elements are automatically propagated to the
assemblies and all other elements derived from them. All required documents are unified by the generation of
technical drawings from the existing 3D models. The complete visualization which is obtained from CAD
software packages facilitates and advances the design process significantly. The existing hardware and
software support enables the highest level of communication of the designer with a 3D object.

Pro-E represents a standard in design in the automotive industry. A large number of specialized modules cover
the whole design process, from design via calculation to the design and manufacturing of tools for numeric
machines.


REFERENCES

[1] CAD/CAM by P. N. Rao.
[2] Pro-Engineer help files & book by CAD Centre.
[3] Finite Element Method by Timoshenko.









ANALYSIS OF ELECTRIC OVERHEAD TRAVELLING CRANES
MAIN GIRDER USING FINITE ELEMENT METHOD

Pragnesh D. Panchal¹, S.M. Sheth²

¹Birla Vishwakarma Mahavidyalaya Engineering College, Vallabh Vidhyanagar, Gujarat, India
²G.H. Patel College of Engg. & Technology, Vallabh Vidhyanagar-388120, Gujarat, India
¹pdp_111@yahoo.co.in




ABSTRACT:

The main girder is the framework to which the various parts of the crane are mounted. This paper consists of an
engineering structural analysis of the main girder, subjected to loading under static conditions, when the crane
is at rest on the rail and lifting the load. The main girder must be strong enough to accept the weight of the
carried load and also somewhat flexible in order to sustain the shock and tension caused by lifting the load.
Hence one of the most important steps in the redesign of the main girder is a stress analysis. Stress analysis
using the finite element method can be used to locate areas of high stress. In this study the stress analysis is
accomplished with the commercial finite element package NX NASTRAN.

1.INTRODUCTION :

Cranes are among the material handling equipment which find wide applications in different fields of
engineering. Their design features vary broadly according to their major operational specifications, such as:
type of motion of the crane structure, weight and type of the load, location of the crane, geometric features,
operating regime and environmental conditions. However, an appraisal of the available literature reveals that
routine design of cranes is highly saturated and standardized in various industrial companies and organizations,
independent of the crane type. Consideration of the available technology, which is mainly based on
accumulated previous experience, is important for better performance, higher safety and more reliable designs.
It is well known that the generic features of crane components are similar for various different types of cranes.
Since the crane design procedures are highly standardized around these components, the main effort and time
spent in crane design projects is mostly on interpretation and implementation of the available design standards.
These offer design methods, practical approaches and formulae that are based on previous design experience
and extensively accepted design procedures. It is believed that computer-automated access to these standards,
with pre-loaded interpretation and guidance rules, increases the speed and reliability of the design procedures
and increases the efficiency of the crane designers.

This paper presents an optimized model of the main girder of the crane, determined after the analysis of the
whole overhead crane, that is also safe with respect to the available standards as well as easy to manufacture.
The structural analysis was carried out using NASTRAN for a 100 T lifting beam served by an Electric
Overhead Travelling (EOT) crane of 100 T capacity.

The analysis also involved redesign of the structure wherever needed to meet the requirements on stresses and
displacements. Online change in design is an advantage not otherwise available to this particular industry. The
designers were also required to reduce weight at locations where material saving was possible.

2. FINITE ELEMENT MODELING OF THE MAIN LONGITUDINAL GIRDER :

The analysis of a complex structure such as an overhead travelling crane with a lifting capacity of 100 KN, a
lifting height of 28 m and a span of 22 m is a very difficult problem. The difficulties arise from the geometrical
configuration of the structure and its dimensions. Under these conditions, in order to perform a complete
analysis which is able to point out all the aspects regarding the stress-strain state distribution in the structure,
the use of a carefully elaborated calculation model is required, able to eliminate the approximations which
appear in the elaboration of the geometric model and to allow the use of finite elements suitable in type,
number and, implicitly, size.

The model was prepared in the Solid Edge ST software and the assembly was extracted into NX NASTRAN
for the analysis. After completing the analysis of the main longitudinal girder, the maximum stress was
observed at the middle of the main girder of the crane, and the main girder was therefore taken up for further
analysis.

3. MESHING :

Meshing is an integral part of the computer-aided engineering (CAE) analysis process. The mesh influences the
accuracy, convergence and speed of the solution, and hence the time it takes to get results from a CAE solution.
The meshing tools of the NASTRAN software therefore provide a smart size option to control the size of the
elements and nodes. The meshed model is shown in figure 1, and table 1 shows the result of this meshed
model.



Figure1.Meshed model of the main longitudinal girder





Table 1: Result of meshed model of the main longitudinal girder




4. RESULTS :

The analysis calculated the stresses and displacement results which occur in the main longitudinal girder. These
results show that the resulting stresses are well under the permissible stress limits. So the top plate thickness
could be reduced from 22 mm to 20 mm, and the side plate thickness could also be reduced from 12 mm to
10 mm. The analysis thus shows that material saving was possible, as shown in figure 2.












Figure 2: Box-Section for main longitudinal girder




Figure 3: Displacement result of main longitudinal girder



Figure 4: Stress result of main longitudinal girder
The location of maximum stress is at the middle of the main girder, with a magnitude of 43.64 MPa. The
displacement of the main girder and the location of maximum displacement are shown in Figure 3. The
magnitude of the maximum displacement is 1.449 mm.
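As a rough, hedged order-of-magnitude cross-check (not the NASTRAN model itself), the girder can be idealized as a simply supported box beam with the load at mid-span. All section dimensions in the Python sketch below are assumed placeholders, since the paper lists only the plate thicknesses.

```python
# Mid-span bending stress and deflection of an idealized box girder.
def box_section_I(B, H, b, h):
    """Second moment of area of a hollow box (outer B x H, inner b x h), mm^4."""
    return (B * H**3 - b * h**3) / 12.0

P = 100e3          # lifting capacity quoted in the paper, N (100 KN)
L = 22e3           # span quoted in the paper, mm (22 m)
E = 210e3          # steel modulus, N/mm^2
I = box_section_I(B=500.0, H=1200.0, b=480.0, h=1160.0)   # assumed plate sizes

M = P * L / 4.0                    # mid-span moment for a central point load
sigma = M * (1200.0 / 2.0) / I     # bending stress at the extreme fibre
delta = P * L**3 / (48.0 * E * I)  # mid-span deflection
print(f"sigma = {sigma:.1f} N/mm^2, delta = {delta:.2f} mm")
```

With these assumed dimensions the fibre stress comes out in the tens of N/mm², the same order as the 43.64 MPa reported; the real structure is stiffer, for example because the load is shared between girders.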

5. CONCLUSION

Finite element analysis of the Electric Overhead Travelling Crane's main girder shows that the critical stress
occurred at the middle of the main girder. The results show that the resulting stresses are well under the
permissible stress limits.


REFERENCES :

[1] Camellia Bretotean Pinca, Gelu Ovidiu Tirian, The analysis of the stresses and strains state of the strength
structure of a rolling bridge for increasing its solidity, The 2nd WSEAS International Conference on
Engineering Mechanics, Structures and Engineering Geology.
[2] Dilip K. Mahanty, Satish Iyer, Vikas Manohar, Dinesh Chaterjee, Design evaluation of the 375 T Electric
Overhead Travelling crane.
[3] Takuma Nishimura, Takao Muromaki, Kazuyuki Hanahara, Yukio Tada, Shigeyuki Kuroda, and Tadahisa
Fukui, Damage factor estimation of crane-hook (a database approach with image, knowledge and
simulation), 4th International Workshop on Reliable Engineering Computing (REC 2010).
[4] Henry C. Huang and Lee Marsh, Slack rope analysis for moving crane system, 13th World Conference on
Earthquake Engineering, Vancouver, B.C., Canada, August 1-6, 2004, Paper No. 3190.
[5] NX Nastran theory manual.







VIDEO-BASED EXPERIMENTAL METHODOLOGY FOR
DISPLACEMENT ANALYSIS OF BLAST LOADED STRUCTURES
AND ITS VALIDATION

N. Madanmohan Reddy¹ and G. Venkata Rao²

¹Post-Graduate Student and ²Professor
CVSR College of Engineering (Anurag Group of Institutions), Jedimetla, Hyderabad

¹gvrao10@yahoo.com


ABSTRACT

Blast loading occurs due to accidents such as gas cylinder explosions, detonations in chemical plants, attacks by
anti-social elements and other causes. Therefore, concerted efforts have been underway during the past three
to four decades to design structures and vehicles so as to resist blast effects.

The difficulty of carrying out experimental tests on blast loaded structures like beams, plates, cylindrical shells,
armoured vehicles etc. is that the blast takes place within about 1E-7 to 1E-6 seconds. The resultant peak effects
have to be recorded within such short durations. Strain gauge techniques, optical sensors and high speed
photography are a few techniques available for measuring displacements and stresses in order to assess the
structural integrity.

In the present paper, videographic analysis of displacements in three blast loaded structures using digital
videography is presented in detail. The methodology is described and its advantages and limitations are
highlighted.

Applications of this technique for the blast analysis of vehicles in a relatively inexpensive manner are also
described.

1. INTRODUCTION :

Many authors have made significant contributions to the area of blast loaded structures. The available literature
deals with all the aspects, like a) transient finite element analyses, b) testing of model as well as full-sized
structures, c) material testing under high rates of strain using sophisticated measurement techniques, d) material
characterization under high rates of strain by formulating appropriate constitutive laws and e) methods for
applying the metallurgical and material data.

Very significant developments have taken place during the past decade in this area. The major challenges posed
by studies involving short time-lasting events lie both in the numerical simulation of the event and in making
physical observations during the short duration of the test. Safety is another concern.

In the area of testing of structures, non-contact measurements are preferred, although strain gauge techniques
are utilized in association with advanced data loggers to record the strain data. High speed photography, which
can record events every microsecond, is utilized in view of its convenience. However, these high speed cameras
are very expensive and out of reach of most laboratories.

In the present paper, videographic analysis of displacements in three blast loaded structures using digital
videography is presented in detail. For this purpose, ordinary digital cameras having a video recording facility
are utilized for analyzing the displacements during a blast on a rectangular plate. The methodology is described
and its advantages and limitations are highlighted.


2. LITERATURE REVIEW :

Many researchers have made significant contributions to the area of blast loaded structures, a few among them
being Nurick, Jones, Dietenberg et al. It is rather difficult to carry out exact mathematical or numerical analysis
in this area, due to the difficulty of incorporating metallurgical aspects into the mathematical and numerical
analyses. A judicious mix of numerical analyses (using methods like finite element analysis) and experimental
testing becomes necessary.

Jacob et al. [1] have carried out a series of experimental tests on clamped mild steel quadrangular plates of
different and varying length-to-width ratios (1.0 to 2.4) subjected to localized blast loads of varying sizes. The
effects of varying both the loading conditions and the plate geometries on the deformation are documented.

Wei et al. [2] have studied the responses of metallic plates and sandwich panels subjected to localized impulse
using dynamic plate tests and simulations. The correctness of the simulation approach is assessed by comparing
predictions of the deformations of a strong-honeycomb-core panel with measurements.

Boyd [3] has carried out tests on square plates, each plate provided with five strain gauges mounted on it.
Two Endevco 7255A piezoelectric accelerometers, two PCB Piezotronics 109A piezoelectric pressure gauges,
and a Novotechnik TI50 LVDT resistive displacement gauge formed the other instrumentation. The
accelerometers have a range of 0-50000 g and a frequency response of 0-10 kHz. The pressure gauges were
mounted in a nylon holder which was then screwed into a steel adapter welded into the plate. This mounting
system is designed to isolate the gauge from forces parallel to the plate surface. The top of each gauge was
smeared with silicone grease and then covered with thin reflective tape to insulate the gauge from the radiant
heat of the explosion. These gauges have a range of 0-

Williams and Fillion-Gourdeau [4] have carried out an analysis of a light armoured vehicle subjected to a mine
blast, wherein the riding personnel have also been considered, to study the effect of the impulse on the physical
body.

Dean et al. [5] have studied energy absorption in thin (0.4 mm) steel plates during perforation by spherical
projectiles of hardened steel, at impact velocities between 200 and 600 m/s. At intermediate projectile velocities
(250-350 m/s), the incident and residual velocities were measured using high speed video equipment (a
Photo-Sonics Phantom V4.3 high speed video camera) in conjunction with the Photo-Sonics Phantom software.
In these studies the inter-frame and exposure times were in the ranges 10-36 ms and 6-9 ms. The projectile
velocity was calculated from the inter-frame time and the distance travelled. At high projectile velocities
(~350-600 m/s), a high speed framing camera (an Ultranac FS501 image converter camera) was used for
capturing 12 sequential images on Polaroid film.

Hasenpouth [6] in his dissertation has studied the elongation of specimens undergoing medium strain rates by
means of an Enhanced Laser Velocity System (ELVS). The ELVS system is composed of a laser that emits a
diverging sheet of light. This sheet is then collimated by a plano-cylindrical lens to make it parallel. A
rectangular aperture ensures that the sheet has a fixed width of 25.4 mm. The sheet is then refocused to a point
by a convex lens and the intensity of the light is measured by a high-speed PIN photodetector.

Field et al. [7] describe in detail the various optical methods used for structures under high strain rate loading.

Adrian et al. [8] have carried out tests on copper specimens at high strain rates, recording deformations with a
VISION XS high speed camera (3000 frames/second) to capture the additional impacts undergone by the
specimens.


Mitsuishi et al [9] have carried out crush tests, recorded on high speed video, on hydrogen cylinders used in
fuel cell cars, where hydrogen fuel is required for propulsion but is hazardous. The crushing behavior and
detonation propensity were determined.

3. EXPERIMENTAL WORK :

Tests have been carried out on rectangular steel plates of 160 x 80 x 6 mm subjected to explosive loading.
The load has been created by an ordinary festival cracker of sufficient intensity. The plate is kept on the top
of an enclosed steel box. A digital camera is installed in a side-on position to visualize the height to which
the plate is displaced by the explosion.

Ordinary digital cameras, which do not have high speed capability, record the scene at 30 fps (frames per
second), so each frame is a record of the scene for a duration of 1/30 second, i.e. 0.033 second or 33
milliseconds. This has to be viewed together with another factor called the exposure time, which is the time
for which the sensor is active within that interval. That is why the scene looks hazy in some cameras while in
others it does not. This is the compromise one has to accept to avoid the high expenditure involved with high
speed cameras.

When the video is played back, most software provides a status bar which can be stopped to obtain a snapshot
at the desired instant. This technique is adopted in the present work.
Fig. 1 shows the specially designed test rig, which is rigid enough not to be moved by the force of the
detonation. Also shown in the figure is a cantilever beam used to calibrate the blast pressure by measuring its
displacement with a dial gauge, which is again monitored by the above-mentioned videographic technique and
compared with the analytical solution available for the problem [10]. (It is impossible to read the dial during
the short-duration event, and a dial indicator with an additional pointer over-riding the measuring pointer to
stop at the maximum displacement was not available in the market.)
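As a rough static illustration of the calibration idea (not the dynamic solution of [10], which accounts for inertia), the measured tip deflection of the cantilever can be inverted to an equivalent static tip load using delta = F L^3 / (3 E I). The Python sketch below shows the arithmetic; all dimensions and the dial-gauge reading are hypothetical placeholders, not values from this work:

# Equivalent static tip load from a measured cantilever tip deflection.
E = 200e9                      # Young's modulus of steel, Pa
b, h, L = 0.02, 0.004, 0.15    # beam width, thickness, length, m (hypothetical)
I = b * h**3 / 12.0            # second moment of area of the rectangular section
delta = 0.8e-3                 # dial-gauge reading, m (hypothetical)

F = 3.0 * E * I * delta / L**3
print(f"equivalent static tip load = {F:.1f} N")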

Detonation is effected by choosing crackers from the same manufacturing batch in order to ensure some level of
uniformity in quality and pressure magnitude. Fig. 3(a) to (c) show the sequence of frames obtained from the
video recording of the blast event.

Fig. 1. Test rig showing the calibrating beam and the plate detonation box
Fig. 2. Finite element simulation of plate displacement using the pressure-time waveform

As stated earlier, the time interval between successive frames is 33 milliseconds, following from the framing
rate used, namely 30 fps. The above results have been validated by carrying out a transient finite element
analysis of the plate under the blast pressure waveform using the software ANSYS (Fig. 2).
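The frame-by-frame conversion from snapshots to a displacement-time record is straightforward; the following minimal Python sketch illustrates it, assuming the plate edge has already been located in each snapshot, and using purely hypothetical pixel positions and scale factor:

# Converting videographic snapshots into a displacement-time record.
FPS = 30.0                    # framing rate of the ordinary digital camera
MM_PER_PIXEL = 0.5            # calibration from a scale in the scene (hypothetical)

edge_pixels = [0, 96, 178]    # plate edge location in frames 0, 1, 2 (hypothetical)

for i, p in enumerate(edge_pixels):
    t_ms = 1000.0 * i / FPS                  # frame times: 0, 33, 66 ms
    print(f"t = {t_ms:5.1f} ms, displacement = {p * MM_PER_PIXEL:6.1f} mm")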



Proc. National Conference on Recent Trends in Mechanical Engineering 2011 Page 169



Fig. 3. Plate displacement after time intervals of (a) 0 seconds, (b) 33 milliseconds, (c) 66 milliseconds


4. CONCLUSIONS :

A fairly good agreement is obtained between the test results and those of the FEM analysis. Although not very
accurate (due to the low frame rate available on the digital camera), the methodology presented above enables
a fairly good assessment of displacements in vehicles, construction components (such as panels) and robot
movements. This is part of an ongoing investigation aimed at determining the effects of blasts on the safety of
personnel transported in terror-infested areas.

REFERENCES

[1] Jacob, N., S. Chung Kim Yuen, G.N. Nurick, D. Bonorchis, S.A. Desai and D. Tait, Scaling aspects of
quadrangular plates subjected to localized blast loads - experiments and predictions, International Journal
of Impact Engineering 30 (2004) 1179-1208.
[2] Wei, Z., V.S. Deshpande, A.G. Evans, K.P. Dharmasena, D.T. Queheillalt, H.N.G. Wadley, Y. Murty, R.K.
Elzey, P. Dudt, Y. Chen, D. Knight and K. Kiddy, The resistance of metallic plates to localized impulse,
March 2007, 29 pages.
[3] Boyd, S.D., Acceleration of a Plate Subject to Explosive Blast Loading - Trial Results, Report No. DSTO-
TN-0270, DSTO Aeronautical and Maritime Research Laboratory, March 2000.
[4] Williams, K. and F. Fillian-Gourdeau, Numerical simulation of a light armoured vehicle occupant
vulnerability to anti-vehicle mine blast, 7th International LS-DYNA Users Conference, 2002.
[5] Dean, J., C.S. Dunleavy, P.M. Brown and T.W. Clyne, Energy absorption during projectile perforation of
thin steel plates and the kinetic energy of ejected fragments, International Journal of Impact Engineering 37
(2009) 1-9.
[6] Hasenpouth, Tensile High Strain Rate Behavior of AZ31B Magnesium Alloy Sheet, M.S. Thesis,
University of Waterloo, Waterloo, Ontario, Canada, 2010.
[7] Field, J.E., S.M. Walley, N.K. Bourne and J.M. Huntley, Experimental methods at high rates of strain,
Journal de Physique IV, Colloque C8, supplement au Journal de Physique III, Volume 4, September 1994, C8-3.
[8] Adrian, R., B. Mihai and T.C. Tudor, Finite Elements Method in Split Hopkinson Pressure Bar developing
process, 6th WSEAS International Conference on System Science and Simulation in Engineering, Venice,
Italy, November 21-23, 2007, pages 263-268.
[9] Mitsuishi, H., K. Oshino and S. Watanabe, Dynamic crush tests on hydrogen pressure cylinders, JARI
Research Journal, 2002.
[10] Jones, N., Structural Impact, Cambridge University Press, 1997.




SIMULATION & MODELING IN THE MODERN ERA

Sharad Shrivastava1, Rakesh Uchenia2, Kunal Sharma3, Pradeep Gupta4

1 Poornima College of Engineering, Sitapura, Jaipur
2 Suresh Gyan Vihar University, Jagatpura, Jaipur
3,4 Rajasthan Institute of Engg. & Technology, Bhankrota, Jaipur

1 Sharad_bsf@rediff.com, 2 rakeshuchenia@gmail.com, 3 kunalsharma.mnit@gmail.com, 4 pradeepgupta_me34@rediffmail.com


ABSTRACT

Modeling is the process of producing a model. A model is a representation of the construction and working of
some system of interest. A model is similar to, but simpler than, the system it represents. One purpose of a model
is to enable the analyst to predict the effect of changes to the system. On the one hand, a model should be a
close approximation to the real system and incorporate most of its salient features. On the other hand, it should
not be so complex that it is impossible to understand and experiment with it. A good model is a judicious
tradeoff between realism and simplicity.
A simulation of a system is the operation of a model of the system. The model can be reconfigured and
experimented with; usually, this is impossible, too expensive or impractical to do in the system it represents. The
operation of the model can be studied, and hence properties concerning the behavior of the actual system or its
subsystems can be inferred. In its broadest sense, simulation is a tool to evaluate the performance of a system,
existing or proposed, under different configurations of interest and over long periods of real time. Simulation
practitioners recommend increasing the complexity of a model iteratively. An important issue in modeling is
model validity. Model validation techniques include simulating the model under known input conditions and
comparing model output with system output.
This paper is an overview of simulation, modeling and analysis. Many critical questions are answered: What
is modeling? What is simulation? What is simulation modeling and analysis? What types of problems are
suitable for simulation? How to select simulation software? What are the benefits and pitfalls in modeling and
simulation? The intended audience is those unfamiliar with the area of discrete event simulation as well as
beginners looking for an overview of the area. This includes anyone who is involved in system design and
modification: system analysts, management personnel, engineers, military planners, economists, banking
analysts, and computer scientists. Familiarity with probability and statistics is assumed.

Keywords: Analysis, Model, Problem, System, Software, Specifications

1. WHAT IS MODELING?

Modeling is the process of producing a model; a model is a representation of the construction and working of
some system of interest. A model is similar to but simpler than the system it represents. One purpose of a model
is to enable the analyst to predict the effect of changes to the system. On the one hand, a model should be a close
approximation to the real system and incorporate most of its salient features. On the other hand, it should not be
so complex that it is impossible to understand and experiment with it. Generally, a model intended for a
simulation study is a mathematical model developed with the help of simulation software. Mathematical model
classifications include deterministic or stochastic, static or dynamic.
A model of a system is anything to which an "experiment" can be applied in order to answer questions. This implies
that a model can be used to answer questions about a system without doing experiments on the real system.
Instead we perform a kind of simplified experiment on the model, which in turn can be regarded as a kind of
simplified system that reflects properties of the real system. Models, just like systems, are hierarchical in nature.
Different kinds of models exist, depending on how the model is represented:
Mental model: a statement like "a person is reliable" helps us answer questions about that person's behavior in
various situations.
Verbal model: this kind of model is expressed in words.
Physical model: a physical object that mimics some properties of a real system, to help us answer
questions about that system. It is common to construct small physical models with the same shape and appearance
as the real objects to be studied.

Mathematical model: a description of a system where the relationships between variables of the system are
expressed in mathematical form. Variables can be measurable quantities such as size, length, weight,
temperature, unemployment level, information flow, bit rate, etc. Most laws of nature are mathematical
models in this sense.
Analyzing models involves:
1. Sensitivity analysis (see the sketch below)
2. Model-based diagnosis
3. Model verification and validation
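As a small illustration of sensitivity analysis, the Python sketch below perturbs one input of a trivial analytical queueing model and reports the finite-difference sensitivity of the output; the model choice and all numbers are invented purely for illustration:

def wq(lam, mu):
    # Mean waiting time of an M/M/1 queue (a simple mathematical model).
    return lam / (mu * (mu - lam))

base = wq(0.8, 1.0)                      # 4.0 time units
d_mu = 0.01
sens = (wq(0.8, 1.0 + d_mu) - base) / d_mu   # finite-difference sensitivity
print(base, sens)                        # strongly negative: faster service helps a lot

Near saturation the sensitivity is large, which is exactly the kind of insight a sensitivity analysis is meant to surface before any detailed simulation is built.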

2. WHAT IS SIMULATION?

A simulation of a system is the operation of a model of the system. The model can be reconfigured and
experimented with; usually, this is impossible, too expensive or impractical to do in the system it represents.
Simulation is a tool to evaluate the performance of a system, existing or proposed, under different
configurations of interest and over long periods of real time. Simulation is used before an existing system is
altered or a new system is built, to reduce the chances of failure to meet specifications, to eliminate unforeseen
bottlenecks, to prevent under- or over-utilization of resources, and to optimize system performance. A simulation
is an experiment performed on a model.
The iterative nature of the process is indicated by the system under study becoming the altered system, which
then becomes the new system under study, and the cycle repeats. In a simulation study, human decision making is
required at all stages, namely model development, experiment design, output analysis, conclusion formulation,
and making decisions to alter the system under study. The only stage where human intervention is not required
is the running of the simulations, which most simulation software packages perform efficiently. Experienced
problem formulators and simulation modelers and analysts are indispensable for a successful simulation study.

2.1 Reasons for Simulation :

There are a number of good reasons to perform simulations instead of performing experiments on real systems:
2.1.1 Experiments are too expensive, too dangerous, or the system to be investigated does not yet exist. These
are the main difficulties of experimentation with real systems.
2.1.2 The time scale of the dynamics of the system is not compatible with that of the experimenter.
2.1.3 Variables may be inaccessible. In a simulation all variables can be studied and controlled, even those that
are inaccessible in the real system.
2.1.4 Easy manipulation of models. Using simulation, it is easy to manipulate the parameters of a system model,
even outside the feasible range of a particular physical system.
2.1.5 Suppression of disturbances. In a simulation of a model it is possible to suppress disturbances that might
be unavoidable in measurements of the real system.
2.1.6 Suppression of second-order effects.
The steps involved in developing a simulation model, designing a simulation experiment, and performing
simulation analysis are listed below; a minimal code sketch follows the list.
Step 1. Identify the problem.
Step 2. Formulate the problem.
Step 3. Collect and process real system data.
Step 4. Formulate and develop a model
Step 5. Validate the model.
Step 6. Document model for future use.
Step 7. Select appropriate experimental design.
Step 8. Establish experimental conditions for runs.
Step 9. Perform simulation runs.
Step 10. Interpret and present results.
Step 11. Recommend further course of action
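As a toy illustration of Steps 4, 9 and 10, the Python sketch below simulates a single-server queue with exponential interarrival and service times. The rates are hypothetical, and the model is deliberately minimal; it is not a substitute for a full study following the eleven steps:

import random

def mm1_waits(lam, mu, n_customers, seed=1):
    # Single-server queue: exponential interarrival and service times.
    rng = random.Random(seed)
    t_arrive = 0.0   # arrival clock
    t_free = 0.0     # time at which the server next becomes free
    waits = []
    for _ in range(n_customers):
        t_arrive += rng.expovariate(lam)      # next customer arrives
        start = max(t_arrive, t_free)         # service starts when server is free
        waits.append(start - t_arrive)        # time spent waiting in queue
        t_free = start + rng.expovariate(mu)  # service completes
    return waits

w = mm1_waits(lam=0.8, mu=1.0, n_customers=100_000)
print(sum(w) / len(w))   # ~4.0, the theoretical mean wait for rho = 0.8

Comparing the simulated mean against the known analytical value (4.0 here) is a simple instance of the model validation advocated in Step 5.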



3. HOW TO DEVELOP A SIMULATION MODEL?

Simulation models consist of the following components: system entities, input variables, performance measures,
and functional relationships. Almost all simulation software packages provide constructs to model each of the
above components. Simulation modeling comprises the following steps:

Step 01 Identify the problem. Enumerate problems with an existing system. Produce requirements for a
proposed system.
Step 02 Formulate the problem. Select the bounds of the system, the problem or a part thereof, to be studied.
Define the overall objective of the study. Define performance measures: quantitative criteria on the basis
of which different system configurations will be compared and ranked. Decide the time frame of the
study: one-time or over a period of time on a regular basis. Identify the end user of the simulation model.
Step 03 Collect and process real system data. Collect data on system specifications, input variables, as well
as performance of the existing system. Identify sources of randomness in the system. Select an
appropriate input probability distribution for each stochastic input variable and estimate the
corresponding parameter(s). Software packages for distribution fitting and selection include ExpertFit,
BestFit, and add-ons in some standard statistical packages (see the sketch after this list). Empirical
distributions are used when standard distributions are not appropriate or do not fit the available system
data. A triangular, uniform or normal distribution is used as a first guess when no data are available.
Step 04 Formulate and develop a model. Develop schematics and network diagrams of the system. Translate
these conceptual models into a form acceptable to the simulation software. Verify that the simulation model
executes as intended.
Step 05 Validate the model. Compare the model's performance under known conditions with the performance
of the real system. Perform statistical inference tests and get the model examined by system experts.
Step 06 Document model for future use. Document objectives, assumptions and input variables in detail.
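A minimal sketch of the distribution-fitting part of Step 03, using SciPy rather than the commercial packages named above, and assuming measured interarrival times are available in a hypothetical text file:

import numpy as np
from scipy import stats

data = np.loadtxt("interarrival_times.txt")     # hypothetical measured data
loc, scale = stats.expon.fit(data)              # maximum-likelihood fit
stat, p = stats.kstest(data, "expon", args=(loc, scale))  # goodness-of-fit test
print(f"fitted mean = {loc + scale:.3f}, KS p-value = {p:.3f}")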

4. HOW TO DESIGN A SIMULATION EXPERIMENT?

A simulation experiment is a test or a series of tests in which meaningful changes are made to the input
variables of a simulation model so that we may observe and identify the reasons for changes in the performance
measures. The number of experiments in a simulation study is greater than or equal to the number of questions
being asked about the model. Design of a simulation experiment involves answering the question: what data
need to be obtained, in what form, and how much? The following steps illustrate the process of designing a
simulation experiment.
Step 07 Select appropriate experimental design. Select a performance measure, a few input variables that are
likely to influence it, and the levels of each input variable.
Step 08 Establish experimental conditions for runs. Address the question of obtaining accurate information
and the most information from each run. Determine if the system is stationary or non-stationary.
Generally, in stationary systems, steady-state behavior of the response variable is of interest. Select
appropriate starting conditions. Select the length of the warm-up period, if required. Decide the
number of independent runs (each run uses a different random number stream and the same starting
conditions) by considering the output data sample size. The sample size must be large enough to provide
the required confidence in the performance measure estimates. Identify the output data most likely to be
correlated.
Step 09 Perform simulation runs.

5. HOW TO PERFORM SIMULATION ANALYSIS?

Most simulation packages provide run statistics (mean, standard deviation, minimum value, maximum value) on
the performance measures, e.g., wait time, inventory on hand. Notwithstanding the facts that there are no data
collection errors in simulation, that the underlying model is fully known, and that replications and configurations
are user controlled, simulation results are difficult to interpret. An observation may be due to system
characteristics or just a random occurrence. Analysis of simulation output data consists of the following steps.
Step 10 Interpret and present results. Compute numerical estimates (e.g., mean, confidence intervals) of the
desired performance measure for each configuration of interest. To obtain confidence intervals for
the mean of autocorrelated data, the technique of batch means can be used (a sketch follows below).
In batch means, the original contiguous data set from a run is replaced with a smaller data set containing
the means of contiguous batches of the original observations. Test hypotheses about system performance.
Construct graphical displays (e.g., pie charts, histograms) of the output data. Document results and
conclusions.
Step 11 Recommend further course of action. This may include further experiments to increase the precision
and reduce the bias of estimators, to perform sensitivity analyses, etc.
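The batch means technique mentioned in Step 10 is simple to implement. The sketch below is a minimal version, assuming the warm-up period has already been removed and using a normal-approximation interval (with only 20 batches, a Student-t multiplier would be slightly more accurate than z = 1.96):

import numpy as np

def batch_means_ci(output, n_batches=20, z=1.96):
    # Split a long autocorrelated output series into batches, average each
    # batch, and build an approximate confidence interval from the batch means.
    x = np.asarray(output, dtype=float)
    b = len(x) // n_batches                       # observations per batch
    means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    half = z * means.std(ddof=1) / np.sqrt(n_batches)
    return means.mean() - half, means.mean() + half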




6. WHAT MAKES A PROBLEM SUITABLE FOR SIMULATION MODELING AND ANALYSIS?

In general, simulation is the tool of choice when the real system cannot be studied directly; situations in which
simulation modeling and analysis are used include the following:
6.1 It is impossible or extremely expensive to observe certain processes in the real world, e.g., next year's cancer
statistics, performance of the next space shuttle, and the effect of Internet advertising on a company's sales.
6.2 Problems in which a mathematical model can be formulated but analytic solutions are either impossible or too
complicated.
6.3 It is impossible or extremely expensive to validate the mathematical model describing the system, e.g., due
to insufficient data. Applications of simulation abound in the areas of government, defense, computer and
communication systems, manufacturing, transportation (air traffic control), health care, ecology and
environment, sociological and biosciences, epidemiology, services (bank teller scheduling), and economics and
business analysis.

7. HOW TO SELECT SIMULATION SOFTWARE?

Although a simulation model can be built using general purpose programming languages, which are familiar to
the analyst, widely available and less expensive, most simulation studies today are implemented using a
simulation package. The advantages are reduced programming requirements; a natural framework for
simulation modeling; conceptual guidance; automated gathering of statistics; graphic symbolism for
communication; animation; and, increasingly, flexibility to change the model. The two types of simulation
packages are simulation languages and application-oriented simulators. Simulation languages offer more
flexibility than the application-oriented simulators. On the other hand, languages require varying amounts of
programming expertise. Application-oriented simulators are easier to learn and have modeling constructs
closely related to the application.

8. BENEFITS OF SIMULATION MODELING AND ANALYSIS

According to practitioners, simulation modeling and analysis is one of the most frequently used operations
research techniques. Simulation modeling and analysis makes it possible to:
8.1 Obtain a better understanding of the system by developing a mathematical model.
8.2 Test hypotheses about the system for feasibility.
8.3 Study the effects of certain informational, organizational, environmental and policy changes on the operation
of a system by altering the system's model.
8.4 Experiment with new or unknown situations.
8.5 Identify bottlenecks in the flow of entities (material, people, etc.) or information.
8.6 Use multiple performance metrics for analyzing system configurations.
8.7 Employ a systems approach to problem solving.
8.8 Develop well designed and robust systems and reduce system development time.






















STRUCTURAL TOPOLOGY OPTIMIZATION OF CNC TURNING
CENTER SUB-ASSEMBLY USING FINITE ELEMENT BASED
SOLVER

Kunal Gajjar1, Jaimin Shah2

1, 2 Post Graduate Students of BVM Engineering College, V.V.Nagar-388120, India

knl.gajjar@gmail.com


ABSTRACT:

This paper discusses the finite element structural optimization phase for a CNC Turning Center sub-assembly.
The stiffness of a machine tool structure has a great influence on the precision of the machine tool's operations,
and it can be improved effectively with the help of advanced computer aided optimization techniques. The present
work involves the optimization of the turret pad of a CNC Turning Center sub-assembly using the OptiStruct
solver for a stiffer and lighter structure. The finite element model of the CNC lathe sub-assembly is prepared,
statically analysed using OptiStruct, and structural optimization is carried out for the minimization of compliance.

Keywords: CNC Lathe Turret-Pad, Topology, OptiStruct, Compliance, Volume Fraction.

1. INTRODUCTION :

Optimization of the turret pad of the CNC Turning Center is part of the design process for the project of
developing a CNC Turning Centre for the college workshop. After the conceptual design of the machine tool
structure for the CNC Turning Centre, structural optimization draws on various techniques, viz. topology,
topography, shape and size optimization [1, 2]. Topology optimization is a mathematical process that optimizes
the material layout within a given design space, for a given set of loads and boundary conditions, such that the
resulting layout meets performance targets. Using topology optimization, the best conceptual design that meets
the design requirements can be found [1, 2].

1.1.Process Methodology
A finite element meshed model of the sub-assemblies was prepared using HyperMesh. The components of the
model which are fastened with bolts are realized with RBE2 elements, and sliding contacts between parts are
realized with sliding contact elements. The maximum loads possible under heavy cutting conditions are applied
at the cutting tool tip, and the appropriate boundary condition constraints are applied to the assembly model.

1.2.Topology Optimization:
The present work adopts topology optimization, which uses element density as the design variable.
OptiStruct solves topological optimization problems using the density method, also known as the SIMP method
in the research community.
With the density method, the material density of each element is directly used as the design variable and varies
continuously between 0 and 1; these represent the states of void and solid, respectively. Intermediate values of
density represent fictitious material. The stiffness of the material is assumed to be linearly dependent on the
density.

It is performed in the following steps:
1. Defining the designable and non-designable space in the model.
2. Creating the responses required to define the global objective and constraints.
3. Defining the design objective as minimization of the compliance. The compliance is taken as the
strain energy of the structure and is the reciprocal measure of the stiffness of the structure.
4. As regards the constraints, the target upper bound on the volume fraction of the designable space is taken as
0.7 (a 30% reduction of material).
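OptiStruct's internal algorithm is not reproduced here, but the density-method iteration can be sketched with the classic optimality-criteria update used in academic SIMP codes (after Sigmund's well-known 99-line program). The Python sketch below shows only the density update step; the compliance sensitivities dc would come from a finite element solve, and all names are illustrative assumptions:

import numpy as np

def oc_density_update(rho, dc, volfrac, move=0.2):
    # One optimality-criteria update for compliance minimization under a
    # volume-fraction constraint. rho: element densities in (0, 1];
    # dc: compliance sensitivities (non-positive). The Lagrange multiplier
    # of the volume constraint is found by bisection.
    lo, hi = 1e-9, 1e9
    while (hi - lo) / (hi + lo) > 1e-6:
        lam = 0.5 * (lo + hi)
        new = rho * np.sqrt(np.maximum(-dc, 0.0) / lam)   # fixed-point step
        new = np.clip(new, np.maximum(rho - move, 1e-3),
                           np.minimum(rho + move, 1.0))
        if new.mean() > volfrac:   # too much material: raise the multiplier
            lo = lam
        else:
            hi = lam
    return new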












Figure 1: Finite Element Mesh with boundary conditions




2. Results of Topology Optimization:

The density plot of the turret pad is obtained from the analysis by reducing the volume of the designable space
of the turret pad by 30%. The blue region in the plot shows where the least density is required, indicating that
the material in that area can be removed completely. The red region indicates the material that is essentially
required in the component.





Figure 2: Density contour plot

Figure 3: Iso-value plot (above 0.7)
Figure 4: The final determined geometry
3.CONCLUSION :
The present work illustrates how topology optimization tools can be used in the structural design of machine
tool components. These tools are very efficient and productive in the product design process and provide
strength and stability to the components. By the application of topology optimization, an optimum material
layout giving maximum stiffness with a 30% reduction in the weight of the CNC lathe turret pad is obtained.
REFERENCES

[1] Bendsoe, Martin P., Optimization of Structural Topology, Shape, and Material, Springer, 1995.
[2] Huang, Xiaodong, Evolutionary Topology Optimization of Continuum Structures: Methods and
Applications, Wiley, 2010.
[3] Christensen, Peter W., An Introduction to Structural Optimization (Solid Mechanics and Its Applications),
Springer, 2008.
[4] OptiStruct manual.




MODELLING OF THE BEHAVIOUR OF VISCO ELASTIC CORED
SANDWICH BEAMS USING FINITE ELEMENT METHODS

V.B.S. Rajendra Prasad1, Raja Narender Reddy Pingili2, Naresh3

1 Asst. Professor, 2,3 PG Students
Department of Mechanical Engineering, Vasavi College of Engineering, Hyderabad

rajanrender69@gmail.com

ABSTRACT

The response of the sandwich beam with a visco elastic core is the prime concern of this paper. Researchers
have long studied the behavior of sandwich beams with a visco elastic core between two elastic layers.
Applications of such layered structures are found in many structural engineering designs, especially in
aerospace applications. Numerous analytical models have been developed to characterize the behavior of these
structures; however, much less work has been done on using numerical methods for the analysis of their basic
behavior. An effort is made in this paper to present the general behavior of sandwich beams under harmonic
excitation, taking into consideration the available literature.

1.INTRODUCTION

Vibrations in a dynamic system can be controlled and reduced by a number of means, classified as active,
passive and semi-active means. In active vibration control, a wide variety of elements such as speakers,
actuators and microprocessors are used to produce an out-of-phase signal to cancel the disturbance. In passive
methods of control, absorbers, mufflers and silencers are used to reduce the vibrations. In some cases, by
altering the system stiffness or mass, the resonant frequencies can be shifted and thereby the unwanted
vibrations reduced for a fixed excitation frequency. Otherwise the vibrations need to be isolated or dissipated
by using isolators or damping materials.

In semi-active methods of controlling vibrations, a combination of active methods with passive elements is
used to enhance the damping properties. Examples are electro-rheological damping, magneto-rheological
systems and active constrained layer damping (ACLD). Damping can be applied to any system by using a
special class of visco elastic materials as a part of passive vibration control, as is done in most present-day
machines.

Damping refers to the extraction or dissipation of mechanical energy from a vibrating system, generally
by conversion into heat. Damping is in general of two types, the first being material damping and the second
structural damping. Material damping involves the inherent property of materials to damp out vibrations,
while structural damping involves the damping of vibrations at various locations like the base, joints, etc.

Passive damping has in recent years been of significant importance in the non-commercial aerospace
industry. Advances in material technology, along with newer and more advanced analytical and experimental
methods for modeling the dynamic behavior of materials, have led to many applications such as rubber dampers
for eco-friendly generator sets and mechanical presses. Multilayer damped laminates consisting of a visco
elastic core embedded between two metallic layers may also be considered for vibration damping applications,
with the advantage that they can be easily manufactured. This paper is aimed at illustrating the applications
of visco elastic damping in general applications.

If a visco elastic material is strained by harmonic stresses, the strain is not in phase with the stress but lags
behind by an angle, say δ, which is a measure of the damping in the material. A common representation of
damping is by the loss factor η of the material, which is equal to tan δ; η is also equal to the ratio of the energy
dissipated to that stored in the material. The ratio of stress to strain in a visco elastic material under harmonic
excitation conditions is represented by the complex moduli E(1 + iη) and G(1 + iη) in direct and shear strain
respectively. These properties are seen to be dependent on frequency, temperature and strain. The
temperature-frequency superposition principle forms the basis of reducing the three dimensional relation
between the in-phase modulus (or loss factor), frequency and temperature to a two dimensional one.


This involves a reduced frequency or reduced temperature which combines the effects of frequency and
temperature through factors known as shift factors. These factors are often found empirically, as suggested
in the references listed below.

2.BASIC CONCEPTS OF VISCO ELASTICITY

An elastic material returns to its original shape when stretched and released, whereas a viscous fluid
retains its extended shape when pulled. A viscoelastic material combines these two properties: it returns to its
original position after being stressed, but does so slowly enough to oppose the next cycle of vibration. The
properties of viscoelastic materials depend significantly on environmental conditions such as temperature,
vibration frequency, preload, dynamic load, humidity, etc.
2.1.Damping Treatments:
Visco elastic materials have been used to enhance the damping in a structure in three different ways:
free-layer treatment, constrained-layer or sandwich-layer damping, and the tuned visco elastic damper.
Improvements in the understanding and application of damping principles, together with advances in
materials science and manufacturing, have led to many successful applications. The key point in any design is to
recognize that the damping material must be applied in such a way that it is significantly strained whenever the
structure is deformed in the vibration mode under investigation.
If a linearly visco elastic test specimen is subjected to 1-D harmonic loading σ = σ0 sin pt, the resulting steady
state strain will be ε = ε0 sin(pt - δ), a sinusoidal response of the same frequency but out of phase with the
stress by the lag angle δ; equivalently, the stress leads the strain by δ, so that for a strain ε = ε0 sin pt the stress
is given by σ = σ0 sin(pt + δ). The ratio of the stress and strain amplitudes defines the absolute dynamic
modulus σ0/ε0, and the absolute dynamic compliance is ε0/σ0.

The in-phase and out-of-phase components of stress and strain are used to define:

a) Storage modulus: E' = (σ0 cos δ)/ε0
b) Loss modulus: E'' = (σ0 sin δ)/ε0

and hence the ratio E''/E' = tan δ = η (the loss factor).

A generalized visco elastic behavior is obtained by expressing the stress and strain in complex form as
σ* = σ0 e^(ipt) and ε* = ε0 e^(i(pt - δ)). Therefore the complex modulus is

σ*/ε* = E*(ip) = (σ0/ε0) e^(iδ) = E' + iE''
Consider a piece of non-metallic material exposed to harmonic strain oscillations of maximum value ε0 at
frequency p rad/sec. The strain is ε = ε0 sin pt and therefore the stress is σ = σ0 sin(pt + δ); upon expanding
and rewriting, the complex modulus of elasticity can be written as σ/ε = E*, which implies E* = E' + iE'',
where E' is the elastic or storage modulus and E'' is the loss modulus. It follows from the above that
E''/E' = tan δ = η, where η is often called the loss factor; hence the above equation can be further rewritten as

E* = E'(1 + iη)

and for a piece of material in plane compression or tension the stiffness can be expressed as the complex
quantity K* = K(1 + iη). The same can be applied to the shear strain, leading to the complex shear modulus.
It is observed that the loss factor is the same in direct and shear strain for many materials.

To maintain a harmonic strain in the piece of material requires the application of a force F. For a harmonic
displacement x at the point of application of the force, and with the above derivation of complex stiffness, it
follows that F = K* x; with x = x0 sin pt, a little algebraic manipulation allows the time variable to be
eliminated such that

F = Kx ± Kη √(x0^2 - x^2)

and it can be shown that the relation between the force F and the displacement x forms an ellipse. The full
curve is the measured relationship for a practical visco elastic material. The area of the ellipse is the integral
of the force over the displacement throughout one cycle of oscillation and represents the energy dissipated per
cycle. The same elliptical relation between force and displacement can be obtained for a spring with a purely
viscous damper in parallel, except that in that case the area, and thus the energy dissipated per cycle, would be
dependent upon frequency. The materials which exhibit the force-displacement relation

F = Kx ± Kη √(x0^2 - x^2)

can be termed visco elastic.
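The ellipse-area interpretation above is easy to verify numerically. The short Python sketch below traces one cycle of the complex-stiffness force-displacement loop and compares the enclosed area with the closed-form dissipation πKηx0^2 per cycle; the stiffness, loss factor and amplitude values are hypothetical:

import numpy as np

K, eta, x0 = 1.0e5, 0.2, 1.0e-3          # stiffness [N/m], loss factor, amplitude [m]
t = np.linspace(0.0, 2.0 * np.pi, 2001)  # one cycle of pt
x = x0 * np.sin(t)
F = K * x + K * eta * x0 * np.cos(t)     # in-phase plus quadrature (damping) force

W_loop = np.trapz(F, x)                  # signed area enclosed by the F-x loop
print(W_loop, np.pi * K * eta * x0**2)   # both ~0.0628 J per cycle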

Analysis of a structure that incorporates viscoelastic dampers normally requires an analytical model of the
rheological behavior of the dampers. Different approaches to the analytical modeling of the rheological
behavior of linear visco elastic systems are available in the literature. A classical approach uses a mechanical
model comprising a combination of linear springs and dashpots. The stress-strain relationship for a linear
visco elastic system represented by a spring-dashpot mechanical model is commonly expressed in differential
operator form, and the time-domain material functions derived from such a model are expressed by a series of
decaying exponentials, often referred to as a Prony series [S.W. Park].

A modeling approach based on fractional calculus has also received considerable attention and has been
used to characterize the rheological behavior of linear visco elastic systems by a number of authors. This
approach uses the framework of the standard spring-dashpot mechanical model, except that the regular
differential operators are replaced by fractional order differential operators. A review of the literature indicates
that the fractional derivative model (FDM) has predominantly been used for viscoelastic dampers. However,
owing to its computational efficiency, the standard mechanical model (SMM) has proved to be highly efficient
and is a better alternative to prevailing models in visco elastic damper characterization.

3. Modeling Procedure
A typical visco elastic structure is composed of three layers; from top to bottom they are the constrained layer,
the visco elastic layer (also called the damping layer) and the base layer.
To model such a structure, two kinds of finite element discretization are usually used. The first method
represents the upper and lower layers with plate elements and the middle layer with solid elements. The other
approach obtains an integral sandwich element by taking the three layers as a whole, and its number of degrees
of freedom is determined by analyzing the deformation of the layers. Compared with the first method, the
second one requires fewer degrees of freedom and avoids discontinuities between layers, but has lower accuracy
in predicting the loss factor.
A three layer sandwich beam element with eight degrees of freedom is selected and the analysis is made,
supposing that the two face layers have the same flexural deformation and different tensile deformations,
linearly distributed across the thickness of the layer. The deformation of the element is described by eight
nodal displacements:
U^e = (u1i, u3i, wi, θi, u1j, u3j, wj, θj)^T

The displacement vector of any point in the element is the product of the shape function matrix and the nodal
displacement vector:

U = (u1, u3, w)^T = N U^e

where the shape function matrix N = [N1 N3 Nf]^T interpolates the axial displacements u1 and u3 of the two
face layers linearly, through the entries 1 - ξ and ξ, and the transverse displacement w through the cubic
Hermite entries (-6ξ + 6ξ^2)/l, 1 - 4ξ + 3ξ^2, (6ξ - 6ξ^2)/l and -2ξ + 3ξ^2, where ξ = x/l is the local coordinate.
The axial strain and the shear strain can be expressed through the shape functions and the nodal displacement
vector as:

ε2 = (ε1 + ε3)/2 = 0.5 (N1' + N3') U^e
γ2 = (u1 - u3)/h2 + (h0/h2)(dw/dx)

where h0 is the mean thickness of the layer.
Now the principle of minimum potential energy is used to deduce the element stiffness matrix.
The strain energy can in general be written as

U^e = U^e1 + U^e2 + U^e3

where the first term indicates the tensional energy in the constrained layer and the base layer, the second term
is the bending strain energy and the third is the shear strain energy. The expressions for these parts can be
written as:

U^e1 = 0.5 E A ε^2
U^e2 = 0.5 E I (w'')^2
U^e3 = (1/(2k*)) G A γ^2

where E, A and G correspond to the Young's modulus, area of cross section and shear modulus of the damping
layer, and k* is the factor that corrects for the non-uniformity of the shear force over the transverse direction of
the rectangular section, defined as 6/5.

Hence the final strain energy can be written as U^e = 0.5 (u^e)^T K^e u^e.

The element stiffness matrix is K^e = Kt + Kb and the element viscous matrix is Kv = G K^e_v, where

Kt = element tensional stiffness matrix
Kb = element bending stiffness matrix
K^e_v = element equivalent stiffness matrix

The element mass matrix is given by

M^e = ∫0^l ρ N^T N dx

Through the above parameters the final equation of motion can be written as

M ü + K^e u + G* K^e_v u = f

where G* is the complex (frequency dependent) modulus function of the visco elastic layer. This equation can
be used to solve for the visco elastic vibration behavior of a constrained layer damping treatment.
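The frequency response implied by the last equation can be computed directly once the matrices are assembled. The Python sketch below is not the eight-DOF sandwich element derived above; it only illustrates, on a generic small matrix pair, how a complex stiffness term G* K_v is handled when solving for the steady-state harmonic response, with all values hypothetical:

import numpy as np

def harmonic_response(M, K, Kv, G_star, F, omegas):
    # Steady-state amplitude X(w) from (K + G*·Kv - w^2 M) X = F,
    # where G* = G(1 + i·eta) is the complex modulus of the damping layer.
    return np.array([np.linalg.solve(K + G_star * Kv - w**2 * M, F)
                     for w in omegas])

# Two-DOF toy system (hypothetical values)
M = np.diag([1.0, 1.0])
K = np.array([[2.0, -1.0], [-1.0, 2.0]]) * 1e4
Kv = np.array([[1.0, -1.0], [-1.0, 1.0]])
G_star = 5e3 * (1 + 0.3j)                   # G(1 + i*eta)
F = np.array([1.0, 0.0])

X = harmonic_response(M, K, Kv, G_star, F, omegas=np.linspace(1, 300, 5))
print(np.abs(X))                            # response magnitudes per frequency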
4.CONCLUSIONS
The present work is aimed at presenting the finite element modeling of the visco elastic behavior of constrained
layer damping. The work summarizes the various references indicated below and can be extended to
application-specific analyses. Although only a brief overview of a very complex task is presented here, actual
application-oriented work would be based directly on the above formulation.
REFERENCES
[1] Johnson, C., Kienholz, D. and Rogers, L., 1982, The finite element prediction of damping in sandwiched
structures with constrained viscoelastic layers, AIAA Journal, 20(9).
[2] Ungar, E.E. and Kerwin, E.M., 1962, Loss factors of visco elastic systems in terms of energy concepts,
Journal of the Acoustical Society of America, 34.
[3] Park, C.H., Inman, D. and Lam, M.J., 1999, Model reduction of visco elastic finite element models,
Journal of Sound and Vibration, 219(4).
[4] Zang, J. and Zheng, G.T., The Biot model and its application in visco elastic composite structures,
Journal of Vibration and Acoustics, Vol. 129, 2007.
[5] Park, S.W., Analytical modeling of visco elastic dampers for structural and vibration control,
International Journal of Solids and Structures, 38 (2001).
[6] Nakra, B.C., Vibration control in machines and structures using visco elastic damping, Journal of Sound
and Vibration, 211(3) (1998).
[7] Torvik, P.J., Damping applications for vibration control, ASME AMD 38.
[8] Christensen, R.M., Theory of Viscoelasticity, Academic Press, New York.
[9] Grootenhuis, P., The control of vibrations with visco elastic materials, Journal of Sound and Vibration,
11(4) (1970).
[10] Rao, Mohan D., Recent applications of visco elastic damping for noise control in automobiles and
commercial aeroplanes, Journal of Sound and Vibration, 262 (2003).
[11] Pan, Lijian and Zhang, Boming, A new method for the determination of damping in cocured composite
laminates with embedded visco elastic layer, Journal of Sound and Vibration, 319 (2009).
[12] Lifshitz, Optimal sandwich beam design for maximum visco elastic damping, International Journal of
Solids and Structures, Vol. 23, 1987.
[13] Tomlinson, G.R., The use of constrained layer damping in vibration control, International Journal of
Mechanical Sciences, Vol. 32, No. 3.











SMART DUST: DESIGN AND APPLICATIONS USING MEMS
Rakesh R. Nair1, Snigdha Sarkar2
Students, Dept. of Mechanical Engg., Lingaya's Institute of Management & Technology, Faridabad

1 rakeshnair90@yahoo.com, 2 snigdhasar@gmail.com

ABSTRACT

The world of MEMS (Micro-Electro-Mechanical Systems) is a new and fast growing field with numerous
applications. This paper deals with the different modes and situations where MEMS technology can be utilized
to revolutionize the world we live in; the fields touched upon range from mechanical and automobile to civil
and biological. A particular facet of MEMS technology is its application in building Smart Dust, and this
paper also deals with the fabrication, design and possible short term and long term applications of the same.
The applications range from military (tracking and espionage) to biological, as well as energy generation
purposes. The world of MEMS technology is already a 30 billion dollar industry worldwide, and the scope for
research and development in this field is tremendous. The possible applications of this technology, if fabricated
in an energy efficient manner, would be revolutionary. This paper deals with the methodology for fabrication as
well as possible applications in the near and far future. Tiny, ubiquitous, low cost Smart Dust motes have not
had a successful commercial realization yet, but some fairly small motes are commercially available.

Keywords: Piezoelectricity, trickle charging, weather prediction, micro-structured artificial limbs, advanced
prosthetic tool attachments, clean energy, Smart dust, motes, robotic swarms, Sky hooks.

INTRODUCTION

This paper evaluates the potential applications of the technology if the present energy issues are satisfactorily
dealt with. Hence follows a brief history of smart dust technology and a discussion of notable advances already
made in the fabrication and design of the equipment. Viable improvements in design and working are suggested
afterwards. The paper concludes with a look at the far reaching applications of the present effort to actualize
smart sensor MEMS technology.
Smart Dust was conceived in 1998 by Dr. Kris Pister of UC Berkeley (Hsu, Kahn, and Pister 1998;
Eisenberg 1999). He set out to build a device with a sensor, communication device, and small computer
integrated into a single package. The basic problem of powering the device has been a thorn in the way of
progress ever since. This paper deals with possible solutions to the present scenario.
The earliest approaches to solving the energy problem used optical sensors in order to reduce the demand for
charge inside the mote. Other approaches included the use of passive devices; this approach used a series of
mirrors controlled by an electric charge instead of an onboard light source. Encoded messages were received
from the motes when a laser was directed at them, which caused a rapid realignment of the mirrors inside,
giving a digitally pulsed output which could be decoded.
The above methods, though ingenious, did not fit well with the immediate requirement that the motes be
inconspicuous as well as virtually undetectable. A satisfactory solution to this problem could ensure fast and
exponential expansion of smart dust technology.

POSSIBLE SOLUTIONS
One solution to the above problem can be achieved while respecting the requirement that the dust mote be
small enough to be inconspicuous. It involves embedding minute micro-electromechanically designed
piezoelectric wafers into the surface of the motes.
The most basic usage of smart sensor technology for the military is in espionage and in gathering information
from the enemy in a fairly undetectable manner. The sensor can safely do so with the help of the energy
harvested from the ambient vibrations in the wall on which it is placed. The vibrations in the walls can be
minimized by damping, but only to a certain extent, after which it becomes increasingly difficult and eventually
impossible, as the laws of quantum mechanics dictate. The size of such devices ranges from 10 micrometers
(10^-6 m) to about 1 millimeter (10^-3 m); at such scales the laws of inertia and momentum are superseded by
quantum-mechanical effects such as electrostatics and wetting. The sensor can convert the vibrations from the
walls into electrical signals which can be safely decoded from a distance. The problem then arises of actually
transmitting those signals to a safe location. There are two ways to tackle this problem.
One way is to use miniature flip-flops based on nanotechnology to record the pulses from the minute
piezoelectric transducer; decoding those signals will help one ascertain the exact words being spoken, as every
syllable has a distinct frequency with which it induces minute vibrations in the walls. But this approach will
increase the bulk of the device considerably. A second, more practical option is to exploit another form of
ambient energy to charge the MEMS based battery of the mote, so that it can transmit this information in real
time to a receiver. This can be done by using the omnipresent radio waves traversing the world thanks to cell
phones and universal radio transmissions. The mote would have to be designed so as to be able to receive the
incoming ambient radio waves, which can also be transmitted at a preset frequency from the vicinity of the
place if needed. The incoming radio waves will then be converted into electrical signals using the minute charge
provided by the piezoelectric wafer. These signals will then be used to trickle charge the onboard battery. Once
sufficient charge has been stored in the battery, it will be programmed to discharge once and start transmitting
the real time electrical pulses generated by the minute piezoelectric transducer. These weak signals can be
received by a receiver attuned to the particular frequency and amplitude of transmission, which can be
calculated using computer software once it is fed the ambient noise levels and the distance from the source.
Hence this system incorporates only minute piezoelectric wafers (which can be used in series to increase the
effective output) and a radio receiver and transmitter working on a highly tuned circuit. Initially the receiver
will work on the charge produced by the piezoelectric transducer; the minute current developed by the receiver
will be used to trickle charge the battery. Once the MEMS microcontroller circuit recognizes that the charge on
the battery is sufficient, the charge from the piezoelectric wafer being used to power the receiver circuit will be
rerouted to the transmitter circuit, now working on the trickle charged battery.
A system built on the above principle will ensure that the smart dust mote performs the functions of a receiver
and transmitter, which can be used for various purposes. The battery, if properly designed, can also be used for
the actuation of several other devices and instruments mounted on the dust mote according to the requirement,
such as gas sensors, smoke sensors, temperature sensors, humidity sensors, charge sensors, vibration sensors,
etc. They can even be used to count the number of people inside an enclosure from the heat radiated by warm
blooded bodies, using infrared sensors.
The advantage of this approach to smart sensor design and fabrication is that the size of the mote is reduced
considerably without worrying about the bulk involved. The sensor can also be used in dark, isolated areas, as
the device does not depend on ambient lighting to charge itself. It can also be designed to switch itself off for
predetermined periods of time so as to conserve energy and charge its batteries. This on/off working of the
mote will ensure an improvement in efficiency and energy consumption.
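The duty-cycled (on/off) operation described above comes down to a simple energy budget: the mote can only transmit as often as the harvested trickle charge allows. The back-of-envelope Python sketch below illustrates the arithmetic; every figure in it is a hypothetical placeholder, not a measured mote specification:

# Duty-cycle budget for a mote (all values hypothetical)
harvest_uW = 50.0       # average piezoelectric + RF harvesting power, microwatts
sleep_uW = 1.0          # sleep-mode drain
tx_mW = 3.0             # transmitter draw during a burst, milliwatts
tx_ms = 5.0             # duration of one transmit burst

banked_uJ_per_hour = (harvest_uW - sleep_uW) * 3600.0   # microjoules stored per hour
burst_uJ = (tx_mW * 1000.0) * (tx_ms / 1000.0)          # microjoules per burst

print(f"sustainable bursts per hour: {banked_uJ_per_hour / burst_uJ:.0f}")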

APPLICATION AREAS OF SMART DUST
The applications of this technology are truly vast. The smart sensors, when properly designed, will be minute
enough to be within the same size scale as dust particles (100 micrometers to just under a millimeter, 10^-3 m).
The applications of such a conglomeration of intelligent dust motes are almost unprecedented. The effective
size of these motes can be brought down further by designing function-specific motes individually, i.e. every
mote can be designed to carry out a specific function and to connect to other related motes using medium range
radio waves. Since every smart dust mote is effectively a minute microprocessor, the information relayed by the
large numbers of interconnected micro-processing entities can be fed into a single database capable of
processing the vast quantity of information. This system can then be used for various technological feats such
as accurate weather prediction, chaos study, fluid dynamics and the onset of turbulence in fluid flow. The motes
can be suspended in the troposphere and the stratosphere, where the weather changes are postulated to occur.
The actual mechanism of weather is hardly understood, apart from the fact that cyclic pressure differences
caused by uneven heating of the various parts of the Earth's surface cause it to change cyclically. These
dispersed dust motes can be tracked as a single swarm using the principles of chaos theory and swarm robotics,
giving scientists and weather forecasters a clear idea of the stratospheric and tropospheric weather conditions
in real time. This will make weather prediction considerably more accurate and reliable. The motes can be
dispersed into any environment unsuitable for human interference. They can
- Be used to check for any signs of life on alien planets. Look for water and relay information to
the orbiting satellite on the soil content and the atmospheric composition as they settle onto the
surface of the planet.
- Be used to relay information on the condition and extent of forest fires by acting as temperature sensors
dispersed into the air.
- Help in tracking vehicles and people, once they are sprayed with smart dust, by continuously relaying
information on their location, with GPS coordinates, to receiving stations or even satellites, working as a
swarm.

Applications in biological sciences
The most tremendous application of smart dust technology may be found in the field of biological sciences.
Once the actual working of the neurons in our brain is understood in association with memory and thought,
the minute smart sensors could be remotely directed to the sections of the brain that have been damaged, to act
as pseudo neurons. It has been shown that once electrical pathways are provided to the damaged portions of the
brain, it begins to heal itself. Hence, once the functioning of the brain is properly understood, smart dust
technology can be conveniently combined with nanotechnology to build primordial neurons (i.e. neuron-
equivalent electrical pathways). Memory banks, i.e. sections of hardware which can actually store information
that was previously stored in a biological brain, would then be the next logical development; it would be
possible to store the memories of a person posthumously, just as photographs are saved today. Quantum chaos
theory, when associated with smart dust technology, would also enable scientists to control the motion of
individual dust motes suspended in the air via the manipulation of the electrostatic fields associated with the
swarm (Newtonian concepts of momentum and inertia would no longer apply at the quantum level, where
electrostatics and possibly quantum electrodynamics, QED, would dominate). This, when done properly, would
enable scientists to assemble and disassemble huge swarms of smart dust motes according to their requirements.
It would thus pave the way for micro-structured artificial limbs and advanced prosthetic tool attachments for
human beings, allowing them to work more efficiently in more drastic environments with added power and
accuracy. This technology would truly pave the way for bionic Homo sapiens, bridging the gap between man
and machine into a more integrated interdependence.

Another application will be in the field of education. The dust motes, being conducive to memory storage due
to their inbuilt micro-processing capabilities, can be further employed to revolutionize the education and
learning system being followed presently. Once human memory processing and storage in the brain are
successfully mapped out, and smart sensor technology is successfully modified to integrate with and substitute
for subliminal neurons in the brain, human learning and memory will have the potential to increase at an
exponential rate. An average person need not memorize the tremendous amount of knowledge that we humans
have accumulated over the years. More information is uploaded onto the internet every day than a human being
can go through in an entire lifetime. Knowledge could be fed into the brain and stored in the dust motes, from
where all the knowledge of the entire human civilization would be available and accessible to one and all
without divisions, financial or otherwise. Moral issues concerning this technology will have to be considered
and corrective steps taken likewise. It may be noted that though one may have access to humanity's collective
knowledge, it does not entail one having the wisdom to use it or understand it completely. Hence the primordial
education system will still have to be continued at some level, where humans would have to unlock their true
potential of unlimited knowledge through training and experience.
Eventually it is safe to envision an era where the internet does not exist in a virtual environment but as a
conscious living entity connecting the human race beyond the trivialities of space and time, as part of the
collective human psyche. Though fabrication and design at this level are far from conceivable, the theory
remains sound.

Applications in energy generation:
A more practical, though equally revolutionary, application of smart dust MEMS technology is in the field of clean energy generation. The proposal given in this paper deals with practical and practicable methodologies to generate vast quantities of energy in space; this application would be especially suitable for deep-space exploration missions. The technology is already at a level where it could be put to the use suggested here, with only modest further research and development required (mainly addressing vulnerability to radiation and to the electrical interference caused by solar flares and electromagnetically charged particles). Smart dust motes, if safely injected into the extraterrestrial region beyond the Earth's atmosphere in such a way that they acquire a stable orbit around the planet, could be exploited for energy generation. This can be done as follows:

If the smart dust motes are to be programmed using swarm control, the best way to do so would be to develop a library of "group behavior building blocks" that can be combined to form larger, more complex applications. The motes would use these behaviors to communicate, cooperate, and move relative to each other. Some behaviors are simple, like following, dispersing and counting; some are more complex, like dynamic task assignment, temporal synchronization and gradient tree navigation. Though promising research is under way, an actual coded algorithm suitable for controlling a chaotic mass of billions of motes circling the Earth in such a way that they do not collapse is still far from complete.
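A hedged sketch, in Python, of what such a library of behavior building blocks might look like (all class and method names are illustrative assumptions, not an existing smart dust API):

    # Illustrative sketch only: composable behavior blocks for mote swarms.
    class Mote:
        def __init__(self, x, y):
            self.x, self.y = x, y
        def move(self, dx, dy):
            norm = max((dx**2 + dy**2) ** 0.5, 1e-9)   # take a unit step
            self.x += dx / norm
            self.y += dy / norm

    class Disperse:
        """Move away from the centroid of neighboring motes."""
        def step(self, mote, neighbors):
            if neighbors:
                cx = sum(n.x for n in neighbors) / len(neighbors)
                cy = sum(n.y for n in neighbors) / len(neighbors)
                mote.move(mote.x - cx, mote.y - cy)

    class Follow:
        """Steer toward a designated leader mote."""
        def __init__(self, leader):
            self.leader = leader
        def step(self, mote, neighbors):
            mote.move(self.leader.x - mote.x, self.leader.y - mote.y)

    # Larger applications are composed by stacking blocks:
    leader = Mote(0.0, 0.0)
    swarm = [Mote(i, -i) for i in range(1, 4)]
    pipeline = [Disperse(), Follow(leader)]
    for mote in swarm:
        for behavior in pipeline:
            behavior.step(mote, [m for m in swarm if m is not mote])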
But once this problem is solved, the motes can be dispersed into a suitable orbit where trillions of such smart particles will act as a single entity: a conglomeration of individual motes, each communicating with its immediate neighbours, forming a ring circling the Earth. These motes would be fitted with minute solar panels, each producing a minute voltage. Added up, these minute voltages would amount to a gigantic store of energy surrounding the Earth, in the form of an artificial ring governed by the same physical laws as the rings of planets like Saturn and Jupiter.
The principle of a sky hook has been known for a long time: it is a proposed cable suspended from space down to the Earth, rotating along with the rotation of the Earth. An object fastened to the cable experiences an upward centrifugal force that opposes some of, all of, or more than the downward gravitational force at that point. Along the length of the cable, the actual (downward) gravity minus the (upward) centrifugal force is called the apparent gravitational field.

MATHEMATICAL ANALYSIS
The apparent gravitational field can be computed as:

g = -GM/r² + ω²·r

where g is the acceleration along the radius (m s⁻²), G is the gravitational constant (m³ s⁻² kg⁻¹), M is the mass of the Earth (kg), r is the distance from that point to the Earth's centre (m), and ω is the Earth's rotation speed (rad/s).
Near the Earth's surface the acceleration g₀ at radius r₀ is given by:

g₀ = GM/r₀²

(the other term is negligible), so that:

GM = g₀·r₀²

This gives the GM constant in terms of the ground acceleration and the planet radius. At some point r₁ above the equator the two terms (gravity and centrifugal force) equal each other, so that the tether carries no weight; this occurs at the level of the stationary orbit:

r₁ = (g₀·r₀²/ω²)^(1/3)


Hence GM/r₁² = ω²·r₁, which gives the value of r₁.
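As a quick numerical sketch of this result (round values for g₀, r₀ and the sidereal day are assumed; they are not figures from this paper):

    import math

    g0 = 9.81                        # ground acceleration, m/s^2 (assumed)
    r0 = 6.371e6                     # Earth's mean radius, m (assumed)
    omega = 2 * math.pi / 86164.0    # sidereal day -> rotation speed, rad/s

    GM = g0 * r0**2                  # GM = g0 * r0^2, as derived above
    r1 = (GM / omega**2) ** (1.0 / 3.0)
    print(f"stationary-orbit radius r1 = {r1 / 1e3:,.0f} km")   # about 42,000 km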
The main technical problem is the long cable's own weight: the cable material, combined with its design, must be strong enough to hold up structurally throughout the 35,000 km (22,000 mi) of its length. The primary design factor, other than the material, is the taper ratio, that is, the rate at which the cross-sectional area of the cable tapers as it goes from GEO to ground level. The solution is to build it in such a way that at any given point its cross-sectional area is proportional to the force it has to withstand; that is, the section must follow the differential equation:
σ·dS = g·ρ·S·dr

where g is the acceleration along the radius (m s⁻²), S is the cross-sectional area of the cable at any given point r (m²) and dS its variation (m² as well), ρ is the density of the material used for the cable (kg m⁻³), and σ is the stress a given area can bear without splitting (N m⁻² = kg m⁻¹ s⁻²), i.e. its elastic limit.

The same mathematical principles hold true for
any planet or satellite upon which this technology is seen fit to be implemented. Hence, if such a suspended cable is brought into close proximity to the energy ring around the Earth, one can induce a tremendous potential difference between the two ends of the cable, and the energy stored in the ring can be safely transferred back to Earth via the conductive carbon nanotubes from which the cable would have to be made. This process would be similar to the mechanism behind the lightning which strikes the ground during thunderstorms. Entire power stations could be built around the tube end to convert the power produced by the motes in orbit into usable energy on a global scale. Smart dust motes can thus be incorporated into energy production by creating a man-made Saturn-like ring around planet Earth, consisting of motes programmed as a mega-swarm based on principles of quantum chaos theory and swarm robotics. This swarm would also be programmed to face the sun continuously while revolving around the Earth in a polar orbit. Problems of interference with present satellite orbits, solar radiation and susceptibility to damage from debris floating in space would have to be addressed satisfactorily; this is where present research into self-healing mechanisms in robotic swarms and swarm-healing methodology would come into play.
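For completeness, the taper equation above integrates directly; a minimal worked form in LaTeX, assuming constant ρ and σ and the apparent-gravity expression derived earlier:

    % separate variables in \sigma\,dS = g(r)\,\rho\,S\,dr and integrate from r_0 to r
    \frac{dS}{S} = \frac{\rho}{\sigma}\left(\frac{GM}{r'^{2}} - \omega^{2} r'\right) dr'
    \;\Longrightarrow\;
    S(r) = S_0 \exp\!\left[\frac{\rho}{\sigma}\left(GM\left(\frac{1}{r_0}-\frac{1}{r}\right)
           - \frac{\omega^{2}}{2}\left(r^{2}-r_0^{2}\right)\right)\right]

so the required taper grows exponentially with the ratio ρ/σ, which is why high-strength, low-density materials such as carbon nanotubes are invoked for the cable.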
CONCLUSIONS
Smart dust motes can easily be front runners in the coming technology boom, a next phase of scientific development incorporating consciousness, biology and quantum mechanics into minute, hardly noticeable particles, each a masterpiece of human understanding and ingenuity in itself. The world can be revolutionised with a little effort and ingenuity from the best minds in scientific research and development. This paper has presented numerous applications of smart dust based on MEMS technology, both from the current technological standpoint and in terms of the scope of the technology in the near and far future. These examples have ranged from military applications to radical new sources of clean energy generated in space on a global scale. The future seems bright for this radical, relatively new approach to engineering and technology; as smaller, cheaper and longer-lasting motes become available, the applications will continue to grow, eventually becoming an integral part of almost every device and application conceivable, including ourselves.

SIMULATING THE RESPONSE OF STRUCTURES TO IMPULSE
LOADINGS


Yatin Kumar Singh¹, Dharmendra Singh², Nishant Mani³

¹Assistant Professor, Lingaya's University; ²Senior Lecturer, LIMAT, Faridabad; ³Lecturer, LIMAT, Faridabad
Email: ¹yatin_23@rediffmail.com, ²rubinchak@rediffmail.com

ABSTRACT

The need to cope with new problems, coupled with progress and its challenges, has caused new design and analysis methodologies to appear and develop; thus, besides the original concept of a structure subjected to statically applied loads, new criteria have been devised and new scenarios analysed. From fatigue to fracture, vibrations, acoustics and thermo-mechanics, to name just a few, many new aspects have been studied over the years, all in connection with the appearance of new technical or technological problems, or with growing awareness of the relevance of such aspects as safety, reliability, maintenance, manufacturing costs and so on.

Keywords: impulse loading, Finite Element Methods, degree of freedom.

INTRODUCTION

One of the problems which in recent years has increasingly been considered relevant is the behaviour of structures under impact loading. There are many reasons for such a study: one example is the requirement to ensure an ever-higher degree of safety for the occupants of cars, trains or even aircraft in impact conditions, preventing any collision with the interior of the vehicle. Another case is that connected with mechanical manufacturing or assembly, which is often carried out at such high speed as to induce impulse loadings in the members involved; in such cases the aim is to obtain a sound, even robust, result, in the sense that the result should be as independent as possible of the conceivable variations of the input variables, which in turn can only be defined on a probabilistic basis, owing for example to manufacturing scatter and tolerances. Two main aspects arise in such problems. The first is the definition of the mechanical properties of the materials: the analysis of member behaviour under impulsive loading generally requires knowledge of the characteristic curves of materials at high strain rates, which is not usually included in the standard tests carried out, so that new experimental tests have to be devised to obtain the required data. At the same time, new material families are generated daily for which no test history is available; for plastics and foams, for example, the search for a reliable database is often a very hard task, so that the analyst has to become a test designer, devising the test which most efficiently yields the data needed.
The second problem is the complication of the geometry, which adds to the complexity of the analysis of the load conditions. In such cases it is natural to turn to numerical methods, to the increasing capabilities of computers and commercial codes, and, first of all, to Finite Element Methods (FEM). FEM is no longer what it used to be in the 1970s, when, from a practical point of view and theory apart, it could scarcely deal with rather easy problems under static conditions. Nowadays there are commercial codes which can handle some millions of degrees of freedom (dofs) in static as well as dynamic load conditions. The development of numerical procedures which, applying Lagrangian and Eulerian formulations for finite strains and stresses, allow the analysis of non-linear continua, the use of particular routines for time integration, and progress in the theory of constitutive laws for new materials are just a few of the elements which not only let today's researchers investigate rare and particular behaviours of structures, but also allow the birth of rather easy-to-use codes which are increasingly adopted in industrial environments.
Even with such capabilities, the classical implicit finite element method encounters many difficulties; one therefore has to use other tools, first of all the explicit FEM, which is well suited to studying dynamic events that take place in very short time intervals. That doesn't mean that analysts don't find relevant difficulties when studying the behaviour of structures subjected to impulsive loads. For example, one usually has to use very short steps in time integration, which makes such analyses very time-consuming, the more so as serious problems must be overcome in the treatment of the interface elements used to simulate contact and to represent external loads; finally, only first-order elements (four-node quadrilaterals, eight-node bricks, etc.) are available in the present versions of the most popular commercial codes, which requires very fine meshes to model the largest part of members and in turn asks for even shorter time steps.

MAIN ASPECTS OF EXPLICIT FEM

Finite element equations can be written according to Lagrangian or Eulerian formulations. In the former the material is fixed to the finite element mesh, which deforms and moves with the material; in Eulerian space the finite element mesh is stationary and the material flows through it, which is well suited to fluid dynamics problems. As most structural analysis problems are expressed in Lagrangian space, most commercial codes develop their finite element formulation in that space, even if all of them include algorithms based on the Arbitrary Lagrangian-Eulerian (ALE) formulation to handle fluid-like material simulation.

To solve a problem of a three-dimensional body located in a Lagrangian space, subjected to external body
forces bi(t) (per unit volume) acting on its whole volume V, traction forces ti(t) (per unit area) on a portion of
its outer surface St, and prescribed displacements di(t) on the surface Sd, one must seek a solution to the
equilibrium equation:

σij,j + ρ·bi = ρ·ẍi   (1)

satisfying the traction boundary conditions over the surface St:

σij·nj = ti(t)   (2)

and the displacement boundary conditions over Sd:

xi(Xα, t) = di(t)   (3)

where σij is Cauchy's stress tensor, ρ is the material density, nj is the outward normal unit vector to the traction surface St, Xα (α = 1, 2, 3) and xi are the initial and current particle coordinates, and t is the current time. These
equations state the problem in the so-called strong form, which means that they are to be satisfied at every
point in the body or on its surface; to solve a problem numerically by the finite element method, however, it is
much more convenient to express equilibrium conditions in the weak form where the conditions have to be
met only in an average or integral sense. In the weak form equation, an arbitrary virtual displacement xi that
satisfies the displacement boundary condition in Sd. Multiplying equilibrium equation (1) by the virtual
displacement and integrating over the volume of the body yields:

by operating simple substitutions and applying traction boundary condition, eq. (4) can be reworked as:

which represents the statement of the principle of virtual work for a general three-dimensional problem. The
next step in deriving the finite element equations is spatial discretization. This is achieved by superimposing a
mesh of finite elements interconnected at nodal points. Then shape functions Nα are introduced to establish a relationship between the displacements at inner points of the elements and those at the nodal points:

xi = Nα·xiα   (6)

This step governs all numerical formulations based on the finite element method, whose equations are obtained by discretising the virtual work equation (5) and replacing the virtual displacement through eq. (6), which links the displacements at inner points of the elements to the displacements at the nodal points:

Σm=1..M { ∫Vm ρ·ẍi·Nα dV + ∫Vm σij·Nα,j dV − ∫Vm ρ·bi·Nα dV − ∫St ti·Nα dS } = 0   (7)
where M is the total number of elements in the system and Vm is the volume of an element. In matrix form, eq. (7) becomes:
[M]{ẍ} = {F}   (8)
where [M] is the mass matrix, {ẍ} is the acceleration vector, and {F} is the vector summation of all the internal
and external forces. This is the finite element equation that is to be solved at each time step.
The time interval between two successive instants, tn−1 and tn, is the time step Δtn = tn − tn−1. In numerical analysis, integration methods over time are classified according to the structure of the time difference equation. The difference formula is called explicit if the equation for the function at time step n only involves the
derivatives at previous time steps; otherwise it is called implicit. Explicit integration methods generally lead to solution schemes which do not require the solution of a coupled system of equations, provided that the consistent mass matrix is superseded by a lumped one, which offers the great advantage of avoiding the solution of any system of equations when updating the nodal accelerations.
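A minimal sketch of such an update, in LaTeX (the standard central-difference form with a lumped, i.e. diagonal, mass matrix; notation follows eq. (8) and is not taken from any particular code):

    \ddot{x}^{\,n} = M^{-1} F^{\,n}, \qquad
    \dot{x}^{\,n+1/2} = \dot{x}^{\,n-1/2} + \Delta t_n\, \ddot{x}^{\,n}, \qquad
    x^{\,n+1} = x^{\,n} + \Delta t_{n+1/2}\, \dot{x}^{\,n+1/2}

With M diagonal, M⁻¹F reduces to a cheap row-by-row division, which is exactly why no coupled system has to be solved.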
In computational mechanics and physics, the central difference method is a popular explicit method. The
explicit method, however, is only conditionally stable: for the solution to be stable, the time step has to be so small that information does not propagate across more than one element per time step. A typical time step for explicit solutions is of the order of 10⁻⁶ seconds, but it is not unusual to use even shorter steps. This restriction
makes the explicit method inadequate for long dynamic problems. The advantages of the explicit method are
that the time integration is easy to implement, the material non-linearity can be cheaply and accurately treated,
and the computer resources required are small even for large problems. These advantages make the explicit
method ideal for short-duration nonlinear dynamic problems, such as impact and penetration. The time step of
an explicit analysis is determined as the shortest stable time step over all deformable finite elements in the mesh. The choice of the time step is critical, since too large a time step results in an unstable solution, while too small a one makes the computation inefficient; an accurate estimation therefore has to be carried out. Generally, the time step changes as the analysis proceeds; this is necessary in most practical calculations, since the stable step changes as the mesh deforms. This can make the total runtime unpredictable, even if tuning algorithms implemented in the most popular commercial codes try to avoid it. For example, when the step reduction is driven by high deformations localised in a small part of the model, one can add some mass to the nodes in the deformed area, though not so much as to influence the global dynamic behaviour of the structure. The same tuning process, which adds mass to the initial model in those areas where the element size is smallest, can be used to allow an initial time step longer than the auto-calculated one. As stated above, the critical time step has to be small enough that the stress wave does not travel across more than one element at each time step.
This is achieved by using the Courant criterion:

Δte = l/c

where Δte is the auto-calculated critical time step of an element in the model, l is the characteristic element length, and c is the wave speed. The wave speed c can be expressed as:

c = √(E/(ρ·(1 − ν²)))

where E, ρ and ν are the Young's modulus, density and Poisson's ratio of the material respectively. Therefore, increasing ρ results in an artificial decrease of c and in a parallel increase of Δte, without varying the mechanical properties of the material.

The time step of the system is determined by taking the minimum value over all elements:

Δt = α · min{Δte, e = 1, …, M}

where M is the number of elements. For stability reasons, the scale factor α is typically set to a value of 0.9 (the default in the most popular commercial codes, as for example in LS-Dyna) or some smaller value.
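A short numerical sketch of this estimate (the element length and material constants are illustrative assumptions, roughly matching the 0.2 mm aluminium mesh described later in this paper):

    import math

    def stable_time_step(lengths, E, rho, nu, scale=0.9):
        # Courant-type estimate: dt = scale * min(l_e / c) over all elements,
        # with the shell wave speed c = sqrt(E / (rho * (1 - nu^2))) given above.
        c = math.sqrt(E / (rho * (1.0 - nu**2)))
        return scale * min(l / c for l in lengths)

    # 0.2 mm elements in a 2024-type aluminium alloy (assumed constants):
    dt = stable_time_step([0.2e-3], E=73e9, rho=2780.0, nu=0.33)
    print(f"dt = {dt:.1e} s")   # a few 1e-8 s, consistent with the riveting model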


Another aspect to be carefully considered in the explicit finite element method is the contact definition, which makes it possible to model the interactions between two or more parts in a numerical model and which is needed in any large-deformation problem. The main objective of the contact interfaces is to eliminate any overlap or penetration between the interacting surfaces. Depending on the type of algorithm used to remove the penetration, both energy and momentum are preserved.

The contact algorithms can be classified into two main branches: one uses penalty methods, which allow penetration to occur but penalise it by applying surface contact force models; the other uses Lagrange multiplier methods, which exactly preserve the non-interpenetration constraint.

The penalty approach satisfies contact conditions by first detecting the amount of penetration and then applying a force to remove it approximately; the accuracy of the approximate solution depends strongly on the penalty parameter, which is a kind of stiffness with which the contact surfaces react to the reciprocal penetration. This method is widely used in complex three-dimensional contact-impact problems since it is simple to implement in a finite element solver. However, there are no clear rules for choosing the penalty parameter, as it depends on the particular problem considered. Moreover, the penalty method affects the stability of the explicit analysis, which is only conditionally stable, when the penalty parameter reaches a certain value relative to the real stiffness of the material of the interacting surfaces.
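As a toy illustration of the penalty idea (a linear force model and hypothetical names; commercial codes use more elaborate, surface-dependent stiffness estimates):

    def penalty_contact_force(penetration_depth, k_penalty):
        # Force resisting penetration of a slave node through a master segment:
        # zero when the surfaces do not overlap, proportional to the depth otherwise.
        if penetration_depth <= 0.0:
            return 0.0
        return k_penalty * penetration_depth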


Unlike the penalty method, the Lagrange multiplier method doesn't use any algorithmic parameter and enforces the zero-penetration condition exactly. This method can thus yield very accurate displacement fields in the analysis of static contact problems; for dynamic contact problems, however, it requires the solution of implicit augmented systems of equations, which can become computationally very expensive for large problems, and it is therefore rarely used in the solid mechanics field. In practice, a contact is defined by identifying which locations are to be checked for potential penetration of a slave node through a master segment; a search for penetrations, using the chosen algorithm, is made at every time step. In the case of a penalty-based contact, when a penetration is found a force proportional to the penetration depth is applied to resist, and ultimately to eliminate, the penetration. Rigid bodies may be included in any penalty-based contact, but if contact forces are to be realistically distributed, it is recommended that the mesh defining any rigid body be as fine as those of the deformable bodies.

Though it is sometimes convenient and effective to define a single contact to handle any potential contact situation in a model, any number of contacts may be defined in a single model. It is generally recommended that redundant contacts, i.e. two or more contacts producing forces due to the same penetration (for example near a corner), be avoided, as this can lead to numerical instabilities. To give the user flexibility in modelling contact, commercial codes offer a number of contact types and a number of parameters that control various aspects of the contact treatment. But, as already stated, there are unfortunately no clear rules for choosing these parameters: the choice depends on the user's experience and, in any case, their values are often obtained by trial-and-error iterative procedures. In any event, the best way to start a contact analysis with a commercial explicit solver is to adopt the default settings for these parameters (even if non-default values often prove more appropriate), to use the same element characteristic lengths on the interacting surfaces and, above all, to avoid initial geometrical co-penetrations of the contact surfaces. Thus the selection of the integration time step and of the contact parameters are two important aspects to be considered when analysts deal with the simulation of the response of structures to impulse loading.

The last important topic examined in the present section, which can also result in additional CPU cost compared to a run with default parameter values, regards shell element formulation. The most widely adopted shells in commercial codes belong to the families of the Hughes-Liu or of the Belytschko-Tsay shell elements. The second is computationally more efficient, owing to some mathematical simplifications (based on co-rotational and velocity-strain formulations), but suffers some restriction in the computation of out-of-plane deformations. The real problem, however, is that in order to further reduce CPU time analysts generally aim to use under-integrated shell elements (i.e. with a single integration point), and this causes another numerical problem, which also arises with under-integrated solid elements: hourglassing energy. Single-integration-point elements can shear without introducing any energy, so an added numerical energy is generated to take this into account. High hourglassing energy is often a sign that mesh issues need to be addressed by reducing the element size, but the only way to eliminate it entirely is to switch to formulations with fully integrated or selectively reduced integration (S/R) elements; unfortunately, this approach is much more time-expensive and can be unstable in very large deformation applications. Hourglassing energy is therefore generally controlled by using very regular meshes or by means of corrective algorithms provided by the commercial explicit solvers. In any case, these algorithms demand an analyst well experienced in their formulation, otherwise other numerical instabilities can arise from their use.

SOME CASE STUDIES FROM MANUFACTURING

Some case studies are now presented to introduce the capabilities and peculiarities of the analysis of structures subjected to impulsive loadings; they are connected with some relevant manufacturing problems and will let the reader grasp the basic difficulties encountered, for example, when dealing with the contact elements which model interfaces. The first deals with riveted joints and shows how to simulate the riveting operation and its influence on the subsequent bulging under an axial load, while the second comes from metal forming and deals with the stretch-bending process of an aluminium C-shaped beam.

The analysis of the riveting process

The load transfer mechanism of joints equipped with fasteners has long been recognised as one of the main factors affecting both the static resistance and the fatigue life of joints; unfortunately, such components, often considered very simple, exhibit behaviour so complex that it is far from being deeply understood, and only in recent times has the coupling of experimental tests with numerical procedures let researchers begin to understand the effects of adopting one of the available designs.

Starting from the very simple hypotheses about load transfer mechanisms used in the most common cases, the real study of such joints began just after the Second World War, mainly because from those years onward the use of bolted or riveted sheets spread increasingly and several formulae were developed by various means; in those years the neutral line method was also introduced to study the behaviour of the whole joint, with the consequence that a sound evaluation of fastener stiffness and of its contribution to the overall behaviour became strictly required. A wide spectrum of results and theories has appeared since then, each with peculiarities of its own, and bolted and riveted joints are now analysed by a variety of methods.
The requirement for such a wide range of different studies lies in the large number of variables which can affect the response of such joints, among which we can quote, from a general but not exhaustive standpoint:
- general parameters: geometry of the joint (single or several rows, single- or double-lap joints, clamping length, fastener geometry); characteristics of the sheets (metallic, non-metallic, degree of anisotropy, composition of laminae and stacking order for laminates); friction between sheets, interlaminar resistance between laminae, possible presence of adhesive;
- parameters for bolted joints: geometry of heads and washers; assembly axial load; effective contact area between bolts and holes; fit of bolts in holes;
- parameters for riveted joints: geometry of head and kind of fastener (solid, blind or cherry and self-piercing rivets, besides the many types now available); amplitude of clearance before assembly; mounting axial load; pressure effects after manufacture.
From all the above it follows that great interest is today increasingly devoted to the problem of load transfer in riveted joints, but no exhaustive analysis has been carried out so far: the many papers dealing with such studies analyse particular aspects of these joints, and little effort has been directed at the connection between the riveting operation and the response of the joint, especially with regard to behaviour in the presence of damage. The activity referred to here therefore dealt with modelling the riveting operation, in order to determine by numerical methods the influence of the assembly conditions and parameters on the residual stress state and on the effective compression zone between the sheets; a further aim was the detection of the relevant parameters of that operation to be taken into account in the analysis of joint strength. As we wished to analyse the riveting operation and its consequences on the residual stresses between the plates, the obvious choice was a dynamic explicit FEM code, namely Ls-Dyna, whose capabilities make it well suited to modelling high-speed transients without excessive run time. As a drawback, the code is known to be very sensitive to contact problems, and a finer mesh requires smaller integration time steps: the building of a good model, parametrically organised so as to make variations of the input parameters easy, therefore took a long time. The procedure we followed was to use the ANSYS 10.0 PDL (parametric design language) capabilities coupled with the Ls-Dyna solver, giving a global procedure which can be summarised in the following steps:
- Write a parametric input file for ANSYS PDL, specifying geometry, material behaviour, contact surfaces and conditions, and load cases; this yields a first, approximate and partially filled, Ls-Dyna input file;
- Complete the input file for Ls-Dyna, introducing those characteristics and instructions which are required but not present in the Ansys output, mostly control cards and some variations on materials;
- Solve the model with the Ls-Dyna code;
- Examine the results with Ls-PrePost, with the Ansys post-processor module, or with the Hyperview software, according to the particular requirements.
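A schematic driver for such a parametric organisation might look as follows (the parameter grid mirrors the cases examined below, except the thickness values, which are an assumed illustration; file handling and names are assumptions, not the authors' actual scripts):

    import itertools

    clearances = [0.02, 0.08, 0.17]      # diametral clearance, mm
    thicknesses = [1.0, 2.0, 3.0, 4.0]   # sheet thickness, mm (assumed grid)
    tool_speeds = [50, 150, 200, 250]    # riveting speed, mm/s

    for c, t, v in itertools.product(clearances, thicknesses, tool_speeds):
        case = f"rivet_c{c}_t{t}_v{v}"
        # 1) emit the parametric APDL input that writes a partial Ls-Dyna deck
        # 2) append the missing control cards and material variations
        # 3) queue the solver run and collect results for post-processing
        print(f"queueing case {case}")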
Fig. 1 shows the basic Ls-Dyna model built for the present analysis, with reference to a solid rivet; the model is composed of seven parts: three solid parts, made of brick elements, and four parts composed of shells, three of which are required to represent the contact surfaces, while the last forms a plane rigid wall representing the riveting apparatus.

Fig. 1. The model used to simulate the joint

Fig. 2. The model of the rivet

A finer mesh with a 0.2 mm average edge length was adopted to model the stem of the rivet (fig. 2) and those parts of the sheets which, around the hole and below the rivet head, are most affected by high stress gradients; a coarser mesh was adopted for the other zones, such as the rivet head and the parts of the sheets relatively far from the rivet.

The whole model was composed, in the basic reference case, of 101,679-109,689 nodes and 92,416-100,096 brick elements, according to the requirements of the single cases. This is quite a large model, and runtimes were accordingly rather long, around 9-10 hours on a common desktop; more complex cases were run on a single blade of an available cluster, equipped with 2 Xeon 3.84 GHz processors and 4 GB RAM, where comparatively shorter times were of course obtained. The main reason for such long times is the very short time-step required for the solution, about 1.0E-08 s, because of the small edge length of the elements. The solid parts
of the rivet and sheets were modelled using material 3 from the Ls-Dyna library, which is well suited to modelling isotropic and kinematic hardening plasticity, with the option of including strain-rate effects; values were assigned with reference to 2024 aluminium alloy. The shells corresponding to the contact surfaces were modelled with material 9, the so-called null material, to take into account the fact that those shells are not part of the structure but are only needed to smooth out the contact conditions; shells of that material are completely bypassed in the element stiffness processing, but not in the mass processing, implying an added mass, and for that reason penalty coefficients have to be assigned manually in the input file. Some calibration was required to choose the thickness of those elements, seeking a compromise between the added mass resulting from too large a thickness and the negative effect on contact of too small a thickness, since in that case the Ls-Dyna code doesn't always detect penetration. The punching part was modelled as a rigid material (mat. no. 20 from the Ls-Dyna library); such a material is very cost-effective, as rigid parts, too, are completely bypassed in element processing and no space is allocated for storing history variables; this material is also usually adopted for the tooling in a forming process, as the tool stiffness is some orders of magnitude larger than that of the piece being worked. In any case, for contact purposes the Ls-Dyna code expects material constants, which were assumed to be about ten times those of steel. As regards the size of the rivet, a 4.0 mm diameter rivet was assumed, with a stem at least 8.0 mm long; as required by the general standards, considering the tolerance range, the real diameter can vary between 3.94 and 4.04 mm, while the hole diameter lies between 4.02 and 4.11 mm, resulting in diametral clearances ranging from 0.02 to 0.17 mm; three cases were then examined, corresponding to 0.02, 0.08 and 0.17 mm clearances.
The sheets, also made of aluminium alloy, were considered to range from 1.0 to 4.0 mm in thickness, given the diameter of the rivet; the extension examined for the sheets was assumed to correspond to a half-pitch of the rivets and, in particular, was assigned to be 12.5 mm. Along the thickness a variable number of elements could be assigned, but we took it to be the same as the element spacing along the stem of the rivet, because contact algorithms give the best results if the spacing is the same on the two sides of the contact region. In general, we used a 0.2 mm edge length for those elements, which resulted in 5 elements through the thickness, but cases with 10 and 20 elements were also investigated in order to check the convergence of the solution.

Finally, as for the loads, they were applied by imparting an assigned speed to the rigid wall and recovering the resulting load a posteriori, because previous experience suggested not applying forces directly; besides, the only loads accepted by Ls-Dyna are body forces, a single concentrated force on a rigid body, or nodal forces or pressure on shell elements: the last two choices don't guarantee the planarity of the loaded end after deformation, which can be obtained by applying the load on the tool, but in past experience that approach proved rather difficult to calibrate.

Therefore we assumed a hammer speed-time law characterised by a steep rise, in about 0.006 s, up to the riveting speed, which remains constant for a convenient time, followed by an inversion, also in about 0.006 s, after the desired distance has been covered; considering that the available data mention 0.2 s as a typical riveting time, the tool speed was assumed to be 250 mm/s, even if the effects of lower velocities (200, 150 and 50 mm/s) were also examined.

Summarising, the variables assumed in the analyses carried out so far were as follows:
- initial clearance between the rivet stem and the hole;
- thickness of the sheets;
- speed of the tool.

The results obtained can be illustrated, first of all, by means of some contour plots, beginning with figs. 3 and 4, where the variation of the von Mises equivalent stress is illustrated for the cases defined above concerning the clearance between rivet and hole. It is quite evident that the general stress state for the maximum-clearance case is well below what happens when the gap decreases, also considering the maximum values of the scales: the mean stress level in the sheets increases, as do the largest absolute values, which are found where the rivet folds against the edge of the hole.

Fig. 3. Von Mises stress during riveting for max clearance

Fig. 4. Von Mises stress during riveting for min clearance

While the previous results were illustrated with reference to the time when the largest displacement of the rigid wall occurs, others are best observed at the final time, when the tool has left the rivet and any stress recovery has taken place. For example, it is useful to look at the distribution of pressure against the inner surface of the hole for the same cases as above. The results can be summarised by observing that in the presence of the maximum clearance the rivet cannot fill the hole completely and the second sheet is only partially subjected to internal load; all the load is then absorbed by the first edge of the hole, which is therefore overstressed, as part of the wall does not participate in balancing the load. The external area of the first sheet affected by the folding of the rivet is also quite large. As the clearance reduces, it can be observed that gradually the whole internal surface of the hole comes into contact with the rivet and can therefore exert a stiffening action on the stem, which folds to a lesser degree and therefore cannot transmit a very large load to the edge of the hole, as can be observed in fig. 5 from the volume of the sheet subjected to significant radial stresses.

Fig. 5. Residual pressure for min clearance


The extension of the volume affected by plasticity also increases. In particular, we found that in the presence of a larger gap only part of the first sheet is plastically deformed, but the corresponding deformation reaches higher values, all at the external edge or immediately near it; as the clearance reduces, the maximum plastic deformation becomes smaller, but plasticity reaches the edge of the second sheet, and this effect is still larger at the minimum clearance, where a larger part of the second sheet is plastically deformed; at the same time the largest values of the plastic deformation at the first edge become moderately higher, owing to the constraint exerted by the inner surface of the hole noted above.

It is interesting to notice that the compression load is not much altered by varying the riveting velocity, as can be observed from fig. 6 for 1.00 mm thick plates; more noteworthy is the large decrease from the peak to the residual load, which is more or less the same for all cases.

On the other hand, increasing the thickness produces larger compression loads (fig. 7), as was to be expected because of the larger stiffness of the elements. It must be noted, for comparison purposes, that in the plots above the load is the one acting on the whole rivet and not on the quarter model.

Fig. 6. Influence of velocity on compression load


Aiming to evaluate the consequences of the riveting operation on the behaviour of a general joint, through the residual stress state induced in the sheets, the effect of an axial load was investigated, considering loads high enough to cause a bulging effect. As a first step, using an apparatus (Zwick Roell Z010-10kN) available at the laboratories of the Second University of Naples, a series of experimental bearing tests (ASTM E238-84) was carried out on a simple aluminium alloy 6xxx T6 holed plate (28.5 x 200 x 3 mm, hole diam. 6 mm), equipped with a 6 mm steel pin (therefore different from the rivet for which results were presented in the previous pages), obtaining the response curves shown in fig. 8. In the same graph the numerical results are illustrated, obtained from non-linear static FE simulations developed using the ANSYS ver. 10 code. As can be observed, the agreement between numerical and experimental results is very good. This experimental activity made it possible to set up and develop the FE model (fig. 9) of each single sheet of the joint and, in particular, its elastic-plastic material behaviour.

In order to investigate the influence of the riveting process, the residual stress-strain distribution around the hole coming from the riveting simulation above was transferred to the model of the riveted joint (sheet dimensions 28.5 x 200 x 1 mm, hole diam. 6 mm). The transfer procedure consisted of fitting the deformed rivet into the undeformed sheets and subsequently recovering the real interference as the first step of an implicit FE analysis. After the riveting effect had been transferred to the joint, the sheets were loaded along the longitudinal direction; the distribution of von Mises stress around the hole of one sheet of the joint at the maximum value of the axial load is illustrated in fig. 10. The results in terms of axial load vs. axial displacement have been compared (fig. 11) with

Fig. 7. Influence of thickness on compression load


Fig. 8. Results from experimental and numerical bearing tests


Fig. 9. FE model of a single joint sheet


Fig. 10. Bulging of the riveted hole coming from implicit FEM

those previously obtained from the analysis of the same joint without taking the riveting effect into account: it can be observed that the effects of the riveting operation cause a reduction of the bearing resistance of the joint of about 10%. On the same plot the results obtained by analysing the axial loading by means of the explicit code are also shown: this procedure obviously proved very time-consuming compared with the explicit-to-implicit scheme, without giving relevant advantages in terms of results, and it is therefore clear that the explicit-implicit formulation can be adopted for such analyses.


Fig. 11. Effect of the residual stress state on the behaviour of the joint
CONCLUSIONS:

The explicit codes available today allow the analyst to study very complex structures under impulsive loads; the cases considered above show the depth and accuracy which can be obtained, with relevant gains in fields such as manufacturing, comfort and safety. These advantages are in any case reached through very demanding simulations, as they require accurate modelling, very fine meshes and, most importantly, a sound knowledge of the behaviour of the materials used under very particular conditions and at high strain rates. The continuous advances in computers and solution methods let us forecast conspicuous progress in the near future, above all in the speed of processors and algorithms, which will make it possible to perform more simulations, reducing the number of experimental tests, and to deal with the probabilistic aspects of such load cases.


A DIFFERENTIAL SERIES PARALLEL HYBRID DRIVE SYSTEM
Simrat Pal Singh¹, Subodh², Gautam Sharma²

¹Student, Lingaya's Institute of Management and Tech., Faridabad; ²Student, Lingaya's University
simrat666@gmail.com

ABSTRACT

The following describes the working of an alternative series/parallel hybrid drive using a differential gear box. A hybrid vehicle, also termed a hybrid gas-electric vehicle, uses two power sources to provide energy for propulsion: an internal combustion engine and an electric drive system. The two power sources are combined into one vehicle power train. The internal combustion engine and electric drive system work together under computer control to propel the vehicle and operate its electrical accessory systems.

Key Words: Differential gear box, Hybrid
INTRODUCTION

Types Of Hybrids
There are several types and classifications of hybrid drive systems. For example, hybrids can be classified by
their drive train configuration and when they employ their internal combustion system. You should understand
their differences and similarities.

Series Hybrid
A series hybrid has a separate generator and traction motor; it does not use a motor-generator. The traction motor is the only means of applying torque to the vehicle's drive train; the engine has no mechanical connection to the drive train. At least one major auto manufacturer is developing a series hybrid for mass production. The series hybrid can operate in all-electric mode when the battery pack charge is sufficient to propel the vehicle from a standstill and in reverse, with the gas engine remaining off. When the battery pack becomes drained, the internal combustion engine starts and turns the generator to charge the battery pack.

Parallel Hybrid
A parallel hybrid uses both the internal combustion engine and motor-generator to apply mechanical torque to
the drive train. The engine and motor-generator work in parallel, or at the same time. During rapid acceleration,
the internal combustion engine and the motor-generator apply full motor torque
to the drive train. At low speeds and when stopped with the internal combustion engine completely shut down,
the parallel hybrid operates in full-electric mode.
A parallel hybrid drive train is the most common design used in modern passenger vehicles. It has the
advantages of an all-electric vehicle in city driving but can also perform like an engine-powered vehicle under
full acceleration or when highway driving.

Series-Parallel Hybrid
The series-parallel hybrid merges the advantages of the parallel hybrid with those of the series hybrid. The
internal combustion engine can drive the wheels mechanically but can also be disconnected from the drive train
so that only the motor-generator propels the vehicle. This is the most common type of high-efficiency hybrid
used by several manufacturers. A series-parallel hybrid can use two or three motor generators in the
transmission case or the rear differential case. They can be found on full-size vehicles and SUVs to help reduce
fuel waste while driving all four wheels and tires.

Full Hybrid
A full hybrid uses all electric energy to initially accelerate and propel the vehicle. The internal combustion
engine only runs when the HV battery pack becomes almost fully discharged. Full hybrids can accelerate
normally (not full throttle) without consuming fuel or emitting exhaust emissions.
The full hybrid is propelled by the motor-generator until the HV battery pack only has about 30% charge
remaining. Then the hybrid drive ECU starts the gas engine to more quickly propel the vehicle and recharge the
HV battery pack.

Assist Hybrid
The assist hybrid can only move from a standstill when the internal combustion engine is running. A small
motor-generator assists the gas engine in accelerating and propelling the vehicle from a standstill to about 10 to
20 mph. This increases gas mileage slightly, but not as much as the full hybrid.

Plug-In Hybrid
A plug-in hybrid can be connected to a 120-volt AC home wall outlet and extension cord to fully recharge the
HV battery pack at night. If the hybrid has not used regenerative braking to fully recharge the HV battery pack,
home wall outlet power is used to recharge the HV battery pack. Then, when the vehicle is driven the next day,
it can operate in the all-electric mode without starting the gas engine.

DIFFERENTIAL SERIES-PARALLEL HYBRID DRIVE SYSTEM
First we need to look into the working of the differential. Torque is supplied from the engine, via the transmission, to a drive shaft which runs to the final drive unit containing the differential. A spiral bevel pinion gear takes its drive from the end of the propeller shaft and is encased within the housing of the final drive unit. This meshes with the large spiral bevel ring gear, known as the crown wheel (the crown wheel and pinion may also mesh in hypoid orientation, not shown). The crown wheel gear is attached to the differential carrier or cage, which contains the 'sun' and 'planet' wheels or gears: a cluster of four opposed bevel gears in perpendicular planes, so that each bevel gear meshes with two neighbours and rotates counter to the third, which it faces but does not mesh with. The two sun wheel gears are aligned on the same axis as the crown wheel gear and drive the axle half shafts connected to the vehicle's driven wheels. The other two planet gears are aligned on a perpendicular axis which changes orientation with the ring gear's rotation. Most automotive applications contain two opposing planet gears; other differential designs employ different numbers of planet gears, depending on durability requirements. As the differential carrier rotates, the changing axis orientation of the planet gears imparts the motion of the ring gear to the motion of the sun gears by pushing on them rather than turning against them (that is, the same teeth stay in the same mesh or contact position), but because the planet gears are not restricted from turning against each other, within that motion the sun gears can counter-rotate relative to the ring gear and to each other under the same force.
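In kinematic terms this is the standard open-differential relation (stated here as a sketch, not taken from the paper): the carrier speed is the average of the two sun-gear speeds, while the torque splits equally between them,

    \omega_{carrier} = \tfrac{1}{2}\,(\omega_{sun,1} + \omega_{sun,2}), \qquad
    T_{sun,1} = T_{sun,2} = \tfrac{1}{2}\,T_{carrier}

which is why, as noted later, each power source on the proposed splitter must individually exceed the minimum torque needed to drive the output.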

Before proceeding we should know about power splitters. A power splitter, as the name suggests, is a device (usually a gearbox) that helps in switching between power sources, such as between the electric motor and the internal combustion engine.

The following figures illustrate this.

The following figure illustrates power switching to all-electric mode, that is, a zero-emission drive.

The splitter is also capable of transmitting power from the wheels to the motor, as in the case of regenerative braking. This occurs when no power is supplied to the power splitter and the car is decelerating: power is then transmitted from the wheels to the motor, which at this time acts as a generator. The following picture illustrates this.



In modern cars such as the Toyota Prius, a planetary gear box is installed that does this job. The idea here is instead to use a differential setup for the purpose. The shafts connected to the bevel gears are to be used as power input shafts: one is connected to the internal combustion engine output and the other to the electric motor.
However, a small modification has to be made. A problem arises at the time of regenerative braking or in all-electric mode: the power can be enough to turn the engine, and effective power transfer will then not take place. To prevent this, the shaft we want stopped has to be held using brakes; in other words, the power splitter works by engaging or disengaging brakes.
The following shows the design in a very simple form:


The following figure gives a schematic in a vehicle


The working is shown by a series of diagrams (note that the coloured path marks the direction and the components through which power is transferred).

When only the I.C. engine works, brake 1 is applied by the control unit for switching, and hence all the power is transferred to the power shaft going to the rear axle. The following figure illustrates this; the power transferred to the rear differential can be seen (in colour).

In the second case, when only the electric motor works, brake 2 is applied and brake 1 is released by the control unit for switching, and hence all the power is transferred to the power shaft going to the rear axle. The following figure illustrates this; the power transferred to the rear differential can be seen (in colour).


The third case is when we want extra power (as in the case of acceleration): we can have both the electric motor
and the I.C. engine delivering their power. The control unit disengages both brakes 1 and 2, and the whole power
is transmitted to the rear differential.
A very important point to note is that the torque applied by the electric motor, as well as by the engine, should
individually be more than the minimum torque required to drive the rear differential.
An I.C. engine, as we know, cannot turn in the opposite direction, but electric motors usually can. Hence if the
motor torque is less than the minimum torque required, the engine will drive the motor in the
reverse direction, and this will damage the motor.
Therefore an important point is that the motor should be selected properly, or its usage will create a problem.
This condition is written out symbolically below.
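Writing T_min for the minimum torque required to drive the rear differential (our notation, introduced only for this restatement), the combined-drive condition above reads:

    T_{motor} > T_{min} \quad \text{and} \quad T_{engine} > T_{min}

If the motor side violates its inequality while both brakes are disengaged, the surplus engine torque back-drives the motor, which is the failure mode described above.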
In this figure the power transferred to the rear differential can be seen (in colour)

The fourth case is the use of regenerative braking: during deceleration, power can be transferred from the
rear differential to the electric motor.
In this figure the power transferred from the rear differential can be seen (in colour).


This can also be used in front-wheel-drive cars. The following figure explains this.

Here shaft 1 is from the electric motor and shaft 2 is from the transmission gearbox of the I.C. engine. A small
modification is made here: a pinion gear is removed, and the ring gears of both differentials are replaced with spur
or helical gears (both coloured in the above figure). This can help in the development of front-wheel-drive hybrid
cars.

CONCLUSIONS

We can switch between different drives, and the switching pattern is tabulated below.

S.No  Name of drive system                        Brake 1     Brake 2     Power from                    Power to
1     Full hybrid / all-electric drive            Disengaged  Engaged     Electric motor                Wheels
2     I.C. engine drive                           Engaged     Disengaged  I.C. engine                   Wheels
3     Full acceleration mode (electric & I.C.E.)  Disengaged  Disengaged  Electric motor & I.C. engine  Wheels
4     Deceleration / regenerative braking         Disengaged  Engaged     Wheels                        Electric motor

This information can be used in designing the power splitter control unit; a minimal sketch of such switching
logic follows. This is a simple setup, and hydraulic braking can be used.
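As an illustrative sketch only (the mode names, the Brake flags and the select_mode rule are our own assumptions, not part of any production engine control unit), the switching pattern above could be encoded as follows:

from enum import Enum

class Brake(Enum):
    ENGAGED = "engaged"
    DISENGAGED = "disengaged"

# (brake 1, brake 2, power from, power to) for each drive mode, per the table above
SWITCHING_PATTERN = {
    "all_electric":  (Brake.DISENGAGED, Brake.ENGAGED,    "electric motor",               "wheels"),
    "ic_engine":     (Brake.ENGAGED,    Brake.DISENGAGED, "I.C. engine",                  "wheels"),
    "full_accel":    (Brake.DISENGAGED, Brake.DISENGAGED, "electric motor & I.C. engine", "wheels"),
    "regen_braking": (Brake.DISENGAGED, Brake.ENGAGED,    "wheels",                       "electric motor"),
}

def select_mode(accelerating: bool, decelerating: bool, battery_ok: bool) -> str:
    """Toy mode-selection rule; a real controller would weigh many more inputs."""
    if decelerating:
        return "regen_braking"
    if accelerating:
        return "full_accel"
    return "all_electric" if battery_ok else "ic_engine"

mode = select_mode(accelerating=False, decelerating=True, battery_ok=True)
brake_1, brake_2, src, dst = SWITCHING_PATTERN[mode]
print(f"{mode}: brake 1 {brake_1.value}, brake 2 {brake_2.value}; power {src} -> {dst}")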
The above concepts of full hybrid technology are used in vehicles such as the Honda Civic Hybrid,
Toyota Prius, Honda Insight and many more.
The concept explained above, however, is new and has not been used till now.



SECTION 3



TRIBO-MECHANICAL SYSTEMS:
CONCEPT, APPLICATIONS AND FUTURE RESEARCH POTENTIAL

U.K. Gupta¹ and Dr. Surender Kumar²
¹Assistant Professor, Department of Mechanical Engineering, Lingaya University, Faridabad
²Professor, Department of Mechanical Engineering & Dean (R&D), GLA University, Mathura

umakant_ranchi@yahoo.co.in; Kr.surender@gmail.com

ABSTRACT
Our mechanisms and processes, whether basic or complex, are made up of simple mechanisms. Whenever there is
any movement in a simple device or in a complex technological system, relative motion is involved.
Tribology is the science which deals with lubrication, friction and wear of surfaces in relative motion. With
increasing mechanization and automation, and with the greater use of capital-intensive equipment with
higher production rates, breakdowns and mechanical failures are detrimental and increase
expenses. Keeping the above in view, this paper explains the concept and applications of tribology in mechanical
systems. It also highlights the future research potential in tribo-mechanical systems needed for the economic
growth of society and the country as a whole. In the end it is concluded that in a capital-scarce country
like India, it is imperative for all industries, big or small, to reduce friction and wear by better material
selection and more effective lubrication of their systems.
Keywords: tribology, lubrication, wear, friction, mechanical failures

CONCEPTUAL FRAMEWORK:
The term tribology is quite new, but the subjects it deals with are not. It is a distinctive name given
to friction, wear and lubrication; friction and wear, being natural phenomena, are as old as nature itself,
whereas lubrication, the art of it if not the science, is perhaps as old as human civilization. The word tribology
was coined by a working party set up by Lord Bowden, the Minister of State for Education and Science, in
December 1964, to probe the background to some ominous failures in British machinery caused by faulty
lubrication. Mr. H. Peter Jost, the working party's chairman, director of Centralube, K.S. Paul Products and other
companies, and a much sought-after lubrication consultant to industry, explained that a new word was needed
because lubrication was too firmly linked in people's minds with oil cans and grease guns; they were thinking
of a much wider concept. The word they finally found, with the help of the English dictionary department of the
Oxford University Press, is derived from the Greek word tribos (rubbing). It covers what is now understood by
lubrication, plus friction, wear and bearing design, and extends even further to include the science and
technology of interacting surfaces in relative motion and of the practices related thereto. It was not until 1966,
at the instigation of the Jost committee in England, that the new word was adopted. Since then the term tribology
has gained wide acceptance. Tribology is the science which deals with lubrication, friction and wear of surfaces
in relative motion. The reliability of operation and the required time between overhauls (TBO) are significantly
affected by the above-mentioned three factors.

All the various subjects under wear, friction and lubrication have been brought under the new name
tribology, thus embracing physics, metallurgy, chemistry, mechanical engineering and mathematics. Tribology
is now, therefore, a branch of science dealing with force transference between surfaces moving relative to each
other. Lubrication engineering has thus risen from a position of relative obscurity to what is now considered a
key factor in industry. With increasing mechanization and automation and with greater use of capital-intensive
equipment and higher production rates, breakdowns and mechanical failures are becoming increasingly
expensive both directly and consequentially. Among the principal reasons for the normal and general occurrence
of maintenance breakdowns and mechanical failures in industry, breakage, corrosion and wear feature
prominently, the latter being the predominant group of causes for maintenance repairs and also a significant cause
of failures. Efforts to prevent such maintenance breakdowns and mechanical failures due to wear
have led to the fundamentals of tribology, i.e. the fundamentals of interacting surfaces in relative motion.
According to some experts, tribology means a state of mind and an art: the intellectual approach to a flexible
cooperation between people of widely differing backgrounds. It is the art of applying operational analysis to
problems of great economic significance, namely the reliability, maintenance and wear of technical equipment,
ranging from spacecraft to household appliances. The work of the tribologist is truly interdisciplinary,
embodying physics, chemistry, mechanics, thermodynamics, heat transfer, fluid mechanics, metallurgy, materials
science, rheology, lubrication, machine design, applied mathematics, reliability and performance.

Lubrication is the third eye of tribology. In fact, tribology is a systematic and logical extension of the science
of lubrication, practiced since the advent of relative motion. It has now become a system in which lubrication
plays a dominant role, be it at the stage of design, selection of materials, operation of equipment and machinery,
or maintenance thereof. Without proper lubrication all equipment and machinery would go out of order or come
to a grinding halt, disrupting production and the economy. Such being the importance of lubrication, a periodical
review of the policies and practices connected with lubricants becomes essential. Lubricant consumption and
costs can be reduced:
a) by using proper lubricants for a given application;
b) by developing lubricants which allow longer drain periods;
c) by avoiding unnecessary leakages through seals, joints and fittings;
d) by reducing storage and handling losses;
e) by recycling used oils.

Each industry may also do well to set up performance standards to measure the effectiveness of tribology
programmes. A true measure of a lubrication department's performance can possibly be determined by
considering each or some of the following, viz. (1) expenditure on lubricants, (2) the costs of storing,
transporting and applying lubricants, (3) the cost of maintaining lubricating equipment, (4) the cost of downtime,
and (5) the cost of replacing lubricating parts, etc.

It is obvious that wear is something we all have to fight: people have been known to fight aging (which is a form
of wear) by spending large sums of money on cosmetics. In a way, the application of these cosmetic principles in
industry is tribology: allowing a component to be used longer, essentially by altering its surface properties. The
analogy between tribology and life itself is not only applicable to cosmetics, as just indicated, but also to
household items such as cooking vessels, where Teflon coatings have been introduced to reduce the consumption
of oil, and thereby heart attacks as well. Copper-based stainless steel utensils, which are now in vogue, combine
the requirements of external aesthetics and control of adulteration of foodstuffs with the requirement of high
conductivity, so that food does not take forever to cook. Even in the areas of dentistry and the choice of
appropriate implants, should our bones break because of an unfortunate accident or simply wear out through
repetitive use, the principles of tribology apply. The material has to be correctly chosen so as to control
friction in the medium in which it operates, be it in our mouth or in our hips.
To study tribological problems, it is necessary to have knowledge of the following:
1) The nature of the surfaces: their elasticity, plasticity, roughness, hardness, etc.; their affinity, chemical or
otherwise, with the lubricant; the effect of the surrounding environment, etc.
2) The nature of the thin film of external material between the surfaces: solid, liquid or gas, or a
combination of these as a suspension or emulsion; its affinity to the surface with regard to absorption,
adhesion, chemical reaction, etc.
3) If the lubricant is a liquid, its viscosity and its dependence on temperature, pressure, and the concentration of
additives such as polymers.
4) External and internal forces (shear rate, etc.) and boundary conditions.


Fig-1
From the above, it is obvious that knowledge of various areas in science and engineering is needed
to have the total picture of this problem. In particular, if the system forms a part of a human body
joint, such as the knee joint, then knowledge of bioengineering is essential. Similarly, to study the
randomness in the topography of the surface, knowledge of statistics is desirable. Finally, to form a
mathematical model of the problem and to find its solution, the help of a mathematician is essential. Thus
tribology is meant to be a truly interdisciplinary approach to the various aspects of friction, wear and lubrication
of machinery, processes, and the fluids used in control and production processes. This approach calls for the
concentrated application of specialized knowledge in such diverse subjects as metallurgy, surface
chemistry, physics, engineering design, simulation techniques for performance evaluation, etc.
Tribology has not received in India the recognition due to it. Tribology is just considered incidental to
the production activity. Engineering education hardly places any emphasis on this aspect, which is surprising,
since more than 50% of machinery breakdowns, excessive lubricant consumption, high
replacement rates of spares and loss of power are fairly common occurrences in industry. It must
be realized that no industrial unit, either big or small, can generally afford to keep on its payroll all the
specialists required to tackle problems of a tribological nature. Oftentimes it is difficult even to identify the
causes of a machinery breakdown, which could happen due to any one of such diverse factors as inadequate
design, improper materials of construction, improper lubricants and hydraulic fluids,
mishandling of machinery, etc. Some typical problems which could be solved, or even obviated at the design
stage, through the application of the principles of tribology are: pitting of gears; scuffing of pistons in I.C.
engines; cavitation and corrosion in cylinder liners; metal transfer in machine tools; fatigue and pitting in
bearings; economical manufacture of products; etc.
It has been the experience of the developed countries that investment in tribology research and its
application to practical problems quickly pays off in terms of higher production and reduced costs. For example,
an investigating committee in the United Kingdom estimated that losses could be cut down and savings effected to
the tune of 510 million pounds per year in British industry in general, by adopting better designs and
practices in the sphere of tribology. In an industrially backward country like India, much more scope exists for
saving. Tribology studies, therefore, should be assigned the recognition they deserve, on a priority basis, in our
educational institutions.

TRIBO-MECHANICAL SYSTEMS:
Tribology is customarily thought to encompass the specific problems of friction, wear and lubrication,
although the following opinion is increasingly prevalent: tribology is a general concept embracing all aspects of
the transmission and dissipation of energy and materials in mechanical equipment, including various aspects of
friction, wear, lubrication and related fields of science and technology.
A tribo-mechanical system is defined as an entity whose functional behavior relates to the interaction between
surfaces in relative motion. In manufacturing processes, the transmission and dissipation of energy and mass
take place on machines in numerous tribo-mechanical systems that are part of the production environment.
Machining processes, and the equipment on which they are executed, contain numerous types of tribo-
mechanical systems. Physically, all tribo-mechanical systems (TMS) can be classified into four basic
groups, in which:
- machining/forming is carried out;
- guided motion of system elements takes place;
- transmission of energy and motion is performed;
- information transmission takes place.

Each of the above groups has several subgroups. For example, the first group could be said to contain four
subgroups of TMS, in which material is formed by the various methods of cutting, rolling, sheet-metal forming
and forging (Fig. 1).



However, regardless of classification, the main problems of the transmission and dissipation of mass and
energy remain the same for all TMS, and can be discussed generically in an analysis of a so-called basic TMS.
The structure of all tribo-mechanical systems is similar and, as a rule, includes two solid elements in relative
motion in an environment filled with lubricant (Fig. 2).





Fig.2
In all kinds of TMS two processes persist: the friction process, which causes dissipation (expenditure) of energy,
and the wear process (mass transfer), which causes machine tools, accessories and other equipment to lose
their utility (i.e. to wear out). Both these processes are tribological and have a considerable effect on the
productivity of manufacturing processes and of industrial systems in general.
The phenomenon of friction encountered in the contact layers of numerous TMS gives rise to energy
expenditure which lowers process productivity:

P = \frac{\text{Goods and services}}{\text{Material} + \text{Capital} + \text{Energy} + \text{Labor} + \text{Other}}

so that, with the other inputs held constant, P = k / Energy for some constant k. For example, if friction control
cuts the energy input by 10 percent, this relation predicts a productivity gain of about 11 percent (1/0.9 ≈ 1.11).
Reduction of energy expenditure through friction control can therefore potentially produce a considerable increase
in manufacturing process productivity in all types of TMS.

The phenomenon of equipment wear is essentially the result of motion and mass dissipation in TMS (Fig. 3).
During these processes, particles of the basic TMS elements are set in motion, so that particles originating
from element (1) move toward elements (2) and (3), particles from element (2) go toward elements (1) and (3),
etc. This separation of particles from the elements (1-3) and their consequent motion result in gradual changes
in the mass and shape of the elements. If element (1) is a cutting tool, the change of its tip shape leads to tool
dullness and a decrease in its cutting effectiveness.
The wear process (i.e. the wear resistance of the TMS elements) determines the reliability of the process and the
lives of the machines, accessories and other equipment. When critical wear is reached on a TMS element, the
process is interrupted to replace the element; sometimes the complete TMS is replaced.
High wear resistance of the TMS elements increases the productivity of the process.

TRIBOTESTING
Tribotesting is one specific aspect of tribometry. The term tribometry includes all experimental techniques in
tribology, such as metrology, tribotesting, simulative testing and machinery condition monitoring. Tribotesting
is primarily concerned with the friction and wear evaluation of materials and tribological performance studies of
lubricants.
In the past few decades there has been an increasing emphasis on tribological research and on the application of
tribological practices to various industrial problems. In spite of significant enhancements in our understanding of
the complex tribological processes, we do not have models which can help predict the tribological behavior of
materials and lubricants. It is for this reason that one has to rely more and more on tribotesting to characterize
the tribological behavior of materials and lubricants.



Fig.3

The main objectives of tribotesting are:
- to obtain a fundamental understanding of tribological processes;
- to characterize the friction and wear behavior of materials under different conditions;
- to evaluate the tribological performance of lubricants;
- to simulate service conditions.

The tribological evaluation of materials and lubricants can best be studied under actual service conditions, but
this has many limitations. Actual tests or full-scale tests are very expensive and hazardous, particularly when
these have to be continued till failure occurs. In many instances information may be required in advance,
and sometimes measurements may not be possible in the operating environment. Also, in actual tests the control of
variables and the isolation of a single variable, which is highly desirable in fundamental studies, may be almost
impossible. In view of these reasons, tribological tests have to be conducted on specially designed machines,
which apparently may not have any resemblance to the actual machines, for the evaluation of materials and
lubricants. It is pertinent to mention here that tribological characteristics are not intrinsic properties of
materials but are influenced by various factors related to the tribological system. Unlike other mechanical
properties, friction and wear characteristics of materials are highly system-dependent, and there is no simple
measurement available which can qualify a material as suitable for a given tribological application. For this
reason tribotesting is still considered an art rather than a science. For meaningful tribological evaluation one
has to select the most appropriate tribometer and carefully choose the experimental parameters, keeping in mind
the intended use of the material and lubricant.
As with any scientific discipline, generating proper test data is essential in determining how a system will react
to certain prescribed operating parameters.
In any tribology system, four "tribocomponents" are involved:
Triboelement 1
Triboelement 2
Interfacial Element
Environmental Medium
Triboelements 1 and 2 refer to the two interacting surfaces in question. The interfacial element may be either a
lubricant or a dust particle, and the environmental medium may be either a clean or a corrosive atmosphere. These
tribocomponents are subject to a set of operating conditions, which include the type of motion (sliding, rolling,
spin, impact, etc.), the load at the contact area, the relative velocity between the two triboelements, the
temperature at the interface, and the time/duration of the motion. Ideally, these operating conditions must match
the conditions of the real-world system to ensure proper test results for a meaningful tribosystem evaluation.
Finally, any tribosystem study should consider the interaction parameters between the two triboelements. The
interaction parameters include the mechanics involved within the contact of the two triboelements in the
presence or absence of any lubrication, and address issues such as contact area, contact stresses and contact
deformation. The interaction parameters also determine the lubrication mode of a given tribo-system:
Boundary lubrication: tribological characteristics are governed by solid-on-solid contact, friction and wear.

Mixed lubrication: tribological characteristics are governed by a mix of partial solid-on-solid contact and slight
fluid-film separation.

Fluid film lubrication: tribological characteristics are governed by the rheological properties of the lubricant,
and address issues such as thin-film and thick-film lubrication. In this case, the conditions are such that the two
triboelements are actually separated by a film of lubricant. Depending on the operating conditions and the
properties of the lubricant, friction and wear may be significantly reduced due to the physical separation of the
two surfaces.
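To make the preceding description concrete, the tribocomponents, operating conditions and lubrication mode could be captured in a simple record for test planning; the following is a minimal sketch with illustrative field names, not an established tribotesting schema:

from dataclasses import dataclass

@dataclass
class TriboSystem:
    triboelement_1: str       # first interacting surface, e.g. "steel pin"
    triboelement_2: str       # second interacting surface, e.g. "cast-iron disc"
    interfacial_element: str  # lubricant, dust particle, ...
    environment: str          # "clean air", "corrosive atmosphere", ...
    motion_type: str          # sliding, rolling, spin, impact, ...
    load_N: float             # load at the contact area
    velocity_m_per_s: float   # relative velocity between the triboelements
    temperature_C: float      # temperature at the interface
    duration_s: float         # time/duration of the motion
    lubrication_mode: str     # "boundary", "mixed" or "fluid film"

# An example pin-on-disc test definition (values are arbitrary):
test = TriboSystem("steel pin", "cast-iron disc", "mineral oil", "clean air",
                   "sliding", 50.0, 0.5, 80.0, 3600.0, "mixed")
print(test)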

Real-world tests can be performed to evaluate how a tribo-system would actually react under true operating
conditions. However, the variability inherent in a real-world test is such that it cannot be guaranteed that the
test will be repeatable and reproducible from test to test. As a result, an appropriate tribotest must be performed
with proper instrumentation to generate good results. A tribotest addresses most of the structural parameters,
operational parameters and interaction parameters mentioned before, as well as minimising the variability
inherent in a real-world test. The tribotest must be designed so that the only variables are the triboelements
and/or lubricants being tested. A researcher or end-user would be able to use the test data to create an optimal
tribosystem and use this optimized system to improve the lubricant or product material and provide a
satisfactory solution to a real-world Tribology problem.
RESEARCH POTENTIAL
The purpose of research in tribology is understandably the minimization and elimination of losses resulting from
friction and wear at all levels of technology where the rubbing of surfaces is involved. Research in tribology
leads to greater plant efficiency, better performance, fewer breakdowns, and significant savings.
Though tribological processes on the contact surfaces cannot be stopped, they can be slowed down if
tribological factors are considered during the design of manufacturing processes. Tribological processes in tribo-
mechanical systems depend on many factors relating to the contact interfaces, the tribo-mechanical system
configuration and the tribological characteristics of the components. Reduction of energy expenditure through
friction control can potentially produce a considerable increase in manufacturing process productivity in all
types of TMS (tribo-mechanical systems).
It has been the experience of the developed countries that investment in tribology research and its application to
practical problems quickly pays off in terms of higher production and reduced costs. In the case of the metal-
forming industries, successful production methods were for a long time based on empirical information, and it was
only recently that actual processes began to be examined scientifically. Progress in this direction, especially on
the tribological aspects, has however been slow, mainly because the subject does not fit conveniently into any of
the established classical scientific disciplines.
Friction is a necessary evil and in many instances may determine the success or failure of a particular
process or operation. Lubrication is one of the most controversial subjects, probably because the effect of a
lubricant is difficult to understand. Above all, the problems of accurate determination of deformation forces and
the pattern of deformation under practical processing conditions are equally important, both for designers and for
engineers operating the machines. A few explanatory notes in connection with tribological factors are given
below and must be taken care of:
a) Interfacial friction: The friction phenomenon is not completely understood, and for actual cases an exact
formulation of the frictional condition cannot be made in the analysis. Probably the most important problem is
that of the adhesion between the workpiece and the tool.

b) Non-Newtonian rheological behavior of lubricants: For a variety of reasons, additives are put into
lubricating oils. Some of these are used:
i) to reduce viscosity changes over a range of temperatures;
ii) to reduce wear;
iii) to inhibit corrosion;
iv) to improve load-carrying capacity.

The effect of these additives is to change the basic parameters of the tribo-system, such as friction, load-
carrying capacity, oil flow, film thickness, etc. This means that all former calculations, if any, which were
based on a Newtonian fluid should be reconsidered using appropriate constitutive relations.
Therefore, the rheological characteristics of these lubricants must be found by experiment. Thus the very
practical and useful application of these complex fluids must be worked out by a combination of theoretical
and experimental approaches.
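As one concrete example (a standard rheological model, offered as an illustration rather than as this paper's own choice), the power-law (Ostwald-de Waele) relation replaces the Newtonian law \tau = \mu \dot{\gamma} by

    \tau = K \dot{\gamma}^{n}

where \tau is the shear stress, \dot{\gamma} the shear rate, K the consistency index and n the flow-behaviour index; n = 1 recovers the Newtonian fluid, n < 1 describes the shear-thinning behaviour typical of polymer-thickened oils, and n > 1 describes shear-thickening behaviour.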

c) Lubrication by emulsions:
In order to maximize cooling ability while still providing lubricity, oil-in-water emulsions are used as
lubricants in a wide variety of processes. It is reported that oil-in-water emulsions have much better film-
forming capability than might be expected on the basis of conventional rheological models of emulsions.
The film-forming ability is often attributed to the separation of oil from the emulsion, the so-called plating-
out effect. Research is needed to understand plating out (if it occurs) and to use this information to develop
improved emulsion technology, which can have an impact on the economics of a wide variety of high-
production-volume operations.




d) Soft solid coatings:
Soft solid coatings of waxes, soaps, polymers, reaction products such as oxalates and phosphates, and even
soft metals are used as lubricants under severe conditions. These coatings form tenacious films, which
separate the workpiece and tool and cool the surfaces. The mechanics of the film-formation process in hydrostatic
extrusion and drawing has already been investigated and reported in the literature, but more research is
needed to understand film transport and friction and to extend the work to other processes.

e) Roughness of contacting surfaces:
The tribology of processes is greatly affected by the roughness of the contacting surfaces. The roughness of the
workpiece changes in the course of deformation; non-uniform yielding of individual grains results in roughening
with solid films and in the development of hydrodynamic or hydrostatic pockets in the presence of liquid
lubricants. A workpiece with an initially rough surface carries more lubricant, especially if the roughness is
oriented perpendicular to the sliding direction or is random.

f) Process environment:
The analysis of processes should be carried out considering the exact behavior of the lubricant. Thermal, pressure
and sliding-velocity effects and their interactions on lubricant properties, the elastic deformation of the
workpiece, as well as the surface roughness and asperity-height distribution of the workpiece surface, are all
relevant. Of special significance is the generation of large portions of new surface and the consequent
enhancement of tribo-chemical reaction.

g) Wear modeling:
Of the three elements of tribology, wear is usually the determining factor in the life of the equipment. All forms
of wear are encountered in processes. Adhesion wear leads to transfer of material (pickup). Detached pickup and
oxide particles, etc., cause abrasive wear of the tool/die. Chemical reactions with oxygen and lubricant additives
result in corrosive wear, sometimes accelerated by electrochemical effects. Thermal fatigue may occur because
of repeated loading, stress reversal and periodic heating. Life is not only related to replacement cost but also to
the safety and dependability of the process. Wear is the least understood process in mechanical contacts subject
to relative motion. The rate of wear depends on the initial geometry, the deformational and other characteristics
of the surfaces and near-surface region, lubricants and other factors. Successful computer-aided design, and
subsequent computer-aided manufacturing of surfaces, will require critical mathematical modeling. Research to
this end would require both analyses and experiments.
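A classical starting point for such modeling (a textbook relation, quoted here for illustration rather than taken from this paper) is Archard's wear law, which relates the worn volume V to the normal load W, the sliding distance s and the hardness H of the softer surface through a dimensionless wear coefficient K:

    V = K \, \frac{W s}{H}

so the wear rate per unit sliding distance rises with load and falls with hardness, which is why material hardness and contact pressure dominate tool-life considerations in TMS design.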

h) Very high contact pressures:
Manufacturing tribological situations are characterized by very high contact pressures, where a lubricant may
or may not be present. A number of special problems appear as speed is increased. Current metal-forming
surface speeds range from 1,500 m/min to as high as 300,000 m/min. Very little is known about the phenomena
of friction and wear under these rather extreme conditions of speed and load (pressures of the order of 7,000 to
35,000 kg/cm²). The problems include how to model these phenomena from both metallurgical and thermal
viewpoints, the characterization of surfaces and subsurfaces for these environments, and the prediction of the
rate of wear. In the upper speed range, a curious 'bow wave' phenomenon has been reported, which appears to be
analogous to a surface (Rayleigh) wave.
These barrier questions need resolution, both as scientific problems and to form a base for the eventual modeling
of wear for purposes of computer-automated manufacturing. Present knowledge of these high-speed/high-
pressure phenomena is very limited and provides little guidance for the engineering effort.

i) Lubricant film breakdown and pickup:
In all processes the real area of contact ranges from a few percent to one hundred percent of the apparent area of
contact. Sliding of the workpiece over the tool is resisted by adhesive joints, asperity deformation, plowing and
molecular forces, leading to a high interface shear strength, which is then reduced by lubricants.
The primary purpose of any lubricant is to prevent metal-to-metal contact and the potential welding. This
phenomenon usually results in the transfer of material from workpiece to tooling (pickup), with consequent
damage to the product and the necessity to change or refurbish the tooling. A wide variety of names are used for
this condition, depending on the material, process and appearance of the surface. There appears to be little
fundamental research in this area, and an increased understanding is required as to how lubricant film breakdown
and pickup are influenced by workpiece and tooling composition and surface topography, lubricant physical
properties and chemical composition, thermal environment, etc. Significant benefits should be obtained through
the design of systems which are less prone to pickup, with consequent reductions in scrap and tooling-change
costs.

CASES AND EXAMPLES:
The equipment of a steel plant has to operate under very rough working conditions: dusty surroundings,
high temperatures, shock loads, etc. Consequently, wear is of a very high order. As a result of this wear, the
requirements of spares and changeables are very high. For example, a steel plant with a rated capacity of 2.5
million tons per year requires 22,000 tons of spares per year and 13,750 tons of changeables per year (excluding
ingot moulds, which come to another 55,000 tons per year). This is a huge requirement in terms of money. By
adopting better tribological practices, even if a 5% saving could be effected, it would mean a lot of saving for a
capital-scarce country like India.
In the metal-forming industries, probably the single most important problem to be solved is that of
adhesion between the workpiece and the tool, which may appear either as adhesion wear or as pickup. The
former may set a limit to the life of a tool, but the latter may set a limit to the process itself. It is well recognized
in other fields that hydrodynamic or hydrostatic lubrication offers almost ideal conditions of low friction and zero
wear. In general it is not possible to establish these regimes in metal forming, though valuable advances have
been made in hydrostatic extrusion, wire drawing and hydrodynamic wire drawing. There is considerable
scope for both basic and applied research in the metal-forming industries. Perhaps the most important need is to
discover the fundamental characteristics of adhesion between the tool and the workpiece and ways of
improving the conditions that will assist in reducing wear, though wear is additionally influenced by other
factors, particularly abrasion in hot working. Improved lubricant selection techniques are needed which can be
demonstrated to be directly relevant to the actual conditions and processes in which the lubricant will be used.
There is a necessity here for cooperation between lubricant producers and users, since metal-forming equipment
is very expensive and cannot be satisfactorily simulated, and the quantity of lubricant involved, for example in
a rolling mill, may be very large; full-scale trials under controlled conditions can be extremely expensive. It
would clearly be advantageous to include the careful development of application techniques in any development
program for metal-forming lubricants. Finally, it is becoming increasingly recognized that the country's
resources are limited. Oils will not be available indefinitely, nor can the pollution of natural water by
lubricant residues be allowed to increase, or even to continue at present levels. Techniques requiring little
lubricant, or making so little demand on it that the lubricant can be indefinitely recycled, will have to be
encouraged.
Grinding wheels, e.g. (1) ceramic-bonded (vitrified) and (2) organic-bonded (with thermosetting phenol-
formaldehyde resin used as the bond), are used to generate size, to generate surfaces or to remove
stock. With organic-bonded products it has been noticed that the bond strength gets affected depending upon the
type of coolant used. The alkalinity of the coolant solution influences to a great extent the degree of
deterioration of the bond. As the bond becomes weaker, the form-holding property of the wheel gets affected,
telling upon abrasion efficiency and finish. In the case of vitrified wheels the coolant attack on the bond is nil,
but heat generation is much greater, which causes metallurgical damage of the ground surface, especially in the
case of heat-sensitive materials.
Lubricants not only lessen the muscular effort, but also diminish the wear and tear of the working parts. One of
the major problems is related to the lubrication of rheodynamic (grease-packed) rolling-element bearings. A
lithium-based multipurpose grease fortified with oxidation inhibitors has been used in bearings in the speed range
from 600 rpm to 3,000 rpm. In particular, some of the important problems which have been experienced with
lubricants during normal operation are as follows:
a) The actual grease life in most cases is less than that calculated on the basis of the Shell criteria.
b) Separation of oil has been observed very frequently, even when the operating temperature of the lubricant lies
between 55°C and 80°C. This separation of oil from grease has also been observed during idle periods of the
bearings.
c) Despite the use of antioxidants in the manufacturing process, the grease in many cases turns black after a
certain number of hours of operation, after the expiry of only a fraction of its life expectancy.
d) Thermal deterioration of the lubricant is very rapid when the operating temperature of the rolling elements
exceeds 90°C.
The above problems have collectively given rise to bearing failures in operation.
Some of the typical examples include:

1. Lubrication requirements of CNG engines: as these engines get into city bus service in our capital city of
Delhi, while it is a viable technology for emission reduction, it will need the development of a new class of
lubricants for optimum performance.
2. Operation of large and heavy machines: high bearing temperatures, vibrations, bearing failures, etc.; both
long-term and immediate solutions are needed.
3. Power loss in bearings and seals.
4. New generations of machines require lubricants with longer life.
5. Even vehicles introduced 5 to 10 years ago with advanced Japanese technology, for example, have to stand
up to fresh scrutiny in terms of pollution control.

CONCLUSIONS AND RECOMMENDATIONS:
The development of tribology has always followed the needs of industry, and it is highly probable that in the
next phase of a continuing industrial revolution, metal forming will generate a continuing series of tribological
problems. It is also apparent that many aspects of the theory underlying metal-forming tribology, and indeed
tribology itself, are at present imperfectly understood and will continue to excite scientific curiosity. Therefore
a challenge will be presented to tribologists, both to develop and to apply new concepts in the field of metal-
forming tribology, with the further development of metal-forming processes in the years to come.

Further, any tribology program should be alert to new and novel concepts which may appear from time to time.
Over the last 100 years the field of tribology has been changed significantly several times by concepts now
considered part of the conventional wisdom but which were rejected by many in the field when first
introduced. Such concepts include hydrodynamic lubrication, solid lubricants and elastohydrodynamic
lubrication, to mention a few. Novel concepts cannot be predicted, but they should be tolerated and given
serious consideration for funding because of their potential for major impact on the field.
Some of this information is well developed into useful design information but is not readily available to the
design engineer, because it is scattered throughout the literature. The expert knowledge that exists in the
field of tribology could be programmed into an expert system that would make available to the designer,
through computer-aided design software, the problem-solving ability of the field of tribology. The system would
be interactive, require the designer to participate in the decision-making process and make the designer aware of
the consequences of the decisions. Moreover, it would allow the design engineer to draw conclusions from the
stored knowledge base of metal-forming tribology.
The root causes of the problems plaguing the productivity and reliability of manufacturing processes are often
tribological processes on the contact surfaces of basic tribo-mechanical systems. Though tribological processes
cannot be stopped, they can be slowed down if tribological factors are considered during the design of
manufacturing processes. A tribological process in a tribo-mechanical system depends on many factors relating to
the contact interfaces, the tribo-mechanical system configuration and the tribological characteristics of the
components. The productivity of manufacturing processes and industrial systems is largely determined by the
rate at which tribological processes affect manufacturing elements. Energy expenditure can be reduced by
decreasing friction and wear on machines, tools and other equipment.
It has been recognized that a better understanding of this field, as applied in forming processes, could yield
tremendous economic benefits, and that tribology can make significant contributions to energy conservation too.
Tribological considerations are often decisive in determining the very feasibility of a process itself, and greatly
affect the economy of operation and the quality of the issuing product.
In the end, it may be concluded that in a capital-scarce country like India, it is imperative for all industries, big
or small, to reduce friction and wear by better material selection, better tribological design and effective
lubrication. It will be much appreciated if a study of the above-mentioned problems, generally faced by Indian
industries, is made with a view to improving performance. This will not only reduce the cost of repair materials
but will also reduce the downtime of equipment which is vitally required for production in any industry. It is
hoped that within a few years the Government, the industries, both private and public, and the technical
institutions will pay greater attention to the crying need for the study of the tribological problems faced by the
country.









THERMOPLASTIC COMPOSITE MATERIALS: AN INNOVATION
Arbind Prasad¹, Pargat Singh¹*, Amir Shaikh²
¹Research Scholar, ²Professor, Department of Mechanical Engineering, Graphic Era University,
Dehradun 248002, Uttarakhand, India.
¹*business.pargat@gmail.com


ABSTRACT
Polymer composites are increasingly gaining importance as substitute materials for metals in applications
within the aerospace, automotive, marine, sporting goods and electronics industries. Their light weight and
superior mechanical properties make them especially suited for transportation applications. Current active
research areas in polymer composite materials provide some insight into technologies that will mature to
practical applications in both the short and long terms. While modern polymer composites are in their early
stage of progress, there are several ongoing worldwide research themes, developed in response to today's
emerging technological challenges in characterization, fabrication and application, which plot the future
directions for research activities in polymer composite materials. This paper in particular deals with the
innovations made through thermoplastic composite materials in various fields.

Keywords: thermoplastics, glass-fibre-reinforced composites, PEEK, PA-6, TWINTEX

INTRODUCTION TO POLYMER COMPOSITES
Fibrous composite materials typically have two or more distinct phases, which include high strength / stiffness
reinforcing fibers and the encapsulating matrix material. Fibers can be either discontinuous (chopped) or
continuous. Polymer matrices typically fall into two categories: thermoplastic and thermosetting polymers.
Thermoplastic polymers are distinguished by their ability to be reshaped upon the addition of heat (above the
glass transition temperature of the amorphous phase or the melting temperature of the crystalline phase). This
cycle can be carried out repeatedly. Thermosetting polymers, on the other hand, undergo chemical reactions
during curing which crosslink the polymer molecules. Once cross-linked, thermosets become permanently hard
and simply undergo chemical decomposition under the application of excessive heat. Thermosetting polymers
typically have greater abrasion resistance and dimensional stability over that of thermoplastic polymers, which
typically have better flexural and impact properties. Throughout the prior two decades, fiber reinforced
composite materials were principally fabricated using thermosetting matrices. Disadvantages stemming from the
use of thermosets include brittleness, lengthy cure cycles and inability to repair and/or recycle damaged or
scrapped parts. These disadvantages led to the development of the thermoplastic matrix composite system.
Compared with thermosets, composites fabricated from thermoplastic materials typically have a longer shelf
life, higher strain to failure, are faster to consolidate and retain the ability to be repaired, reshaped and reused as
need arises. However, as in many polymer composite systems, these materials frequently suffer from a lack of
adequate fiber-matrix adhesion.

THERMOPLASTIC COMPOSITES

A thermoplastic polymer is a long-chain polymer that can be either amorphous in structure or semi-crystalline.
These polymers are long-chain, medium- to high-molecular-weight materials, whose general properties are
toughness, resistance to chemical attack and recyclability.
Thermoplastic composites are composites that use a thermoplastic polymer as a matrix, which can be reinforced
with glass, carbon, aramid and metal fibres. The thermoplastic polymers used in thermoplastic composites can be
divided into two classes: high-temperature thermoplastics and engineering thermoplastics.









Table 1 shows the most commonly used high-temperature thermoplastic polymers.

Matrix  Tg (°C)  Process temp. (°C)
PEEK    143      390
PEI     217      330
PPS     89       325
PEKK    156      340
Table 1 - High Temperature Thermoplastics
Table 2 shows the engineering thermoplastic polymers used in composites.

Matrix  Tg (°C)  Process temp. (°C)
PBT     56       190
PA-6    48       220
PA-12   52       190
PP      -20      190
Table 2 - Engineering Thermoplastics

The reinforcements used are in the form of short fibre, long fibre, continuous fibre, mat, fabric, etc. Continuous
fiber-reinforced thermoplastics (CFTs) differ from short-fiber-reinforced thermoplastics such as LFRT and
GMT in terms of fiber length. CFTs include a variety of products, including unidirectional prepregs, fabric-
based prepregs, narrow tapes, commingled fibers in roving and fabric forms, sheets, and rods. Today's
thermoplastic composites are tougher, lighter and stiffer, have infinite shelf life, can be recycled, and perform
better in every multi-impact, highly demanding environment; and their cost is coming down fast. Advanced
thermoplastic composites (ATCs) are a rapidly growing field of advanced materials. Thermoplastics' high
toughness makes their use appealing in applications that require energy absorption and strength after impact.

APPLICATIONS OF THERMOPLASTIC COMPOSITES.

Thermoplastic composite materials offer benefits over traditional materials in a wide range of applications.
3.1 Aerospace
Until recently, aerospace structures were all metal. Now the light weight, inherent fire resistance and toughness
of reinforced thermoplastic composites are reinventing the market. Replacing metal with thermoplastic
composites will help meet the current industry demand: 25-30% reductions in fuel burn and costs for next-
generation single-aisle passenger aircraft. In the late 1980s, continuous-fiber-reinforced thermoplastic sheet
material was first used to form aircraft flooring for several European aircraft. More recently, structural parts
made from thermoformed glass-reinforced PPS panels have been put to notable use on the wing leading edges of
both the A340 and the new A380 commercial airliners built by France-based Airbus Industry. Lockheed
Aeronautical Systems Company has used thermoplastic composites (PPS/PEEK) in the manufacture of an
aircraft door structure. There are many more examples of the use of high-performance thermoplastics, including
a landing gear strut door and access panel for the F-5F aircraft, Hercules radomes, parts of the B-2 bomber and
the nose-wheel door for the Fokker 50 aircraft. PEEK and PPS thermoplastics exhibit excellent resistance to
both jet fuel and hydraulic fluids, and this makes high-performance resins ideal for many aviation operating
environments. PPS materials are being used in aircraft interiors at lower cost than traditional thermoset
composites. ATCs have found limited use in the aerospace industry and came about due to the need for tougher
composites. They are analogous to the first thermoset composites, with fibre contents above 50 vol% and a
highly aligned continuous-fibre structure. Actual applications include missile and aircraft stabilizer fins, wing
ribs and panels, fuselage wall linings and overhead storage compartments, ducting, fasteners, engine housings
and helicopter fairings.

3.2 Automotive
A main concern in the automotive industry is vehicle weight reduction, so as to help reduce fuel consumption
and therefore emission levels. One way to realize this objective and meet the challenge of cost and performance
is by the use of glass-reinforced thermoplastic composites. Glass fibre/PP and glass fibre/PA-6 are used mainly
in automotive
applications, replacing aluminum for cost and weight savings. Glass-mat thermoplastics (GMTs) are commonly
used in spare wheel wells, lift-gates for hatchback cars, rear-axle support brackets, highly loaded underbody
shields for off-road use, pedestrian protection beams, and new-generation seat structures and bumper beams.
Due to their light weight and high toughness, GMTs have been adopted by the automotive industry. Applications
include seat frames, battery trays, load floors, front ends, valve covers, rocker panels and under-engine covers.
The product is targeted at automotive components such as floor panels, home appliances and pipe reinforcement,
as well as local reinforcement in combination with GMT/LFT for high-volume applications. Future
developments will include carbon-fiber reinforcement and a wider range of thermoplastics, including
polyamide-6, thermoplastic polyurethane (TPU), polyphenylene sulfide (PPS) and blends. TWINTEX, a
commingled polypropylene (PP) and continuous glass fibre, has reportedly shown promise for thermoformable
sandwich-structure applications for automotive interiors, due to its high stiffness-to-weight ratio. Recently,
Peguform developed a trunk-floor automotive application for Nissan (U.K.) which weighs only 4.2 kg, using
TWINTEX/PP. Additionally, Jacob Composites developed and manufactured 1,500 seat-back structures for the
BMW M3 CSL, using a sandwich of TWINTEX skins and polyether sulfone (PES) foam, with polyester carpet
over-molded on one side in a one-step thermoforming process. The 5 kg part offered more than a 50 percent
weight reduction versus steel, as well as good acoustic insulation, excellent crash performance and low capital
investment, due to the low-pressure molding process. Many smaller components, like ribs, brackets and curved
panels, are very suitable for thermoforming technologies, which offer a typical four-minute cycle time at a very
low cost compared to autoclaved products. The automotive industry has produced a wide range of thermoplastic
parts which are made in very short processing times using fully automated equipment.

3.3 Construction Industry
Thermoplastic composites are used in a wide range of structural applications, including bridges, railroad ties,
retaining walls, marine pilings and bulkheads, all made from a unique, patented, non-toxic plastic composite.

Bridges: Entire bridge structures can be built from thermoplastic composites. The technology has been
proven in vehicular, tank and railroad bridges. Three HS-25-rated bridges and two E-60-rated bridges have been
successfully constructed and are currently in service.

Railroad Ties: Thermoplastic composites offer exceptional value in railroad-tie applications. The material
will last far longer than wood or concrete ties, and will not leach any harmful chemicals into the environment.
The material has been successfully tested, with over one billion gross tons of freight traveling on the first test
ties, installed over twelve years ago, without a single incident. Thermoplastic composite ties have also been
installed in one of the highest-volume freight corridors in the nation without incident. In accelerated weathering
tests, the ties exhibited no loss of strength after fifty years of simulated conditions. Additionally, long lengths
are available for complex switch-tie applications.

Retaining Walls / Sea Walls: Thermoplastic composites offer great strength and durability in both
retaining-wall and seawall applications. The material will not corrode and is impervious to insects and marine
borers. Simple retaining walls using thermoplastic composite railroad ties can be easily constructed without the
risks of the harmful pressure-treating chemicals common with wood ties. Utilizing tongue-and-groove products,
retaining walls and sea walls can be constructed. For high-loading applications, I-beams with cross members
run between the flanges offer excellent design flexibility, high strength and durability.

Decks / Piers: Thermoplastic composites are very well suited to decking applications. Unlike other plastic
lumber products, the immiscible polymer-blend material offers exceptional strength and is suitable for structural
applications. Decks can be designed following the design strategy used in the tank and railroad bridges, but with
much lower capacities.

Marine Structures, including Fixed and Floating Docks: Thermoplastic composite pilings offer an excellent
foundation for marine structures. The I-beams provide a strong substructure for both fixed and floating
structures. In these applications, the material will offer longevity, providing superior economic value over the
life of the structure.
3.4 Medical
Weight reduction and strength are just some of the benefits of thermoplastic advanced composites for
prosthetics, medical devices and wheelchair parts. PEEK/carbon-fibre thermoplastic composite is used in
medical, surgical and implantation applications due to PEEK's biocompatibility and sterilization behavior. The
advantages of this material are its ability to closely match the modulus of natural bone while retaining high
strength, good fatigue resistance, and compatibility with MRI, CT and X-ray technologies.
3.5 Consumer Electronics
Thermoplastic composites increase durability and reduce weight affordably in high-volume applications like
laptop enclosures, mobile phones, scanners and other handheld devices.
3.6 Sports
Consumer sporting goods and equipment such as kayak paddles, backpack frames and skis are made lighter,
more damage-resistant and stronger. Glass fibre/TPU and carbon fibre/TPU are primarily used in sporting-goods
applications, offering superior performance compared to unreinforced plastics.

MARKET ANALYSIS

The global thermoplastic composites consumption was nearly 313,474 metric tons in calendar year 2008,
reflecting an overall double-digit growth rate of 12 percent per year; it is forecast that during the next five years
(2009-2014), thermoplastic composites will grow at a double-digit rate to the tune of 18 percent. LFRTs will
show a combined annual growth rate of 14 to 16 percent, while GMT growth would slow to about 4 percent.
LFRT consumption is highest in Western Europe, which represents 55 percent of the global market, followed by
North America (28 percent) and Asia-Pacific (17 percent). The glass-mat thermoplastic (GMT) precursor was
developed by PPG Industries in the mid-1960s; it is a moderate-cost material which is compression mouldable
with continuous random-fibre reinforcement. This material has application in industrial-scale production and
fabrication of large parts with relatively thin cross-sections. The long fiber-reinforced thermoplastics (LFRTs)
were commercialized in the 1990s. They have enhanced mechanical properties over short-glass thermoplastic
injection molding grades, offering performance intermediate between GMT and short-glass thermoplastics.
LFRTs offer the opportunity to replace metals; they have shown significant growth over the last decade and
account for a 35 percent market share, while GMT still represents the largest segment of thermoplastic
composites at about 43 percent of consumption. Commingled fibre preforms are also making an important
presence in thermoplastic composites manufacturing. TWINTEX, a commingled polypropylene (PP) and
continuous glass fibre product, was introduced in 1997. The glass content is 60 percent by weight, and the fiber
reinforcement is either a balanced twill or plain-weave (4/1) fabric. It has excellent mechanical properties, and
its PP matrix lends it greater ductility and better dimensional stability in a wet environment than standard
thermosets and other thermoplastics. It has great application in the automotive market. TWINTEX is now also
available with a thermoplastic polyester (PET) matrix. The Continuous Fiber Reinforced Thermoplastics (CFT)
market has experienced significant growth during the last five years and is expected to reach $188.7 million in
2014 at a global growth rate of 12% for the next five years. CFT materials are at the growth stage of the
life-cycle curve. Cost efficiency, time-to-market and pricing are major factors in gaining customer confidence
and increasing market share.
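
As a rough illustration of how the growth figures quoted above compound over time, the short Python sketch below applies the standard compound-growth formula to the reported numbers. The base values and rates come from the text; the exact compounding windows (six years from the 2008 tonnage to 2014, five years back from the 2014 CFT figure) are our assumptions, since the text does not state them precisely.

    # Compound-growth sketch for the market figures quoted above.
    # Base values and rates are from the text; window lengths are assumptions.

    def project(value, rate, years):
        """Compound a starting value at a fixed annual growth rate."""
        return value * (1 + rate) ** years

    consumption_2008 = 313_474  # metric tons, global consumption in 2008
    # Tonnage implied for 2014 at the forecast 18% annual rate:
    print(f"2014 tonnage (assumed 6-year window): {project(consumption_2008, 0.18, 6):,.0f} t")

    # The $188.7M CFT forecast for 2014 at 12% per year implies a base of roughly:
    print(f"Implied 2009 CFT market: ${188.7 / 1.12 ** 5:.1f}M")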

NOWADAYS:
Thermoplastic composite lumber materials are resistant to moisture, rot, insects, and the degradation that occurs
with natural wood, whether chemically treated or not, when exposed to the outdoor environment. Corrosion
costs the DoD over 22 billion dollars per year, while the cost to the U.S. taxpayer is closer to $300 billion.
Building with materials that are going to rot and corrode is a classic case of fighting a losing battle.

CONCLUSION
Thermoplastic composites possess the potential to provide lighter weight, excellent dimensional stability, high
service temperature, and improved mechanical performance for structural and lightweight applications in mass
transit, aircraft and industry. Long fiber reinforced thermoplastics (LFTs) have received attention largely from
the automotive industry due to their superior mechanical properties and relative ease of processing. PEKK is
already applied in the medical sector for implants and prostheses, in aerospace, and in deep-water oil and gas
extraction. Thermoplastic composite materials are eco-friendly, recyclable and reusable. However, the
development of thermoplastic composites has been restricted by the greater difficulty of fibre impregnation with
thermoplastic resins compared to thermoset resins. This is due to their higher viscosities, which lie between 10
and 100 Pa.s, as compared to 0.2-2 Pa.s for thermoset resins. Recent research addresses these drawbacks and
produces thermoplastic composites with improved properties at lower cost through innovative technologies.
Thus it may be said that composite materials are revolutionizing engineering.



IMPLEMENTATION AND BENEFITS OF TOTAL PRODUCTIVE
MAINTENANCE (TPM) IN INDIAN MANUFACTURING
COMPANY - A CASE STUDY

Narender 1, Dr. A. K. Gupta 2

1 Research Scholar, Department of Mechanical Engineering, Mewar University, Chittorgarh (Rajasthan)
2 Associate Professor, Department of Mechanical Engineering, D.C.R. University of Science & Technology, Murthal, Sonepat (India)
1 Go4narendersinhmar@gmail.com, 2 Anil_guptakkr@rediffmail.com


ABSTRACT

The cost of operations and maintenance can make or break a business, especially with today's increasing
demands on productivity, availability, quality, safety and environment, and the decreasing profit margins. The
manufacturing industry has gone through significant changes in the last decade. Competition has increased
dramatically. Customers focus on product quality, product delivery time and cost of product. Because of this, a
company should introduce a quality system to improve and increase both quality and productivity continuously.
Total productive maintenance (TPM) is a methodology that aims to increase the availability of existing
equipment, hence reducing the need for further capital investment. Investment in human resources can further
result in better hardware utilisation, higher product quality and reduced labour costs. The aim of the paper is to
study the implementation and benefits of the TPM programme for a steering manufacturing company. Through
a case study of implementing TPM in a steering manufacturing company, the practical aspects within and
beyond basic TPM theory, difficulties in the adoption of TPM and the problems encountered during the
implementation are discussed and analysed. Moreover, the critical success factors for achieving TPM are also
included, based on the practical results gained from the study. After the implementation of TPM on a model
machine, both tangible and intangible benefits are shown to be obtained, for the equipment and the employees
respectively. The productivity of the model machine increased.

Keywords: Total Productive Maintenance; Preventive Maintenance; Quality Maintenance; Overall Equipment
Effectiveness; Kaizen.

1. INTRODUCTION

Many systems in practice today do not perform as intended, nor are they cost effective in terms of their
operation and support. Manufacturing systems, in particular, often operate at less than full capacity.
Consequently, productivity is low and the cost of producing products is high. According to the study reported
by Mobley (1990), from 15% to 40% (average 28%) of total production cost is attributed to maintenance
activities in the factory. In dealing with the aspect of cost, experience has indicated that a large percentage of
the total cost of doing business is due to maintenance-related activities in the factory (i.e., the costs associated
with maintenance labour and materials, and the cost due to production losses). Further, these costs are likely to
increase even more in the future with the added complexity of factory equipment through the introduction of
new technologies, automation, the use of robots, and so on.

TPM stands for Total Productive Maintenance, where "Total" means total employee involvement, the total
number of manufacturing equipment in the factory, and the total processes of the factory; "Productive" means
generating and getting the most out of any set of inputs; and "Maintenance" means the careful management and
upkeep of the assets and equipment of the factory. TPM is a manufacturing program designed primarily to
maximize equipment effectiveness throughout its entire life through the participation and motivation of the
entire work force (Nakajima, 1988).

Total Productive Maintenance (TPM) provides a comprehensive, life cycle approach to equipment management
that minimizes equipment failures, production defects, and accidents. It involves everyone in the organization,
from top level management to production mechanics, and from production support groups to outside suppliers.
The objective is to continuously improve the availability and prevent the degradation of equipment to achieve
maximum effectiveness. These objectives require strong management support as well as continuous use of work
teams and small group activities to achieve incremental improvements.

Equipment maintenance has matured from its early approach of breakdown maintenance. In the beginning, the
primary function of maintenance was to get the equipment back up and running after it had broken down, and
the attitude of the equipment operators was one of "I run it, you fix it". The next phase in the history of
maintenance was the implementation of preventive maintenance. This approach was based on the belief that if
the equipment was occasionally stopped for regularly scheduled maintenance, catastrophic breakdowns could be
avoided.

The next generation of maintenance brings us to TPM. In TPM, maintenance is recognized as a valuable
resource. The maintenance organization now has a role in making the business more profitable and the
manufacturing system more competitive by continuously improving the capability of the equipment, as well as
by making the practice of maintenance more efficient. To gain the full benefits of TPM, it must be applied in the
proper amounts, in the proper situations, and be integrated with the manufacturing system and other
improvement initiatives.
2. LITERATURE REVIEW
2.1. Origin and development of TPM
Total productive maintenance (TPM) is a maintenance program which involves a newly defined
concept for maintaining plants and equipment. The goal of the TPM program is to markedly increase
production while, at the same time, increasing employee morale and job satisfaction. The TPM program
closely resembles the popular total quality management (TQM) program. Many of the same tools, such as
employee empowerment, benchmarking and documentation, are used to implement and optimize TPM
(Venkatesh, 2007).
Madu (1994) stated that total productive maintenance (TPM) is a maintenance productivity
improvement practice analogous to the use of total quality management (TQM). TPM involves the
participation of employees from cross-functional departments to achieve continuous improvement in terms of
product quality, operation efficiency, production capacity, and safety.

Swanson (1997) stated that in an increasingly competitive environment, manufacturing firms have continued to
implement new technologies aimed at improving plant performance. These new technologies are often more
complex to maintain. At the same time, equipment breakdowns can become more costly and disruptive.
However, managers tend to give little consideration to how different production technologies may affect the
maintenance function. Her paper reports the results of a study of the relationship between the characteristics of
production technology and maintenance practices.

Chand (2000) stated that total productive maintenance is a Japanese concept of equipment management that
allows a facility to decisively improve equipment performance in the manufacturing area with the help and
involvement of all employees.

Waeyenbergh (2004) stated that TPM activities focus on eliminating the six major losses. These losses include
equipment failure, set-up and adjustment time, idling and minor stoppages, reduced speed, defects in process
and reduced yield.
The preventive maintenance (PM) concept was adopted by Japan in 1951. PM can be thought of as a kind of
physical check-up and preventive medicine for equipment. Just as human life expectancy has been extended by
progress in preventive medicine to prevent human suffering from disease, plant equipment service life can be
prolonged by preventing equipment failure (disease) beforehand. In 1957, corrective maintenance (CM) was
introduced. The concept of this system is to improve equipment so that equipment failure can be eliminated and
equipment can be easily maintained. In 1960, maintenance prevention (MP) activity began. This activity aims at
designing the equipment line to be maintenance-free. As the ultimate goal regarding equipment and production
is to keep them completely maintenance-free, every effort should be made to try and achieve the ultimate
condition of what the equipment and production line must be.

In other words, that is a line free of breakdowns and defective production. Together, these activities are
generally called productive maintenance. In 1971, Nippon Denso Co. introduced and successfully implemented
a programme called Total Productive Maintenance in Japan. They won the PM Excellence Plant Award for their
efforts. This was the beginning of TPM in Japan and, later, Europe.
2.2 Types of Maintenance System
1. Breakdown maintenance: This means that people wait until equipment fails and then repair it. Such an
approach can be used when the equipment failure does not significantly affect the operation or production, or
does not generate any significant loss other than the repair cost.
2. Preventive maintenance (1951): This is daily maintenance (cleaning, inspection, oiling and re-tightening),
designed to retain the healthy condition of equipment and prevent failure through the prevention of
deterioration, periodic inspection, or equipment condition diagnosis to measure deterioration. It is further
divided into periodic maintenance and predictive maintenance. Just as human life is extended by preventive
medicine, equipment service life can be prolonged by doing preventive maintenance.
2a. Periodic maintenance (time based maintenance - TBM): Time based maintenance consists of
periodically inspecting, servicing and cleaning equipment and replacing parts to prevent sudden failure and
process problems.
2b. Predictive maintenance: This is a method in which the service life of an important part is predicted based
on inspection or diagnosis, in order to use the parts to the limit of their service life. Compared to periodic
maintenance, predictive maintenance is condition based maintenance. It manages trend values, by measuring
and analyzing data about deterioration, and employs a surveillance system designed to monitor conditions
through an on-line system.
3. Corrective maintenance (1957): This improves equipment and its components so that preventive
maintenance can be carried out reliably. Equipment with design weaknesses must be redesigned to improve
reliability or maintainability.
4. Maintenance prevention (1960): This refers to the design of new equipment. Weaknesses of current
machines are sufficiently studied (on-site information leading to failure prevention, easier maintenance,
prevention of defects, safety and ease of manufacturing) and the findings are incorporated before
commissioning new equipment.

Modern equipment management began with preventive maintenance and evolved into productive maintenance.
These approaches, both abbreviated as PM, originated in the US, with activities focused in the maintenance
department. TPM, however, stands for total productive maintenance, or productive maintenance with total
participation. First developed in Japan, TPM is team-based productive maintenance and involves every level
and function in the organisation, from top executives to the production floor operators.

2.3 Differences between PM and TPM
The differences between traditional PM and the TPM developed in Japan can be clarified by citing the
characteristics of TPM as follows:
* TPM is aimed at the overall pursuit of production efficiency improvement to its maximum extent. Many
production systems are human-machine systems. Needless to say, the dependence of production systems on
equipment increases as automation progresses. Similarly, production efficiency is governed by the degree of
proficiency in the methods of manufacturing, usage, and maintenance of equipment. TPM is designed to prevent
the occurrence of stoppage losses due to failures and adjustment, speed losses resulting from minor stoppages
and speed reduction, and defect losses caused by process defects, start-up and yield declines, by improving the
methods of manufacturing, usage, and maintenance of equipment. Its purpose is to maximise the efficiency of
production systems in an overall manner.
* In contrast, the approach of traditional PM is centred on equipment specialists. Accordingly, although
improving the methods of equipment manufacturing and maintenance gives maximum equipment efficiency,
PM does not call for pursuing overall production efficiency to its limit by improving the methods of equipment
use.
* One of the characteristics of TPM is autonomous maintenance (AM), which means operators must preserve
their own equipment. Operators must protect the equipment used by them. Failures and defects are the illnesses
of equipment. To prevent such illnesses, routine maintenance (cleaning, oiling, tightening, and inspection) must
be implemented without fail. Furthermore, maintenance staff, who are in effect medical practitioners
specialising in equipment, conduct periodic inspections (diagnosis) and carry out early repair (treatment). In the
US, work specialisation has progressed so that the operator is occupied with production (operation), while
maintenance is under the charge of maintenance staff. Routine maintenance is the task of maintenance staff, and
is not considered the task of operators.
* TPM consists of small-group activities in which all members participate. Small-group activities in TPM are
conducted by employees who, based on self-discipline, conduct work jointly with the formal operation.
Operators enforce AM by performing cleaning, oiling, tightening, inspection, and other routine maintenance
tasks themselves. Such AM is part of the operators' normal work, and is therefore completely different from the
voluntary type. TPM small-group activities are called "overlapping small-group activities", because they are
conducted jointly with the formal organisation. On the individual level, small groups set their own themes and
targets by which they conduct their activities.
These small groups include a managerial staff group, composed of section managers and led by the plant
manager; a group led by a section manager, with unit chiefs or team heads as its members; and a frontline group
headed by a managerial staff member, such as a unit chief or team head, and made up of the members of a unit
or team. Such overlapping small groups led by the formal organisation constitute a major characteristic of TPM.
In contrast, such activities are not implemented in traditional US-style PM.

2.4. Measurement of TPM effectiveness
When people use the term equipment effectiveness they are often referring only to the equipment availability or
up-time, the percentage of time the equipment is up and operating. But the overall or true effectiveness of
equipment also depends upon its performance and its rate of quality. One of the primary goals of TPM is to
maximize equipment effectiveness by reducing the waste in the manufacturing process. The three factors that
determine equipment effectiveness - equipment availability, performance efficiency, and quality rate - are also
used to calculate the equipment's Overall Equipment Effectiveness (OEE).

Equipment Availability
A well functioning manufacturing system will have the production equipment available for use whenever it is
needed. This doesn't mean that the equipment must always be available. Equipment availability is affected by
both scheduled and unscheduled downtime. In a well functioning system, unplanned downtime is minimized,
while planned downtime is optimized, based on the amount of inventory in the system and the equipment's
ability to change production rates. The most common cause of lost equipment availability is unexpected
breakdowns.

These failures affect the maintenance staff (which must scramble to get the equipment running) and the
equipment operator (who often has to wait for the equipment to be repaired to continue working). Keeping
back-up systems available is one way to minimize the effect of lost equipment availability. However, this is
rarely the most cost effective approach, since it requires investing in capital equipment that wouldn't be needed
if the equipment performed more reliably. Another drain on equipment availability is the time required to
change over the equipment to run different products. This set-up time is often overlooked, even though reducing
it has the potential to eliminate a significant amount of non-value-added time in the production cycle.

Performance Efficiency
Equipment efficiency is a commonly used metric for evaluating a manufacturing system. Efficiency is typically
maximized by running the equipment at its highest speed, for as long as possible, to increase the product
throughput. Efficiency is reduced by time spent with the equipment idling (waiting for parts to load), time lost
due to minor stops (to make small adjustments to the equipment), and lower throughput from running the
equipment at a reduced speed. These efficiency losses can be the result of low operator skill, worn equipment,
or poorly designed manufacturing systems.

Quality Rate
If the equipment is available and operating at its designed speed, but is producing poor quality parts, what has
really been accomplished? The purpose of the manufacturing system is not to run equipment just to keep people
busy and watch machines operate; the purpose is to make useful products. If the equipment is worn to the point
where it can no longer produce acceptable parts, the best thing to do is shut it down to conserve energy and raw
materials, and repair it.

Quality losses also include the lost time, effort, and parts that result from long warm-up periods or waiting for
other process parameters to stabilize. For example, the time lost and parts scrapped while waiting for an
injection molding machine to heat up should be considered part of the equipment's quality rate. The effort to
improve the quality rate needs to be linked back to the critical product requirements.

Overall Equipment Effectiveness (OEE) = Availability × Performance × Quality Yield

1. Availability = Operating Time / Loading Time
2. Performance = (Standard Cycle Time × Actual Output) / Net Operating Time
3. Quality Yield = (Actual Output − Defective Pieces) / Actual Output

Also:
1. Loading Time = Total Available Time − Planned Shut Down
2. Operating Time = Loading Time − Down Time
3. Net Operating Time = Operating Time − Minor Stoppages
4. Standard Output in Total Available Time = Net Operating Time / Standard Cycle Time
5. Speed Loss = (Standard Output − Actual Output) × Standard Cycle Time
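
The OEE relations above translate directly into a few lines of code. The following is a minimal Python sketch of the same formulas; the shift figures in the example call are hypothetical, chosen only to demonstrate the calculation, and are not data from the case study.

    # Minimal sketch of the OEE calculation defined above.
    # The example shift numbers are hypothetical, not case-study data.

    def oee(total_available, planned_shutdown, downtime, minor_stoppages,
            std_cycle_time, actual_output, defective_pieces):
        loading_time = total_available - planned_shutdown
        operating_time = loading_time - downtime
        net_operating_time = operating_time - minor_stoppages

        availability = operating_time / loading_time
        performance = (std_cycle_time * actual_output) / net_operating_time
        quality_yield = (actual_output - defective_pieces) / actual_output
        return availability * performance * quality_yield

    # One 480-minute shift: 30 min planned shutdown, 20 min breakdowns,
    # 10 min minor stoppages, 0.5 min/piece cycle, 800 pieces, 16 defective.
    print(f"OEE = {oee(480, 30, 20, 10, 0.5, 800, 16):.1%}")  # about 89.2%

Tracking the three factors separately, rather than OEE alone, shows whether availability, speed or quality is the binding loss on a given machine.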
3. CASE STUDY OF TPM IMPLEMENTATION
In this section, the TPM implementation is demonstrated through a case study in a steering manufacturing
company. Some background of the company is listed in Section 3.1 and then the TPM implementation
procedures are discussed in detail.
3.1. Background information of the steering manufacturing company
The company in this case study is a multinational company, a leader in the design and manufacture of steering
systems. Sona Koyo Steering Systems Ltd. is a technical and financial joint venture with Koyo Seiko Company,
Japan, the global technology leader in steering systems. With a market share of 59%, the company is the largest
manufacturer of steering gears in India and is the leading supplier of:
- Hydraulic Power Steering Systems
- Manual Rack & Pinion Steering Systems
- Collapsible, Tilt and Rigid Steering Columns for Passenger Vans and MUVs
The company's product range also extends to Rear Axle Assemblies and Propeller Shafts. Named a Global
Growth Company in 1997 by the World Economic Forum, the company is now well positioned to lead the
Indian automotive component industry to global standards in the coming millennium.

The company is surging ahead on a journey of Total Quality Management (TQM). It is also developing core
competence and aligning objectives at all levels so as to realize synergy in operations. An initiative to improve
the most important resource, the human resource, as well as the plant equipment has been launched. This
technique, Total Productive Maintenance (TPM), has been adopted to improve performance through the
philosophy of prevention. Sona Koyo Steering Systems Ltd. aims to achieve
- Zero accidents
- Zero defects
- Zero breakdowns
by using the production system as the foundation of all change programmes. Customer satisfaction continues to
be of utmost importance, as do consistent quality, constant innovation, value engineering, process improvement
and customer orientation.

3.2. Introduction and preparatory stages of TPM implementation

Step A - Preparatory Stage:
Step 1 - Announcement by management to all about TPM introduction in the organization: Proper
understanding, commitment and active involvement of the top management is needed for this step. Senior
management should hold awareness programmes, after which the announcement is made to all. Publish it in the
house magazine and put it on the notice board. Send a letter to all concerned individuals if required.
Step 2 - Initial education and propaganda for TPM: Training is to be done based on need. Some need
intensive training and some just awareness. Take people who matter to places where TPM has already been
successfully implemented.
Step 3 - Setting up TPM and departmental committees: TPM includes improvement, autonomous
maintenance, quality maintenance, etc., as part of it. When committees are set up, they should take care of all
those needs.
Step 4 - Establishing the TPM working system and targets: Now each area is benchmarked and a target is
fixed for achievement.
Step 5 - A master plan for institutionalizing: The next step is implementation leading to institutionalization,
wherein TPM becomes an organizational culture. Achieving the PM award is proof of reaching a satisfactory
level.

Step B - Introduction Stage
This is a ceremony and all should be invited: suppliers, as they should know that we want quality supply from
them; related and affiliated companies who can be our customers; sister concerns; etc. Some may learn from us
and some can help us, and customers will get the communication from us that we care for quality output.

Stage C - Implementation
In this stage eight activities are carried out, which are called the eight pillars in the development of TPM
activity. Of these, four activities are for establishing the system for production efficiency, one is for the initial
control system of new products and equipment, one is for improving the efficiency of administration, and one is
for control of safety, sanitation and the working environment.

Stage D - Institutionalizing Stage
By now these activities would have reached the maturity stage. Now is the time for applying for the PM award.
Also think of the challenging level to which you can take this movement.
3.3 Uses of the 8 pillars of TPM in an organization
Fig. 1 illustrates the structure of the TPM organisation in the company. The establishment of the TPM office
and TPM departmental offices creates the formal organisational model for TPM implementation. The company
uses the eight pillars of TPM, which are explained below.


Fig. 1 Organizing structure for TPM implementation

Pillars of TPM

Fig. 2 Eight pillars of TPM

PILLAR 1 - 5S: TPM starts with 5S. Problems cannot be clearly seen when the workplace is unorganized.
Cleaning and organizing the workplace helps the team to uncover problems. Making problems visible is the
first step of improvement.


Table 1: 5S

Japanese Term   English Translation   Equivalent 'S' Term
Seiri           Organization          Sort
Seiton          Tidiness              Systematize
Seiso           Cleaning              Sweep
Seiketsu        Standardization       Standardize
Shitsuke        Discipline            Self-Discipline



PILLAR 2 - JISHU HOZEN (Autonomous Maintenance): This pillar is geared towards developing operators
to be able to take care of small maintenance tasks, thus freeing up the skilled maintenance people to spend time
on more value-added activity and technical repairs. The operators are responsible for the upkeep of their
equipment to prevent it from deteriorating.
Policy:
1. Uninterrupted operation of equipment.
2. Flexible operators able to operate and maintain other equipment.
3. Eliminating defects at source through active employee participation.
4. Stepwise implementation of JH activities.

PILLAR 3 - KAIZEN: "Kai" means change, and "Zen" means good (for the better). Basically, kaizen is about
small improvements, carried out on a continual basis and involving all people in the organization. Kaizen is the
opposite of big, spectacular innovations. Kaizen requires little or no investment. The principle behind it is that a
very large number of small improvements are more effective in an organizational environment than a few
improvements of large value. This pillar is aimed at reducing losses in the workplace that affect our efficiencies.
By using a detailed and thorough procedure, we eliminate losses in a systematic method using various kaizen
tools. These activities are not limited to production areas and can be implemented in administrative areas as
well.
Kaizen Policy:
1. Practice the concept of zero losses in every sphere of activity.
2. Relentless pursuit of cost reduction targets in all resources.
3. Relentless pursuit of improvement in overall plant equipment effectiveness.
4. Extensive use of PM analysis as a tool for eliminating losses.
5. Focus on ease of handling for operators.

PILLAR 4 - PLANNED MAINTENANCE: This is aimed at having trouble-free machines and equipment
producing defect-free products for total customer satisfaction. It breaks maintenance down into the four families
or groups defined earlier:
1. Preventive Maintenance
2. Breakdown Maintenance
3. Corrective Maintenance
4. Maintenance Prevention
With planned maintenance we evolve our efforts from a reactive to a proactive method, and use trained
maintenance staff to help train the operators to better maintain their equipment.
Policy:
1. Achieve and sustain availability of machines.
2. Optimum maintenance cost.
3. Reduced spares inventory.
4. Improved reliability and maintainability of machines.

PILLAR 5 - QUALITY MAINTENANCE: This is aimed at customer delight through the highest quality,
achieved via defect-free manufacturing. The focus is on eliminating non-conformances in a systematic manner,
much like Focused Improvement. We gain an understanding of which parts of the equipment affect product
quality and begin to eliminate current quality concerns, then move to potential quality concerns. The transition
is from reactive to proactive (Quality Control to Quality Assurance).
QM activities set equipment conditions that preclude quality defects, based on the basic concept of maintaining
perfect equipment to maintain perfect quality of products. The conditions are checked and measured in a time
series to verify that measured values are within standard values, in order to prevent defects. The transition of
measured values is watched to predict the possibility of defects occurring and to take countermeasures
beforehand.

Policy:
1. Defect-free conditions and control of equipment.
2. QM activities to support quality assurance.
3. Focus on prevention of defects at source.
4. Focus on poka-yoke (fool-proof systems).
5. In-line detection and segregation of defects.
6. Effective implementation of operator quality assurance.

PILLAR 6 - TRAINING: This is aimed at having multi-skilled, revitalized employees whose morale is high
and who are eager to come to work and perform all required functions effectively and independently. Education
is given to operators to upgrade their skills. It is not sufficient to know only "know-how"; they should also learn
"know-why". Through experience, operators gain the know-how to overcome a problem, i.e., what is to be
done; this they do without knowing the root cause of the problem or why they are doing it. Hence it becomes
necessary to train them in know-why. The employees should be trained to achieve the four phases of skill. The
goal is to create a factory full of experts.
Policy:
1. Focus on improvement of knowledge, skills and techniques.
2. Creating a training environment for self-learning based on felt needs.
3. Training curriculum / tools / assessment, etc., conducive to employee revitalization.
4. Training to remove employee fatigue and make work enjoyable.

PILLAR 7 - OFFICE TPM: Office TPM should be started after activating four other pillars of TPM (JH, KK,
QM, PM). Office TPM must be followed to improve productivity, efficiency in the administrative functions and
identify and eliminate losses. This includes analyzing processes and procedures towards increased office
automation. Office TPM addresses twelve major losses. They are
1. Processing loss
2. Cost loss, including in areas such as procurement, accounts, marketing and sales, leading to high inventories
3. Communication loss
4. Idle loss
5. Set-up loss
6. Accuracy loss
7. Office equipment breakdown
8. Communication channel breakdown, telephone and fax lines
9. Time spent on retrieval of information
10. Non-availability of correct online stock status
11. Customer complaints due to logistics
12. Expenses on emergency dispatches/purchases

PILLAR 8 - SAFETY, HEALTH AND ENVIRONMENT: Here the focus is on creating a safe workplace and
a surrounding area that is not damaged by our processes or procedures. This pillar plays an active role in each of
the other pillars on a regular basis. A committee is constituted for this pillar, comprising representatives of
officers as well as workers, and is headed by the Senior Vice President (Technical). Utmost importance is given
to safety in the plant; the Manager (Safety) looks after the functions related to safety. To create awareness
among employees, various safety-related competitions (safety slogans, quizzes, dramas, posters, etc.) can be
organized at regular intervals.
Policy:
1. Zero accidents.
2. Zero health damage.
3. Zero fires.

3.4 Fuguai Identification
The TPM members first decided to make an equipment tree and identify the problematic components, then
identify the fuguai (abnormalities) in each component, and after that establish a relation between each fuguai
and a loss, so that the problem with the component can be removed. A one-point lesson related to the fuguai is
then made, and training is given with that one-point lesson. In the equipment tree we may have numerous parent
equipments, but they all essentially boil down to a limited number of child equipments (sub-
assemblies/components). Once we learn how to find fuguai in the child equipments, the parents are
automatically taken care of.
Types of Fuguai
There are seven types of fuguai used for the identification of losses, as given below:
1. Minor Defect
2. Basic Condition
3. Source of contamination
4. Difficult to access area
5. Quality defects
6. Unwanted objects
7. Unsafe point
Fuguai Tags
There are three types of fuguai tags used for the identification of losses, as given below:
1. White tags for fuguais to be attended to by operatives.
2. Red tags for fuguais to be attended to by specialists.
3. Green tags for fuguais found in similar types of machines, also to be considered for JH machines.
The table below shows the different types of fuguai, the method of detecting each, and the loss that results if it
remains unattended.
Table 2: Fuguai and its effects

Fuguai                              Detection Method   Effect if it remains
Worn damage                         Eye, hand          Energy loss, accident
Abnormal noise                      Ear                Force deterioration
Misalignment                        Eye                Body damage, accident
Thimble missing                     Eye                Low MTBF
Clip in opposite direction          Eye                Rope may slip, accident
Guide roller missing                Eye                Low MTBF
Rope in muck                        Eye                Low MTBF, high energy
Foreign material in rope pathway    Eye                Low MTBF, accident
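
To make the tagging scheme concrete, the sketch below shows one way a fuguai register could be recorded; the record fields and the tag-to-owner routing are illustrative assumptions, not the company's actual system.

    # Illustrative fuguai register; the data model is an assumption.
    from dataclasses import dataclass

    TAG_OWNER = {
        "white": "operative",                  # attended by operatives
        "red": "specialist",                   # attended by specialists
        "green": "JH team (similar machines)", # considered for JH machines
    }

    @dataclass
    class FuguaiRecord:
        equipment: str      # child equipment from the equipment tree
        fuguai_type: str    # one of the seven types listed above
        tag: str            # "white", "red" or "green"
        expected_loss: str  # e.g. "Low MTBF", "Accident"

        def owner(self) -> str:
            return TAG_OWNER[self.tag]

    record = FuguaiRecord("Guide roller", "Minor defect", "red", "Low MTBF")
    print(f"{record.equipment}: {record.fuguai_type} -> attend by {record.owner()}")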

3.5 Results and benefits of Kaizen application
In addition to the eight pillars of TPM, a ninth pillar of Tools Management was formed. The objectives of this
pillar were:
- Reduction in tool change time.
- Reduction in tool cost.
It was found that tooling cost might be reduced by the following methods:
- Tool life improvement
- Technology improvement
- Improving the process
- Indigenization/localisation
- Negotiation
- Design change
Kaizen sheets were made for design changes; the theme was to reduce cost by tool life improvement. The areas
where improvement has been done are listed in Table 3. Similarly, tooling cost was also reduced by
technological improvement, by improving the process, and by localisation.

Table 3: Results and benefits after Kaizen application

Present status | Countermeasure | Result / Benefit
Tapping operation creating burr, due to which tool life is low | Entry angle of tap reduced from 22 to 15 | Tool cost per component reduced from Rs. 0.74 to Rs. 0.51
Life of hole mill per resharpening is low due to chip clogging | Design of hole mill changed from 4-flute, helix 30 to 3-flute, helix 40, by which the chip disposal problem was eliminated | Tool cost per component reduced from Rs. 0.51 to Rs. 0.20
Breakage of honing stick holder due to O-ring protruding out of the groove | Groove depth increased so that the O-ring no longer protrudes | Breakage loss reduced to zero
Complete H.S.S. tap and adapter being used (single piece) | Tap and tap adapter made of different materials (two pieces) | Tool cost per component reduced from Rs. 0.73 to Rs. 0.53
High cost due to extra-long series drilling tool | Low cost due to standard tool, with a step introduced for the chamfer | Tool cost per component reduced from Rs. 0.098 to Rs. 0.044

Table 4: Tooling cost reduction summary (all figures in Rs.)

1. By tool life improvement:
   (a) Design change               6,35,160
   (b) Technological improvement   1,29,272
   (c) Process change              1,12,186
2. By localisation                23,46,639
3. By negotiations                 5,30,033
TOTAL                             37,53,290
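
As a quick arithmetic check, the category savings in Table 4 do sum to the stated total, which the snippet below verifies (figures in rupees, written with Indian digit grouping).

    # Arithmetic check of Table 4 (figures in Rs., Indian digit grouping).
    savings = {
        "Design change": 6_35_160,
        "Technological improvement": 1_29_272,
        "Process change": 1_12_186,
        "Localisation": 23_46_639,
        "Negotiations": 5_30_033,
    }
    total = sum(savings.values())
    assert total == 37_53_290  # matches the TOTAL row
    print(f"Total tooling cost reduction: Rs. {total:,}")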

Advantages of Total Productive Maintenance
1. Involvement of all people in support functions, focusing on better plant performance
2. Better utilized work area
3. Reduced repetitive work
4. Reduced inventory levels in all parts of the supply chain
5. Reduced administrative costs
6. Reduced inventory carrying cost
7. Reduction in the number of files
8. Reduction of overhead costs (including the cost of non-production/non-capital equipment)
9. Improved productivity of people in support functions
10. Reduction in breakdowns of office equipment
11. Reduction of customer complaints due to logistics
12. Reduction in expenses due to emergency dispatches/purchases
13. Reduced manpower requirements
14. A clean and pleasant work environment
4. DISCUSSIONS
During TPM implementation in the company, several issues that had caused the failure of the initial TPM
implementation were overcome. First, management was well trained in TPM know-how. An executive TPM
introduction course was conducted for all management-level people. From the course, they learnt about the
benefits and effects of TPM implementation in the company, i.e., maximised assets and reduced costs achieved
by focusing on the customer and getting all employees involved. They knew that management commitment to
TPM implementation was the fundamental precept governing the success or failure of the change. Also, through
the training course, they realised that TPM implementation was not a one-time activity but a long-term,
never-ending continuous improvement activity. In this connection, management committed resources for TPM
implementation in the form of operators' time, and a short-term investment of money to bring equipment into
condition. The issues of lack of management support, lack of long-term vision, and lack of sustained momentum
were resolved simultaneously. Secondly, the formation of the TPM office helped the company to define and set
up basic TPM policies and targets aligned with the manufacturing strategy. The TPM office was also
responsible for the creation of a master plan, the promotion of TPM activities and the development of a training
plan for employees. The establishment of the TPM office changed the approach to TPM implementation in the
numerous manufacturing activities within the company. A structural organisation for TPM implementation was
developed, and the job responsibilities of the persons delegated to TPM implementation across different
departments were clearly defined. The main initial objective of TPM implementation was no longer early
implementation on as many machines as possible, but rather gradual and proper implementation on a model
machine. As a result, the problem of simultaneous introduction of TPM on too many machines was tackled.
Furthermore, model machine implementation resolved the problem of resource constraints, as resources were
focused on one machine rather than several machines.

4.1. Difficulties encountered during implementation
During the TPM implementation, some difficulties were encountered in the areas of organisational change and
paradigm shifting in the initial phase. In the TPM introduction stage, only selected team members were involved
in the model machine implementation. The remaining production people did not accept the TPM concept, as
they felt that TPM was trying to make production employees do more work so that the organisation could run
with fewer maintenance people (lack of TPM know-how). Also, TPM members in each shift did not work in a
co-operative manner; work habits and communications differed from shift to shift. This affected the morale and
development of the TPM team (lack of production involvement). Furthermore, differences arose between TPM
and non-TPM members, as non-TPM members felt that the workload had increased after TPM implementation.
They did not like the TPM members, who worked too hard and eventually became the models for the whole
production area (lack of long-term vision). Finally, the educational level of the production people was not high;
some had only primary school education. As a result, the progress of TPM implementation was slow, as they
needed time to digest the actual meaning of TPM and how to perform well in the implementation.

Based on the problems encountered, corrective actions were taken to resolve these issues. Firstly, brief
introductory training for all production people was developed by the TPM office. This training was intended to
introduce the benefits of TPM implementation in production, as well as the relationship between operators and
maintenance personnel in TPM development. It helped production people to understand the actual meaning of
TPM implementation, as well as their roles and responsibilities in supporting the success of TPM
implementation. Misunderstanding of the actual TPM concept disappeared and the fear over job security was
overcome. Secondly, as the educational background of operators in the company was quite low, with some
having only primary-level education, younger operators with a higher educational level were selected, as they
were eager to learn and to accept changes in the working environment. Thus, the progress of TPM
implementation was not delayed by the educational background issue.
4.2. Successful factors for TPM implementation
Some factors that contributed to the success of TPM implementation are as follows:
- A specific guideline/training for realising the benefits in the production and maintenance departments during
TPM implementation was required. Supervisors and management were required to convince their subordinates
to buy into the concept of TPM by providing proper training.
- Selection of team members was crucial. As mentioned before, some operators had only primary school
education, and some had worked in the company for more than 10 years and were not eager to learn and accept
the new culture and paradigm shift. Therefore, the selected team members were required to have a positive
attitude and be willing to accept new changes.
- As maintenance skills were required to be transferred to production operators, a well-developed maintenance
training system was one of the key factors for TPM implementation. The system needed frequent updating, as
technology was changing rapidly.
- Management support for TPM implementation was very important, as this commitment sustained and
enhanced the morale of production operators and maintenance personnel.
- Simultaneous implementation of TPM caused insufficient resource allocation. This resulted in low
productivity improvement or even no observed improvement, which caused a negative psychological effect, and
production people lost interest in the implementation. So, model machine implementation was also one of the
crucial factors for TPM implementation.

5. CONCLUSIONS
5.1. Summary of TPM implementation
The general aim of the project is to study the implementation, benefits and difficulties encountered in a steering
manufacturing company during TPM implementation, and also the major factors that contribute to the success
of TPM. The objective of this project is to study the implementation of TPM and the evaluation of its results on
a model machine.
The efficient maintenance of the production and other plant machinery is crucial in determining the success of
the manufacturing process. Despite the time and money spent on the development/production of advanced plant
and equipment, not enough attention has been paid to defining comprehensive maintenance strategies, practices
and policies. However, there are indications that the transition from reactive (breakdown) maintenance to
preventive maintenance is already taking place.

In order to establish autonomous maintenance teams, better communication and teamwork must be promoted.
It is essential that the company devises an efficient data recording system, so that up-to-date and accurate
information is available to management.

The process of recording information must remain simple, but effective for future data analysis. If provisions
were made to highlight problems and their possible causes, this could lead to the correction of common
problems such as breakdowns and re-work; ultimately, if possible, the aim is to eliminate such causes.
Information provided by trend analysis can provide a basis for forming long-term plans. The maintenance
department can plan spending requirements by using historical information to state the return on investment
when contributing to the annual business plan of the company. The availability of relatively cheap computing
power makes this process feasible and financially attractive.
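
As a sketch of the simple, historically grounded analysis the text calls for, the snippet below derives a mean time between failures (MTBF) figure from breakdown records; the log format, machine name and dates are hypothetical illustrations, not data from the case study.

    # Hypothetical breakdown log and MTBF calculation; the machine name
    # and dates are illustrative only.
    from datetime import date
    from statistics import mean

    failure_log = [
        ("gear-hobbing-01", date(2011, 1, 4)),
        ("gear-hobbing-01", date(2011, 2, 18)),
        ("gear-hobbing-01", date(2011, 4, 2)),
        ("gear-hobbing-01", date(2011, 4, 30)),
    ]

    def mtbf_days(log):
        """Mean time between failures, in days, from successive failure dates."""
        dates = sorted(d for _, d in log)
        gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
        return mean(gaps)

    print(f"MTBF = {mtbf_days(failure_log):.1f} days")  # gaps 45, 43, 28 -> 38.7

A rising MTBF trend over successive periods would be direct evidence for the return-on-investment argument made above.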

5.2. Implications of this study
This case study employed the TPM methodology to improve equipment effectiveness as well as the technical
skills and morale of the members who participated in TPM. This was not the company's first attempt to
implement TPM but, ultimately, it succeeded in doing so in the front-end process. It is foreseeable that once a
particular process adapts to the new philosophy, other processes can also be reengineered to achieve the aims of
TPM. One of the characteristics of TPM, as discussed in the literature review, is AM, which in turn consists of
small-group activities in which all staff of the company are trained and participate throughout the TPM
implementation period. The turning point for the successful implementation of TPM in the second attempt was
the delegation of personnel (the TPM manager) and a structured organisational change (the establishment of the
TPM office) to lead and handle the TPM implementation issues. This overcame the problems that were faced in
the first attempt. There are implications from this case study for practising managers who are involved in TPM
implementation. Evidence can be drawn from the results of this study to put TPM into practice. Beyond the
success factors listed in Section 4.2, it should be stressed that TPM is not a one-step improvement process or a
stand-alone improvement plan. Learning from past experience is also crucial, and should be considered as early
as possible in the planning stage.

In addition, TPM is not merely a company policy. It involves organisational change and participation from all
staff (total participation). One of the reasons that TPM could be successfully introduced in this case study is the
establishment of the TPM office. The office can integrate staff from various departments with different roles to
work together towards the objectives of TPM. The office can also deliver structured training for staff according
to the needs of individual staff members, which is another necessary element throughout the implementation
period.

5.3. Future development and expansion
So far, only the model equipment has been used to implement TPM; the next step is to disseminate the concept
of TPM and implement it across the whole production area. Some critical factors are required to be considered
during the implementation of TPM on the remaining machines:

Training with tangible TPM achievement results: Training with more tangible results describing the
relationship between TPM, equipment productivity and product quality is essential. As the model machine
implementation has yielded tremendous improvement, illustrating these results in training could enhance
employees' understanding of how their involvement affects TPM implementation. Resistance to TPM in
production can also be taken care of in this way.

Resource management: Resource allocation is one of the crucial factors for TPM implementation, as the need
for manpower for maintenance training increases with the implementation of TPM on the remaining production
equipment. One-point lesson (OPL) style training by model machine operators is a solution for overcoming this
problem. Operators not only learn how to use their acquired technical knowledge in maintenance, they can also
share their knowledge with others through the maintenance skill training.

RCM approach in the PM system: The ultimate goal of TPM, with respect to equipment, is to increase its
effectiveness to its highest potential and to maintain it at that level. In this connection, the development of an
effective preventive maintenance system is required. Reliability Centred Maintenance (RCM) is a systematic
approach used to optimise preventive maintenance strategies. In many companies, PM tasks are performed more
frequently than actually needed. The aim of RCM is to maintain system function, rather than restoring the
equipment to an ideal condition through the PM system. The RCM process provides a rational justification of
the PM tasks for each piece of equipment. The identification of the various PM tasks is directed towards
preserving system function and is based on a comprehensive knowledge of equipment failure modes. This
process ensures the selection of the most applicable and cost-effective tasks.

REFERENCES
[1] Christian N. Madu, "On the total productivity management of a maintenance float system through AHP
applications", Int. J. Production Economics, Vol. 34, pp. 201-207, 1994.
[2] Eugene C. Hamacher, "A Methodology for Implementing Total Productive Maintenance in the Commercial
Aircraft Industry", MIT Sloan School of Management, pp. 1-179, 1996.
[3] F.T.S. Chan, H.C.W. Lau, R.W.L. Ip, H.K. Chan, S. Kong, "Implementation of Total Productive
Maintenance: A case study", Int. J. Production Economics, Vol. 95, pp. 71-94, 2005.
[4] F. Ireland and B.G. Dale, "A study of total productive maintenance implementation", Journal of Quality in
Maintenance Engineering, Vol. 7, No. 3, pp. 183-191, 2001.
[5] G. Chand, B. Shirvani, "Implementation of TPM in cellular manufacture", Journal of Materials Processing
Technology, Vol. 103, pp. 149-154, 2000.
[6] Geert Waeyenbergh, Liliane Pintelon, "Maintenance concept development: A case study", Int. J. Production
Economics, Vol. 89, pp. 395-405, 2004.
[7] Imad Alsyouf, "Maintenance practices in Swedish industries: Survey results", Int. J. Production Economics,
Vol. 105, pp. 1-12, 2009.
[8] I.P.S. Ahuja, J.S. Khamba, "Justification of total productive maintenance initiatives in Indian manufacturing
industry for achieving core competitiveness", Emerald Full Text Article, 2009.
[9] J. Venkatesh, "An introduction to total productive maintenance", 16th April 2007.
[10] Kamran Shahanaghi, Seyed Ahmad Yazdian, "Analysing the effects of implementation of total productive
maintenance (TPM) in the manufacturing companies", Journal of Modelling and Simulation, Vol. 5,
pp. 120-129, 2009.
[11] Laura Swanson, "An empirical study of the relationship between production technology and maintenance
management", Int. J. Production Economics, Vol. 55, pp. 191-207, 1997.
[12] F.-K. Wang, W. Lee, "Learning curve analysis in total productive maintenance", Omega, Vol. 29,
pp. 491-499, 2001.
[13] Marcelo Rodrigues, Kazuo Hatakeyama, "Analysis of the fall of TPM in companies", Journal of Materials
Processing Technology, Vol. 179, pp. 276-279, 2006.
[14] R.K. Mobley, "An Introduction to Predictive Maintenance", Van Nostrand Reinhold, New York, 1990.
[15] S. Nakajima, "Introduction to TPM: Total Productive Maintenance", Productivity Press, Cambridge, MA,
1988.
[16] T. Wireman, "The History and Impact of Total Productive Maintenance", New York, 1991.



REQUIREMENTS FOR AN EVOLVING MODEL OF SUPPLY CHAIN
FINANCE: A TECHNOLOGY AND SERVICE PROVIDERS'
PERSPECTIVE - A CASE STUDY

Ashok Kumar, Dinesh Kumar

Institute of Management and Technology, Faridabad, Religare Securities Pvt. Ltd
akumar.imt@gmail.com, dinesh.gbu18@gmail.com


ABSTRACT

This paper explores current models and practice regarding the dynamics of financial flows along global supply
networks. Based on data collected from technology and service providers that focus on such issues along global
supply networks, the paper identifies and discusses requirements for improved solutions to supply chain finance
challenges. This research has particular relevance in the light of the disruptions that the global credit crunch
has brought to global financial systems, and the changes that are likely as responses to these disruptions.

Keywords: supply chain finance, financial systems, global supply networks, technology and service providers


INTRODUCTION

As the world's leading nations struggle to introduce a globally co-ordinated fiscal stimulus and a redesign of the
financial systems to cope with the worst credit and financial crisis in the history of worldwide commerce, this
paper discusses some of the blockages to, and solutions for, global finance along global supply networks. This
paper uniquely takes both a financial and technological perspective and explores the current model and
dynamics of financial flows along global supply networks and suggests solutions to many of the challenging
issues in this domain. Though the data collection for this research was completed prior to the credit crunch, the
findings point to some of the solutions which are needed in order to improve the current system. A catalyst for
change of supply chain finance structures, processes and practices is needed, and the credit crunch may bring
about such an opportunity if interventions and changes include a focus on maximising the returns to business
while providing customer satisfaction and a network which works optimally for all involved. This paper utilises
research from a core provider group - technology service providers within the financial service sector - as both
information rich respondents and also integral players in global supply networks. This paper is part of a larger
research programme aimed at investigating new models for global supply networks that reflect the full
capabilities of current regulatory, technological, and organisational, management, and human aspects of
contemporary business.


LITERATURE REVIEW

The study of global supply chains has traditionally focused on product/material and information flows. The equally central and essential issue of capital and financial flows has received considerably less research attention (Fairchild, 2005). This situation is mirrored in practice, in that the command and control systems of supply networks are generally designed and operated separately from the systems that manage and monitor financial flows along supply networks. As supply networks become more complex and geographically dispersed, with hundreds of players, the challenge of managing both aspects, and both aspects together, becomes even more critical. Supply chain solutions are often of interest at CFO level only to the extent that they influence financial drivers like growth, profitability and capital utilization (Timme & Williams-Timme, 2000). There is also a major lack of open and transparent engagement between partners in the business-to-business financial domain. There is more acceptance of speed and transparency in consumer finance. The chip-and-PIN technology introduced to counteract fraud has speeded up the payment process for retail purchasing, though the banking system still operates in a traditional fashion behind the scenes. Systems like ATMs and online banking allow for instantaneous, transparent and global dealing, with resultant savings and increased efficiency for all.
Technological Aspects of Global Supply Networks

A recent study of the impact of information sharing on the bullwhip effect found that the effect was lessened when information was shared (Hsiao & Shieh, 2006), and novel ways of improving coordination and prediction based on internal information markets have been proposed (Fang et al., 2008). A Fawcett and Magnan (2002) study found little evidence of information sharing, and Ballou (2007) suggests that this is because of companies' concerns about the practice. Structural obstacles, competitive issues and the profit motive (Hsiao & Shieh, 2006), value-in-ownership issues (Childerhouse et al., 2003), and concerns about data security (D'Aubeterre, Singh, & Iyer, 2008; Johnson, 2008) are some of the many issues, and the application of IS meets with a great deal of resistance in real situations. As discussed above, much of the information sharing has a narrow logistics focus. The challenge is compounded when companies have to share financial information (Johnson, 2008). Sharing of revenue-generating information could give transparency to where benefit sharing is needed, but benefit-sharing techniques would have to be explored and developed (Ballou, 2007). Overall, these information sharing and coordination challenges have not yet been fully met in practice. Given the exposure of companies involved in sharing sensitive financial information with potentially multiple partners, many of them often not directly linked to a specific firm sharing the information and thus less likely to be trusted (D'Aubeterre, Singh, & Iyer, 2008), the issue of trust has received increasing attention in the research on information sharing. Though called for in the literature, there is no centralised trust system for sharing information, especially financial information, within the global supply network, nor are there trusted third parties readily available to all supply chain partners (Fang et al., 2008). Within the consumer domain there are systems like PayPal and eBay, but no corresponding system exists in the business-to-business domain. This leads us to the central role of banks as the trusted third party within the financial flows of global supply networks. Logistics companies like UPS (Payne, 2000; Hofmann, 2005) and retailers like Tesco and Sainsbury (Griffiths & Remenyi, 2003) may have both the funding and the systems to play this major role.
Reconfiguring the Supply Network

The challenges of designing a new global supply chain have been discussed within the literature. Redesigning it must take into account both the virtual and the physical reality and should provide a range of advantages to businesses, their partners, their shareholders and customers. What is needed is a flexible system which can be reconfigured as aspects of the network change, and which is responsive to changes anywhere within the network and globally: what has been referred to as the 'Triple-A' supply chain - agile, aligned and adaptable. This integrated system would have a common data format and technology standards (i.e., between buyers and sellers) and seamless procurement-to-payment solutions (Dreyer, 2002); a minimal sketch of what such a shared format might look like is given below. Also, though changes to SCM have been suggested in the literature, change in reality has been slow, with much of SCM having a logistics focus and collaboration focusing on a player and its top-tier suppliers (Ballou, 2007).
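As a hedged illustration only (no actual standard such as EDI or SWIFT is reproduced, and all field names below are invented for this sketch), a shared buyer-seller data format for a procure-to-pay message might look like this:

# Hedged sketch of a "common data format" between buyers and sellers for a
# procure-to-pay message. Field names are invented for illustration; this does
# not reproduce any actual industry standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class InvoiceMessage:
    invoice_id: str
    purchase_order_id: str     # links the financial flow to the physical order
    buyer_id: str
    supplier_id: str
    currency: str
    amount: float
    due_date: str              # ISO 8601 date: one shared convention for all parties

invoice = InvoiceMessage(
    invoice_id="INV-0042",
    purchase_order_id="PO-1138",
    buyer_id="BUYER-GB-01",
    supplier_id="SUPP-IN-07",
    currency="EUR",
    amount=12500.00,
    due_date="2011-09-30",
)

# A single serialisation that any partner's system can parse
print(json.dumps(asdict(invoice), indent=2))

The point of such a sketch is simply that one agreed schema, linking financial documents back to physical orders, is what would let command-and-control systems and financial systems be operated jointly rather than separately.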


RESEARCH METHODOLOGY

This research investigates the current systems of supply chain finance from the perspective of technology service providers within the finance sector. The research contributes to the design of a future supply network which has financial flows as a core and co-ordinating aspect alongside the focus on materials and product flows. It also investigates ways of sharing information within supply chain finance.

RESULTS

Current Status of Global Supply Chain Financial Flows

Study participants indicated that a lack of synchronization and integration between the physical and financial flows contributes to inefficiencies in supply chain management. As one respondent put it: "You know the industrial revolution kept moving, and it made the whole physical manufacturing processes more efficient but none of that was ever translated across. Actually none of that was ever translated across into the [] financial." Respondents agreed that the financial flows have not received the same attention. The lack of a link between electronic freight-forwarding devices and other systems, the dominance of the paper format, and the lack of interplay between the physical and financial supply chain reduce the amount of available working capital and result in liquidity problems, particularly for SMEs. The second main issue the respondents identified is the significant lack of automation in supply chain finance. This lack of widespread automation prevents financial information from flowing freely through the system. The lack of buyer-supplier technology-based automated financial systems creates major costs in many areas, including disputed invoices, and leaves inventory in transit as effectively a nonperforming asset. There was a clear view that the inefficiencies in the financial flows are not due to a lack of technology. Another obstacle is the lack of channels for technology vendors to sell business-to-business technology systems targeted at the supply chain, as the focus is on developing low-cost retail consumer technologies. The lack of a standard technology was also an issue, as customised, proprietary software is used, which is costly and incompatible. Standardisation and harmonisation of systems would make it easier and cheaper for all: if access were improved, the cost of technology would decrease and information exchange would increase. Corporate technology was considered more advanced than banks' technology, and this creates an obstacle to improving financial flows.

Legacy and Incompatible Systems: Current software often consists of legacy systems that are incompatible with banking software. Within banking, much of the software is designed for improved security, fraud management and risk management rather than for optimisation of the supply chain. Respondents also cited a lack of confidence in electronic security for financial payments among corporate users as an issue.

Information Sharing and Trust: Though the use of managed services and outsourcing has increased information sharing, there are also vested interests in not sharing some information, because sharing proprietary information reduces a player's influence in the whole process and increases the risk of competitors obtaining valuable information. No trusted middle provider exists for managing financial information. The current state of information sharing is segmented and not standardised. Respondents believe the economic value of information sharing must be demonstrated to achieve critical mass.

Corporate Barriers: Resistance to change within corporates was also considered a bottleneck to adoption. There was some evidence that some departments in larger organizations have a vested interest in not automating the invoice process. There was also evidence of companies living off their suppliers' balance sheets by paying later, and this was an obstacle to improving financial flow.

Barriers to collaboration: In relation to collaboration, several respondents described the collaboration between players as poor. This is attributed to the fact that there are too many players along supply chains: for one transaction there could be as many as 40 documents in play, and these documents could emanate from up to 10 or 20 different companies. The next stage of the data analysis focused on expected future trends, but there was disagreement on what the future of financial flows would look like, with replies ranging from no major changes to suggestions that there will be significant restructuring. Specific changes in automation, standardisation and payment structure were identified as components of the future financial supply chain.

Automation: Most respondents stated that an automated, global standard payments system would be implemented in the next five years, and invoicing and payment will no longer be done via post. This automation was expected to reduce administrative burdens on businesses, allow large buyers to hold on to working capital longer, and facilitate business-to-business access. Alternatively, some respondents believed that paper-based systems will continue to constrain automation for years to come, and that global acceptance of automation will take a long time because the high cost of automation cannot be justified in every country.

Common or Interoperable Standards were considered a necessary requirement. However, respondents disagreed on how standards would contribute to financial flow, whether one standard could or would be developed, and whether a single standard could be agreed for all markets.

Closed Systems: Most respondents opted for a closed communication network with certification by a trusted third party, in which corporates utilize information exchanged with partners through information catchers such as BUS systems.

New Technologies: The main focus regarding new technologies was on reducing the energy consumption and increasing the battery life of existing technology, particularly relevant in the developing world. Respondents believed hand-held and wireless devices, such as pocket PCs, mobile phones and PCs, will enable rapid electronic payments, information passing and decision making for the supply chain.

Having reviewed the current and the projected model, the respondents then focused on the requirements for, and the possible nature of, an evolving future model. These requirements address both current limitations and possible improvements and additions to current financial flow and supply chain finance arrangements. Table 1 summarizes these requirements.

Table 1. Requirements for an Evolving Financial Flow Model

Standards: Creation of global, common or interoperable standards which recognise diversity.

Merger of the physical and financial supply chains: An electronic environment that combines all relevant information regarding the financial and physical aspects of supply chains in a secure, transparent, real-time manner and enables joint command and control of physical and financial flows.

Automation: End-to-end electronic flow, from purchase orders to all trade finance documentation and payments, should be seamless and of minimal cost.

Accessibility (open source and all players): An open-source device that is universally accessible and customisable.

Greater information sharing: Protocols for information sharing and an authentication method for identities.

Information management techniques: Appropriate management of the effective use of that information.

Trusted third party or exchange hub: A coordinating player or trusted intermediary. Suggestions include major corporates, banks, trusted companies like Reuters, governments, standards agencies (like SWIFT) and regulation.


CONCLUSIONS

The results of this research clearly indicate major problems inherent in current arrangements and practices of supply chain finance, and substantial benefits in novel approaches and models. The requirements for such innovative developments, identified and formulated by the expert participants of this research, provide a useful contribution to the current debate on the nature and role of global financial systems. The main findings are that standards are needed to create a global, common, transparent and interoperable system which recognises diversity, and that automation of the system and the merger of the physical and financial supply chains are imperative. Further, this research suggests that an open, probably internet- or mobile-based system is likely to form the infrastructure of a shared system that needs novel information management techniques and greater willingness to share information among supply chain partners. For this, the development of trust, for example through trusted third parties or jointly operated information exchange hubs, is also crucial. There is a critical need to re-design existing financial systems. Given the current background of a global financial crisis and the immense economic impact of the limitations in liquidity, any improvement in available liquidity from the user side, for example through more efficient financial flows, can be useful for the system as well as for the participating firms. Moreover, progress in developing inter-organisational systems that promote more efficient financial flows, and that also provide better financial transparency and therefore better risk assessment and, ultimately, more successful risk management, can provide important benefits. Given the often desperate reactions of markets, corporates and governments to the global financial crisis, public and corporate interest in, and likely acceptance of, regulation-driven change in the financial services system has increased tremendously. This paper contributes to the current debates and crucial discussions on how to redesign global financial systems from a technology and service provider perspective. It highlights clearly that changes to the financial system are inevitable and points to areas which need to be aligned and to the challenges which need to be overcome. The key issue is that change should not only address the limitations of current systems identified through the financial crisis, but should also take into account the inefficiencies and shortcomings from operational and technological perspectives. These changes are needed, but if the perspective is once again the protection of the banking system, then a major opportunity will be lost. We now have the impetus and the opportunity to make major and radical changes which will benefit business, the economy and society for generations to come.
REFERENCES

[1] Ballou, R.H. (2007). The evolution and future of logistics and supply chain management. European Business Review, 19(4): 332-348.
[2] Chen, Z., Ma, S., & Shang, J.S. (2006). Integrated supply chain management for efficiency improvement. International Journal of Productivity and Quality Management, 1(1/2): 183-206.
[3] Childerhouse, P., Hermiz, R., Mason-Jones, R., Popp, A., & Towill, R.A. (2003). Information flow in automotive supply chains: Identifying and learning to overcome barriers to change. Industrial Management + Data Systems, 103(7): 491-503.
[4] Croom, S., Romano, P., & Giannakis, M. (2000). Supply chain management: An analytical framework for critical literature review. European Journal of Purchasing & Supply Management, 6(1): 67-83.
[5] D'Aubeterre, F., Singh, R., & Iyer, L. (2008). A semantic approach to secure collaborative interorganizational eBusiness processes. Journal of the Association for Information Systems, 9(3/4): 231-266.
[6] Fawcett, S.E., & Magnan, G.M. (2002). The rhetoric and reality of supply chain integration. International Journal of Physical Distribution & Logistics, 32(5): 339-361.
[7] Giannakis, M., & Croom, S.R. (2004). Toward the development of a supply chain management paradigm: A conceptual framework. Journal of Supply Chain Management, 40(2): 27-38.
[8] Griffiths, P., & Remenyi, D. (2003). Information technology in financial services: A model for value creation. Electronic Journal of Information Systems Evaluation, 6(2): 107-115.
[9] Hofmann, E. (2005). Supply chain finance: Some conceptual insights. In Lasch, R. & Janker, C.G. (Eds.), Logistik Management - Innovative Logistikkonzepte: 203-214. Wiesbaden, Germany.
[10] Hsiao, J.M., & Shieh, C.J. (2006). Evaluating the value of information sharing in a supply chain using an ARIMA model. The International Journal of Advanced Manufacturing Technology, 27(5-6).
[11] Johnson, M.E. (2008). Information risk of inadvertent disclosure: An analysis of file-sharing risk in the financial supply chain. Journal of Management Information Systems, 25(2): 97-123.
[12] Leonard, L.N.K., & Davis, C.C. (2006). Supply chain replenishment: before and after EDI implementation. Supply Chain Management, 11(3): 225-232.
[13] March, S., Raghu, T.S., & Vinze, A. (2008). Editorial introduction: Cultivating and securing the information supply chain. Journal of the Association for Information Systems, 9(3/4): 95-97.
[14] Payne, J. (2000). Plugging in finance to complete the flow of e-commerce. Strategic Finance, 81(11): 34-39.
[15] Roussinov, D., & Chau, M. (2008). Combining information seeking services into a meta supply chain of facts. Journal of the Association for Information Systems, 9(3/4): 175-199.
[16] Timme, S.G., & Williams-Timme, C. (2000). The financial-SCM connection. Supply Chain Management Review, 4(2): 33-39.
ANALYSIS OF SPOT WELDING
Rahul Gandhi1, Pankaj Bansal2, Rahul Garg3, Muralikrishna D.4

1,2,3Student, Lingayas University; 4Asst. Prof., Lingayas University
1ganesh_21gandhi@rediffmail.com, 2pbansal91@gmail.com, 3garg.rahul0909@gmail.com


ABSTRACT
Resistance welding is one of the oldest of the electric welding techniques used to join similar and dissimilar metals. In order to reach an excellent welding quality by spot welding, different materials have been used. The present work examines spot welding of similar and dissimilar material joints with various thickness layers. This was examined by welding MS-MS, Al-Al, Br-Br, MS-Al, MS-Br and Al-Br joints and comparing them with each other. The effect of different thicknesses on the spot weld bonding of similar and dissimilar material joints was then observed. As the thickness of the strips used is increased, the voltage and current increase simultaneously with regulator motion, and the maximum voltage and current were obtained at the highest regulator point, which was set accordingly. The quality obtained after cutting the nugget, analyzed for spot welds on similar and dissimilar strips, was excellent at the maximum current value; otherwise, good and bad quality nuggets were also analyzed. When spot welding two dissimilar strips, voltage and current show the same variations as with similar strips, but the quality obtained was not excellent, varying between good and bad. These data indicate that the nugget of two similar strips gives an excellent spot weld quality, whereas two dissimilar strips do not.
INTRODUCTION
Spot welding is used to join metal objects. It is widely used; for example, more than 100 million spot welds are produced daily in the European vehicle industry (TWI). Resistance spot welding is the principal joining process for similar and dissimilar metals in the automotive industries. The long history of using resistance spot welding can be attributed to its proven robustness and reliability, cost effectiveness and flexibility in production (1). Resistance spot welding is a process in which contacting metal surfaces are joined by the heat obtained from resistance to electric current flow. Workpieces are held together under pressure exerted by the electrodes (2). The attractive feature of spot welding is that a lot of energy can be delivered to the spot in a very short time (approximately ten milliseconds) (3). This permits the welding to occur without excessive heating of the rest of the sheet. In this paper, the aim is to compare the quality of nuggets obtained from various materials with varying thickness. Research in the field has analysed the quality of welding using analytical methods, and the studies have utilized different features extracted from data. In our study, the welding machines, the material and the thickness of the material can vary in different processes, but the changes in current and voltage are considered internal to the process. According to Ohm's law, the electrical power dissipated in the weld appears as heat. Welding loops are simple electrical circuits: voltage, current, and electrical resistance can be related by a mathematical formula. The relationship between these three characteristics of a welding circuit is shown by the Ohm's law equation:

E = IR

where:
E is the applied voltage in volts,
I is the current flowing in the circuit in amperes,
R is the total resistance of the circuit in ohms.

Volt (voltage): the force that causes electrons to move in a circuit.
Ampere (current): the rate at which charge (electrons) flows in a circuit, equal to coulombs per second.
Ohm (resistance): the electrical resistance that impedes current flow.
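As a quick numerical illustration of these relations (the current, contact resistance and weld time below are assumed typical orders of magnitude, not values measured in this work):

# Illustrative check of Ohm's law and resistive heat generation in a spot weld.
# The current, resistance and weld time are assumed example values, not
# measurements from this study.

def ohms_law_voltage(current_a: float, resistance_ohm: float) -> float:
    """E = I * R: voltage drop across the weld zone in volts."""
    return current_a * resistance_ohm

def joule_heat(current_a: float, resistance_ohm: float, time_s: float) -> float:
    """Q = I^2 * R * t: resistive heat generated at the interface in joules."""
    return current_a ** 2 * resistance_ohm * time_s

I = 8000.0      # welding current, A (typical order of magnitude)
R = 100e-6      # contact resistance, ohm (typical order of magnitude)
t = 0.2         # weld time, s

print(f"Voltage drop E = {ohms_law_voltage(I, R):.2f} V")   # 0.80 V
print(f"Heat input  Q = {joule_heat(I, R, t):.0f} J")       # 1280 J

The quadratic dependence of heat on current is why small changes in current and voltage carry so much information about nugget formation.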
Since the development of resistance spot welding, a large amount of work has been devoted to understanding the mechanism of nugget formation (4-11). However, the effect of sheet thickness on the quality of nuggets has not been explored; this study has been undertaken to explore it.
PROCESSING AND EQUIPMENT
Spot welding involves three stages. In the first, the electrodes are brought to the surface of the metal and a slight amount of pressure is applied. The current is then applied briefly, after which it is removed, but the electrodes remain in place so that the material can cool. Weld times range from 0.01 s to 0.63 s depending on the thickness of the metal, the electrode force and the diameter of the electrodes themselves.
The equipment used in the spot welding process consists of tool holders and electrodes. The tool holders hold the electrodes firmly in place and also support optional water hoses that cool the electrodes during welding. Tool holding methods include paddle-type, light duty, universal, and regular offset. The electrodes generally are made of a low-resistance alloy, usually copper, and are designed in many different shapes and sizes depending on the application.
The two materials being welded together are known as the workpieces and must conduct electricity. The width of the workpieces is limited by the throat length of the welding apparatus and typically ranges from 5 to 50 inches. Workpiece thickness can range from 0.008 in. to 1.25 in.
After the current is removed from the workpiece, it is cooled via the coolant holes in the centre of the electrodes. Both water and a brine solution may be used as coolants in spot welding mechanisms.
RESULTS AND DISCUSSION
In order to reach an excellent welding quality from resistance spot welding, similar and dissimilar metals were used in the form of strips with different thicknesses. The present study examines resistance spot welding of similar and dissimilar material joints with different thickness layers, by welding MS-MS, Al-Al, Al-MS, Br-Br, Br-Al and Br-MS and comparing them with each other. The effect of thickness on the spot weld bonding of similar and dissimilar material joints was then observed. As the thicknesses of the strips were increased, the voltage and current increased simultaneously with regulator motion, and the maximum voltage and current were obtained at the highest regulator setting. The quality obtained from the similar-strip nuggets was excellent at the maximum regulator setting; otherwise good and bad quality was also detected. Using dissimilar strips, voltage and current showed the same variations as with similar strips, but the quality obtained was not excellent, varying between good and bad. Various studies have analysed weld quality estimation, for example using pattern recognition with a Hopfield network (12), in which the primary circuit dynamic resistance, which carries information on nugget formation, is used to determine the weld quality. Johnson et al. [1972] discussed the electrode movement signal caused by the weld expansion and analyzed its effect on the weld quality. Dickinson et al. [1980] observed the relationship between the dynamic electrical factors and the phenomena that occur during the formation of a spot weld, based on the pattern changes of the dynamic resistance; using information established in this research, a spot weld control mechanism was proposed. Patange et al. [1985] suggested a method of evaluating weld quality as an in-process system, with the dynamic resistance monitored using a microprocessor in the secondary circuit of the welding machine. Hao et al. [1996] performed multiple linear regression analysis on the characteristics of aluminium RSW to estimate the nugget diameter and weld strength. Since the emergence of the neural network in the early 1940s, artificial intelligence (AI) techniques have been used in various engineering fields. Recently, they have been applied in the area of welding control and quality estimation, including RSW, which is an electromechanically coupled nonlinear process. Brown et al. [1998] used this method to estimate the nugget diameter, a factor closely related to weld strength. Dilthey et al. [1999] used a similar approach to estimate the tensile shear strength of the welds.
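As a hedged sketch of the general regression approach used in such studies (the feature names and all data below are synthetic placeholders, not values from the cited works):

# Sketch of regression-based weld quality estimation in the spirit of
# Hao et al. [1996]: fit nugget diameter as a linear function of features
# extracted from the dynamic resistance curve. The data are synthetic.
import numpy as np

# Features per weld: [peak dynamic resistance (micro-ohm), resistance drop (micro-ohm)]
X = np.array([[120, 18], [135, 25], [150, 31], [165, 38], [180, 44]], dtype=float)
d = np.array([3.1, 3.8, 4.4, 5.0, 5.6])   # nugget diameter, mm (synthetic)

# Add an intercept column and solve the least-squares problem d ~ A @ beta
A = np.hstack([np.ones((X.shape[0], 1)), X])
beta, *_ = np.linalg.lstsq(A, d, rcond=None)

new_weld = np.array([1.0, 142.0, 28.0])    # intercept term, peak R, drop
print(f"Predicted nugget diameter: {new_weld @ beta:.2f} mm")

Neural network approaches such as those of Brown et al. [1998] and Dilthey et al. [1999] replace this linear map with a learned nonlinear one, but the input features and target quantities are of the same kind.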
[Figures: plots of voltage (V), current (C) and T-series values against regulator position (series V1-V4, C1-C4, T1-T4; vertical axis 0-300) for the strip combinations below.]

Br-Br with varying thickness: 1) 0.84-0.84, 2) 0.43-0.43, 3) 0.81-0.81, 4) 1.01-1.01 (thickness)
Al-Al: 1) 0.56-0.56, 2) 0.81-0.81 (thickness)
MS-MS: 1) 0.76-0.76 (thickness)
CONCLUSIONS
Resistance spot welding of similar and dissimilar materials is also affected by the thickness of the strip materials. Results of the present studies indicate that nuggets obtained from strips of similar material give an excellent quality product, whereas dissimilar strips do not.
REFERENCES
[1] H. Junno, P. Laurinen, E. Haapalainen, L. Tuovinen, J. Röning, D. Zettel, D. Sampaio, N. Link, M. Peschl, Resistance spot welding process identification and initialization based on self-organising maps.
[2] http://books.google.com/books?id=zeRiW7en7HAC&pg=RA1-PA694
[3] http://www.robot-welding.com/Welding_parameters.htm
[4] J.A. Greenwood, Temperatures in spot welding. British Welding Journal, 1961; 316-322.
[5] H.A. Nied, The finite element modeling of the resistance spot welding process. Welding Journal, 1984; 123-132.
[6] A.F. Houchens, R.E. Page, W.H. Yang, Numerical modeling of resistance spot welding. In: Numerical Modeling of Manufacturing Processes, edited by R.F. Jones, D.W. Taylor, H. Armen, J.T. Hong, 1977; 117-129.
[7] W. Rice and E.J. Funk, Analytical investigation of the temperature distribution during resistance welding. Welding Journal, 1967; 175-186.
[8] J.E. Gould, An examination of nugget development during spot welding using both experimental and analytical techniques. Welding Journal, 1987; 1.
[9] G.R. Archer, Calculation of temperature response in spot welds. Welding Journal, 1960; 327-330.
[10] K.P. Bentry, J.A. Greenwood, P. Mc. Knowlson, R.G. Baker, Temperature distribution in spot welds. British Welding Journal, 1963; 613-619.
[11] K. Nishiguchi, K. Matsuyama, Influence of current wave form on nugget formation in spot welding of thin sheets. IIW Doc. No. III-805-85, IIW, 1985.
[12] Y. Cho and S. Rhee, Quality estimation of resistance spot welding by using pattern recognition with neural networks. IEEE Transactions on Instrumentation and Measurement, 2004; 2.
[13] K.I. Johnson and J.C. Needham, New design of resistance spot welding machine for quality control. Welding Journal, 1972; 122s-131s.
[14] D.W. Dickinson, J.E. Franklin, and A. Stanya, Characterization of spot welding behavior by dynamic electrical parameter monitoring. Welding Journal, 1980; 170-176.
[15] S.R. Patange, T. Anjaneyulu, and G.P. Reddy, Microprocessor-based resistance welding monitor. Welding Journal, 1985; 33-38.
[16] M. Hao, K.A. Osman, D.R. Boomer, and C.J. Newton, Developments in characterization of resistance spot welding of aluminum. Welding Journal, 1996; 1-8.
[17] J.D. Brown, M.G. Rodd, and N.T. Williams, Application of artificial intelligence techniques to resistance spot welding. Ironmaking and Steelmaking, 1998; 199-204.
[18] U. Dilthey and J. Dickersbach, Application of neural networks for quality evaluation for resistance spot welds. ISIJ International, 1999; 1061-1066.
STUDY OF VARIOUS CHALLENGES AND SOLUTIONS IN
MAINTENANCE OF CRITICAL COMPONENTS

Balbir Singh1 and Sanjay Mohan Sharma2

1,2School of Mechanical Engineering, Shri Mata Vaishno Devi University, Katra, Jammu
1balbir.isst@rediffmail.com


ABSTRACT

With the advancement in technology, miniaturisation and automation of industries, equipment and machinery are getting complex, made of hundreds of components involving various systems: mechanical, electronic, instrumentation, computer, etc. Maintenance of such costly and delicate equipment is a challenge to the maintenance groups of industries. The present paper studies various challenges and potential solutions to improve the availability of such critical components and minimise the disruption in production activities. Optimum inventory, streamlined configuration, continuing education, visionary planning and dynamic leadership could be the best solutions to meet the challenges being faced in the maintenance management of critical components.

1. INTRODUCTION:
Maintenance is essentially an act of care or upkeep, and this not only includes fixing or mending broken parts and machines but also ensuring that such acts of repair renew or revive the equipment, that is, restore it to a sound or healthy state. At the same time, maintenance must include all activities taken to ensure prevention of failures and also prevention of deterioration of the equipment. The modern era has witnessed fast development of technology, resulting in multi-functional complex equipment. Complex equipment means complex configurations and systems, and is a very intricate business, as the term suggests. One piece of equipment can have from 500 to over 1,000,000 components and require 200 to 500 steps to assemble. Complex equipment manufacturing companies tend to be low-volume businesses with a large product mix.

Maintenance management today faces a myriad of challenges when trying to develop and implement strategies for successfully maintaining critical production equipment. As plants implement advanced manufacturing practices such as lean and quick response manufacturing, machines must be ready to run when production needs them. Traditional approaches to these challenges have included hiring a new maintenance manager, buying some training for the maintenance crew, buying a new Computerized Maintenance Management System (CMMS), developing a preventive maintenance program, or one of many other approaches aimed at getting more productivity from the maintenance organization. All these approaches need more reinforcement to become foolproof against the various challenges posed in the maintenance of critical components. The good news is that management now has more solutions available to help address complex maintenance challenges.

2. CHALLENGES

2.1 Maintenance of a Large Variety of Equipment
Over the past twenty years, maintenance has changed, perhaps more so than any other management discipline. The changes are due to a huge increase in the number and variety of physical assets (plant, equipment, machinery, buildings and facilities) which must be maintained throughout the world, more complex designs, new maintenance techniques and changing views on maintenance organization and responsibilities.

2.2 Shortening of Technology Age
Rapid growth in research shortens the technology age, making old technology redundant or obsolete. This puts a lot of pressure on manufacturers to maintain a long inventory of parts for both old and new equipment.

2.3 Downtime
Downtime has always affected the productive capability of physical assets by reducing output, increasing
operating costs and interfering with customer service. In manufacturing, the effects of downtime are being
aggravated by the worldwide move towards just-in-time systems, where reduced stocks of work-in-progress
mean that quite small breakdowns are now much more likely to stop a whole plant.
2.4 Automation
Greater automation means that more failures affect our ability to sustain satisfactory quality standards. This
applies as much to standards of service as it does to product quality. Equipment failures can affect climate
control in buildings and the punctuality of transport networks as much as they can interfere with the consistent
achievement of specified tolerances in manufacturing.

2.5 Safety and the environment
More and more failures have serious safety or environmental consequences, as witnessed in the newspapers (train accidents, oil/gas leakages in industry, etc.), at a time when standards in these areas are rising rapidly. This adds an order of magnitude to our dependence on the integrity of our physical assets - one which goes beyond cost and which becomes a simple matter of organizational survival.

2.6 Higher costs
At the same time as our dependence on physical assets is growing, so too is their cost - to operate and to own.
To secure the maximum return on the investment which they represent, they must be kept working efficiently
for as long as we want them to. Finally, the cost of maintenance itself is still rising, in absolute terms and as a
proportion of total expenditure. In some industries, it is now the second highest or even the highest element of
operating costs.

2.7 New techniques
There has been explosive growth in new maintenance concepts and techniques. Hundreds have been developed over the past fifteen years, and more are emerging every week. The new developments include various maintenance techniques and decision support and analysis tools, such as hazard studies, failure modes and effects analysis and expert systems; understanding their efficacy and implementing them is a challenge for the maintenance manager.

2.8 Shortage of skilled Manpower
Shortage of skilled manpower is a universal problem. This reality, coupled with the fact that most apprentice programs have been shuttered and training programs curtailed, has created a shortage of maintenance technicians that is nearing critical proportions. Finding, training and retaining skilled maintenance people is one of the top challenges facing maintenance organizations today.

2.9 Dynamic leadership
Effective maintenance organizations must have dynamic leadership that is able to plan, both strategically and tactically. Maintenance leadership must be able to convince the team that they need to think and work differently than they have in the past. Maintenance needs leadership that is driven by results, not activity. Where this leadership is going to come from is a serious question for all of us.

2.10 Procurement and management of maintenance supplies
Most manufacturing companies view spare parts as very expensive, without substitutes, difficult to manage, and usually way out of control. Maintenance parts and supplies constitute up to 60% of maintenance spending, yet most plants do not have effective plans to reduce the cost and number of parts that are consumed.

2.11 Techniques and technology
Maintenance today is far more a technology-based activity than a repair activity, with a need for far greater emphasis on predicting and forecasting maintenance needs. No plant can perform effective maintenance without a Computerized Maintenance Management System (CMMS). If we cannot measure maintenance, we will not be able to improve maintenance; it is that simple. With the remote monitoring capabilities of today's controls, catastrophic failure can be a thing of the past. Using a CMMS, planning, execution, feedback and improvement have become easy.
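A minimal sketch of what "measuring maintenance" looks like in practice, using the standard MTBF/MTTR/availability metrics a CMMS tracks (the event counts and times below are invented for illustration, not figures from this paper):

# Core metrics a CMMS tracks so that maintenance can be measured and improved:
# MTBF, MTTR and availability. Event data are invented for illustration only.

failures = 4                 # breakdowns observed in the period
total_uptime_h = 1910.0      # cumulative running time, hours
total_repair_h = 26.0        # cumulative repair (down) time, hours

mtbf = total_uptime_h / failures            # mean time between failures
mttr = total_repair_h / failures            # mean time to repair
availability = mtbf / (mtbf + mttr)         # steady-state availability

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
print(f"Availability = {availability:.1%}")   # ~98.7%

Tracking these few numbers over time is what lets a maintenance manager show whether interventions actually improve equipment availability.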

3 SOLUTIONS

3.1 Streamlining the configuration process
A piece of complex equipment may have over 1,000,000 parts, and configuring a system with so many parts is very difficult. It is hard even to capture the customer's requirements correctly: for any piece of complex equipment, customers must provide detailed technical specifications, and the specification process introduces many errors. For example, some parts may be incompatible, the configuration may not meet the performance requirements, parts may be missing, or parts may not fit. Streamlining the configuration process leads to a reduction in expenses such as staff and maintenance costs.

3.2 Minimise disruption in production
Sometimes a machine that is running at peak performance is disrupted for scheduled preventive maintenance; the snag is that it may then take days to get the machine running properly again. Making maintenance less disruptive by having a production list of problem areas on which to focus, having a checklist of quality and functionality issues, and getting an official sign-off from the production, maintenance and quality assurance departments before a machine goes back on line are the best approaches to minimise production stoppage.

3.3 OEM Tools and Training Inventory
Some equipment is so complex that it takes special tools to set up or take apart, and the cost of owning these tools can be prohibitive for some companies. After an honest appraisal of in-house expertise and abilities, negotiate preventive maintenance assistance, training and specialised tools into the equipment cost. Developing a good relationship with the original equipment manufacturer (OEM) becomes handy in dealing with the maintenance of critical and sophisticated machinery.

3.4 Optimum inventory
Optimum inventory improves the predictability of changes by holding optimal levels of safety stock and of the most commonly used configuration options. It uses risk-pooling methodology to determine these optimal levels of parts and sub-assemblies, especially when the company is holding a large amount of inventory higher up in the supply chain (closer to the OEM). Optimum inventory increases the ability to provide the right parts in real time and fosters good supply chain partner relationships; a simple illustration of the risk-pooling idea is sketched below.
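A minimal sketch of the textbook risk-pooling calculation for safety stock, under assumed figures (the service factor, lead time and demand data are invented for illustration, not prescribed by the paper):

# Textbook risk-pooling illustration: pooling demand from several depots into
# one central stock reduces the total safety stock needed. All figures invented.
import math

z = 1.65            # service factor for ~95% service level
lead_time_wk = 4    # replenishment lead time, weeks

# Weekly demand standard deviation for a spare part at three separate depots
sigmas = [12.0, 9.0, 15.0]

# Safety stock if each depot holds its own stock
separate = sum(z * s * math.sqrt(lead_time_wk) for s in sigmas)

# Risk pooling: one central stock faces the combined (independent) demand,
# whose std dev is the root-sum-square of the individual std devs
pooled_sigma = math.sqrt(sum(s ** 2 for s in sigmas))
pooled = z * pooled_sigma * math.sqrt(lead_time_wk)

print(f"Safety stock, separate depots:  {separate:.0f} units")   # ~119
print(f"Safety stock, pooled centrally: {pooled:.0f} units")     # ~70

The same service level is achieved with markedly less stock, which is why holding inventory higher up the supply chain pays off for low-volume, high-mix spares.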

3.5 Synchronizing production with demand
Manufacturing should provide a lean approach by driving production only with actual customer orders instead of with work orders: parts should be assembled at the same rate at which the customer is buying. In addition, companies can sequence the production of families of assemblies (or multiple models) on the same production line in a way that reduces queues between stages or stations, ensuring that production flows smoothly. Since complex equipment involves such a high degree of customization, two models are rarely the same; these capabilities let complex equipment companies use the same manufacturing line to produce multiple models.
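A minimal illustration of pacing assembly to actual demand via the standard takt-time formula (the shift and demand figures are invented for illustration):

# Takt time: the rate at which the customer is buying, expressed as the time
# each station may spend per unit. Figures are invented for illustration.

available_min_per_day = 450        # one shift minus breaks, minutes
customer_demand_per_day = 90       # units actually ordered per day

takt_min = available_min_per_day / customer_demand_per_day
print(f"Takt time = {takt_min:.1f} min per unit")   # 5.0 min: pace every station to this

Driving the line from orders rather than work orders means recomputing this pace as demand changes, instead of building to a fixed schedule.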

3.6 Availability of spare parts
A shortage of spare parts reduces the availability of equipment. Industries should discuss spare parts with their equipment manufacturers, both to determine the specific parts that may be needed for day-to-day maintenance and those to have on hand when it is time for preventive maintenance. Purchases can be made contingent on the OEM providing quick turnaround and support for parts replacement. Further, knowledge of alternative sources and part replacement options assures uninterrupted maintenance.

3.7 Maintaining Schedule
Preventive maintenance should be the highest priority for technicians, but in reality breakdowns and emergencies tend to take precedence. When this happens, maintenance personnel resemble first responders. The problem is that this is a Catch-22 in which a company can lose its proactive position on maintenance. High-quality preventive maintenance programs do not begin working overnight; it takes time for them to make a difference. It is said that if you don't take care of preventive maintenance, you are only going to have more emergencies down the line.


3.8 Updating of knowledge and computer Savvy
Today's packaging arena is rife with servos, programmable logic controllers (PLCs) and personal computers.
Maintenance personnel without a fundamental knowledge of these and other systems are at a serious
disadvantage. Education should not be an option for mechanics; it should be mandatory. It's the company's
responsibility to budget and make time available for mechanics to attend classes to keep up with technology. It
will benefit both the employee and the company if the mechanics have new or refined skills. A mechanic with
just mechanical skills will not be as valuable as a mechanic with electrical, mechanical and PLC skills.
3.9 Avoiding Achilles' Heels
Inspection equipment such as weight-measuring systems and metal detectors is prone to a number of well-known Achilles' heels that maintenance professionals need to protect against: moving parts easily get out of alignment or are overly sensitive to changes in product pacing and/or repetition rates, vibration, dust, electro-magnetic fields and certain packaging materials. In the case of product-inspection systems, it is worthwhile to consider a good old-fashioned root-cause analysis: if you examine the typical causes of failure of typical legacy systems, the result tends to be a relatively small number of problems that keep coming up.

3.10 Need For Continuing Education
A refresher maintenance training course should be written into the terms and conditions for the procurement of the equipment, and should be spelled out in the quote as a separate line item. If this is not written into the terms, it can be very expensive, and it can be hard to get the vendor to come out and provide the follow-up training. Sometimes management does not understand the complexities of the equipment, and does not understand why their maintenance people cannot resolve the problems quickly and efficiently.

3.11 Building A Better Maintenance Manager
Denise A. Holloman, director of manufacturing engineering at General Mills, says an evolution is underway at larger corporations that is transforming maintenance managers into employees who are well versed at articulating the technical and business nuances of their department. The maintenance manager now must be able to clearly illustrate the cost/benefit ratio for his or her area, and upper management must view maintenance, and the department's employees, as an important business component. Companies should recruit maintenance managers who have potential for advancement, Holloman says. It is not about assembling a group of people to do a job; it is about having a customer-focused, results-oriented organization aligned with delivering sound business results.

CONCLUSIONS
With design-to-market cycles ever more compressed, industry leaders are struggling to find ways to meet various challenges: maintenance cost, knowledge gaps, rapid development in technology, processes and tools, and the increasing complexity of equipment, machinery and facilities. Managing configuration complexities, improving the flexibility of supply chains, optimally servicing the installed base, imparting requisite training followed by refresher courses, building computer savvy, maintaining better relations with OEMs, indigenization of spares, tools and testers, and dynamic leadership are the best approaches to the various challenges being faced by maintenance teams.

REFERENCES:
[1] Bikash Bhadury, Developments in the practice of maintenance management, Productivity, Vol.41, No.4, Jan-Mar 2001.
[2] Jay Goyal et al., Challenges to complex equipment manufacturers: Managing complexity, delivering flexibility, and providing optimal service, Oracle Corporation World Headquarters, 500 Oracle Parkway, 2006.
[3] Joe Barkai, Configuration-based maintenance repair and overhaul of complex equipment, White Paper, Manufacturing Insights Opinion, 2007.
[4] Gopalakrishanan K. and Banerji A.K., Maintenance and Spare Parts Management, PHI Learning Private Limited, edition 2009.
[5] Mishra R.C., Reliability and Maintenance Engineering, New Age International Publishers, edition 2008.
A COMPARATIVE STUDY OF NATURAL FIBRE COMPOSITES AND
MAN-MADE SYNTHETIC FIBRE (GLASS FIBRE) COMPOSITES

Gurmeet Singh Kalra1, Prem Singh2

1Associate Professor, 2Professor
Mechanical Engineering Department, Graphic Era University, Dehradun, Uttarakhand, India
1gurmeetskalra@gmail.com


ABSTRACT

The use of natural fibres as reinforcement in polymeric composites for technical applications has been a
research subject of scientists during the last decade. There is a great interest in the application of natural fibres
as substitutes for man-made synthetic fibres, motivated by potential advantages of weight saving, relative high
strength and stiffness, lower raw material price, and ecological advantages of using green resources which are
renewable and biodegradable. Products made of natural fibres offer environmental advantages; they require
less energy during manufacture and are relatively easy to recycle after the end of their useful life. In this paper
we present a comparative study of eco-friendly natural fibre composites and man-made synthetic (Glass fibre)
composites which encourages the use of natural fibres composites over synthetic composites. The objective of
this paper is to discuss and encourage the use of natural fibre composites in various industrial applications. In
this paper we are comparing the natural fibre composites and glass fibre composites on the basis of mechanical
properties, energy consumption and environmental aspects.


INTRODUCTION

Till the last decade or so, glass fibres were used as reinforcement in composites due to their low cost as compared to aramid and carbon fibres, and their fairly good mechanical properties. But ecological concern and global warming have initiated considerable interest in using natural fibres to produce green products, since the carbon dioxide neutrality of natural fibres is attractive. Burning of substances derived from petroleum (e.g. synthetic fibres) releases enormous amounts of carbon dioxide into the atmosphere, which is the root cause of the greenhouse effect and the world's climatic changes. Since the last decade, natural fibre composites have been emerging as a realistic alternative to glass fibre composites in many applications such as the automotive, construction, leisure and sporting industries because of their lower cost and lower density. The ecological characteristics, biodegradability, low cost, safe fibre handling, non-abrasive nature, low energy consumption, high specific properties, low density and wide variety of fibre types are very important factors for the acceptance of natural fibre composites in the automobile and construction industries. Some other potential fields for natural fibre composites are door and instrument panels, package trays, glove boxes, arm rests and seat backs. Natural fibres are recyclable and environmentally friendly due to less dependence on non-renewable energy/material resources, lower pollutant emissions, lower greenhouse gas emissions and more energy recovery. They are non-abrasive to process equipment and can be incinerated at the end of their life cycle for energy recovery, as they possess a good deal of calorific value.
Natural fibres are safer during handling and less suspected of affecting the lungs during processing and use. On the other hand, glass fibre composites consume extensive energy and cause health risks during production and handling. Glass fibre composites are non-recyclable and abrasive to process equipment. Their incineration generates a clinker-like mass that is hard to dispose of except by landfilling. The objective of this paper is to encourage the use of natural fibre composites by comparing them with glass fibre composites.
MECHANICAL PROPERTIES OF NATURAL FIBRE COMPOSITES VS. GLASS
FIBRE COMPOSITES

Paul Wambua, Jan Ivens and Ignaas Verpoest in their work tested and compared the mechanical properties of different natural fibre composites. Kenaf, hemp and sisal composites showed comparable tensile strength and modulus results, but in impact properties hemp appears to out-perform kenaf. The tensile modulus, impact strength and ultimate tensile stress of kenaf-reinforced polypropylene composites were found to increase with increasing fibre weight fraction. Coir fibre composites displayed the lowest mechanical properties, but their impact strength was higher than that of jute and kenaf composites. In most cases the specific properties of the natural fibre composites were found to compare favourably with those of glass.
The mechanical properties of sisal, hemp, coir, kenaf and jute reinforced polypropylene composites have been investigated. The tensile strength and modulus increase with increasing fibre volume fraction. Among all the fibre composites tested, coir-reinforced polypropylene composites registered the lowest mechanical properties whereas hemp composites showed the highest. However, coir composites displayed higher impact strength than jute and kenaf composites.
The mechanical properties of the natural fibre composites tested were found to compare favourably with the corresponding properties of glass mat polypropylene composites. The specific properties of the natural fibre composites were in some cases better than those of glass. This suggests that natural fibre composites have the potential to replace glass in many applications that do not require very high load-bearing capabilities.
The table below compares the mechanical properties of different natural fibres to those of E-glass.
Properties of natural fibres in relation to those of E-glass:

Property                     E-glass   Hemp      Jute      Ramie    Coir     Sisal     Flax       Cotton
Density (g/cm3)              2.55      1.48      1.46      1.5      1.25     1.33      1.4        1.51
Tensile strength (MPa)       2400      550-900   400-800   500      220      600-700   800-1500   400
E modulus (GPa)              73        70        10-30     44       6        38        60-80      12
Specific modulus (E/d)       29        47        7-21      29       5        29        26-46      8
Elongation at failure (%)    3         1.6       1.8       2        15-25    2-3       1.2-1.6    3-10
Moisture absorption (%)      -         8         12        12-17    10       11        7          8.25
The tensile strength of flax, sisal, jute and hemp falls between 600 and 800 MPa, which is much higher than that of other natural fibres. In terms of density, natural fibres weigh about 40% less than glass fibres, giving them a competitive advantage in automobile applications. It is shown that the stiffness of flax fibre composites is comparable or even better, whereas their flexural and tensile strength properties are slightly lower. However, the impact strength of flax fibre composites is only 3-7 kJ/m² as compared to 40 kJ/m² for glass fibre composites.
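The "Specific modulus (E/d)" column of the table above is simply the tensile modulus divided by the density; the short sketch below recomputes it for a few fibres (range-valued moduli are taken at their midpoints, an assumption of this illustration):

# Quick check of the "Specific modulus (E/d)" column of the fibre property
# table: specific stiffness = tensile modulus (GPa) / density (g/cm^3).
# Range-valued moduli are represented by their midpoints here.

fibres = {
    # name: (density g/cm^3, E modulus GPa)
    "E-glass": (2.55, 73),
    "Hemp":    (1.48, 70),
    "Ramie":   (1.50, 44),
    "Sisal":   (1.33, 38),
    "Flax":    (1.40, 70),   # midpoint of 60-80 GPa
}

for name, (density, modulus) in fibres.items():
    print(f"{name:8s} E/d = {modulus / density:5.1f}")
# Hemp and flax match or exceed E-glass (29) on a per-weight basis.

This per-weight comparison is the basis for the claim that natural fibres can compete with glass despite their lower absolute modulus.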

Energy Savings:

Energy consumption comparison: hemp vs. glass fibre composites. The table below gives the overall energy consumption schedule for Natural fibre Mat Thermoplastic (NMT) and Glass fibre Mat Thermoplastic (GMT), per metric ton.

Quantity (1 metric ton)       NMT (65% fibre), MJ                GMT (30% fibre), MJ
1 Materials                   Hemp cultivation: 1340             Glass fibre production: 14500
                              PP production: 35350               PP production: 70700
                              Total: 36690                       Total: 85200
2 Production                  Composite: 11200                   Composite: 11200
3 Incineration
  PP incineration             Energy required: 117               Energy required: 234
                              Energy released: -7630             Energy released: -15260
  Fibre incineration          Hemp: energy required 1108         Glass: energy required 516
                              Energy released: -10650
  Net                         -17055                             -14510
4 Balance                     Gross energy required: 49115       Gross energy required: 97150
                              Energy released: -18222            Energy released: -15260
                              Net energy required: 30800         Net energy required: 81890
In the table above, a detailed comparison of energy consumption between hemp-based and glass fibre composites is shown. The main reference in this regard is the work of Corbiere-Nicollier et al. (2001), based on a life cycle assessment of biofibres replacing glass fibres in plastics.

From the table it is evident that natural fibre composites consume only about 37% of the energy of glass fibre composites over their entire life cycle; in other words, for the same amount of product, savings of over 60% are achieved. The low savings in terms of incineration can be attributed to the high incineration value of PP compared to hemp fibre, as shown in the table. The net saving of around 50,000 MJ by using 65% hemp fibres instead of 30% glass fibres in a thermoplastic matrix not only saves non-renewable fossil fuels to a great extent but also helps in reducing the CO2 level in the atmosphere.
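The headline figures can be checked directly from the net energy rows of the table above:

# Recomputing the headline numbers from the energy table (values in MJ per
# metric ton of composite, taken directly from the table).
nmt_net = 30800.0   # net life-cycle energy, 65% hemp fibre NMT
gmt_net = 81890.0   # net life-cycle energy, 30% glass fibre GMT

ratio = nmt_net / gmt_net
saving = gmt_net - nmt_net

print(f"NMT uses {ratio:.0%} of the energy of GMT")     # ~38%
print(f"Net saving: {saving:.0f} MJ per metric ton")    # ~51,000 MJ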


ENVIRONMENTAL IMPACT CONSIDERATIONS:

It has already been pointed out that the use of natural-organic fillers allows obtaining several environmental advantages in comparison to mineral-inorganic counterparts. The environmental impact, in fact, can be improved mainly due to the reduction in the use of fossil-based resources (especially petroleum). A detailed paper has been written by Joshi et al. collecting and discussing some LCA studies on composites filled with, respectively, natural-organic fillers (NFR) and glass fibers (GFR). Joshi then proposes a generalization of the results, identifying and presenting four interesting indicators of the superior relative environmental performance of NFR composites in comparison to GFR ones. These can be summarized as follows:

(A) Environmental impacts of natural fiber vs. glass fiber production: NF wins because of lower energy consumption (solar energy vs. thermal energy from fossil fuels) and lower emissions (except for nitrates). Table A below shows the estimated life cycle non-renewable energy requirements for the production of glass fiber and two natural fibres. As can be seen, glass fiber production requires 5-10 times more non-renewable energy than natural fiber production. As a result, the pollutant emissions from glass fiber production are significantly higher than from natural fibre production. Columns 2 and 3 of Table B tabulate the environmental impacts of the glass fiber production and china reed fiber production processes. Except for nitrate emissions associated with fertilizer use in china reed production, all other emissions are much lower for natural fibers. Increased nitrate emissions can lead to eutrophication of water bodies, which is a significant water quality problem in many areas. However, Corbiere et al. find that life cycle eutrophication impacts of NFR composites are lower than the life cycle eutrophication effects of GFR composites when they include the contribution of atmospheric NOx emissions to eutrophication. These observations are likely to be valid across different natural fibers, since their production processes are very similar. Hence substitution of glass fibers by natural fibers of equal weight normally improves the environmental performance of the component, with the possible exception of local eutrophication effects.




Table A: Non-renewable energy requirement for production of different fibres (MJ/kg)

Glass fibre mat           Flax fibre mat             China reed fibre
Raw material      1.7     Seed production    0.05    Cultivation        2.50
Mixture           1.0     Fertilizers        1.0     Transport plant    0.40
Transport         1.6     Transport          0.9     Fibre extraction   0.08
Melting          21.5     Cultivation        2.0     Fibre grinding     0.40
Spinning          5.9     Fibre separation   2.7     Transport fibre    0.26
Mat production   23.0     Mat production     2.9
TOTAL            54.7     TOTAL              9.55    TOTAL              3.64




Table B: Life cycle environmental impacts from production of glass fibre, china reed fibre, epoxy resin, ABS and PP

Environmental impact           Glass fibre   China reed fibre   Epoxy resin    ABS      Polypropylene
Energy use (MJ/kg)                48.33           3.64            140.71       95.02        77.19
CO2 emissions (kg/kg)              2.04           0.66              5.90        3.1          1.85
CO emissions (g/kg)                0.80           0.44              2.2         3.8          0.72
SOx emissions (g/kg)               8.79           1.23             19          10           12.94
NOx emissions (g/kg)               2.93           1.07             35          11            9.57
Particulate matter (g/kg)          1.04           0.24             15           2.9          1.48
BOD to water (mg/kg)               1.75           0.36           1200          33           33.94
COD to water (mg/kg)              18.81           2.27          51000        2200          178.92
Nitrates to water (mg/kg)         14           24481                1          71           18.78
Phosphates to water (mg/kg)       43.06         233.6              220         120           3.39

(B) Replacement of base polymer by a higher volume fraction of natural fibre: To obtain comparable mechanical properties, higher volume fractions of natural fibre are used, which reduces the amount of oil-derived polymer needed. NFR components typically have a higher fibre volume fraction than GFR components of equivalent strength and stiffness, because glass fibres have better mechanical properties than natural fibres. This higher fibre volume fraction reduces the volume and weight fraction of the base polymer matrix used in the composite. The life cycle energy use and emissions from the production of most base polymers used in composites are significantly higher than those associated with natural fibre production. For example, columns 4-6 of Table B show estimates of the life cycle energy use and emissions from the production of 1 kg of epoxy resin, ABS and PP, extracted from APME eco-profiles. Comparing these to the life cycle emissions associated with 1 kg of china reed fibre production (column 3), it is obvious that the energy use and emissions associated with base polymer production are significantly higher. For example, PP production requires about 20 times more energy than natural fibre production, and the emissions are correspondingly higher. These observations are valid across most natural fibres and base polymers. Hence substituting a higher natural fibre fraction for base polymer improves the environmental performance of NFR composites compared to equivalent GFR composites.

(C) Reduced emissions during the service life of the product, due to lower weight: NFR composites have lower specific weights than GFR composites, which reduces energy consumption both directly (e.g. in composites used for automotive applications) and indirectly (lower emissions from product transport and delivery). The higher volume fraction of lower-density natural fibres in NFR composites also reduces the weight of the final component. Table C shows the weights of equivalent GFR and NFR components from three studies; NFR components give a 20-30% reduction in weight. Natural fibre composites are becoming popular in automotive applications because of this weight reduction: lighter components improve fuel efficiency and in turn significantly lower emissions during the use phase of the component life cycle (a rough estimate of this use-phase saving is sketched after Table C).

Table C: Weight reduction with natural fibre composites

Component               Source   Conventional         Weight (gm) of        NFR             Weight (gm) of   Weight
                        study    composite material   reference component   material        NFR component    reduction (%)
Auto side panel         [7]      ABS                   1125                 Hemp-Epoxy        820            27
Auto insulation panel   [5]      Glass fibre-PP        3500                 Hemp-PP          2600            26
Transport pallet        [4]      Glass fibre-PP       15000                 China reed-PP   11770            22
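
As a rough illustration of the use-phase saving claimed in point (C), the sketch below estimates the lifetime fuel-energy saving for one of the Table C components. The fuel-reduction sensitivity, fuel energy content and vehicle life are assumed illustrative values, not figures from this paper or the cited studies.

```python
# Hypothetical use-phase saving from component light-weighting (point C).
# All three constants below are ASSUMED illustrative values, not data
# from this paper or from the studies in Table C.

FUEL_L_PER_100KM_PER_100KG = 0.4   # litres fuel saved per 100 km per 100 kg (assumed)
FUEL_ENERGY_MJ_PER_L = 32.0        # approximate energy content of petrol (assumed)
LIFETIME_KM = 200_000              # assumed vehicle service life

def use_phase_saving_mj(ref_weight_g, nfr_weight_g):
    """Lifetime fuel-energy saving (MJ) from one lighter component."""
    saved_kg = (ref_weight_g - nfr_weight_g) / 1000.0
    litres = FUEL_L_PER_100KM_PER_100KG * (saved_kg / 100.0) * (LIFETIME_KM / 100.0)
    return litres * FUEL_ENERGY_MJ_PER_L

# Auto insulation panel from Table C: 3500 g (GF-PP) vs 2600 g (hemp-PP)
print("%.0f MJ saved over the vehicle life" % use_phase_saving_mj(3500, 2600))
# -> about 230 MJ for this single 0.9 kg saving
```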

(D) Carbon and energy credits from natural fibre incineration: Unlike glass fibres, natural fibres can be conveniently incinerated at the end of the product's service life, with the double advantage of energy recovery and no net additional CO2 emission, since the CO2 released during combustion is, in theory, the same as that taken up by the plant from the atmosphere during its growth. Hence incineration of NFR composites leads to positive carbon credits and a lower global warming effect.

CONCLUSION:

Though the mechanical properties of natural fibres are much lower than those of glass fibres, their specific properties, particularly stiffness, are comparable to those of glass fibres. Natural fibre composites are becoming popular in industrial applications that do not require high mechanical resistance but demand low purchasing and maintenance costs. The various studies reviewed also show that natural fibre composites are environmentally superior to glass fibre composites in most applications, except for higher nitrate and phosphate emissions, which can lead to increased eutrophication in local water bodies. Full biodegradability of natural fibre composites can be obtained by replacing traditional polymer matrices with biodegradable ones, which would further improve their environmental impact. Research is also ongoing to enhance the mechanical properties and overcome the other limitations of natural fibre composites.

REFERENCES:
[1] Callister WD. Materials Science and Engineering - An Introduction. 5th ed. Wiley, 2000.
[2] Corbiere-Nicollier T, Laban BG, Lundquist L, Leterrier Y, Manson JAE, Jolliet O. Life cycle
assessment of biofibres replacing glass fibres as reinforcement in plastics. Resources, Conservation
and Recycling 2001;33:267-87.
[3] Schmidt WP, Beyer HM. Life cycle study on a natural fibre reinforced component. SAE Technical
Paper 982195. SAE Total Life-cycle Conf., Graz, Austria; December 1-3, 1998.
[4] Wotzel K, Wirth R, Flake R. Life cycle studies on hemp fibre reinforced components and ABS for
automotive parts. Angew Makromol Chem 1999;272(4673):121-7.
[5] Corbiere-Nicollier T, Laban B, Lundquist L, Leterrier Y, Manson J, Jolliet O. Life cycle assessment of
biofibres replacing glass fibres as reinforcement in plastics. Resources, Conservation and Recycling
2001;33:267-87.
[6] Garkhail SK, Heijenrath RWH, Peijs T. Mechanical properties of natural-fibre-mat-reinforced
thermoplastics based on flax fibres and polypropylene. Applied Composite Materials 2000;7:351-72.
[7] Oksman K. Mechanical properties of natural fibre mat reinforced thermoplastic. Applied Composite
Materials 2000;7:403-14.
[8] Joshi SV, Drzal LT, Mohanty AK, Arora S. Are natural fiber composites environmentally superior
to glass fiber reinforced composites? Composites Part A: Applied Science and Manufacturing 2004;35:371-6.




ELECTRO PHOTOCHEMICAL MACHINING: A NEW
MANUFACTURING TECHNOLOGY

Amir Shaikh¹, Arbind Prasad²

¹Professor, ²Research Scholar, Department of Mechanical Engineering, Graphic Era University,
Dehradun-248002, Uttarakhand, India.
amir.shaikh7@gmail.com

ABSTRACT

Electro photochemical machining is a non-traditional process that has gained popularity owing to its ability to machine materials that are otherwise difficult to handle by traditional methods. Electro photochemical machining (often abbreviated as EPCM, and sometimes referred to as chemical milling or chemical etching) is a technique for manufacturing high-precision flat metal parts by electrochemically etching away the unwanted material. A photographically prepared mask is used to protect the metal that is to remain after the etching process. The method gives a good surface finish in addition to forming the pattern on the material surface, and it enables work on complex parts and thin materials. The experiment shows how EPCM, which is a hybrid of electrochemical and chemical machining, can be used to generate intricate designs on very thin metal foils with good surface finish and edge characteristics. Moreover, the process uses very innocuous chemicals. In this paper a titanium alloy is worked upon, and optimum etch rates are determined for samples subjected to different current densities.

Keywords: electro photochemical machining, etching, lithography, photoresist
1. INTRODUCTION

Electrochemical machining appears to have been first proposed in 1929, when a Russian, W. Gussef, filed a patent for an electrochemical machining process with many features almost identical to the process as now practised. An American, Burgess, demonstrated the possibility of the process in 1941. He drew attention to the striking differences between the mechanical and electrolytic methods of removing metal, the first being a method of 'brute force' compared with the cool, steady, non-deforming action of the electrolytic process. However, despite the attractions of the process, it was more than ten years before it was found possible to control the action well enough for industrial use. Practical electro photochemical machining, as distinct from electrochemical machining, probably had its birth in the United States when the Battelle Memorial Institute, sponsored by the Cleveland Twist Drill Company, developed an electrochemical method for sharpening carbide-tipped drills; a patent for this was filed (British Patent No. 854541) in 1954. Two main lines of approach have been followed in attempts to overcome the problems outlined above - thermal methods and chemical methods - both characterized by the fact that the rate at which metal can be removed is independent of the hardness of the workpiece. Electro photochemical machining is a hybrid of photochemical machining and electrochemical machining. It is a form of non-traditional machining in which the removal of metal is accomplished by electrochemical reactions. The desired pattern to be etched on the surface is formed by photolithography techniques, which include cleaning of the sample, preparation of the photographic mask, application of the photoresist to the workpiece, and generation of the pattern to be etched on exposure to suitable UV radiation. The workpiece is thus covered with the mask in such a way that only the portions to be etched remain exposed. The etching is done by electropolishing, an electrolytic method in which metal removal is achieved by electrochemical dissolution of an anodically polarized workpiece; the principle is the reverse of electroplating, and in addition to etching it is possible to obtain a good surface finish. The method is therefore popular for producing complex configurations in thin materials and for producing delicate parts that could easily be damaged by the forces of conventional cutting tools. It is also applied to high-strength, highly resistant alloys and materials that are difficult to cut by conventional methods.


2. ELECTROPHOTOCHEMICAL MACHINING PROCESS

EPCM includes mainly two processes -

a) Photolithography. b) Etching by Electro polishing.

2.1. Photolithography
Photolithography is the process of transferring geometric shapes on a mask to the surface of a substrate (classically a silicon wafer). The steps involved are cleaning, barrier layer formation, photoresist application, soft baking, mask alignment, exposure and development, and hard baking. Pattern formation by photolithography is based on applying a polymer film in the desired configuration to the metal or insulator film covering the substrate surface. The pattern of the polymer mask is repeated in the metal or insulator film by etching away the unprotected portions. The mask is generated using a photosensitive polymer material (photoresist), whose molecular structure and solubility change on irradiation with UV light. A photographic mask is required to confine the exposure to those regions where the solubility of the photoresist is to be changed. Development of the photographic pattern is followed by etching of the film material and subsequent removal of the polymer mask. Since photolithographic patterns are defined by light, configurations of much greater complexity and finer detail can be obtained.

2.2. Etching:
Electropolishing is an electrochemical process based on the reverse of electroplating.
The functions of an ideal polishing process, in addition to etching, can be distinguished as:
a) Smoothing by elimination of large-scale irregularities: this is achieved by the formation of a relatively thick, viscous layer of reaction products around the anode.
b) Brightening by removal of small irregularities: the formation of a thin film on the surface of the anode controls the brightening action.

2.3. Process requirements and factors affecting electro photochemical machining (EPCM):
The principal requirements of the electropolishing cell are as follows:
a) The anode connections should be constructed so that the specimen may be removed from the cell easily and quickly for washing and subsequent treatment. They should be made of a corrosion-resistant material, or else covered with an inert film, so that only the specimen is exposed to the solution.




Fig. 1 Vessel for electro photochemical attachment

b) Only the portion of the specimen to be polished should be in contact with the electrolyte; this can be done by covering the rest of the sample with paraffin wax or a layer of photoresist.
c) The position of the specimen with respect to the cathode should remain fixed during electrolysis, so that no unnecessary variation of the internal resistance of the cell occurs. Whether the workpiece is kept horizontal or vertical depends on the flow of the electrolyte in the system, as explained later in the paper.
d) The cathode should be as large as possible so that the deposit is distributed sparsely over the surface and the
danger of discrete particles leaving the cathode and interfering with the polishing process is reduced to a
minimum.
e) The temperature of the electrolyte should be kept constant during the process.
f) Cathode material should not react with the electrolyte.


2.4. Factors influencing Electro Photochemical Machining
a) Tool Material
b) Solution Preparation / choice of the Electrolyte
c) Temperature of the electrolyte
d) Agitation of the electrolyte
e) Initial preparation of the surface
f) Time of treatment
g) Size of Electro polishing system
h) Ventilation

3. EXPERIMENTATION

To determine the etch rate and surface finish, the following material is used:
Titanium: 50 µm thick
Width of the line of the mask = 370 µm.
Experimentation starts with pattern formation on the sample, using the mask shown in the figure below:


Fig. 2 Mask for EPCM
3.1. Pattern formation on the sample:
3.1.1. Cleaning:
The process of determining the etch rate starts with cleaning the sample. Cleaning trials with the laboratory ultrasonic cleaner gave the following results: distilled water with a small amount of soap left patches of soap on the sample; distilled water alone gave good results, but the sample was still not properly clean; washing powder cleaned well but left scratches on the surface of the sample. Final cleaning was therefore done in the ultrasonic cleaner (containing only water) for 20 minutes, with washing powder used for not more than 2 minutes. The cleaning step is very important and must not be underestimated; many other cleaning solvents are available and can be found in a good reference book. After cleaning, the sample must be kept in an oven at about 100 °C to remove any moisture.

3.1.2. Application of the Photo resist:
The photoresist is applied using the dip-and-draw technique. The sample is dipped in the photoresist bottle for about 2 minutes and then withdrawn with the help of a motor. The motor speed should be slow enough that the resist layer keeps drying as the sample comes out of the bottle.


3.1.3. Developing:
This is the most important part of the experimentation and must be carried out very carefully. Initially no results were obtained: the photoresist either left the sample completely or, when the baking time was increased, did not leave the sample at all. On analysis it was found that the resist had been stored for a long time and its viscosity had increased. The photoresist used was SC-180 (a negative photoresist), whose solvent is benzene. A controlled amount of benzene was therefore added to reduce the viscosity; the benzene added should not exceed 5-10 ml per 100 ml of resist.



Table 1 Results of the developing
The poor results in the lower part are due to the high speed of the motor, which prevents the sample from drying properly while coming out of the resist.

3.1.4. Post Baking: The post-baking temperature was kept at 160 °C, and the post-baking time was:
a) 1-1.5 hrs if the etching is to be done at room temperature
b) up to 2 hrs if the etching is to be done at temperatures greater than 80 °C

3.2. ETCHING
3.2.1. Suitable cathode material: In both cases stainless steel was used as the cathode material.
3.2.2. Support to hold the electrodes in the electrolyte:
This was made of polycarbonate, a material with good thermal stability that is inert to chemicals. It can hold a workpiece 6 cm long and 3.1 cm wide. The support must be such that there is no variation in the electrode gap during electrolysis. It is shown in Fig. 3.



Fig 3
The sample is held in the support with nuts and bolts: the sample (anode) and the cathode are tightened with screws to the support, and the whole assembly is immersed in the electrolyte. This type of design produces good results.


3.2.3. Flow of the electrolyte:
The flow of the electrolyte is very important, as it must carry away the material removed by electrolysis. The alignment of the electrodes basically depends on the motion of the electrolyte in the beaker; the electrolyte should move so that its speed is nearly constant over the whole plane of the sample. With a magnetic stirrer, etching is done in a beaker kept on a plate, with a magnet below stirring the electrolyte, as shown in Fig. 4.



Fig 4 set-up of the flow of the electrolyte

The results produced are shown in Fig. 5; the difference in line thickness between the upper and lower parts of the sample can be easily seen.



Fig 5 Difference in line thickness between the upper and lower parts


The line width in the lower part of the sample is greater than in the upper part.



Fig 6 Variation of line width with distance along the line (horizontal etching)


Fig 7 Variation of line width with distance along the line (vertical etching)

It can be easily seen that there is a large variation in line width in the case of vertical etching. Keeping the sample horizontal gives good results, but other problems arise: bubbles produced during etching stick to the surface of the anode and locally hinder the cutting process, resulting in uneven cutting. A sample showing the problem caused by bubbles sticking to the surface is shown in Fig. 8.



Fig 8 Effect on the line due to bubbles



A large enclosure of around 12-15 litres capacity was therefore designed, with a sump/reservoir and an etching chamber. The electrolyte is pumped up to the etching chamber and from there returns to the sump; in between, a filter cleans the electrolyte during the process. A heater and thermometer are provided in the sump. The final diagram of the system is shown below.




Fig 10 Schematic diagram of laboratory set-up for EPCM

4. RESULTS AND DISCUSSION

4.1. Determination of etch rate of titanium (thickness 50 µm):
Anode material: titanium with pattern
Cathode material: stainless steel
Electrolyte: 5% H2SO4, 85% methanol, 10% HCl
Area: 0.7 cm²

Table 2 Etching results at different currents

From the readings it can be seen that good results were obtained for titanium at current densities of around 1 A/cm². The main problem encountered was unexpected changes in the current during the process, so a constant watch had to be kept on the current level. It is very important to note that the sample momentarily required a very high current (approximately five times the working current) to start the reaction; otherwise the electrolysis process would not start at all. From the graph it can be seen that, although there are deviations at the ends, the results are good in the central part of the line, with a variation of ±2 µm. There is scope for further optimization of the process if all the process variables are properly controlled for titanium.
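
The measured etch rates can be sanity-checked against the theoretical electrochemical removal rate from Faraday's law. The sketch below is a first-order estimate that assumes titanium dissolves as Ti(4+) at 100% current efficiency; both are idealizations not stated in the paper, and real rates will be lower.

```python
# First-order etch-rate estimate for titanium from Faraday's law.
# Assumes Ti dissolves as Ti(4+) at 100% current efficiency (idealized).

F = 96485.0     # Faraday constant, C/mol
M_TI = 47.87    # molar mass of titanium, g/mol
Z = 4           # electrons per dissolved Ti atom (assumed valence)
RHO_TI = 4.51   # density of titanium, g/cm^3

def etch_rate_um_per_min(current_density_a_cm2):
    """Linear etch rate (micrometres/minute) at a given current density."""
    mass_rate = current_density_a_cm2 * M_TI / (Z * F)   # g/(s*cm^2)
    depth_rate_cm_s = mass_rate / RHO_TI                 # cm/s
    return depth_rate_cm_s * 1e4 * 60                    # um/min

# At ~1 A/cm^2, as used in the experiment:
print("%.1f um/min" % etch_rate_um_per_min(1.0))   # ~16.5 um/min
```

At this ideal rate a 50 µm foil would be cut through in roughly three minutes at 1 A/cm²; masking and efficiency losses make the practical time longer.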



Fig 11 Graphs showing variation in line width along the line

5. CONCLUSIONS

Electro photochemical machining was used to determine the etch rate and edge characteristics of titanium, and this was done successfully. The work shows how EPCM, which is a hybrid of electrochemical and chemical machining, can be used to generate intricate designs on very thin metal foils with good surface finish and edge characteristics. Moreover, the process uses very innocuous chemicals.
The process has wide applications, as it is suitable for achieving very precise dimensions and for making very intricate designs on thin films. Since EPCM does not depend on the hardness of the material, it can be used on hard metals to make small precision parts, such as emitter contacts, out of titanium. The process can also be used to form a number of surgical devices, including intravascular stents.





AN OVERVIEW OF RAPID PROTOTYPING TECHNIQUES AND ITS
APPLICATION FOR THE MANUFACTURING INDUSTRY

Sumit Ganguly¹, Khem Chand Arora²

Mechanical Engineering Department, Lingayas University, Faridabad
¹sumitgang@rediffmail.com, ²khemchand.arora@rediffmail.com

ABSTRACT

Rapid Prototyping (RP) can be defined as a group of techniques that can be used to automatically
manufacture physical objects using additive manufacturing technology. RP techniques can also be used to
quickly fabricate a scale model of a part or assembly using three-dimensional computer aided design (CAD)
data.
Rapid prototyping has also been referred to as solid freeform manufacturing, computer automated
manufacturing, and layered manufacturing. Since their introduction in the late 1980s, RP techniques have been
used extensively to make models and prototype parts. For example, RP models can be used for testing, such as
when an airfoil shape is put into a wind tunnel to study its aerodynamic properties. RP models can be used to
create male models for tooling, such as silicone rubber molds and investment casts. In some cases, the RP part
can be used to manufacture production-quality parts in relatively small numbers, if the RP material is strong or
accurate enough.
A large number of RP methodologies are either in development or used by small groups of
individuals. The current paper discusses techniques that are commercially available, such as
Stereolithography (SLA), Selective Laser Sintering (SLS), Direct Metal Laser Sintering (DMLS), Laminated
Object Manufacturing (LOM), Fused Deposition Modeling (FDM), Electron Beam Melting (EBM), Solid
Ground Curing (SGC), and 3D Printing (3DP).

Keywords: Rapid Prototyping, Stereolithography, Selective Laser Sintering, Direct Metal Laser Sintering,
Laminated Object Manufacturing, Fused Deposition Modeling, Electron beam melting, Solid Ground Curing,
and 3D printing techniques.
INTRODUCTION
Rapid prototyping involves the automatic construction of physical objects using additive manufacturing
technology. The first techniques for rapid prototyping became available in the late 1980s. They were used to
produce models and prototype parts. Today, they are used for a much wider range of applications and are even
used to manufacture quality parts for production in relatively small numbers.
The use of additive manufacturing for rapid prototyping takes virtual designs from the computer aided
design (CAD) or animation modeling software, transforms them into thin, virtual, horizontal cross-sections and
then creates successive layers until the model is complete. It is a WYSIWYG process where the virtual model
and the physical model are almost identical.
With additive manufacturing, the machine reads in data from a CAD drawing and lays down successive
layers of liquid, powder, or sheet material, and in this way builds up the model from a series of cross sections.
These layers, which correspond to the virtual cross section from the CAD model, are joined together or fused
automatically to create the final shape. The primary advantage to additive fabrication is its ability to create
almost any shape or geometric feature.

The standard data interface between CAD software and the machines is the STL file format. An STL
file approximates the shape of a part or assembly using triangular facets. Smaller facets produce a higher quality
surface.
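
To make the STL description concrete, the following minimal sketch writes a single triangular facet in the ASCII STL format; real CAD exporters tessellate the whole surface into thousands of such facets (often in the more compact binary variant). The file name and geometry are illustrative.

```python
# Minimal ASCII STL writer: one triangular facet.
# Real exporters emit many facets approximating the whole CAD surface.

def facet(normal, v1, v2, v3):
    """Return the ASCII STL text for one facet with the given unit normal."""
    return (
        "  facet normal %g %g %g\n" % normal +
        "    outer loop\n" +
        "".join("      vertex %g %g %g\n" % v for v in (v1, v2, v3)) +
        "    endloop\n"
        "  endfacet\n"
    )

with open("triangle.stl", "w") as f:   # illustrative file name
    f.write("solid example\n")
    f.write(facet((0, 0, 1), (0, 0, 0), (10, 0, 0), (0, 10, 0)))
    f.write("endsolid example\n")
```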
The construction of a model with current methods can take from several hours to several days,
depending on the method used and the size and complexity of the model. Additive systems for rapid prototyping
can typically produce models in a few hours, although it can vary widely depending on the type of machine
being used and the size and number of models being produced simultaneously.
STEREOLITHOGRAPHY
Stereolithography is an additive manufacturing process in which a vat of liquid UV-curable photopolymer "resin" and a UV laser are used to build parts one layer at a time. On each layer, the laser beam traces a part cross-section pattern on the surface of the liquid resin. Exposure to the UV laser light cures and solidifies the pattern traced on the resin and adheres it to the layer below.
After a pattern has been traced, the SLA's elevator platform descends by a single layer thickness,
typically 0.05 mm to 0.15 mm (0.002" to 0.006"). Then, a resin-filled blade sweeps across the part cross section,
re-coating it with fresh material. On this new liquid surface, the subsequent layer pattern is traced, adhering to
the previous layer. A complete 3-D part is formed by this process. After building, parts are cleaned of excess
resin by immersion in a chemical bath and then cured in a UV oven.
Stereolithography requires support structures to attach the part to the elevator platform, to prevent certain geometries from deflecting due to gravity, and to hold the 2-D cross-sections accurately in place so that they resist lateral pressure from the re-coater blade. Supports are generated automatically during the preparation of 3-D CAD models for use on the stereolithography machine, although they may also be manipulated manually. Supports must be removed from the finished product by hand; this is not true of all rapid prototyping technologies.
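
The layer-by-layer build loop described above can be summarized numerically: the number of layers, and hence a large part of the build time, is simply the part height divided by the layer thickness. A small sketch with an assumed part height:

```python
# Layer schedule for the SLA loop described above: after each traced
# cross-section the elevator descends by one layer thickness.
# The part height is an assumed example value.

layer_mm = 0.1          # within the 0.05-0.15 mm range quoted above
part_height_mm = 30.0   # assumed part

n_layers = round(part_height_mm / layer_mm)
platform_z = [i * layer_mm for i in range(n_layers)]   # descent below the surface

print("%d layers; platform descends from %.2f to %.2f mm" %
      (n_layers, platform_z[0], platform_z[-1]))
# -> 300 layers; platform descends from 0.00 to 29.90 mm
```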





SELECTIVE LASER SINTERING
Patented in 1989, selective laser sintering (SLS) is an additive manufacturing technique that uses a high-power laser (for example, a carbon dioxide laser) to fuse small particles of plastic, metal (direct metal laser sintering), ceramic, or glass powder into a mass with the desired 3-dimensional shape. The laser selectively fuses the powdered material by scanning cross-sections generated from a 3-D digital description of the part (for example, from a CAD file or scan data) on the surface of a powder bed. After each cross-section is scanned, the powder bed is lowered by one layer thickness, a new layer of material is applied on top, and the process is repeated until the part is completed.
The highlights of the process are as follows:
- Considerably stronger than SLA; sometimes structurally functional parts are possible.
- The laser beam selectively fuses powder materials: nylon, elastomer, and soon metal.
- Advantage over SLA: variety of materials and the ability to approximate common engineering plastics.
- No milling step, so accuracy in z can suffer.
- The process is simple: no milling or masking steps are required.
- Living hinges are possible with the thermoplastic-like materials.
- Powdery, porous surface unless a sealant is used; the sealant also strengthens the part.
- Uncured material is easily removed after a build by brushing or blowing it off.

DIRECT METAL LASER SINTERING
Direct metal laser sintering (DMLS) is an additive metal fabrication technology developed by EOS of Munich, Germany, sometimes also referred to by the terms selective laser sintering (SLS) or selective laser melting (SLM). The process starts from a 3D CAD model, from which a .stl file is created and sent to the machine's software. A technician works with this 3D model to orient the geometry properly for part building and adds support structures as appropriate. Once this "build file" is completed, it is "sliced" into the layer thickness the machine will build in and downloaded to the DMLS machine, allowing the build to begin. The DMLS machine uses a high-powered 200 watt Yb-fibre optic laser. Inside the build chamber there is a material dispensing platform and a build platform, along with a recoater blade used to move new powder over the build platform. The technology fuses metal powder into a solid part by melting it locally with the focused laser beam. Parts are built up additively layer by layer, typically in layers 20 micrometres thick. This process allows highly complex geometries to be created directly from the 3D CAD data, fully automatically, in hours and without any tooling. DMLS is a net-shape process, producing parts with high accuracy and detail resolution, good surface quality and excellent mechanical properties.


LAMINATED OBJECT MANUFACTURING
Laminated object manufacturing (LOM) is a rapid prototyping system developed by Helisys Inc. (Cubic Technologies is now the successor organization of Helisys). In it, layers of adhesive-coated paper, plastic, or metal laminates are successively glued together and cut to shape with a knife or laser cutter.

The process is performed as follows:
- A sheet is adhered to a substrate with a heated roller.
- A laser traces the desired dimensions of the prototype.
- The laser cross-hatches the non-part area to facilitate waste removal.
- The platform with the completed layer moves down out of the way.
- A fresh sheet of material is rolled into position.
- The platform moves up into position to receive the next layer.
- The process is repeated.

ELECTRON BEAM MELTING
This solid freeform fabrication method produces fully dense metal parts directly from metal powder, with the characteristics of the target material. The EBM machine reads data from a 3D CAD model and lays down successive layers of powdered material. These layers are melted together by a computer-controlled electron beam, and in this way the part is built up. The process takes place under vacuum, which makes it suited to manufacturing parts in reactive materials with a high affinity for oxygen, e.g. titanium.

The melted material is a pure alloy powder of the final material to be fabricated (no filler), so the electron beam technology does not require additional thermal treatment to obtain the full mechanical properties of the parts. This places EBM alongside selective laser melting (SLM), whereas competing technologies like SLS and DMLS require thermal treatment after fabrication. Compared to SLM and DMLS, EBM has a generally superior build rate because of its higher energy density and scanning method.

The EBM process operates at elevated temperature, typically between 700 and 1000 °C, producing parts that are virtually free from residual stress and eliminating the need for heat treatment after the build. Melt rate: up to 80 cm³/h. Minimum layer thickness: 0.05 mm (0.0020 in). Tolerance capability: ±0.2 mm.
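
A back-of-the-envelope build-time estimate follows directly from the quoted melt rate and layer thickness; the part volume and height below are assumed example values, and recoating and beam-overhead time are ignored:

```python
# Rough EBM build-time estimate from the figures quoted above.
# Part volume and height are illustrative assumptions, not from the paper.

MELT_RATE_CM3_H = 80.0   # up to 80 cm^3/h (from the text)
LAYER_MM = 0.05          # minimum layer thickness (from the text)

part_volume_cm3 = 200.0  # assumed part
part_height_mm = 100.0   # assumed part

melt_time_h = part_volume_cm3 / MELT_RATE_CM3_H
n_layers = round(part_height_mm / LAYER_MM)

print("melting time: %.1f h over %d layers" % (melt_time_h, n_layers))
# -> melting time: 2.5 h over 2000 layers (recoating overhead not included)
```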
FUSED DEPOSITION MODELING
FDM begins with a software process, developed by Stratasys Inc., USA, which processes an STL file (stereolithography file format) in minutes, mathematically slicing and orienting the model for the build process. If required, support structures are automatically generated. The machine dispenses two materials: one for the model and one for a disposable support structure.
The thermoplastics are liquefied and deposited by an extrusion head, which follows a tool path defined by the CAD file. The materials are deposited in layers as fine as 0.125 mm (0.005 in) thick, and the part is built from the bottom up, one layer at a time.
FDM works on an "additive" principle by laying down material in layers. A plastic filament or metal
wire is unwound from a coil and supplies material to an extrusion nozzle which can turn the flow on and off.
The nozzle is heated to melt the material and can be moved in both horizontal and vertical directions by a
numerically controlled mechanism, directly controlled by a computer-aided manufacturing (CAM) software
package. The model or part is produced by extruding small beads of thermoplastic material to form layers as the
material hardens immediately after extrusion from the nozzle. Stepper motors or servo motors are typically
employed to move the extrusion head.
Several materials are available, with different trade-offs between strength and temperature properties, including acrylonitrile butadiene styrene (ABS) polymer, polycarbonates, polycaprolactone, polyphenylsulfones and waxes. A "water-soluble" material can be used for temporary supports while manufacturing is in progress; this soluble support material is quickly dissolved with specialized mechanical agitation equipment utilizing a precisely heated sodium hydroxide solution.
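
The extrusion described above obeys a simple volume balance: the filament must be fed fast enough that its volumetric flow matches the volume of the deposited bead. A sketch with typical assumed values (only the 0.125 mm layer thickness comes from the text):

```python
# Volume balance for FDM: filament feed speed vs. deposited bead.
# All dimensions except the layer height are typical ASSUMED values.

import math

layer_height_mm = 0.125   # layer thickness quoted in the text
bead_width_mm = 0.4       # assumed extrusion width
head_speed_mm_s = 40.0    # assumed travel speed of the extrusion head
filament_dia_mm = 1.75    # common filament diameter (assumed)

deposit_rate = layer_height_mm * bead_width_mm * head_speed_mm_s   # mm^3/s
filament_area = math.pi * (filament_dia_mm / 2) ** 2               # mm^2
feed_speed = deposit_rate / filament_area                          # mm/s

print("deposition: %.2f mm^3/s, filament feed: %.2f mm/s"
      % (deposit_rate, feed_speed))
# -> deposition: 2.00 mm^3/s, filament feed: 0.83 mm/s
```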







Fused deposition modeling: 1 - nozzle ejecting molten plastic, 2 - deposited material (modeled part), 3 -
controlled movable table
Stratasys of Eden Prairie, MN makes Fused Deposition Modeling (FDM) machines. The FDM process
was developed by Scott Crump in 1988. The fundamental process involves heating a filament of thermoplastic
polymer and squeezing it out like toothpaste from a tube to form the RP layers. The machines range from fast
concept modelers to slower, high-precision machines. The materials include polyester, ABS, elastomers, and
investment casting wax. The overall arrangement is illustrated below:




SOLID GROUND CURING
Solid Ground Curing (SGC), also known as the Solider process, was invented and developed by Cubital Inc. of Israel. The overall process is illustrated in the figure above and the steps are described below. The SGC process uses photosensitive resin hardened in layers, as in the stereolithography (SLA) process. However, in contrast to SLA, SGC is considered a high-throughput production process; the high throughput is achieved by hardening each layer of photosensitive resin at once. Many parts can be created at once because of the large workspace and the fact that a milling step maintains vertical accuracy. The multi-part capability also allows quite large single parts (e.g. 500 × 500 × 350 mm / 20 × 20 × 14 in) to be fabricated. Wax replaces the liquid resin in non-part areas of each layer, ensuring that the model is supported.

First, a CAD model of the part is created and it is sliced into layers using Cubital's Data Front End (DFE)
software. At the beginning of a layer creation step, the flat work surface is sprayed with photosensitive resin.
For each layer, a photomask is produced using Cubital's proprietary ionographic printing technique. Next, the
photomask is positioned over the work surface and a powerful UV lamp hardens the exposed photosensitive
resin. After the layer is cured, all uncured resin is vacuumed for recycling, leaving the hardened areas intact.
The cured layer is passed beneath a strong linear UV lamp to fully cure it and to solidify any remnant particles.
In the fifth step, wax replaces the cavities left by vacuuming the liquid resin. The wax is hardened by cooling to provide continuous, solid support for the model as it is fabricated. Extra supports are not needed. In the final step before the next layer, the wax/resin surface is milled flat to an accurate, reliable finish for the next layer.
Once all layers are completed, the wax is removed, and any finishing operations such as sanding, etc. can be
performed. No post-cure is necessary.


3D PRINTING
3D printing is a form of additive manufacturing technology where a three dimensional object is created
by laying down successive layers of material. 3D printers are generally faster, more affordable and easier to use
than other additive manufacturing technologies. 3D printers offer product developers the ability to print parts
and assemblies made of several materials with different mechanical and physical properties in a single build
process. Advanced 3D printing technologies yield models that can serve as product prototypes.
One method of 3D printing uses an inkjet printing system. The printer creates the model one layer at a time by spreading a layer of powder (plaster or resin) and inkjet-printing a binder in the cross-section of the part. The process is repeated until every layer is printed. This technology is the only one that allows the printing of full-colour prototypes; it also allows overhangs and is recognized as the fastest method.
Standard applications include design visualization, prototyping/CAD, metal casting, architecture,
education, geospatial, healthcare and entertainment/retail. Other applications would include reconstructing
fossils in paleontology, replicating ancient and priceless artifacts in archaeology, reconstructing bones and body
parts in forensic pathology and reconstructing heavily damaged evidence acquired from crime scene
investigations.

CONCLUSIONS
Current trends in manufacturing industry emphasize an increased number of product variants, increasing complexity in products, decreasing product lifetime before obsolescence, decreasing delivery times, and so on.
Rapid prototyping decreases the time for development of a product by allowing corrections to be made early in the design process. By giving engineering, manufacturing, marketing, and purchasing a look at the product early in the design process, mistakes can be corrected and changes made while they are still inexpensive. Product lifetime can be extended by adding necessary features and eliminating redundant features early in the design.






ARC WELDING

Mohit

Student, Dept. of Mechanical Engineering
DCRUST, Murthal, Sonepat, Haryana-India
er.mohityadav706@gmail.com

ABSTRACT
Welding is an extremely complex process; however, owing to its commercial importance, a thorough study of the various processes is essential. Arc welding is the fusion of two pieces of metal by an electric arc struck between the pieces being joined (the work pieces) and an electrode that is guided along the joint between them. The electrode is either a rod that simply carries current between its tip and the work, or a rod or wire that also melts and supplies filler metal to the joint. The basic arc welding circuit is an alternating current (AC) or direct current (DC) power source connected by a work cable to the work piece and by a "hot" cable to the electrode. When the electrode is positioned close to the work piece, an arc is created across the gap between the metal and the electrode, and an ionized column of gas develops to complete the circuit. In this paper the author reviews the various types of welding, the historical development of welding, and the importance and function of the power supply for the arc welding process. The types of arc welding, the various methods, corrosion issues and safety issues are also discussed.

Keywords: Arc Welding, Power Supply, Consumable Electrode Methods, Non-Consumable Electrode Methods
INTRODUCTION
Arc welding uses a welding power supply to create an electric arc between an electrode and the base material to
melt the metals at the welding point. They can use either direct current (DC) or alternating current (AC), and
consumable or non-consumable electrodes. The welding region is sometimes protected by some type of inert or
semi-inert gas, known as a shielding gas, and/or an evaporating filler material. The process of arc welding is
widely used because of its low capital and running costs.
Definition: welding is the process of joining two pieces of metal so that bonding, accompanied by appreciable interatomic penetration, takes place at their original boundary surfaces. The boundaries more or less disappear at the weld, and integrating crystals develop across them. Welding is carried out by the use of heat or pressure or both, with or without added metal. There are many types of welding, including metal arc, atomic hydrogen, submerged arc and resistance (butt, flash, spot, stitch, stud and projection) welding processes.
HISTORY
While examples of forge welding go back to the Bronze Age and the Iron Age, arc welding did not come into practice until much later. In 1802, Vasily Petrov discovered the continuous electric arc and subsequently proposed its possible practical applications, including welding. In 1881-82 the Russian inventor Nikolai Bernardos created the first electric arc welding method, known as carbon arc welding, using carbon electrodes. The advances in arc welding continued with the invention of metal electrodes in the late 1800s by a Russian, Nikolai Slavyanov (1888), and an American, C. L. Coffin. Around 1900, A. P. Strohmenger released in Britain a coated metal electrode which gave a more stable arc. In 1905 the Russian scientist Vladimir Mitkevich proposed the use of a three-phase electric arc for welding. In 1919, alternating current welding was invented by C. J. Holslag but did not become popular for another decade.

Competing welding processes such as resistance welding and oxyfuel welding were developed during this time as well; but both, especially the latter, faced stiff competition from arc welding, particularly after metal coverings (known as flux) for the electrode, which stabilize the arc and shield the base material from impurities, continued to be developed. During World War I welding started to be used in shipbuilding in Great Britain in place of riveted steel plates. The Americans also became more accepting of the new technology when the process allowed them to repair their ships quickly after a German attack in New York Harbor at the beginning of the war. Arc welding was first applied to aircraft during the war as well, and some German airplane fuselages were constructed using this process. In 1919, the British shipbuilder Cammell Laird started construction of a merchant ship, the Fullagar, with an entirely welded hull; she was launched in 1921.

During the 1920s, major advances were made in welding technology, including the 1920 introduction of automatic welding, in which electrode wire was fed continuously. Shielding gas became a subject receiving much attention as scientists attempted to protect welds from the effects of oxygen and nitrogen in the atmosphere. Porosity and brittleness were the primary problems, and the solutions that developed included the use of hydrogen, argon, and helium as welding atmospheres. During the following decade, further advances allowed the welding of reactive metals such as aluminium and magnesium. This, in conjunction with developments in automatic welding, alternating current, and fluxes, fed a major expansion of arc welding during the 1930s and then during World War II.

During the middle of the century, many new welding methods were invented. Submerged arc welding was invented in 1930 and continues to be popular today. In 1932 a Russian, Konstantin Khrenov, successfully implemented the first underwater electric arc welding. Gas tungsten arc welding, after decades of development, was finally perfected in 1941, and gas metal arc welding followed in 1948, allowing fast welding of non-ferrous materials but requiring expensive shielding gases. Using a consumable electrode and a carbon dioxide atmosphere as a shielding gas, it quickly became the most popular metal arc welding process. In 1957, the flux-cored arc welding process debuted, in which the self-shielded wire electrode could be used with automatic equipment, resulting in greatly increased welding speeds. In that same year, plasma arc welding was invented. Electroslag welding was released in 1958 and was followed by its cousin, electrogas welding, in 1961.
TYPES OF WELDING
There are two broad classifications of welding: plastic welding and fusion welding. In plastic welding, also known as pressure welding, the pieces of metal to be joined are heated to a plastic state and then forced together by external pressure. In fusion welding, also known as non-pressure welding, the material at the joint is heated to a molten state and allowed to solidify.
IMPORTANCE AND FUNCTION OF THE POWER SUPPLY FOR THE ARC
WELDING PROCESS
To supply the electrical energy necessary for arc welding processes, a number of different power supplies can be used. The most common classification is into constant current power supplies and constant voltage power supplies. In arc welding, the voltage is directly related to the length of the arc, and the current is related to the amount of heat input.

Constant current power supplies are most often used for manual welding processes such as gas tungsten arc welding and shielded metal arc welding, because they maintain a relatively constant current even as the voltage varies. This is important because in manual welding it can be difficult to hold the electrode perfectly steady, so the arc length, and thus the voltage, tend to fluctuate. Constant voltage power supplies hold the voltage constant and vary the current; as a result, they are most often used for automated welding processes such as gas metal arc welding, flux-cored arc welding, and submerged arc welding. In these processes, the arc length is kept constant, since any fluctuation in the distance between the wire and the base material is quickly rectified by a large change in current. For example, if the wire and the base material get too close, the current rapidly increases, which in turn increases the heat and melts the tip of the wire, returning it to its original separation distance.

The direction of the current used in arc welding also plays an important role. Consumable electrode processes such as shielded metal arc welding and gas metal arc welding generally use direct current, but the electrode can be charged either positively or negatively. In welding, the positively charged anode has the greater heat concentration, so changing the polarity of the electrode affects the weld properties. If the electrode is positively charged, it melts more quickly, increasing weld penetration and welding speed; a negatively charged electrode results in shallower welds. Non-consumable electrode processes, such as gas tungsten arc welding, can use either type of direct current (DC), as well as alternating current (AC). With direct current, however, because the electrode only creates the arc and does not provide filler material, a positively charged electrode causes shallow welds, while a negatively charged electrode makes deeper welds. Alternating current rapidly moves between these two, resulting in medium-penetration welds. One disadvantage of AC, the fact that the arc must be re-ignited after every zero crossing, has been addressed by the invention of special power units that produce a square wave pattern instead of the normal sine wave, eliminating the low-voltage time after the zero crossings and minimizing the effects of the problem.
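
The self-regulation described above can be made concrete with a toy operating-point model: approximate the arc voltage as increasing linearly with arc length, and intersect it with the nearly flat characteristic of a constant-voltage source. All constants below are assumed illustrative values, not data from this paper:

```python
# Toy model of arc self-regulation with a constant-voltage source.
# V_arc = V0 + K * L is a common linear approximation; the constants
# are ASSUMED illustrative values.

V0 = 15.0     # volts: electrode voltage-drop term (assumed)
K = 2.0       # volts per mm of arc length (assumed)
V_SET = 30.0  # constant-voltage source setting, volts (assumed)
SLOPE = 0.02  # source droop, volts per ampere (a nearly flat CV source)

def operating_current(arc_length_mm):
    """Current where the source line V_SET - SLOPE*I meets the arc line V0 + K*L."""
    return (V_SET - V0 - K * arc_length_mm) / SLOPE

for length in (4.0, 5.0, 6.0):   # wire-to-work distance varying by +/-1 mm
    print("arc length %.0f mm -> %.0f A" % (length, operating_current(length)))
# -> 350 A, 250 A, 150 A: a 1 mm change swings the current by ~100 A,
#    which changes the wire burn-off rate and restores the arc length.
```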
TYPES OF THE ARC WELDING
The following are the types of arc welding processes:
- Carbon arc
- Plasma arc
- Submerged arc
- Metal arc
- Electro slag
- Flux-cored arc
- Gas metal arc
- Gas tungsten arc
- Atomic-hydrogen arc

VARIOUS METHODS
Consumable electrode methods
Shielded metal arc welding (SMAW), one of the most common types of arc welding, is also known as manual metal arc welding (MMA) or stick welding. An electric current is used to strike an arc between the base material and a consumable electrode rod or "stick". The electrode rod is made of a material that is compatible with the base material being welded and is covered with a flux that protects the weld area from oxidation and contamination by producing CO2 gas during the welding process. The electrode core itself acts as filler material, making separate filler unnecessary. The process is very versatile, requiring little operator training and inexpensive equipment. However, weld speeds are rather slow, since the consumable electrodes must be frequently replaced and because slag, the residue from the flux, must be chipped away after welding. Furthermore, the process is generally limited to welding ferrous materials, though specialty electrodes have made possible the welding of cast iron, nickel, aluminium, copper and other metals. The versatility of the method makes it popular in a number of applications, including repair work and construction.

Gas metal arc welding (GMAW), commonly called MIG (metal inert gas) welding, is a semi-automatic or automatic welding process with a continuously fed consumable wire acting as both electrode and filler metal, along with an inert or semi-inert shielding gas flowed around the wire to protect the weld site from contamination. A constant voltage, direct current power source is most commonly used with GMAW, but constant current and alternating current supplies are used as well. With continuously fed filler electrodes, GMAW offers relatively high welding speeds; however, the more complicated equipment reduces convenience and versatility in comparison to the SMAW process. Originally developed for welding aluminium and other non-ferrous materials in the 1940s, GMAW was soon economically applied to steels. Today, GMAW is commonly used in industries such as the automobile industry for its quality, versatility and speed. Because of the need to maintain a stable shroud of shielding gas around the weld site, it can be problematic to use the GMAW process in areas of high air movement, such as outdoors.

Flux-cored arc welding (FCAW) is a variation of the GMAW technique. FCAW wire is actually a fine metal tube filled with powdered flux materials. The flux-cored wire generates an effective gas shield precisely at the weld site, permitting applications in windier conditions or on contaminated materials; however, the flux-cored wire leaves a slag residue and is more expensive than solid wire.

Submerged arc welding (SAW) is a high-productivity automatic welding method in which the arc is struck beneath a covering layer of flux. This increases arc quality, since contaminants in the atmosphere are blocked by the flux. The slag that forms on the weld generally comes off by itself and, combined with the use of a continuous wire feed, the weld deposition rate is high. Working conditions are much improved over other arc welding processes, since the flux hides the arc and no smoke is produced. The process is commonly used in industry, especially for large products. As the arc is not visible, it requires full automation. In-position welding is not possible with SAW.
Non-consumable electrode methods
Gas tungsten arc welding (GTAW), or tungsten inert gas (TIG) welding, is a manual welding process that uses a
non-consumable electrode made of tungsten, an inert or semi-inert gas mixture, and a separate filler material.
Especially useful for welding thin materials, this method is characterized by a stable arc and high quality welds,
but it requires significant operator skill and can only be accomplished at relatively low speeds. It can be used on
nearly all weldable metals, though it is most often applied to stainless steel and light metals. It is often used
when quality welds are extremely important, such as in bicycle, aircraft and naval applications. A related
process, plasma arc welding, also uses a tungsten electrode but uses plasma gas to make the arc. The arc is more
concentrated than the GTAW arc, making transverse control more critical and thus generally restricting the
technique to a mechanized process. Because of its stable current, the method can be used on a wider range of
material thicknesses than can the GTAW process and is much faster. It can be applied to all of the same
materials as GTAW except magnesium; automated welding of stainless steel is one important application of the
process. A variation of the process is plasma cutting, an efficient steel cutting process. Other arc welding
processes include atomic hydrogen welding, carbon arc welding, electroslag welding, electrogas welding, and
stud arc welding.
Corrosion issues
Some materials, notably high-strength steels, aluminium, and titanium alloys, are susceptible to hydrogen
embrittlement. If the electrodes used for welding contain traces of moisture, the water decomposes in the heat of
the arc and the liberated hydrogen enters the lattice of the material, causing its brittleness. Electrodes for such
materials, with special low-hydrogen coating, are delivered in sealed moisture-proof packaging. New electrodes can be used straight from the can, but when moisture absorption is suspected, they have to be dried by baking (usually at 800 to 1000 °F / 425 to 550 °C) in a drying oven. The flux used has to be kept dry as
well. Some austenitic stainless steels and nickel-based alloys are prone to intergranular corrosion. When subjected to temperatures around 700 °C (1300 °F) for too long a time, chromium reacts with carbon in the material, forming chromium carbide and depleting the grain boundaries of chromium, impairing their corrosion resistance in a process called sensitization. Such sensitized steel undergoes corrosion in the areas near the welds where the temperature-time history was favorable for forming the carbide. This kind of corrosion is often termed weld
decay. Knifeline attack (KLA) is another kind of corrosion affecting welds, impacting steels stabilized by niobium. Niobium and niobium carbide dissolve in steel at very high temperatures. Under some cooling regimes, niobium carbide does not precipitate, and the steel then behaves like unstabilized steel, forming chromium carbide instead. This affects only a thin zone several millimeters wide in the immediate vicinity of the weld, making it difficult to spot and increasing the corrosion speed. Structures made of such steels have to be heated as a whole to about 1950 °F (1070 °C), at which temperature the chromium carbide dissolves and niobium carbide forms. The cooling rate
after this treatment is not important. Filler metal (electrode material) improperly chosen for the environmental
conditions can make them corrosion-sensitive as well. There are also issues of galvanic corrosion if the
electrode composition is sufficiently dissimilar to the materials welded, or the materials are dissimilar
themselves. Even between different grades of nickel-based stainless steels, corrosion of welded joints can be
themselves. Even between different grades of nickel-based stainless steels, corrosion of welded joints can be severe, even though they rarely undergo galvanic corrosion when mechanically joined.

Safety issues
Welding can be a dangerous and unhealthy practice without the proper precautions; however, with the use of new technology and proper protection, the risks of injury or death associated with welding can be greatly reduced.

Heat and sparks
Because many common welding procedures involve an open electric arc or flame, the risk of burns is
significant. To prevent them, welders wear protective clothing in the form of heavy leather gloves and protective
long sleeve jackets to avoid exposure to extreme heat, flames, and sparks.
Eye damage
The brightness of the weld area leads to a condition called arc eye, in which ultraviolet light causes inflammation of the cornea and can burn the retinas of the eyes. Welding goggles and helmets with dark face plates are worn to prevent this
exposure and, in recent years, new helmet models have been produced featuring a face plate that self-darkens
upon exposure to high amounts of UV light. To protect bystanders, transparent welding curtains often surround
the welding area. These curtains, made of a polyvinyl chloride plastic film, shield nearby workers from exposure
to the UV light from the electric arc, but should not be used to replace the filter glass used in helmets. Those
dark face plates must be much darker than those in sunglasses or blowtorching goggles. Sunglasses and
blowtorching goggles are not adequate for arc welding protection. In 1970, a Swedish doctor, Åke Sandén,
developed a new type of welding goggles that used a multilayer interference filter to block most of the light
from the arc. He had observed that most welders could not see well enough, with the mask on, to strike the arc,
so they would flip the mask up, then flip it down again once the arc was going: this exposed their naked eyes to
the intense light for a while. By coincidence, the spectrum of an electric arc has a notch in it, which coincides
with the yellow sodium line. Thus, a welding shop could be lit by sodium vapor lamps or daylight, and the
welder could see well to strike the arc. The Swedish government required these masks to be used for arc
welding, but they were never adopted in the United States and may since have disappeared from the market.
Inhaled matter
Welders are also often exposed to dangerous gases and particulate matter. Processes like flux-cored arc welding
and shielded metal arc welding produce smoke containing particles of various types of oxides. The size of the
particles in question tends to influence the toxicity of the fumes, with smaller particles presenting a greater
danger. Additionally, many processes produce various gases (most commonly carbon dioxide and ozone, but
others as well) that can prove dangerous if ventilation is inadequate. Furthermore, the use of compressed gases
and flames in many welding processes poses an explosion and fire risk; some common precautions include
limiting the amount of oxygen in the air and keeping combustible materials away from the workplace.
Interference with pacemakers
Certain welding machines which use a high frequency AC current component have been found to affect
pacemaker operation when within 2 meters of the power unit and 1 meter of the weld site.





SURFACE MORPHOLOGY STUDIES ON CENOSPHERE FLYASH
FILLED BAMBOO REINFORCED EPOXY COMPOSITES

Anu Gupta1*, Ajit Kumar1 and Amar Patnaik2

1 School of Engineering and Technology, IGNOU, Maidan Garhi, New Delhi 110068, India
2 Mechanical Engineering Department, N I T, Hamirpur 177005, H.P., India
* anu339@gmail.com


ABSTRACT

Natural fiber reinforced composites have generated a lot of interest in the research community as they offer an
attractive solution to the ever-depleting petroleum resources. Among the most available and inexpensive natural
fibers, Bamboo has emerged as a potential material for composites, owing to its unique biological structure and
excellent mechanical performance. Bamboo offers significant industrial and engineering applications due to its
high strength, easy availability and low cost. Cenosphere Flyash is generated in large quantities as a byproduct
when the coal is combusted in thermal power plants. This material poses a major disposal problem for the
plants. It has very low density and is freely available, hence can be used as a filler material to reduce the weight
and production cost of plastic products. It also offers improved performance, when reinforced into the resin
matrix as a polymer filling material. In the present work, hybrid composites with bamboo fibers reinforced in
cenosphere flyash filled epoxy have been fabricated by the hand moulding technique. Various concentrations of cenosphere flyash (0, 10, 20 wt%) are taken to form the composites. A steady state erosion test has been carried out to study the erosive wear behavior of these composites, and the effects of impingement angle and impact velocity on the composite surface have been studied. Scanning Electron Microscopy has been performed on the
samples to study the fracture mechanisms on the composite surface. On the basis of the impingement angle
studies, it has been found that the composites respond to solid particle impact, neither in a purely ductile nor in
a purely brittle manner. They show semi-ductile behavior. The composite samples are found to be deformed in
different ways like surface fatigue, plastic deformation or melting depending on the velocity of the erosive
particle.

Keywords: Bamboo Epoxy Composites, Surface Morphology, Erosion Test

1. INTRODUCTION

Fiber reinforced polymers are increasingly becoming potential candidates for replacing conventional materials
due to their many advantages. These composites are finding applications in diverse fields, ranging from appliances to spacecraft. The application of natural fibers as reinforcement in polymer composites has been
continuously growing during the last few years. The main advantages of such fibers are their low cost,
renewability, biodegradability, low specific gravity, abundance, high specific strength, and stiffness. Among the
various natural fibers, bamboo fiber is a good candidate for use in composite materials. Many studies focus on bamboo because it is an extremely abundant, lightweight, functionally graded and high-strength natural composite. Bamboo has been used as reinforcement with many
polymers in different research papers. Their properties are comparable with those of synthetic fiber reinforced polymer composites and can be further enhanced by adding filler to the composite.
Cenosphere Flyash is one of the fillers utilized in polymer composites. Cenosphere Flyash are referred
to as microspheres, hollow spheres, hollow ceramic microspheres, microballoons, or glass beads and are
generated in large quantities as a by-product when the coal is combusted for electricity generation in thermal
power plant and pose major disposal problems [1, 2]. Cenosphere Fly ash are extensively used in plastic
compounds, as they are compatible with thermoplastics, polyesters, epoxies, phenolic resins,latex, plastisols,
and urethanes. They have very low density, low cost and free availability, so they can be used as a filler material to reduce the weight and production cost of plastic products and to improve performance when reinforced into the resin matrix as a polymer filling material. In India, the major portion of fly ash produced goes for disposal
in ash ponds, sea, landfills and a minor percentile of FA (<15%) is being used for preparing bricks, ceramics and
cements, a practice which is under examination for environmental concerns [3]. Therefore, considerable research
is being conducted worldwide for utilization of industrial wastes materials in order to avert an increasing toxic
threat to the environment, or to streamline present waste disposal techniques by making them more affordable [4]. Hence, extensive research is going on to find various ways to utilize cenosphere fly ash to restrain the
environmental problems as well as effectively use cenosphere fly ash as a filler material in composite materials
[1]. Satapathy et al. [5] studied the fabrication, structural characterization and fracture mechanical behavior of cenosphere-filled polypropylene (PP) composites. Similarly, Suresha et al. [6] examined the influence of cenosphere filler in glass fiber-reinforced epoxy composites; the filled composites showed poorer abrasive wear performance than the unfilled glass-epoxy composites. In view of the above literature, the present study develops a new set of particulate-filled hybrid composites, bamboo fiber reinforced epoxy resin filled with different weight percentages of FA cenospheres, for erosive wear analysis.

2. COMPOSITE FABRICATION

Epoxy LY 556 has been used as matrix and HY951 has been used as hardener in composite. Both are supplied
by Ciba Geigy India Ltd, and bi-directional bamboo fibers are collected from Orissa. In the present study, bamboo tubes 250-300 mm in length and 150 mm in diameter have been taken. These tubes are then divided into rods of overall diameter 2.5 mm and length 150 mm, which are then separated into several types of strips depending on individual requirements. The extracted fibers are dried in an oven for 4 h at 45 °C to remove moisture. Each ply of roving bi-directional bamboo mat (Fig. 1) is of dimension 150 × 150 mm².

Fig.1. Bidirectional roving bamboo fiber mats
Composites slabs are made by reinforcing bi-directional bamboo mats in epoxy resin using simple hand lay-up
technique followed by light compression molding technique. The castings are put under load for about 24 h for
proper curing at room temperature. Composites of three different compositions (with 0 wt%, 10 wt% and 20 wt% of cenosphere fly ash filling) are made, with the bamboo fiber loading kept constant at 40 wt% for
all the samples. After the curing process, test samples are cut to the required sizes as per individual test
requirement. Designations of composites are given in Table 1.
Table 1 Designation of composites

Designation | Bamboo Fiber (wt%) | Epoxy (wt%) | Cenosphere Fly ash (wt%)
B1          | 40                 | 60          | 0
B2          | 40                 | 50          | 10
B3          | 40                 | 40          | 20
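As a quick arithmetic check of the weight fractions in Table 1, the short sketch below converts them into ingredient masses for an assumed casting batch (the 200 g batch size is illustrative, not taken from the paper):

# Convert the Table 1 weight fractions into ingredient masses for an
# assumed 200 g casting batch (batch size is illustrative only).
BATCH_G = 200.0
compositions = {  # designation: (bamboo fiber, epoxy, cenosphere fly ash) wt%
    "B1": (40, 60, 0),
    "B2": (40, 50, 10),
    "B3": (40, 40, 20),
}
for name, fractions in compositions.items():
    fiber_g, epoxy_g, flyash_g = (BATCH_G * pct / 100.0 for pct in fractions)
    print(f"{name}: fiber {fiber_g:.0f} g, epoxy {epoxy_g:.0f} g, fly ash {flyash_g:.0f} g")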

3. STEADY STATE EROSION TEST
The set up for the solid particle erosion wear test (as per ASTM G76) used in this study is capable of creating
reproducible erosive situations for assessing erosion wear resistance of the prepared composite samples. The
solid particle erosion test rig consists of a compressor, drying unit, a conveyor belt-type particle feeder which
helps to control the flow of sand particle and an air particle mixing and accelerating chamber. The compressed
air is then mixed with the selected range of silica sand which is fed constantly by a conveyor belt feeder into the
mixing chamber and then passing the mixture through a convergent brass nozzle of 3 mm internal diameter. The
erodent particles impact the specimen which can be held at different angles with respect to the direction of
erodent flow using a swivel and an adjustable sample holder. The velocity of the eroding particles is determined
using the standard double disc method [7]. In the present study, pyramidal-shaped dry silica sand of particle size 250 µm is used as erodent. After each experimental run the eroded samples are cleaned in acetone, dried for 5 minutes, and then weighed to an accuracy of ±0.01 mg before and after the erosion trials using an electronic balance. The weight loss is recorded for subsequent calculation of erosion rate. The process is repeated till the erosion rate attains a constant value called the steady state erosion rate. Finally, the eroded samples are examined directly by scanning electron microscope (SEM, JEOL JSM-6480LV). The results of the erosion tests are discussed below.
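As an aside before the results, here is a minimal sketch of the steady-state bookkeeping just described; the weights and the per-run erodent mass are hypothetical placeholders, not the measured data, and the 5% convergence tolerance is an assumption:

# Minimal sketch of steady-state erosion bookkeeping (hypothetical numbers,
# not the measured data): erosion rate = sample mass loss / erodent mass used.
weights_g = [12.40210, 12.40168, 12.40131, 12.40095, 12.40059]  # after each run
ERODENT_PER_RUN_G = 300.0  # assumed mass of erodent fed per run

rates = [(before - after) / ERODENT_PER_RUN_G
         for before, after in zip(weights_g, weights_g[1:])]

# Declare steady state when two consecutive rates agree within 5%.
for i in range(1, len(rates)):
    if abs(rates[i] - rates[i - 1]) <= 0.05 * rates[i - 1]:
        print(f"steady state from run {i + 1}: {rates[i]:.3e} g/g")
        break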
3.1.1 Effect of impingement angle on erosion rate of the composites
The erosion wear behavior of polymer composites can be grouped into ductile and brittle categories although
this grouping is not definitive because the erosion characteristics depend on the experimental conditions as
much as on the composition of the target material. It is known that for materials exhibiting a ductile erosion response, the peak erosion normally occurs at a 15° to 20° angle of impingement, while for a brittle response the erosion damage is maximum usually at normal impact, i.e. at a 90° impingement angle [8, 9]. In the present study, the variation of erosion rate of the composites with five different impingement angles (30°, 45°, 60°, 75°, 90°) is studied by conducting experiments under specified operating conditions (impact velocity: 45 m/sec, stand-off distance: 75 mm, erodent size: 250 µm). The peak erosion rate occurs at an impingement angle of 45°
for bamboo-epoxy composites. This clearly indicates that these composites respond to solid particle impact
neither in a purely ductile nor in a purely brittle manner. This behavior can be termed semi-ductile in nature, which may be attributed to the incorporation of bamboo fibres and cenosphere fly ash filler within the epoxy
matrix. The rate of material loss of epoxy-bamboo composites reduces significantly with the addition of hard particulate fillers into the matrix. The reduction in material loss in these particle-filled composites can be
attributed to two reasons. One is the improvement in the bulk hardness of the composite with addition of these
fillers. Secondly, during the erosion process, the filler particles absorb a good part of the kinetic energy
associated with the erodent. This results in less amount of energy being available to be absorbed by the matrix
body and the reinforcing bamboo fiber phase. These two factors together lead to enhancement of erosion wear
resistance of the composites.
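One way the general erosion literature rationalizes such intermediate behavior (an illustration, not a model used in this paper) is to view the measured response as a mix of a ductile component, which peaks at shallow angles, and a brittle component, which peaks at normal incidence; a 45° peak then sits between the two characteristic maxima:

import math

# Illustrative two-component picture from the general erosion literature
# (not fitted to this paper's data): ductile erosion peaks at shallow angles,
# brittle erosion at normal incidence; a semi-ductile material lies between.
def ductile(angle_deg):
    """Finnie-type ductile response, maximum near 17 degrees."""
    a = math.radians(angle_deg)
    if angle_deg <= 18.43:
        return math.sin(2 * a) - 3 * math.sin(a) ** 2
    return math.cos(a) ** 2 / 3

def brittle(angle_deg):
    """Brittle response, maximum at 90 degrees (normal impact)."""
    return math.sin(math.radians(angle_deg)) ** 2

for angle in (30, 45, 60, 75, 90):
    print(f"{angle:2d} deg: ductile {ductile(angle):.3f}, brittle {brittle(angle):.3f}")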

3.1.2. Effect of impact velocity on erosion rate of the composites
Erosion trials are conducted at four different impact velocities (35 m/sec, 45 m/sec, 55 m/sec, 65 m/sec) under constant operating conditions (impingement angle: 60°, stand-off distance: 75 mm, erodent size: 250 µm). The effect of impact velocity on erosion becomes significant above about 45 m/sec, and the rate of increase in erosion rate is maximum at the highest impact velocity, 65 m/sec. The increase in erosion rate with impact velocity can be attributed to increased penetration of particles on impact, as a greater amount of particle kinetic energy is dissipated at the target surface. This leads to more surface damage, enhanced sub-critical
crack growth etc. and consequently to the reduction in erosion resistance.
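A common way to quantify this velocity dependence in the erosion literature (again a general sketch, not this paper's analysis) is the power law E = k·v^n, with the exponent n obtained from a log-log fit over the tested velocities; the erosion-rate values below are hypothetical placeholders:

import numpy as np

# Power-law velocity dependence E = k * v**n, fitted on log-log axes.
# The erosion-rate values are hypothetical placeholders, not measured data.
v = np.array([35.0, 45.0, 55.0, 65.0])          # impact velocities, m/sec
E = np.array([1.2e-4, 2.6e-4, 4.9e-4, 8.1e-4])  # assumed erosion rates, g/g

n, log_k = np.polyfit(np.log(v), np.log(E), 1)  # slope = exponent n
print(f"velocity exponent n = {n:.2f}, k = {np.exp(log_k):.3e}")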

4. SURFACE MORPHOLOGY

Scanning electron microscopy (SEM) of the bamboo fiber reinforced epoxy composites filled with cenosphere fly ash at three different weight percentages (0 wt%, 10 wt% and 20 wt%) has been carried out after the steady state erosion tests at different angles and velocities. SEM images of eroded samples are shown in Fig. 2. In Fig. 2(a) the matrix is chipped off and the broken bamboo fibers are clearly visible beneath the matrix layer, due to impact of dry silica sand particles of the smallest grit size (250 µm) at a lower impact velocity (45 m/sec) and a low impingement angle of 30°, for 10 wt% cenosphere fly ash content. With increase in impingement angle to 60° the removal of matrix material is a little more than in the previous micrograph, as shown in Fig. 2(b). However, further increase in impingement angle to 90° shows the formation of larger craters due to material loss and the arrays of broken and semi-broken fibers within. Due to
repeated impact of hard sand particles there is initiation of cracks on the fibers and as erosion progresses, these
cracks subsequently propagate on the fiber bodies both in transverse as well as longitudinal manner as shown in
Fig. 2(c). When the erosion tests are carried out at the lowest impact velocity (35 m/sec), the morphology of the eroded surface is different: very little removal of material is noticed in Fig. 2(d). With the increase in impact velocity to 55 m/sec, the removal of matrix material becomes greater, as shown in Fig. 2(e). On further increase in impact velocity (65 m/sec), fibers in the composites, subjected to particle flow, break in bending. In the case of an impact having a velocity component parallel to the fiber orientation, bending requires particle indentation into the composite. The indentation involves compressive stresses, and resistance to micro-bending is very high. Thus there is local removal of resin material from the impacted surface, resulting in the exposure of the fibers, Fig. 2(f).


Fig. 2 SEM images of eroded surfaces of the cenosphere fly ash filled composites, panels (a)-(f)
5. CONCLUSION

Successful fabrication of cenosphere flyash filled bamboo epoxy composites is possible by the simple hand lay-up technique. A study of the dependence of erosion wear on impingement angle reveals their semi-ductile nature, as the peak erosion rate is found to occur at a 45° impingement angle for all the composites. This
has been explained by analyzing the possible damage mechanism with the help of SEM micrographs. These
composites form a new class of bio-fiber reinforced composites which may find potential applications as
suitable materials for conveyor belt rollers, passenger seat frames (replacing wood/steel) in railway
coaches/automobiles, pipes carrying pulverized coal in power plants, pump and impeller blades, household
furniture and also as low cost housing materials.
6. REFERENCES

[1] Shukla, S., Seal, S., Akesson, J., Oder, R. Carter, R., Rahman, Z., Study of mechanism of electroless
copper coating of fly-ash cenosphere particles, Appl. Surf. Sci., 2001, 181, 35-50.
[2] Malhotra, V.M., Valimbe, P.S., Wright, M.A., Effects of Fly Ash and Bottom Ash on Frictional Behavior
of Composites, Fuel, 2002, 81, 235-244.
[3] Pandey, V.C., Studies on Some Leguminous Plants under the Stress of Fly-ash and Coal Residues
Released from National Thermal Power Corporation Tanda. Ph.D. thesis. Dr. R.M.L. Avadh University
Faizabad, India. 2008.
[4] Ahmaruzzaman, M., A review on the utilization of fly ash, Progress in Energy and Combustion Science, 2010, 36, 327-363.
[5] Bhabani, K. S., Das, A., Patnaik, A., Ductile-to-brittle transition in cenosphere-filled polypropylene composites, Journal of Materials Science, 2011, 46(6), 1963-1974.
[6] Suresha, B., Chandramohan, G., Jayaraju, T. S., Influence of Cenosphere Filler Additions on the Three-Body Abrasive Wear Behavior of Glass Fiber-Reinforced Epoxy Composites, Polymer Composites, 2008, 29(3), 307-312.
[7] Agarwal, B.D., Broutman, L.J., Analysis and Performance of Fiber Composites, Second Edition, John Wiley and Sons, Inc., 1990.
[8] Arnold, J.C., Hutchings, I.M., Erosive wear of rubber by solid particles at normal incidence, Wear, 1993, 161, 213-221.
[9] Arnold, J.C., Hutchings, I.M., A model for the erosive wear of rubber at oblique impact angles, Journal of Physics D: Applied Physics, 1992, 25, 222-229.



AN OVERVIEW OF SAFETY IN WELDING
Arbind Prasad1, Amir Shaikh2

1 Research Scholar, 2 Professor, Department of Mechanical Engineering, Graphic Era University, Dehradun 248002, Uttarakhand, India.
1 arbind.geit@gmail.com

ABSTRACT

The fabrication process which can permanently join two different materials with the application of heat, pressure or both is known as welding. It has now become one of the basic industrial operations and has evolved to such a great extent that welding can be performed even under water or in space. There are several types of welding, each with its own pros and cons. One of the factors which must be considered during welding is safety. Industrial safety laws may prescribe the use of protective gadgets to avoid harm to the welder, but that alone is not enough to prevent the short- or long-term effects faced by the welder and by the environment. In this article we take a step towards identifying the various drawbacks of the existing and widely applied welding techniques. By doing so, we can try to find a suitable welding technique which has the lowest possible ill effects, or may come across a technique with no possibility of ill effects.

Key words: Safety, Health hazards, Radiation, MIG, TIG, SMAW, SAW, PAW, Explosimeter.
1. INTRODUCTION

Approximately 75 different welding processes and techniques are being used commercially. These include
Submerged Arc Welding (SAW), Metal Inert Gas (MIG), Tungsten Inert Gas (TIG), Plasma Arc Welding
(PAW), Gas Welding, Laser Welding etc. Depending on the type, the welding process may liberate fumes, high-intensity light, radiation, electric shocks, etc., which are harmful to the welder and the environment. The ill effects range from minor injuries such as skin rashes, eye irritation, allergies and trips and falls, to fatal outcomes such as third-degree burns and the pollution of pond water that kills fish. In order to have a complete look at the safety precautions in welding processes, it is better to go process-wise. Welding processes are mainly of six types:
1) Arc Welding Process
2) Resistance Welding Process
3) Solid State Welding Process
4) Radiant Welding Process
5) Thermit Welding Process
6) Oxy fuel Gas Welding Process
Various safety points are reviewed process-wise in the later part of the paper.

2. ARC WELDING
In this process an electric arc is used to melt and fuse together the work pieces. An electric arc having a temperature in the range of 3000 °C to 7000 °C is used for the process.

2.1 Electrical Hazards
Even though welding generally uses low voltage, there is still a danger of electric shock, because in wet surroundings voltages of 80 V AC (which may be encountered in the open-circuit condition) can cause fatal shocks; a rough illustration follows the checklist below. Electrical hazards are associated with all electrical equipment, extension lights, electric hand tools, punctured cables, improper grounding of the machine, etc. The electrode and work circuit are electrically energized when the output is on. The input power circuit and machine internal circuit are also electrically energized when the power is on. Prior to installing the arc welder, one should determine whether the present electrical system is adequate to handle the increased load required by the welder; the local power supplier or a qualified electrician can assist in determining this. It is very important for safety that the welder be installed in accordance with the National Electrical Code (NEC) by a qualified electrician. Failure to do so can cause fire, a ground fault or equipment failure. Erroneous grounding of the secondary circuit can cause problems with other electrical equipment in the vicinity, burning out earth wires, slings, ropes, chains or metal pipe work. The following guidelines should be taken care of:
- Check the welding equipment to make certain that the electrode connection and the insulation on the holder and cable are in good condition.
- The outer case or frame of the welding machine should be properly grounded.
- Hands and feet should be kept insulated from the electrode and the work piece by gloves and rubber-soled shoes.
- A safety type disconnection switch shall be located near the machine.
- The welding operation should be performed within the rated capacity of the equipment.
- The welding machine should be protected by a suitably sized fuse or circuit breaker.
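As promised above, a back-of-envelope Ohm's-law illustration of why 80 V AC becomes lethal when wet; the body-resistance figures are rough assumptions that vary widely from person to person:

# Rough Ohm's-law illustration (assumed body resistances, not standard data):
# I = V / R; currents of a few tens of mA AC can already be dangerous.
V_OPEN_CIRCUIT = 80.0  # volts AC
for condition, resistance_ohm in (("dry skin", 100_000.0), ("wet skin", 1_000.0)):
    current_ma = V_OPEN_CIRCUIT / resistance_ohm * 1000.0
    print(f"{condition}: about {current_ma:.1f} mA")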

2.2 Fire And Explosions Hazards
Huge amounts of heat and sparks at high temperatures are produced, which can cause fires and explosions in combustible or flammable materials lying in the vicinity of the welding site. Combustible materials like trash, wood, paper, textiles, plastics, chemicals, liquids and gases must not be placed in areas near the welding site. Things which cannot be removed should be tightly covered with flame-resistant materials. An explosimeter can be used to detect the presence of highly flammable gases. Fire extinguishers should be provided to tackle any fire or explosion.

2.3 Mechanical Hazards
In case a fabrication process involves drilling, milling or similar operations that must be carried out right after a welding operation, the welder may opt to weld in close vicinity of other machines just to reduce the time taken to complete the whole process. This practice leads to freak accidents such as trips, falls and other mishaps. It is suggested to allocate a separate space for welding, with ample light and air flow. There should not be any interruption to the welder or the process. Even welding tools like the chipping hammer should be kept in a specific location in the welding room, to be accessed only when needed.

2.4 Radiation Hazards
During the arc welding process large amounts of radiation are produced, including visible light, ultraviolet rays, infrared rays and X-rays. The human eye can detect only the visible light, not the ultraviolet, infrared or X-rays, which can cause eye damage, blindness, skin burns, serious skin diseases and temporary eye pain lasting 2 to 3 days. Applying dark gray or black matt-finish paint to all helmets, face shields, portable screens etc. can reduce exposure to reflected radiation. In order to safeguard the body, a suitable leather apron can be used. Depending upon the intensity of radiation, different filter shade numbers are recommended, in the range of 2 to 14 for low to high intensity respectively.

2.5 Noise Hazards
Exposure to loud noise can permanently damage a welder's hearing. Workers feel tired, nervous and irritable if exposed to a noisy environment for long periods, and it also leads to stress, increased blood pressure and heart disease. Welding environments are noisy, above 85 dB, and in some particular cases the level can overshoot 108 dB. Such high-level noise requires hearing protection. Noise measurement instruments should be used to check the cumulative noise due to welding, cutting, grinding, chipping, peening and other machining operations which might be going on simultaneously in the welding shop.

2.6 Fumes & Gases
Many welding processes generate fumes and gases which are harmful to the welder's health. Welding vaporizes metals and anything resting on their surface, giving rise to fumes, which are condensed fine particles. Burnt paints, lubricants, metal oxides and alloying elements all lead to fumes, whose nature and quantity depend critically on the various operating parameters. The fumes and gases are produced from the base metal, shielding gases, chemical reactions and consumables. Chromium, silica, nickel, carbon monoxide and ozone are some of the toxic substances present in welding fumes and gases.

2.7 Environmental Effects
The effects on the environment may not be immediate, but they last longer and can be far more disastrous than imagined. Even though environmental damage takes place slowly, the resulting scar could be permanent. Even slight emission of fumes should be taken care of by installing fume extractors, and water used after the welding process should be treated in purification plants. Though such control measures are costly, they are very cheap compared to the ill effects faced at a later stage. We should remember that even though welding fumes and the associated pollution are very small compared to other emissions, they still contribute to acid rain, water contamination and global warming.



3. RESISTANCE WELDING PROCESS

It is the oldest of the electric welding processes. This welding is accomplished by the combination of current, pressure and time. Heating of the parts is caused by the resistance offered by the base material to the flow of electric current. Unlike the arc welding process, resistance welding produces negligible fumes and radiation, but there is a higher risk of mechanical hazards.
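The heat generated at the joint follows Joule's law; as a minimal statement (the standard resistance-welding relation, with symbols not defined in the paper):

Q = I^2 × R × t

where Q is the heat generated, I the welding current, R the contact resistance at the interface and t the duration of current flow. Because Q grows with the square of the current, resistance welding uses very high currents for short times, which is also why loose, high-resistance cable connections (Section 3.2) are themselves a sparking hazard.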

3.1 Mechanical Hazards
In resistance welding there is a risk of crushing the fingers and hands between the electrode tongs. Moving parts, hot metal, tips and linkages can cause injuries to the welding operator. Many accidents are caused by tools falling, so each tool should be kept at a designated safe place.

3.2 Fire & Explosion Hazards
Fire hazards from flying sparks are possible, though lower than in other processes. Where possible, move the work to a location well away from combustible materials. Fire and explosion can also be caused by sparking from a loose connection of the work cable. In resistance welding high current flows through the cables, so electrical equipment and wiring must be installed and insulated properly and should have the recommended circuit protection.

3.3 Electrical Hazards
The electrode and work circuit are electrically energized when the power is on, so touching the electrode and work in this condition can cause electric shock. Shock from a punctured cable or wet gloves is dangerous. Currents higher than 0.1 A (or voltages above about 60 V) can cause severe shock to the human body.

3.4 Fume & Gases
Welding vaporizes metals and anything coated on the surface, such as zinc coating, chromium, paints or oxides of the metal. Because these coatings have lower melting points, they vaporize readily and generate fumes which are harmful to health. The effects of fumes may occur immediately or at a later time.

3.5 Environmental Effects
The environmental effects described in Section 2.7 for arc welding apply equally here: even slight fume emissions should be captured with extractors, and water used after the process should be purified, since such emissions still contribute to acid rain, water contamination and global warming.

4. OXY FUEL GAS WELDING PROCESS

In this welding process, heat is transferred from the flame to the work by forced convection and by radiation. The forced convection is proportional to the gas flow and to the temperature difference between the gas and the work, while the radiation is governed by the Stefan-Boltzmann law. Here a fuel gas is burnt in the presence of oxygen; commonly used fuel gases are acetylene, propane and hydrogen. Acetylene gives the hottest flame and has the highest combustion intensity.
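For reference, minimal statements of the two heat-transfer mechanisms just mentioned (generic textbook forms; the symbols are not defined in the paper):

q_conv = h × (T_gas - T_work)
q_rad = ε × σ × (T_flame^4 - T_work^4), with σ = 5.67 × 10^-8 W m^-2 K^-4

where h is the convective heat transfer coefficient, which rises with the gas flow rate, and ε is the effective emissivity of the flame-to-work exchange.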

4.1 Fire Hazards
The flame possesses a very high temperature, so the process can lead to severe fire hazards. In manual welding, with the torch held in hand, the risk of fire increases to a large extent, so great care needs to be taken to protect against any fire. Fire extinguishers need to be placed in the proximity of the welding site.


4.2 Burn Hazards
Sparks and spatter fly off during welding, and hot metal and sparks burst out during cutting, which may cause burns. Apart from this, radiation produced by the high-temperature flame can cause burns, so leather gloves, safety boots and a cap are strongly recommended for the operator. Welding produces molten droplets of metal which are scattered in all directions, so it is mandatory to wear clothes which will not burn or melt; silky and synthetic clothes must not be worn during welding.


4.3 Leakage Hazards
Fire can be caused by fuel gas leaking through damaged pipes or through the regulator placed at the top of the cylinder. When the leakage is large, the gas can come in contact with oxygen present in the atmosphere and result in serious explosions. If the fire on the cylinder is a small flame, it can be controlled by putting a wet cloth on the cylinder. Different regulators should be used for the fuel gas and for oxygen.

4.4 Accident Hazards
Acetylene is composed of carbon and hydrogen and is produced by the reaction of water and calcium carbide. Great care is required while mixing these constituents, as excessive reaction can lead to higher acetylene pressures in the cylinder, which can cause fatal accidents by bursting. Industrial acetylene cylinders should be kept in a vertical position while being transported in the workshop, and should be stored upright in a well-ventilated, well-protected, dry location. Exposure to high temperature and knocking of acetylene cylinders by falling objects must be avoided, as they may burst due to increased pressure in the cylinder.

4.5 Fume & Gases Hazards
Fumes and gases are generated from the combustion of the fuel gas and from metal oxides, surface coatings etc. The welding site should therefore be properly ventilated, and confined spaces must be avoided.

4.6 Radiation Hazards
The temperature of the gas flame is as high as 6000 °C, so a large amount of radiation is produced, which can cause burns to the skin and damage to the eyes. The body and eyes must therefore be protected by suitable safety equipment.

5. RADIANT ENERGY WELDING

There are mainly two such processes, viz. laser welding and electron beam welding. In laser welding a focused beam of light is used to produce precise, deep-penetration welds, while in electron beam welding a focused beam of electrons is used to produce high-precision welds.

5.1 Eye Protection
Special eye protection equipment must be used, and care must be taken with any reflective surface, since both the original and the reflected beam are extremely dangerous. In some laser systems, ultraviolet light may leak into the work space. Thus the eye wear should provide primary beam protection, secondary radiation protection and also ultraviolet protection.

5.2 Radiation Hazards
Ultraviolet and infrared radiation are produced during welding. These can seriously burn the eyes and skin very quickly.

5.3 Fire Hazards
If the high-energy beam of electron beam welding or laser welding hits flammable material, it causes fire. Even reflected radiation can start fires in unexpected places.

5.4 Fume Hazards
Both these processes vaporize metals, metal coatings, paints, metal oxides etc., producing harmful fumes and gases.

5.5 Electrical Hazards
Since these processes require high electric currents, the chances of electric shock are greater.



Key Points To Remember
- Remove all flammable material from the vicinity of the welding.
- Keep a suitable fire extinguisher nearby at all times.
- Wear eye protection at all times.
- Never weld without adequate ventilation.
- All the circuits should be properly connected & grounded.
- Protect hose pipes & cables against any flying spark & open flame.
- Don't leave hot rejected stubs or steel scrap on the floor.
- Make use of all personal safety equipments as per the guidelines.
- Keep a well stocked First Aid Kit.
- Locate the welding operations so that other workers are not exposed to direct or reflected radiations.
- Don't touch the live electrical parts.
- Read the instruction manual before installing, operating and servicing the equipment.
- Turn off all the equipments when not in use.
- Follow safe work practices when working below overhead activities.
- Safeguard the environment in which we and our children live.

6. CONCLUSION

Welding is associated with intense radiation of invisible infrared and ultraviolet rays, high noise levels, and fumes containing toxic gases and metal vapour. Exposure to any of these harmful rays or gases may lead to severe damage to the eyes, ears, skin, lungs, etc. It is therefore always advisable to take the necessary safety precautions in the welding shop.










MACHINING OF GLASS AND CERAMIC WITH ALUMINA AND
SILICON CARBIDE IN ABRASIVE JET MACHINING

Bhaskar Chandra Kandpal1, Naveen Kumar2, Rahul Kumar2, Rahul Sharma2, Sagar Deswal2

1 Asst. Prof., M.R.C.E, Faridabad
2 Department of Mechanical Engineering, ITM UNIVERSITY, Gurgaon, Haryana, India
1 kandpalbhaskar2000@gmail.com

ABSTRACT

As the world advances technically in the fields of space research and the missile and nuclear industries, very complicated and precise components with special requirements are demanded by these industries. The conventional methods, in spite of recent advancements, are inadequate to machine such materials from the standpoint of accuracy, precision and economic production. Metals like Hastelloy, Nitralloy, Nimonics and many other hard-to-machine materials cannot be machined by conventional methods and require special techniques. Abrasive jet machining (AJM) removes material through the action of a focused beam of abrasive-laden gas. Micro abrasive particles are propelled by an inert gas at high velocity. When directed at a work piece, the resulting erosion can be used for cutting, etching, drilling, polishing and cleaning. In this paper, testing and analysis of the various process parameters of abrasive jet machining are presented.

Keywords: Abrasive jet machining, Erosion rate, Glass

1. INTRODUCTION
Abrasive machining is a machining process where material is removed from a work piece using a multitude of small abrasive particles. Common examples include grinding, honing, and polishing. Abrasive processes are usually expensive, but capable of tighter tolerances and better surface finish than other machining processes. A literature study of Abrasive Jet Machining [1-7] reveals that the process was introduced a few decades ago, and to date there have been thorough and detailed experimental and theoretical studies of it. Most of the studies examine the hydrodynamic characteristics of abrasive jets, ascertaining the influence of all operational variables on the process
effectiveness including abrasive type, size and concentration, impact speed and angle of impingement. Other
papers address problems concerning carrier gas typologies, nozzle shape, size and wear, jet velocity and
pressure, stand off distance (SOD) or nozzle tip distance (NTD). These papers express the overall process
performance in terms of material removal rate, geometrical tolerances and surface finishing of work pieces, as
well as in terms of nozzle wear rate. Finally, there are several significant and important papers which focus on
either leading process mechanisms in machining of both ductile and brittle materials, or on the development of
systematic experimental statistical approaches and artificial neural networks to predict the relationship between
the settings of operational variables and the machining rate and accuracy in surface finishing. In recent years
abrasive jet machining has been gaining increasing acceptability for deburring applications. AJM deburring has the advantage over manual deburring that it generates an edge radius automatically, which increases the quality of the deburred components. The removal of the burr and the generation of a convex edge were found to vary as a function of jet height and impingement angle at a fixed SOD. The influence of the other parameters, viz. nozzle pressure, mixing ratio and abrasive size, is insignificant. The SOD was found to be the most influential factor on the size of the radius generated at the edges, and the edge radius generated was found to be limited by the burr root thickness.
Abrasive jet finishing combined with grinding gives rise to a precision finishing process called integrated manufacturing technology, in which a slurry of abrasive and liquid solvent is injected into the grinding zone between the grinding wheel and the work surface under a no-radial-feed condition. The abrasive particles are driven and energized by the rotating grinding wheel and the liquid hydrodynamic pressure, and the increased slurry speed between the grinding wheel and the work surface achieves micro-removal finishing. Abrasive water jet machines are becoming more
widely used in mechanical machining. These machines offer great advantages in machining complex
geometrical parts in almost any material. This ability to machine hard to machine materials, combined with
advancements in both the hardware and software used in water jet machining has caused the technology to
spread and become more widely used in industry. New developments in high pressure pumps provide more
hydraulic power at the cutting head, significantly increasing the cutting performance of the machine. Analyses of the economic and technical aspects have been done by researchers. These technology advancements in applying higher
power machining and intelligent software control have proven to significantly improve the overall performance
of the abrasive water jet machining operation, thus widening the scope of possible applications of this
innovative and promising technology. Quality of the surface produced during abrasive water jet machining of
aluminum has been investigated in recent years. The type of abrasive used was garnet of mesh size 80. The
cutting variables were the stand-off distance of the nozzle from the work surface, the work feed rate and the jet pressure.
The evaluating criteria of the surface produced were width of cut, taper of the cut slot and work surface
roughness. It was found that in order to minimize the width of cut; the nozzle should be placed close to the work
surface. Increase in jet pressure results in widening of the cut slot both at the top and at exit of the jet from the
work. However, the width of cut at the bottom (exit) was always found to be larger than that at the top. It was
found that the taper of cut gradually reduces with increase in standoff distance and was close to zero at the stand
off distance of 4 mm. The jet pressure does not show significant influence on the taper angle within the range of
work feed and the stand off distance considered. Both stand off distance and the work feed rate show strong
influence on the roughness of the machined surface. Increase in jet pressure shows positive effect in terms of
smoothness of the machined surface. With increase in jet pressure, the surface roughness decreases. This is due
to fragmentation of the abrasive particles into smaller sizes at a higher pressure and due to the fact that smaller
particles produce smoother surface. So within the jet pressure considered, the work surface is smoother near the
top surface and gradually it becomes rougher at higher depths.

Fig 1 Layout of AJM

Table 1 Abrasive jet machine characteristics

Mechanics of metal removal | Brittle fracture by impinging abrasive grains at high speed
Carrier gas                | Air, carbon dioxide
Abrasives                  | Alumina, SiC
Pressure                   | 2-10 atm
Nozzle                     | WC, sapphire
Critical parameters        | Abrasive flow rate and velocity, nozzle tip distance, abrasive grain size
Material application       | Hard and brittle metals, alloys, and non-metallic materials

2. EXPERIMENTAL WORK

Experiments were conducted to confirm the validity of our results as well as the theoretical results. The experimental work was carried out on a test rig manufactured in the workshops of ITM University, Gurgaon, Haryana, India. Abrasive grits such as aluminum oxide and silicon carbide were mixed with the air stream ahead of the nozzle, and the abrasive flow rate was kept constant throughout the machining process. The jet nozzle was made of tool steel for high wear resistance. Drilling of glass sheets and ceramic plates was conducted by setting up the test rig as shown in Figure 2. Glass and ceramic tiles were used as work piece materials because of their homogeneous properties. The test specimens were cut into square and rectangular shapes for machining on the AJM unit. Before machining, the initial weights of the glass plates and ceramic tiles were measured with the help of a digital balance; after machining, the final weights were measured in the same way to calculate the material removal rate. In our machine the movement of specimens in the x-y directions is provided by a cross slide, and in the z direction by a worm and worm-wheel drive. First the abrasive, alumina in powder form, was fed into the hopper carefully. After that, the compressor connections were checked. The glass specimen was properly clamped on the cross slide with the help of various clamps. As the compressor was switched on, the hopper gate valve was opened so that abrasive grains were mixed with the air jet coming from the compressor and focused on the specimen with the help of the nozzle. The same experiments were then repeated with silicon carbide as the abrasive.
Different readings were taken using different process parameters on glass plates and ceramic tiles of different thicknesses, and all results were tabulated. All results were also compared with the theoretical results to check their validity, and were used to plot the various graphs shown here.

Fig. 2 Schematic layout of the abrasive jet machine test rig













3. RESULTS
A. Experimental Results
Graph 1: MRR vs. NTD, using silica as abrasive and glass as workpiece
Graph 2: MRR vs. NTD, using silicon carbide as abrasive and ceramic as workpiece
Graph 3: MRR vs. stand-off distance, using silicon carbide as abrasive and glass as workpiece

Graph 4: MRR vs. pressure, using silicon carbide as abrasive and glass as workpiece
Graph 5: MRR vs. NTD, using 60 mesh size abrasive
Graph 6: MRR vs. pressure, using 60 mesh size abrasive

Graph 7: MRR vs. pressure, using alumina as abrasive

B. VALIDATION AND DISCUSSION OF RESULTS

Graph 8: Effect of pressure on material removal rate

From Graph 8, between the material removal rate and the pressure, we can say that the material removal rate is directly proportional to the pressure: it increases as the pressure is increased. When this graph is compared with the theoretical one, the rate of increase of MRR with pressure in the actual graph is greater than in the theoretical graph.
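For context, a theoretical form commonly quoted for AJM of brittle materials in non-traditional-machining texts (the paper does not state which model its theoretical curves are based on) is

Q ∝ Z × d^3 × v^(3/2) × (ρ / H_w)^(3/4)

where Q is the volumetric removal rate, Z the number of abrasive particles impacting per unit time, d the mean grit diameter, v the particle velocity, ρ the abrasive density and H_w the workpiece hardness. Since a higher supply pressure raises the particle velocity v, this form is consistent with the rising MRR-pressure trend of Graph 8.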









Graph 9: MRR vs. nozzle tip (stand-off) distance
From this graph we can say that the material removal rate initially varies linearly with the stand-off distance, but after a limit it starts diminishing. The rate of increase is the same in both cases, but the rate of decrease beyond the peak is greater in the theoretical graph than in the actual one.

Graph 10: MRR vs. abrasive particle size
From Graph 10 we can say that the material removal rate varies with the abrasive particle size: as the abrasive particle size increases, the material removal rate also increases, and the theoretical graphs show the same trend, so for MRR versus abrasive size the theoretical and actual graphs agree. Samples of work pieces machined on the AJM are shown in Fig. 3.


Fig. 3 Samples of glass plate and ceramic plate cut by AJM



4. CONCLUSIONS
This paper has presented the results of experiments conducted by changing the pressure and the nozzle tip distance on glass plates and ceramic plates of different thicknesses. The effect of the AJM process parameters on the material removal rate (MRR) was measured, graphs were plotted, and these were compared with the theoretical results. It was observed that as the nozzle tip distance increases, the material removal rate increases, as is generally observed in the abrasive jet machining process. As the pressure increases, the material removal rate also increases. Similarly, as the abrasive particle size increases, MRR increases.
REFERENCES
[1] Ghobeity, A., Spelt, J.K., Papini, M. (2008), Abrasive jet micro-machining of planar areas and transitional slopes, Journal of Micromechanics and Microengineering, Volume 18, Issue 5.
[2] Wakuda, M., Yamauchi, Y., Kanzaki, S. (2002), Effect of workpiece properties on machinability in abrasive jet machining of ceramic materials, Precision Engineering, Volume 26, Issue 2, pp 193-198.
[3] Balasubramaniam, R., Krishnan, J., Ramakrishnan, N. (1999), An experimental study on the abrasive jet deburring of cross-drilled holes, Journal of Materials Processing Technology, Volume 91, Issues 1-3, pp 178-182.
[4] Balasubramaniam, R., Krishnan, J., Ramakrishnan, N. (2002), A study on the shape of the surface generated by abrasive jet machining, Journal of Materials Processing Technology, Volume 121, Issue 1, 14 February 2002, pp 102-106.
[5] Muju, M.K., Pathak, A.K. (1988), Abrasive jet machining of glass at low temperature, Journal of Mechanical Working Technology, Volume 17, pp 325-332.
[6] Verma, P., Lal, G.K. (1984), An experimental study of abrasive jet machining, International Journal of Machine Tool Design and Research, Volume 24, Issue 1, pp 19-29.
[7] El-Domiaty, A., Abd El-Hafez, H.M., Shaker, M.A. (2009), Drilling of glass sheets by abrasive jet machining, World Academy of Science, Engineering and Technology, 56.





COMPARATIVE STUDY OF EDM DRILLING WITH RAM TYPE EDM
J.S. Tiwana1, Gaurav Kumar2, Narayan Garg3

1 Associate Professor, GZSCET, Bathinda
2 Lecturer, GTBKPC, Chhapianwali, Malout
3 Lecturer, ACET, Barnala
1 jstiwana1@rediffmail.com, 2 gaurav1588@rediffmail.com, 3 ernarayangarg@gmail.com


ABSTRACT

With the rapid technological acceptance of Tungsten Carbide in industrial application, the machining of the
Tungsten Carbide has been very important for manufacturing engineers. Machining of Tungsten Carbide has
still remained as a major problem in the area of material machining. Hence a research investigation is needed
for searching out a suitable non- conventional machining process for proper machining of Tungsten Carbide.
Keeping in view, Electro Discharge Machining (EDM) process is considered for generation of holes. This paper
presents the results of comparative study of EDM Drilling with Ram type EDM. The difference of parameters
used on these two machines affects the time period of the operation. This paper also describes the metal removal
during generation of holes. The tool used for EDM drilling is a thin brass tube which has one or more holes
inside from which the flushing fluid flows. The practical test results of Drilling holes in Tungsten Carbide on
EDM Drilling as compared to Ram type EDM will help to manufacturing engineers to increase productivity.
The EDM Drilling Parameters are optimized for achieving an enhanced production rate with improved
accuracy and surface finish.

KEYWORDS: EDM Drilling, Machining Time, Machining Parameters
INTRODUCTION
With the growth of industry and technology, newly developed hard and difficult-to-machine materials have found
wide application in aerospace, nuclear engineering and other engineering fields. Newly developed materials such as
composites and high-tech ceramics have good mechanical and thermal properties as well as sufficient electrical
conductivity, so they can be machined by spark erosion. To machine these types of materials, EDM machines are
employed. The problems related to high complexity in shape and size, and the demand for higher accuracy and
surface finish, can be solved through non-traditional methods. In many manufacturing industries, drilling, milling,
grinding and other traditional machining operations have been replaced by EDM.
The electrical discharge machine drill is a non-traditional machining method. It works on the principle that
thermal energy is used to machine electrically conductive materials irrespective of their hardness and toughness,
which makes it applicable not only in manufacturing industries but also in the fields of medicine and automotive
engineering; it is also used for producing micro-holes in diesel fuel injection nozzles and holes in turbine blades,
with diameters of less than 200 µm. Holes drilled by conventional EDM develop a neck: as the hole deepens the
gap increases, the electrode moves forward to maintain the gap, and the result is a hole with a neck. This problem
is eliminated by the use of the EDM drill, because high-pressure dielectric fluid is supplied to the gap through the
bore of the tubular electrode.
Material removal takes place by melting and vaporization. A small gap of about 0.025 mm is maintained
between the tool and the work piece by a servomotor system. The tool is made the cathode and the work piece
the anode. During the process, electrical energy is converted into thermal energy as sparks are generated between
the work piece and the tool. Each spark produces a tiny crater in the material by melting and eroding the work
piece to the shape of the tool. One of the important advantages of EDM drilling is its speed and accuracy.

The main difference between the ram-type EDM and EDM drilling is the use of a high-pressure (70-100 bar)
dielectric pump. The combination of the high-pressure dielectric fluid, the rotation of the tubular electrode and
the high electrode feed rate makes it possible to produce holes at a very fast rate.
EDM performance also depends on a number of other factors, including the electrical parameters, electrode
geometry and dielectric flushing. EDM is also called a non-contact electro-thermal process.
ELECTRO DISCHARGE MACHINING (EDM)
EDM is a non-conventional machining process. It uses thermal energy for material removal by melting and
vaporization of the work piece. The heat used for melting and vaporization is generated by a series of sparks
between two electrodes separated by a gap of 0.025-0.05 mm. The process can be used for any electrically
conductive material, irrespective of its hardness and toughness.
EDM RAM TYPE

The working principle of EDM is the removal of material by the rapid action of electric sparks taking place
between the tool and the work piece. There is no direct contact between the tool and the work piece; a thin gap
of 0.05 mm, known as the spark gap, is maintained between them by a servomotor. Both the tool and the work
piece are submerged in the dielectric fluid.



The figure shows the electrical setup of the electric discharge machine. The tool is made the cathode and the
work piece the anode, both submerged in an insulating liquid such as a dielectric fluid. The electrode and work
piece are connected to a suitable power supply, which generates an electrical potential between the two parts.
When the voltage across the gap becomes sufficiently high, dielectric breakdown occurs in the fluid, forming a
plasma channel, and a small spark jumps. These sparks occur in huge numbers at seemingly random locations
between the electrode and the work piece. As the base metal is eroded and the spark gap subsequently
increases, the electrode is lowered automatically by the machine's servomotor. Several hundred thousand
sparks occur per second, with the actual duty cycle carefully controlled by the setup parameters. These
controlling cycles are sometimes known as "on time" and "off time".
EDM DRILL MACHINE
The EDM drill, also known as fast-hole EDM drilling or a 'hole popper', uses a hollow electrode to drill holes.
Any material that conducts electricity can be drilled, whether hard or soft, e.g. carbide, aluminium or titanium.
The principle of the EDM drill is the same as that of other EDM processes: material removal is achieved by
sparks between the electrode and the work piece, with the associated melting and vaporization caused by high
temperatures. There is no physical contact between the work piece and the electrode, and the small gap
maintained between them by the servomotor is known as the spark gap.

EDM fast-hole drilling plays a significant role in the aerospace industry. It is one of the few manufacturing
processes that can be applied to the drilling of precision small holes in a number of parts, including turbine
blades.
IMPORTANT PARAMETERS OF EDM
Spark on-time (pulse time or T-on): The duration of time (µs) the current is allowed to flow per cycle.
Material removal is directly proportional to the amount of energy applied during this on-time; this energy is
controlled by the peak current and the length of the on-time. The on-time setting determines the duration of
the spark: a longer on-time produces a deeper cavity for that spark and all subsequent sparks in that cycle,
creating a rougher finish on the work piece.
Spark off-time (pause time or T-off): The duration of time (µs) between sparks (that is to say, between
on-times). This time allows the molten material to solidify and be washed out of the arc gap. This parameter
affects the speed and stability of the cut: if the off-time is too short, the sparks will become unstable. The
off-time is the period during which one spark is replaced by another; a longer off-time, for example, allows the
flushing of dielectric fluid through a nozzle to clean out the eroded debris, thereby avoiding a short circuit.
These settings are specified in microseconds.
Arc gap (or gap): The arc gap is the distance between the electrode and the work piece during the EDM
process. It may also be called the spark gap, and it is maintained by the servo system.
Discharge current (I_p): The current, measured in amperes, allowed to flow per cycle. The discharge current
is directly proportional to the material removal rate.
Duty cycle (τ): The percentage of the on-time relative to the total cycle time. This parameter is calculated by
dividing the on-time by the total cycle time (on-time + off-time); a numeric sketch follows this parameter list.
Voltage (V): It affects the material removal rate. The voltage used in this experiment is 50 V.
Diameter of electrode (D): The electrode is a Cu tube; two different diameters, 1 mm and 2 mm, are used in
this experiment. The tube serves not only as the electrode but also for internal flushing.
Overcut: The clearance per side between the electrode and the work piece after the machining operation.
Width: The throwing of a spark in a specific time; it changes the frequency of the sparks.
Frequency: Frequency and width are inversely related. The machining time can be decreased by increasing
the frequency, but in that case the surface quality decreases correspondingly.
Polarity: This setting changes the polarity of the electrode or of the work piece material.
Regulation: Regulation is the parameter which controls the gap and gain.
Gap: The gap setting adjusts the distance between the work piece and the electrode. If the gap is set too high,
the number of sparks decreases and the process time increases; if the gap is set too low, the number of sparks
increases and the process time decreases, but the surface quality also decreases.
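Two of the relationships above can be made concrete with a short numeric sketch: the energy per discharge grows with the on-time (E = V_gap x I_p x T_on), and the duty cycle is the on-time share of the full cycle. All numbers below are illustrative assumptions, not settings from this study.

# Python sketch: discharge energy per spark and duty cycle.
# All numeric values are illustrative assumptions, not settings from this study.

def pulse_energy_mj(v_gap, i_peak, t_on_us):
    """Energy of one discharge in millijoules: E = V * I * Ton."""
    return v_gap * i_peak * t_on_us * 1e-3  # (V * A * us) -> mJ

def duty_cycle_pct(t_on_us, t_off_us):
    """Duty cycle = on-time / (on-time + off-time), as a percentage."""
    return 100.0 * t_on_us / (t_on_us + t_off_us)

print(f"E = {pulse_energy_mj(25.0, 6.0, 50):.1f} mJ per spark")  # 7.5 mJ
print(f"duty cycle = {duty_cycle_pct(50, 100):.1f}%")            # 33.3%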

CASE STUDY

Holes of 1 mm and 2 mm diameter were made in tungsten carbide using a brass tool, as shown in the tables
below. Parameters that were not known beforehand were adjusted with the help of a literature survey, which
also saved the time needed to find optimal parameters.



In the first experiment, a 1 mm hole was drilled on the ram-type EDM with the parameters listed in Table 1.
One more hole was then drilled with a 2 mm diameter, keeping all the other parameters the same, as given below.

Parameters   Hole 1 (1 mm)    Hole 2 (2 mm)
Current      6 amp            6 amp
Frequency    50 Hz            50 Hz
Voltage      220 V            220 V
Gap          0.05 mm          0.05 mm
Time         1 hour 12 min    1 hour 47 min
Table 1




In the second experiment, holes were drilled with the help of the EDM drill machine. The parameters for the
1 mm and 2 mm diameter holes are shown in Table 2 below.

Parameters   Hole 1 (1 mm)    Hole 2 (2 mm)
Current      6 amp            6 amp
Frequency    50 Hz            50 Hz
Voltage      220 V            220 V
Gap          0.05 mm          0.05 mm
Time         7 min 35 sec     10 min 48 sec
Table 2
In the third experiment, holes were drilled with the help of the EDM drill machine with a different value of
current: 10 amp was used instead of 6 amp. The other parameters were kept the same and are listed in Table 3
below.

Parameters   Hole 1 (1 mm)    Hole 2 (2 mm)
Current      10 amp           10 amp
Frequency    50 Hz            50 Hz
Voltage      220 V            220 V
Gap          0.05 mm          0.05 mm
Time         1 min 12 sec     2 min 9 sec
Table 3
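To make the comparison concrete, the sketch below simply tabulates the measured times from Tables 1-3 in minutes and computes the speed-up of the EDM drill over the ram-type EDM; this is post-processing of the reported data, not part of the original experiments.

# Python sketch: drilling times from Tables 1-3 (converted to minutes) and the
# resulting speed-up of the EDM drill over ram-type EDM at the same 6 A setting.

times_min = {
    ("Ram EDM, 6 A",    "1 mm"): 72.0,         # 1 h 12 min
    ("Ram EDM, 6 A",    "2 mm"): 107.0,        # 1 h 47 min
    ("EDM drill, 6 A",  "1 mm"): 7 + 35 / 60,  # 7 min 35 s
    ("EDM drill, 6 A",  "2 mm"): 10 + 48 / 60, # 10 min 48 s
    ("EDM drill, 10 A", "1 mm"): 1 + 12 / 60,  # 1 min 12 s
    ("EDM drill, 10 A", "2 mm"): 2 + 9 / 60,   # 2 min 9 s
}

for hole in ("1 mm", "2 mm"):
    speedup = times_min[("Ram EDM, 6 A", hole)] / times_min[("EDM drill, 6 A", hole)]
    print(f"{hole} hole: EDM drill is about {speedup:.0f}x faster at 6 A")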
DISCUSSION
This work focuses on data analysis, since the processes were investigated experimentally: the machining of
tungsten carbide was studied experimentally on a ram-type EDM and on an EDM drill machine, and the
time-based results of these studies are included in this paper. The objective of the research is to machine
tungsten carbide in a short span of time with optimum machining parameters. The results show that the drilling
operation can be carried out in a very small time span on the EDM drilling machine as compared to the ram-type
EDM. The results also show that the process parameters affect the machining time, shape and surface quality of
the holes. Machining time and surface quality of the hole are directly related: when we try to reduce the
machining time by increasing the current supplied to the EDM drilling machine, the surface quality of the hole
also decreases.

CONCLUSIONS
To sum up, the aim of this work was to investigate the EDM processes and to find a solution for machining
tungsten carbide with the help of experiments and research. The EDM drilling machine turns out to be the better
option. In addition, the machining time and the effect of current were observed and understood, so that they can
be adjusted according to the requirements of the job.


REFERENCES
[1] B. Yavru, O. S. Karadag, T. Savas, M. A. Cigdem, V. O. Erdolu, A. Erden and A. K. Eroglu, "Micro-EDM and Ram Type EDM Set-up", Mechatronics Engineering Department, Atilim University, Incek/Ankara/Turkey.
[2] N. L. Gupta, "Optimization of Micro-Wire EDM Operation Using Grey Taguchi Method", Department of Mechanical Engineering, National Institute of Technology, Rourkela, May 2011, pp. 1-7.
[3] Shailesh Kumar Dewangan, "Experimental Investigation of Machining Parameters for EDM Using U-Shaped Electrode of AISI P20 Tool Steel", Department of Mechanical Engineering, National Institute of Technology, Rourkela (India), 2010.
[4] S. Dhar, R. Purohit, N. Saini, A. Sharma and G. H. Kumar (2007), "Mathematical modeling of electric discharge machining of cast Al-4Cu-6Si alloy-10 wt.% SiCp composites", Journal of Materials Processing Technology, 193(1-3), pp. 24-29.
[5] K. H. Ho and S. T. Newman, "State of the art electrical discharge machining (EDM)", International Journal of Machine Tools & Manufacture, 43 (2003).
[6] S. S. Mohapatra and Amar Pattnaik, "Optimization of WEDM process parameters using Taguchi method", International Journal of Advanced Manufacturing Technology (2006).
[7] S. Singh, S. Maheshwari and P. Pandey (2004), "Some investigations into the electric discharge machining of hardened tool steel using different electrode materials", Journal of Materials Processing Technology, 149(1-3), pp. 272-277.
[8] Amitabha Ghosh and A. K. Mallik, Manufacturing Science, East-West Press Private Ltd., 2005.
[9] R. K. Jain, Production Engineering Technology.
[10] j-gate.informindia.co.in











PRODUCT DEVELOPMENT PROCESS IN CURRENT ERA

Chirag D Suthar

Northumbria University, Newcastle, UK
contact.chirag.1982@gmail.com


ABSTRACT

The product development process plays a significant role in the success of new product development (NPD).
The product development process should be able to manufacture a new product as per the design specifications
to meet stringent service conditions. It should also meet the customer's spoken and unspoken needs. In the
current global era, customer satisfaction plays an important role in product success; thus the product
development process should be able to take up the various challenges of global competition.

The present paper describes a product development process using Quality Function Deployment (QFD).
QFD has been used successfully to capture and deploy customer requirements.

Keywords: Quality function deployment (QFD)


INTRODUCTION

In this materialistic world, most of the time we satisfy our needs with items called products.
Developing quality products which satisfy customer needs involves many steps: the study of customer needs,
surveys, technological and economic factors, market trends, reliability, safety, etc. In a country like ours, which
is at a developing stage and where a large share of the population lives below the poverty line, the prices of
products should be economical for the end users. Organizations also plan to develop products with optimum
profit and a quality acceptable to the users.
In an organization, the activities of different departments pursuing a common vision for developing a new
product are together called the product development process. If the efforts for developing products are not
organized in a defined way, much energy is wasted and organizations ultimately lose time and money in many
cases. If the qualitative energies are directed towards a defined goal and the efforts of individual team members
are not duplicated, the probability of the product's success increases. The current era is an era of competition.
The product development process encompasses steps such as basic and applied research, design and
development, market research, marketing, planning, production, distribution and after-sales service.

STRUCTURE OF INNOVATION PROCESS

When we intend to improve any process in product development, we first study the traditional approaches.
We trust these approaches because they have been used for years. But the world is continuously changing, and
for survival and growth we must change accordingly; we can say change is the norm. Changes in the processes
must be made after a thorough study of their advantages and disadvantages.

If we want to improve a process in product development, we need to study each team member's activity,
the part each individual plays in product development, and find ways to organize these activities.

Proc. National Conference on Recent Trends in Mechanical Engineering 2011 Page 300


FIG-1 STRUCTURE OF INNOVATION PROCESS
(PRODUCT DESIGN FUNDAMENTALS AND METHODS BY N.F.M ROOZENBURG, J EEKELS)

Fig-1 shows the basic approach to the innovation process. At the initial stage of product development,
companies must decide what they want to achieve.



FIG-2 INNOVATION PROCESS FOR PRODUCT DEVELOPMENT COMPANY

If we study the policies of the company, we can know what the company/department wants to achieve. Defining
goals is not sufficient for improvement; in the next step we need to study strategies for fulfilling the goals.
Product development and product-to-market strategies are the most important of all strategies. Many companies
lose their credit in the market if they fail at the product-to-market point. For improvement, ideas must be
generated by each team member; we call this phase idea finding. Deciding on an executable idea out of all the
generated ideas is also an important step, with the help of which a detailed plan for new business activities is
made. Fig-2 shows the innovation process, which can be applied for process improvement in optimizing a product.


Factors Affecting Process Improvement and the Project


Fig-3 Factors which affect the development process in the organization

As shown in Fig-3, various parameters affect the project and process improvement. Our intention is always to
reduce the time of product development, from concept to final dispatch of the product to the market, while
improving the development process. These processes are related to various technological trends and regulatory,
cultural and economic factors. For example, in the current era we have software such as SAP, Teamcenter and
GDMS (global database management system) which link different departments of the company, so that all team
members of different departments can view updated product information. Automation of some of the processes
is possible, and the work of one team can be used by other teams as ready-made information.


Fig-4 Process of information sharing in the product development company
The figure above shows the process of information sharing in an organization.

QFD (QUALITY FUNCTION DEPLOYMENT)

QFD was first proposed by Akao in the late 1960s. It is a concept and mechanism for translating the 'voice
of the customer' through the various stages of product planning, engineering and manufacturing into a final
product. The basic aim of QFD is to translate customer needs into product design within the time required by
the customer. To relate the variables associated with customer attributes to the variables associated with
engineering characteristics, a chart called the house of quality is used.
A house of quality contains information on what to do (marketing), how to do it (engineering), how
competitors do it (benchmarking), and the integration of this information. Using the house of quality, the
design team first selects the customer attributes in which the company is weak compared to its competitors.
The team then finds the engineering attributes which affect those customer needs. In this process, the overall
goal of the team is to improve the current product development process, considering the quality of the product
and timely deliverables to the customer; a minimal numeric sketch follows.
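The sketch below illustrates the house-of-quality arithmetic in miniature: customer attributes carry priority weights, a relationship matrix scores how strongly each engineering characteristic serves each attribute (9 = strong, 3 = medium, 1 = weak), and the weighted column sums rank the engineering characteristics. The weights and scores here are illustrative assumptions, not data from the study.

# Python sketch of house-of-quality scoring; all weights and relationship
# scores below are illustrative assumptions.

customer = {"safety": 5, "cost": 4, "long life": 4}   # attribute -> priority
eng_chars = ["capacity", "functional testing", "design process"]

# relationship[attribute] = scores against each engineering characteristic
relationship = {
    "safety":    [3, 9, 9],
    "cost":      [9, 1, 3],
    "long life": [3, 3, 9],
}

# weighted column sums give the relative importance of each characteristic
importance = [
    sum(customer[attr] * row[j] for attr, row in relationship.items())
    for j in range(len(eng_chars))
]
for name, score in sorted(zip(eng_chars, importance), key=lambda t: -t[1]):
    print(f"{name}: {score}")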





Fig-5 QFD Analysis for material handling company

As shown in Fig-5, the customer gives safety, cost and long life the highest priority for material handling
products. From the company's point of view, it is always in the process of improving capacity, functional
testing, product evaluation and the design process, and keeping less inventory in the company, in order to
optimize the cost of the product.



CONCLUSIONS

While designing a product, not only is product quality considered; the process for developing the product
plays a crucial role. The product development process is closely related to many upcoming technologies, and
modifying the development process in line with the prevailing latest technologies may help to reduce
development time while improving the overall quality of the product.
In the present work, an attempt has been made to evolve a new NPD methodology in which the Analytic
Hierarchy Process and Quality Function Deployment are used. QFD offers a more systematic comparison
between the various attributes of the product; thus each part may be systematically compared with the whole
product and designed systematically. QFD makes it feasible to incorporate customers' likes and dislikes.
QFD may also be employed to compare competitors' products with respect to the various attributes of the
product, which helps in the mass customization of the product.

REFERENCES

[1] Nobuoki Ohtani, S. D. and S. O. (1997), Japanese Design and Development, Hampshire: Gower Publishing Limited.
[2] Tischner, M. C. and U. (2001), Sustainable Solutions, Sheffield, UK: Greenleaf Publishing Limited.
[3] P. G. Maropoulos, D. C., 'Design Verification and Validation in Product Lifecycle', CIRP Annals - Manufacturing Technology.
[4] Alaa Hassan, A. S., Jean-Yves Dantan and Patrick Martin (2009), 'Conceptual process planning - an improved approach using QFD, FMEA and ABC methods', Robotics and Computer-Integrated Manufacturing.
[5] C. P. M. Govers, 'What and how about quality function deployment (QFD)', Production Economics.
[6] De Wit and Meyer (2004), Strategy Process, Content, Context (3rd edition), Chapter 7.
[7] G. Hamel, Y. Doz and C. K. Prahalad, 'Collaborate with your Competitors and Win', Harvard Business Review, Jan-Feb 1989, Vol. 67; also in De Wit and Meyer (2004), Strategy Process, Content, Context (3rd edition), Reading 7.1, pp. 383-387.
[8] G. Lorenzoni and C. Baden-Fuller, 'Creating a Strategic Centre to Manage a Web of Partners', California Management Review, Vol. 37, No. 3; also in De Wit and Meyer (2004), Strategy Process, Content, Context (3rd edition), Reading 7.2, pp. 388-396.
[9] J. Heikkila and C. Cordon (2002), 'Outsourcing: A Core or Non-core Strategic Management Decision', Strategic Change, Vol. 11, pp. 183-193.



OPTIMIZATION OF MACHINING PARAMETERS OF WIRE ELECTRIC DISCHARGE
MACHINING DURING FABRICATION OF PUNCHING DIE

1Singh Jitender, 2Kumar Dhiraj, 3Chand Krishan
1Jiturohilla24@gmail.com, 2Dhirajkuamr410@gmail.com


ABSTRACT

As newer and more exotic materials have been developed in the past few decades, conventional machining
operations tend to reach their limits when relatively complicated shapes are required to be manufactured. This
work presents an investigation into the effect of machining parameters on, and their optimization for, cutting
speed and dimensional accuracy in wire electric discharge machining (WEDM) of die steel, which is commonly
used in the manufacture of dies and punches and is heat treated to raise its hardness to 55 RC. Taguchi's design
of experiments is used for conducting the experiments and analyzing the data, with an L18 orthogonal array
used to carry out the experiments. The process parameters selected in this research work are discharge current,
pulse-on time, pulse-off time, wire speed and wire tension. In this study a die steel plate has been machined to
manufacture a die, which is to be used to produce a punching die of 5 mm width. The utility concept has been
used to optimize the wire electrical discharge machining process with multiple objectives. Confirmation
experiments show that the errors associated with the cutting speed and the die width are 2% and 0%, respectively.

Keywords: WEDM - Wire Electric Discharge Machining; RC - Rockwell hardness
INTRODUCTION
The WEDM process, with a thin wire as an electrode, transforms electrical energy into thermal energy for
cutting materials. With this process, alloy steels, conductive ceramics and aerospace materials can be machined
irrespective of their hardness and toughness. Furthermore, WEDM is capable of producing a fine, precise,
corrosion- and wear-resistant surface. WEDM has tremendous potential in its applicability in the present-day
metal cutting industry for achieving considerable dimensional accuracy, surface finish and contour generation
features of products or parts. Moreover, the cost of wire contributes only 10% of the operating cost of the
WEDM process. The difficulties encountered in die-sinking EDM are avoided by WEDM, because the
complex-shaped tool is replaced by a moving conductive wire and the relative movement of the wire guides.

Fig.1. Diagram of the Basic Principle of WEDM Process
A series of electrical pulses generated by the pulse generator unit is applied between the work piece and the
travelling wire electrode to cause electro-erosion of the work piece material. As the process proceeds, the
X-Y controller displaces the worktable carrying the work piece transversely along a predetermined path
programmed in the controller. While the machining operation is continuous, the machining zone is
continuously flushed with water passing through nozzles on both sides of the work piece. Since water is used
as the dielectric medium, it is very important that the water does not ionize; therefore, an ion exchange resin is
used in the dielectric distribution system to maintain the conductivity of the water. The mechanism of metal
removal in wire electrical discharge machining mainly involves the removal of material due to melting and
vaporization caused by the electric spark discharge generated by a pulsating direct current power supply
between the electrodes. In WEDM, the negative electrode is the continuously moving wire and the positive
electrode is the work piece. The sparks are generated between the two closely spaced electrodes under the
influence of the dielectric liquid. Water is used as the dielectric in WEDM because of its low viscosity and
rapid cooling rate. The process is ideal for stamping die components; it is often possible to fabricate the punch
as well in the same cut. Other tools and parts with intricate outline shapes, such as lathe form tools, extrusion
dies and flat templates, and almost any complicated shape can be produced.

STATEMENT OF THE PROBLEM
The present work, 'Effect of Process Parameters on Performance Measures of Wire Electrical Discharge
Machining', has been undertaken keeping in consideration the following problems:

It has long been recognized that cutting conditions such as pulse-on time, pulse-off time, servo voltage, peak
current and other machining parameters should be selected to optimize the economics of machining
operations, as assessed by productivity, total manufacturing cost per component or another suitable criterion.

The high cost of numerically controlled machine tools, compared to their conventional counterparts, forces us
to operate these machines as efficiently as possible in order to obtain the required payback.

Predicted optimal solutions may not be achieved practically using the optimal setting of machining parameters
suggested by an optimization technique; therefore, all predicted optimal solutions should be verified
experimentally using the suggested combination of machining parameters.

Objective of Present Investigation
The objectives of the present investigation are as follows:

Optimize the process parameters for single-response optimization using Taguchi's L18 OA during the rough
cut, i.e. to maximize the cutting speed and to achieve the target square width.

Predict the optimal value of each response characteristic corresponding to its optimal parameter setting using
MINITAB 16.

Predict the optimal value of all the response characteristics corresponding to one optimal setting using
MINITAB 16, and compare the experimental results with the predicted optimal values.

TAGUCHI'S PHILOSOPHY
Taguchi's comprehensive system of quality engineering is one of the greatest engineering achievements of
the 20th century. Taguchi's philosophy is founded on the following three simple and fundamental concepts:

Quality should be designed into the product and not inspected into it.

Quality is best achieved by minimizing the deviation from the target; the product or process should be so
designed that it is immune to uncontrollable environmental variables.

The cost of quality should be measured as a function of deviation from the standard, and the losses should be
measured system-wide.





Experimental Design Strategy
Taguchi recommends orthogonal arrays (OA) for laying out experiments. These OAs are generalized Graeco-
Latin squares. To design an experiment is to select the most suitable OA and to assign the parameters and
interactions of interest to the appropriate columns. The use of linear graphs and triangular tables suggested by
Taguchi makes the assignment of parameters simple. The array forces all experimenters to design almost
identical experiments. In the Taguchi method, the results of the experiments are analyzed to achieve one or
more of the following objectives:

To establish the best or the optimum condition for a product or process

To estimate the contribution of individual parameters and interactions

To estimate the response under the optimum condition

The optimum condition is identified by studying the main effects of each of the parameters. The main effects
indicate the general trends of influence of each parameter. The knowledge of the contribution of individual
parameters is key in deciding the nature of control to be established on a production process. The analysis of
variance (ANOVA) is the statistical treatment most commonly applied to the results of the experiments in
determining the percent contribution of each parameter against a stated level of confidence.

Loss Function
The heart of the Taguchi method is his definition of the nebulous and elusive term quality as the
characteristic that avoids loss to the society from the time the product is shipped. Loss is measured in terms of
monetary units and is related to quantifiable product characteristics. Taguchi defines the loss function as a
quantity proportional to the deviation from the nominal quality characteristic:

L(y) = k(y - m)^2 (1.1)

where L = loss in monetary units, m = value at which the characteristic should be set, y = actual value of the
characteristic, and k = a constant depending on the magnitude of the characteristic and the monetary unit
involved.

Fig.2 Taguchi Loss Function

Fig.3 Traditional Approach
The S/N ratio consolidates several repetitions (at least two data points are required) into one value.

The equations for calculating S/N ratios for smaller-the-better (LB), larger-the-better (HB) and nominal-the-best
(NB) types of characteristics are as follows:

1. Larger the better: (S/N)_HB = -10 log(MSD_HB) (1.2)
   where MSD_HB = (1/R) * sum_{j=1}^{R} (1/y_j^2)

2. Smaller the better: (S/N)_LB = -10 log(MSD_LB) (1.3)
   where MSD_LB = (1/R) * sum_{j=1}^{R} y_j^2

3. Nominal the best: (S/N)_NB = -10 log(MSD_NB) (1.4)
   where MSD_NB = (1/R) * sum_{j=1}^{R} (y_j - y_0)^2

Here R is the number of repetitions, y_j the observed value in the j-th repetition and y_0 the target (nominal)
value. The mean squared deviation (MSD) is a statistical quantity that reflects the deviation from the target
value. The expressions for MSD are different for different quality characteristics: for the nominal-is-best
characteristic the standard definition of MSD is used, while for the other two characteristics the definition is
slightly modified.
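A minimal sketch of these three S/N definitions, Eqs. (1.2)-(1.4), is given below; y is a list of R repeated observations and y0 the target value. The sample input is illustrative, not data from this study.

# Python sketch of the Taguchi S/N ratios defined in Eqs. (1.2)-(1.4).
import math

def sn_larger_the_better(y):
    msd = sum(1.0 / yi**2 for yi in y) / len(y)     # MSD_HB
    return -10.0 * math.log10(msd)

def sn_smaller_the_better(y):
    msd = sum(yi**2 for yi in y) / len(y)           # MSD_LB
    return -10.0 * math.log10(msd)

def sn_nominal_the_best(y, y0):
    msd = sum((yi - y0) ** 2 for yi in y) / len(y)  # MSD_NB
    return -10.0 * math.log10(msd)

# Illustrative check: a single observation of 2.0 on a larger-the-better
# characteristic gives S/N = -10*log10(1/2.0^2) = +6.02 dB.
print(round(sn_larger_the_better([2.0]), 2))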
Experimental Procedure
A series of experiments was conducted to study the effects of various machining parameters on the WEDM
process. The effects of the important parameters, viz. discharge current, pulse-on time, pulse-off time, wire
speed and wire tension, on cutting speed and square width were investigated. Taguchi's DOE approach is used
to reduce the number of experiments required. The experiments were carried out with die steel as the work
piece electrode and brass-coated copper wire as the tool electrode. Distilled water was used as the dielectric
fluid throughout the tests.

Fig.4 Workpiece and Punches with confirmation experiment


SCHEME OF EXPERIMENTS
In selecting a particular OA to be used as a matrix for conducting the experiments, the following two points
were first considered, as suggested by Ross (1996):

The number of parameters and interactions of interest


The number of levels for the parameters of interest

Non-linear behaviour, if it exists, among the process parameters can only be studied when more than two
levels of the parameters are used; therefore, each parameter was analyzed at a minimum of three levels. In this
study a mixed type of array is chosen, in which one parameter is set at six levels and the rest are set at three
levels. The degrees of freedom (DOF) associated with each factor equal (number of levels - 1), so the total
DOF required for the five factors is 5 + 4 x 2 = 13. As per Taguchi's method, the total DOF of the selected OA
must be greater than or equal to the total DOF required for the experiment, so an L18 OA (a standard
mixed-level OA) having 17 (= 18 - 1) degrees of freedom was used for the present analysis; a numeric check
of this bookkeeping follows below.

Most tool failures are due to mechanical causes. However, with the variety of tool steels available for
manufacturing metal forming tools, it is often possible to choose a tool steel with a favourable combination of
properties for a particular application. By comparing the levels of metallurgical properties offered by different
steels, tool users can determine which steels are best suited for fixing performance problems or for enhancing
performance.
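As referenced above, the following sketch verifies the degree-of-freedom bookkeeping for the mixed-level design; the factor names follow Table 1 of this paper.

# Python check: DOF of a factor = (levels - 1); the chosen OA must supply at
# least as many DOF as the factors require.

levels = {"A (Discharge Current)": 6, "B (Pulse-ON)": 3,
          "C (Pulse-OFF)": 3, "D (Wire Speed)": 3, "E (Wire Tension)": 3}

required = sum(n - 1 for n in levels.values())
oa_dof = 18 - 1  # an L18 array provides (runs - 1) = 17 DOF

print(f"required DOF = {required}, L18 provides {oa_dof}")
assert required <= oa_dof  # 13 <= 17, so the L18 array is adequate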
Control Factors      Symbols
Discharge Current    A
Pulse-ON             B
Pulse-OFF            C
Wire Speed           D
Wire Tension         E
Table 1. Control parameters and symbols
Fixed Parameters       Specification
Wire type              Brass-coated copper
Angle of cut           Vertical
Work piece thickness   20 mm
Wire compensation      0.14
Work piece hardness    55 RC
Dielectric flow        12 LPM
Table 2. Fixed parameters and their specifications

Table 3. Levels for various control factors
Experimentation
While performing the experiments, the following precautionary measures were taken: (1) each set of
experiments was performed at room temperature within a narrow range (26 ± 2 °C); (2) with continuous usage
of the wire electrode there was a problem of deposition of wire material on the work surface, which reduces
the quality of the specimen, and to minimize this problem high-pressure flushing was used. In this phase, five
process parameters, viz. peak current, pulse-on time, pulse-off time, wire feed and wire tension, were selected
as given in Table 3. Experiments were conducted according to the test conditions specified by the L18 OA
(Table 4). In each of the trial conditions, the cutting speed and square width were measured.

The work piece material used in the investigation is D3 tool steel. The composition of D3 steel is
C = 2.25%, Cr = 12%, V = 0.6%, Mo = 1%, Si = 0.6%, Mn = 0.6%, with the balance Fe. A D3 tool steel plate
of size 120 x 80 x 20 mm was heat treated to raise its hardness to 55 RC. All the faces of the tool steel plate
were ground to remove burrs and rust, so that the wire moves smoothly through the work piece. D3 steel is
also characterized by dimensional stability after hardening and tempering, and high compressive strength. The
success of a metal forming tool depends on optimizing all the factors affecting its performance.

Table 4. L18 Orthogonal Array

ANALYSIS FOR SINGLE RESPONSE OPTIMIZATION
The experiments were planned using the parametric approach of Taguchi's method, and the standard
procedure suggested by Taguchi is employed to analyze the data. The main effects of the process parameters
for raw data and S/N data are plotted. The response curves (main effects) are used to examine the parametric
effect on the response characteristics. The analysis of variance (ANOVA) of the raw data and S/N data is
performed to identify the significant parameters and to quantify their effect on the response characteristics.
The optimal settings of the response characteristics are established by analyzing the response and ANOVA
tables. The analysis of the response data is done with the well-known software MINITAB 16, specifically used
for design-of-experiment applications. In this section, the effect of the independent WEDM process parameters
(peak current, pulse-on time, pulse-off time, wire speed and wire tension) on the selected response
characteristics (cutting speed, die dimensions) is discussed.

Effect on Cutting Speed (CS)
The average value of cutting speed and the S/N ratio for each parameter at different levels have been
calculated using the MINITAB 16 software. The raw data and the plotted graphs for cutting speed are shown
in Table 5 and Figs. 5 and 6.

S.No.
Mean Cutting Speed
SNRA1
1 0.12 -19.3743
2 0.738 -2.98468
3 0.943 -0.57388
4 0.193 -15.4962
5 0.94 -0.82096
6 1.32 2.36961
7 0.123 -18.4453
8 1.07 0.546369
9 1.535 3.378396
10 0.135 -17.6324
11 0.97 -1.32976
12 2.298 6.482108
13 0.315 -17.2529
14 1.245 1.788206
15 2.64 7.987925
16
0.115 -19.1191
17 1.003 -0.13795
18 3.243 10.16247
Table 5. Raw data for cutting speed

Fig. 5. Plotted Graph for Mean Cutting Speed


Fig.6. Plotted Graph for S/N Ratios Cutting Speed
It is clear from the S/N plot that the maximum S/N ratio corresponds to A5, B3, C2, D2 and E2. The predicted
mean value therefore corresponds to these factor levels, but only the significant factors are chosen for the
prediction. The significant factors are chosen from the ANOVA table.
Source   DOF   Seq. SS   Adj. MS   F       P
A        5     1.7059    0.3412    1.44    0.372
B        2     10.0726   5.0363    21.30   0.007
C        2     0.6715    0.3358    1.42    0.342
D        2     0.5215    0.2607    1.10    0.416
E        2     0.1625    0.0813    0.34    0.728
Error    4     0.946     0.2365
Total    17    14.08
Table 6. ANOVA for mean (raw data) for cutting speed
Selection Of Optimum Level
In order to study the significance of the process parameters towards the cutting speed, analysis of variance
(ANOVA) is performed. The ANOVA of the raw data and S/N data is given in Table 6. From the table it is
clear that only the two parameters A and B significantly affect both the mean and the variation in the values
of cutting speed. Cutting speed is a higher-the-better type of quality characteristic; therefore the highest value
of cutting speed is considered optimal.
Predicted mean (optimal value) of cutting speed = (A5 + B3 - Tavg) = 2.34 mm/min.
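The arithmetic behind this prediction can be reproduced from Table 5. The run-to-level mapping below (runs 13-15 at level A5; every third run at level B3) is the standard L18 layout and is an assumption here, since the full array of Table 4 is not reproduced; with it, the prediction matches the quoted 2.34 mm/min.

# Python sketch: Taguchi point prediction, predicted mean = A5_bar + B3_bar - T_bar.
# The run-to-level mapping is an assumed standard L18 layout (see note above).

cs = [0.12, 0.738, 0.943, 0.193, 0.94, 1.32, 0.123, 1.07, 1.535,
      0.135, 0.97, 2.298, 0.315, 1.245, 2.64, 0.115, 1.003, 3.243]

T_bar = sum(cs) / len(cs)                               # grand mean
A5_bar = sum(cs[12:15]) / 3                             # runs 13-15: A at level 5
B3_bar = sum(cs[i] for i in (2, 5, 8, 11, 14, 17)) / 6  # runs with B at level 3

print(f"predicted CS = {A5_bar + B3_bar - T_bar:.2f} mm/min")  # 2.34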
Effect On Square Width
For better performance, the square width should be dimensionally accurate so that the die matches the punch
and produces the required product. In this study the square width is measured. Dimensional accuracy is a
nominal-is-best type characteristic, and here the S/N ratio is computed as

S/N = -10 log(S^2), where S^2 = sum_{i=1}^{n} (D_i - D_bar)^2 / (n - 1) and D_bar = 5.097 mm.

Table 7 gives the raw data and S/N data for the average square width; these effects are plotted using the
MINITAB 16 software. A dimensionally accurate die gives an accurate product, so an optimized parameter
setting is necessary to obtain an accurate single finish cut and thereby reduce the total manufacturing time. In
this study the square width is measured by moving the table in the X and Y directions to touch both edges and
reading the difference from the computer screen. Therefore, the square width is the sum of the wire diameter
and the total movement of the work table in one direction.
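A small sketch of this nominal-the-best S/N computation is given below; the sample widths are illustrative values chosen around the 5.097 mm mean, not rows from Table 7.

# Python sketch: S/N = -10*log10(S^2), with S^2 the sample variance of the
# measured widths about their mean D_bar. Sample inputs are illustrative.
import math
import statistics

def sn_width(widths_mm):
    s2 = statistics.variance(widths_mm)  # sum((Di - D_bar)^2) / (n - 1)
    return -10.0 * math.log10(s2)

print(round(sn_width([5.093, 5.096, 5.097, 5.102]), 1))  # ~48.5 dB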
S.No.   Mean Square Width   S/N Ratio (SNRA2)
1       5.093               44.728
2       5.086               44.40692
3       5.096               40.67611
4       5.088               38.83005
5       5.078               40.91962
6       5.082               40.02905
7       5.091               53.30993
8       5.077               43.27902
9       5.097               39.88571
10      5.096               48.8885
11      5.090               46.0351
12      5.092               41.00544
13      5.097               39.47242
14      5.091               38.69934
15      5.103               44.73876
16      5.100               41.24939
17      5.073               45.22879
18      5.098               47.89147
Table 7. Raw data for square width


Fig.7. Main Effect Plot for Square Width

Fig.8 S/N Ratio Plot (Nominal is Best) for Square Width
Selection Of Optimum Level
In order to find the significant process parameter levels which affect the die width, analysis of variance
(ANOVA) is performed on the raw data. It is clear from the ANOVA table that no single parameter
significantly affects the square width; therefore, to predict the optimal value of the square width, all five
factors are included. Predicted mean value of square width = (A3 + B1 + C2 + D1 + E3 - 4Tavg) = 5.097 mm.

Confirmation Experiment
Confirmation experiments were conducted for the cutting speed and the square width. The experimental
values obtained at the optimal parameter settings are CS = 2.34 mm/min and square width = 5.097 mm. The
confirmation experiments show that the error associated with the cutting speed and square width is below 5%,
i.e. within the 95% confidence level: the error is 2% in cutting speed and 0% in die square width.
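The quoted percentage errors follow directly from the predicted and confirmed values; in the sketch below, the confirmed cutting speed is back-calculated to be consistent with the reported 2% error and is therefore an illustration, not raw data.

# Python sketch: percent error between predicted and confirmed values.
def pct_error(predicted, measured):
    return abs(predicted - measured) / predicted * 100.0

print(f"CS error    = {pct_error(2.34, 2.293):.0f}%")   # ~2% (measured value assumed)
print(f"width error = {pct_error(5.097, 5.097):.0f}%")  # 0%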

CONCLUSION
Based on the constraints of the present set of experiments, the following conclusions are drawn:

The average cutting speed (CS) is mostly affected by the pulse-on time, peak current and pulse-off time. The
optimum setting for CS is A5, B3, C2, D2 and E2.

No single factor significantly affects the square width, as is clear from the ANOVA table. The optimum
setting of process parameters is A3, B1, C2, D1 and E3.

A single parametric combination can never produce the highest productivity (within the possible range)
together with the best surface finish at high cutting speed and the least geometrical inaccuracy. A proper
trade-off becomes inevitable to satisfy all three machining criteria simultaneously.

Confirmation experiments show that the errors associated with cutting speed and square width are 2% and 0%,
respectively.

The weights assigned to the selected quality characteristics were different for different characteristics. With a
different set of weights, a different set of optimal parameters for the quality characteristics will result. The
optimal set predicted will be closer to the set predicted for the single characteristic having the largest weight.

The model can be extended to any number of quality characteristics, provided proper utility scales for the
characteristics are available from realistic data.

REFERENCES

1. Aggarwal, A., Singh, H. and Kumar, P. (2009), "Simultaneous optimization of conflicting responses for CNC turned parts using desirability function", International Journal of Manufacturing Technology and Management, 18, pp. 319-332.

2. Gauri, S. K. and Chakraborty, S. (2009), "Multi-response optimization of WEDM process using principal component analysis", International Journal of Advanced Manufacturing Technology, 41, pp. 741-748.

3. Hari Singh and Rohit Garg (2010), "Effects of process parameters on gap current in WEDM", International Journal of Materials Engineering and Technology, Vol. 3, No. 1, pp. 95-106.

4. Yadav Avadhesh and Singh Jitender (2009), "Optimization of Machining Characteristics During Wire Electric Discharge Machining of Punching Die", NIT Kurukshetra.


EFFECT OF SHIELDING GASES ON METAL TRANSFER IN
SYNERGIC MIG WELDING

1Vinay Kumar, 2Reeta Wattal, 3Arjyajyoti Goswami
1,2,3Delhi Technological University (formerly Delhi College of Engineering), Delhi, India
aj87.goswami@gmail.com

ABSTRACT
The objective of this paper is to determine the effect of shielding gases on the mode of metal transfer in
synergic MIG welding of Aluminium alloy 6061. The effect was observed for pure Ar, pure He and a 50%
Ar-He mixture. The semi-automatic synergic MIG welding machine was connected to a DSO, which in turn
was connected to a printer to obtain printouts of the voltage transients. The voltage transients thus obtained
were used to study the mode of metal transfer at different combinations of parameters.
Keywords: Metal transfer, Aluminium alloy 6061, Synergic MIG, Shielding gases

INTRODUCTION
Aluminium alloy 6061 is a precipitation hardening aluminum alloy, containing magnesium and silicon as its
major alloying elements. It has good mechanical properties and exhibits good weldability. It is one of the most
common alloys of aluminum for general purpose use. Aluminium alloy 6061 is widely used for construction of
aircraft structures. Because of its wide and critical applications, it is needed to investigate the different aspects
of its welding.
The quality, efficiency and overall operating acceptance of the welding operation are strongly dependent on the
shielding gas, since it dominates the mode of metal transfer [1]. The shielding gas not only affects the properties
of the weld but also determines the shape and penetration pattern as well. One of the most up-to-date and
comprehensive reviews of metal transfer modes during arc welding was written by Lancaster [2]. According to the
International Institute of Welding (IIW) nomenclature referenced in his book [3], metal transfer can be classified into
three main groups: free-flight transfer, bridging transfer and slag protected transfer. The modes of metal transfer,
which are operative at any instant during welding, are also dependent upon several forces that act upon the molten
droplet growing at the tip of the electrode [4, 5]. In work of a more qualitative nature, Cooksey [6] investigated
transfer with many metals, different shielding gases, and with both reverse and straight polarity. The controlling
factors for the metal transfer mechanism are the welding current, welding wire feed speed, electrode extension,
shielding gas, electrode polarity and welding wire diameter. In gas metal arc welding, the impact of metal drops
is known to affect the weld pool penetration [7]. Drop velocities and accelerations have been measured for the
globular transfer mode over a limited current range by some authors [4, 8]. Many other investigators have been
concerned with various aspects of the metal transfer process [9, 10]. In a study [11], it was found that in MIG
welding of Aluminium alloy 7005, mostly axial spray transfer is observed with short circuiting and mixed mode of
transfer at some combination of parameters. There are various techniques, which have been developed for
studying the metal transfer behavior. These include: artificial separation of droplets of metal, the probe method,
high speed cinematography, X-ray photography, recording of arc voltage and welding current transients etc. In
the present study the instantaneous method of recording arc voltage transients was used for metal transfer
studies.
METHODOLOGY
It was decided to study the metal transfer behavior by recording the electrical transients of the welding arc for
GMAW of Aluminum. The effects of welding wire feed rate (W), arc voltage (V), nozzle to plate distance (N),
welding speed (S) and gas flow rate (G) on the modes of metal transfer were studied. The parameters with their
levels are given in Table 1. A design matrix of eight weld runs, evolved to study the effect of all five welding
parameters on the V transients, is given in Table 2.





Table 1: Welding parameters and their limits

Parameter                  Unit     Low (-1)   High (+1)
Welding wire feed rate     m/min    4.9        6.8
Arc voltage                volts    26         29.5
Nozzle to plate distance   mm       20         30
Welding speed              cm/min   30         60
Gas flow rate              l/min    15         25

Table 2: Design Matrix

Run   W   V   N   S   G
1     +   +   +   -   +
2     -   +   +   +   -
3     +   -   +   -   -
4     -   -   +   +   +
5     +   +   -   +   -
6     -   +   -   -   +
7     +   -   -   +   +
8     -   -   -   -   -
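For completeness, the sketch below restates the Table 2 design matrix in code and decodes the coded -1/+1 levels into the actual parameter values of Table 1; it is a transcription of the published design, not a derivation of it.

# Python sketch: the eight-run, five-factor two-level design of Table 2,
# decoded into actual parameter values using the limits of Table 1.

limits = {  # (low -1, high +1) from Table 1
    "W (m/min)":  (4.9, 6.8),
    "V (volts)":  (26.0, 29.5),
    "N (mm)":     (20.0, 30.0),
    "S (cm/min)": (30.0, 60.0),
    "G (l/min)":  (15.0, 25.0),
}

matrix = [  # coded levels for runs 1-8 (Table 2)
    (+1, +1, +1, -1, +1),
    (-1, +1, +1, +1, -1),
    (+1, -1, +1, -1, -1),
    (-1, -1, +1, +1, +1),
    (+1, +1, -1, +1, -1),
    (-1, +1, -1, -1, +1),
    (+1, -1, -1, +1, +1),
    (-1, -1, -1, -1, -1),
]

for run, coded in enumerate(matrix, start=1):
    actual = {k: (lo if c < 0 else hi) for (k, (lo, hi)), c in zip(limits.items(), coded)}
    print(run, actual)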


EXPERIMENTAL PROCEDURE
The experiments were carried out using a microprocessor-based synergic MIG welding machine, EWM Force
Arc 521. Aluminium alloy 6061 plates of 6 mm and 10 mm thickness were cut to a size of 50 mm x 150 mm
using a hydraulic power hacksaw.
Aluminium wire of 1.2 mm diameter was used, with pure helium, a 50% helium-argon mixture and pure
argon as the shielding gases. Direct current electrode positive (DCEP) with an electrode-to-work angle of 90°
was maintained during welding.
Welding was carried out in a single pass using the bead-on-plate technique. Weld beads were deposited as per
the design matrix, with three sets of 8 trials taken for each gas. Voltage transients were recorded by coupling
the DSO in parallel with the synergic gas metal arc welding machine; with the help of a printer connected to
the DSO, prints of the transients were taken to analyze the types of metal transfer.
From the review of literature, it is evident that the main welding parameters affecting the mode of metal transfer
are the arc length and welding current that are effectively controlled by arc voltage and welding wire feed rate
respectively. Thus keeping these two parameters as control variables, the eight trials of the design matrix were
divided into four groups, as shown below:
High welding wire feed rate and high arc voltage (Rows 1 and 5)
High welding wire feed rate and low arc voltage (Rows 3 and 7)
Low welding wire feed rate and high arc voltage (Rows 2 and 6)
Low welding wire feed rate and low arc voltage (Rows 4 and 8)

RESULTS
V transients, weld bead profiles and cross sections for the different rows of the design matrix are shown in
Figures 1 to 4.

DISCUSSIONS
Effect of high welding wire feed rate & high arc voltage
In most cases a purely axial spray type of metal transfer is observed during welding, as also shown by the
recorded transients. The axial spray type of metal transfer ensures a good weld bead.
High welding wire feed rate and high arc voltage produced a purely axial spray type of transfer for all three
shielding gases at the higher NPD and gas flow rate (Fig. 1a, 1b, 1c), but at the lower NPD and lower gas flow
rate some short-circuiting transfer mixed with free-flight (mixed-mode) transfer occurs with helium (Fig. 1e);
this may be due to the lower NPD. In all cases the bead has a good appearance. As regards bead dimensions,
in the first case all other dimensions are almost the same but the weld penetration is greater in the case of
argon, with a finger-like appearance. In the second case there is a noticeable difference in bead width, with
helium on the higher side, but all other dimensions are almost the same for all gases.

Figure-1 (a to f): Voltage transients and weld bead profiles and cross sections for high welding wire feed
rate and high arc voltage (top view and cross-sectional images omitted). Measured bead dimensions,
p = penetration, w = width, h = height (mm):
a. Row 1 (Argon): p 7.84, w 13.21, h 2.74
b. Row 1 (Helium): p 6.44, w 11.86, h 3.49
c. Row 1 (Argon+Helium): p 5.36, w 12.87, h 2.81
d. Row 5 (Argon): p 3.28, w 7.86, h 3.09
e. Row 5 (Helium): p 4.42, w 11.02, h 2.76
f. Row 5 (Argon+Helium): p 3.56, w 7.84, h 2.57
Effect of high welding wire feed rate & low arc voltage
High welding wire feed rate and low arc voltage produced pure axial spray transfer with pure argon (Fig. 2a),
and some short-circuiting with spray-type metal transfer in the first case with pure helium (Fig. 2b) and with
argon and the argon+helium mixture in the second case (Fig. 2d and 2f). In the other cases there is
short-circuiting with free-flight (mixed-mode) metal transfer. The weld width is on the lower side but the weld
height on the higher side for pure argon compared to pure helium and argon+helium in the first case. For the
second case, the weld penetration and weld width are on the higher side and the weld height on the lower side
for helium compared to the other gas/mixture. The weld bead appearance is good.






Figure-2 (a to f): Voltage transients and weld bead profiles and cross sections for high welding wire feed
rate and low arc voltage (top view and cross-sectional images omitted). Measured bead dimensions,
p/w/h (mm):
a. Row 3 (Argon): p 6.18, w 10.10, h 4.43
b. Row 3 (Helium): p 6.18, w 12.46, h 3.14
c. Row 3 (Argon+Helium): p 5.54, w 11.56, h 3.43
d. Row 7 (Argon): p 3.33, w 8.41, h 3.38
e. Row 7 (Helium): p 4.90, w 10.33, h 2.45
f. Row 7 (Argon+Helium): p 2.97, w 9.97, h 3.34

Effect of low welding wire feed rate & high arc voltage
Low welding wire feed rate and high arc voltage produced pure axial spray metal transfer in both cases for all
gases. The high arc voltage ensured a longer arc length, thereby eliminating the possibility of any short
circuiting during welding; this is clearly supported by the transients (Fig. 3a to 3f). The weld penetration and
weld width are on the higher side but the weld height lower in both cases with pure helium. The argon+helium
mixture showed intermediate results.



Figure-3 (a to f): Voltage transients and weld bead profiles and cross sections for low welding wire feed
rate and high arc voltage (top view and cross-sectional images omitted). Measured bead dimensions,
p/w/h (mm):
a. Row 2 (Argon): p 1.74, w 4.53, h 2.32
b. Row 2 (Helium): p 2.43, w 7.53, h 2.34
c. Row 2 (Argon+Helium): p 1.45, w 5.84, h 2.01
d. Row 6 (Argon): p 0.81, w 4.56, h 3.23
e. Row 6 (Helium): p 3.73, w 10.63, h 1.42
f. Row 6 (Argon+Helium): p 1.09, w 6.57, h 2.90
Effect of low welding wire feed rate & low arc voltage
Low wire feed rate and low arc voltage produced axial spray transfer in both cases with all gases/gas mixtures;
the transients showed no signs of excessive fluctuations, indicating an axial spray mode of metal transfer
without interruptions of the arc. The weld penetration and weld width are on the higher side but the weld
height lower in both cases with pure helium. The argon+helium mixture showed intermediate results. The weld
bead appearance is good.





Figure-4 (a to f): Voltage transients and weld bead profiles and cross sections for low welding wire feed
rate and low arc voltage (top view and cross-sectional images omitted). Measured bead dimensions,
p/w/h (mm):
a. Row 4 (Argon): p 1.56, w 6.32, h 1.92
b. Row 4 (Helium): p 1.98, w 7.26, h 1.68
c. Row 4 (Argon+Helium): p 1.59, w 5.13, h 2.12
d. Row 8 (Argon): p 1.31, w 5.88, h 2.74
e. Row 8 (Helium): p 2.97, w 8.61, h 2.07
f. Row 8 (Argon+Helium): p 1.64, w 7.82, h 2.53


CONCLUSIONS
1. In most cases an axial spray type of metal transfer is observed, which results in a high-quality weld bead.
2. At high NPD with high arc voltage and high wire feed rate, a purely axial spray type of metal transfer is
observed, while at low NPD with high arc voltage and high wire feed rate, short-circuiting and free-flight
transfer are observed with pure He.
3. With low arc voltage and high wire feed rate, spray-type transfer is observed with pure Ar when the
parameters are as per row 3. At the same arc voltage and wire feed rate with parameters as per row 7, the weld
penetration and weld width are on the higher side while the weld height is on the lower side for helium
compared to the other gas mixtures.
4. At low welding wire feed rate and high arc voltage, axial spray transfer is observed for all the combinations
when the parameters are as per rows 2 and 6.
5. The weld penetration and weld width are higher than the weld height at parameters as per rows 2 and 6.
6. The same observation is made at low wire feed rate and low arc voltage for all the gas types when the other
parameters are as per rows 4 and 8.

REFERENCES

[1] Sindo Kou, Welding Metallurgy, 3rd ed., New York: John Wiley & Sons Inc., 2003, pp. 19-22, 68-82, 103-114, 23-29.
[2] Lancaster, J. F., The Physics of Welding, International Institute of Welding, Oxford: Pergamon Press, 1986.
[3] Anon., "Classification of metal transfer", International Institute of Welding, Doc. XII: 636-76, 1976.
[4] Ludwig, H., "Metal transfer characteristics in gas-shielded arc welding", Welding Journal, 1957, 36(1): 23s-26s.
[5] Waszink, J. H. and Graat, L. H. J., "Experimental investigation of the forces acting on a drop of weld metal", Welding Journal, 1983, 62(4): 108s-116s.
[6] Cooksey, C. J. and Milner, D. R., "Metal transfer in gas shielded arc welding", ibid., 1962: 123-132.
[7] Essers, W. G., "Heat transfer and penetration mechanism with GMAW and plasma-GMAW", Welding Journal, 1981, 60(2): 37s-42s.
[8] Caron, V., "Study of drop motion in the mild steel argon arc welding system", Canadian Metallurgical Quarterly, 1962, 9(1): 373-380.
[9] Needham, J. C., Cooksey, C. and Milner, D., "The transfer of metal in inert gas shielded arc welding", British Welding Journal, 1966, 7: 101-114.
[10] Ma, J. and Apps, R. L., "Analyzing metal transfer during MIG welding", Welding and Metal Fabrication, 1983, 51: 119-128.
[11] Reeta Wattal and Sunil Pandey, "Metal Transfer in GMAW of Aluminium Alloy 7005", All India Conference on Recent Developments in Manufacturing & Quality Management, Punjab Engineering College, Chandigarh, 5-6 October 2007, pp. 106-111.




VARIATION OF PROCESS PARAMETERS ON
ABRASIVE JET MACHINING

Harichand Tewatia1, Nikhil Kumar2
1,2Department of Mechanical Engineering, Rawal Institute of Engineering & Technology, Faridabad, India
1harichand88@gmail.com, 2gju.nikhil@gmail.com


ABSTRACT

Abrasive Jet Machining (AJM) enables material removal from brittle and heat-sensitive materials such as glass,
quartz, sapphire, semiconductor materials, mica and ceramics by directing a high speed stream of abrasive
particles, carried in a gas medium (air, CO2 or N2), from a nozzle onto the work material. Material removal in
AJM occurs mainly by erosion. AJM is used chiefly to cut shapes in hard and brittle materials like glass,
ceramics etc. In this paper, various process parameters (pressure, nozzle tip distance) for abrasive jet machining
are tested and analyzed, and it is concluded that the MRR varies strongly with change in pressure.

Keywords: Carrier gas, Abrasive, Velocity of abrasive, Work material, Nozzle Tip Distance (NTD).

INTRODUCTION

Abrasive jet machining (AJM) is a process of material removal by mechanical erosion caused by the
impingement of high velocity abrasive particles, carried by a suitable fluid (usually a gas, often air), through a
shaped nozzle onto the workpiece. An AJM set-up may be of two types: one employing a vortex-type mixing
chamber and the other employing a vibratory mixer. In the former, abrasive particles are carried by the vortex
motion of the carrier fluid, whereas in the latter the abrasive particles are forced into the path of the carrier gas
by the vibrating motion of the abrasive particle container. The erosion phenomenon in an AJM study may be
considered in two phases. The first phase is the transportation problem, that is, the quantity of abrasive
particles and the direction and velocity of the impinging particles as determined by the fluid flow condition of
the solid-gas suspension. The second phase is the determination of the material removal rate, or erosion rate.
The erosion of a surface by impacting solid particles is a discrete and accumulative process; hence, models are
first built on the basis of a single particle impact.
In a related study reported in the literature, the abrasive used was garnet of mesh size 80. The cutting variables
were the stand-off distance of the nozzle from the work surface, the work feed rate and the jet pressure. The
evaluating criteria of the surface produced were width of cut, taper of the cut slot and work surface roughness.
It was found that, in order to minimize the width of cut, the nozzle should be placed close to the work surface.
An increase in jet pressure widens the cut slot both at the top and at the exit of the jet from the work, and the
width of cut at the bottom was always found to be larger than that at the top. The taper of cut gradually reduces
with increase in stand-off distance and was close to zero at a stand-off distance of 4 mm. The jet pressure did
not show significant influence on the taper angle within the range of work feed and stand-off distance
considered. The mechanism of erosion in such cases is complex, involving mechanical, chemical and material
properties.
The erosion is a function of several variables, such as:
(i) ratio of impinging particle diameter to work-material thickness;
(ii) speed and angle of impact;
(iii) elasticity of the material;
(iv) shape and geometry of the impinging particles;
(v) ductility and brittleness of the impinging particles;
(vi) average flow;
(vii) material and density;
(viii) distance between the nozzle mouth and the workpiece (stand-off distance).

Thus this process is used chiefly to cut intricate shapes in hard and brittle materials which are sensitive to heat
and tend to chip easily. The process is also used for deburring and cleaning operations. AJM is inherently free
from chatter and vibration problems, and the cutting action is cool because the carrier gas serves as a coolant.
In this paper, the effect of carrier fluid (air) pressure on the material removal rate (MRR), abrasive flow rate
(AFR) and material removal factor (MRF) has been investigated experimentally on an indigenous AJM set-up
developed in the laboratory.
Variables in Abrasive Jet Machine:
The variables that influence the rate of metal removal and accuracy of machining in this process are:
1. Flow rate of abrasive
2. Types of abrasive
3. Shape of cut and operation type
4. Velocity of abrasive jet
5. Work material
6. Carrier gas
7. Geometry, composition and material of nozzle
8. Size of abrasive grain
9. Nozzle work distance (stand off distance)
Table 1: Characteristics of different variables
Nozzle size:             0.08 to 0.45 mm
Abrasive:                SiC, Al2O3 (size 20 to 50 microns)
Flow rate of abrasive:   4 to 20 gram/min
Velocity:                140 to 290 m/min
Pressure:                2 to 9 kg/cm2
Stand-off distance:      0.25 to 15 mm (generally 8 mm)
Material of nozzle:      Sapphire, Tungsten Carbide
Nozzle life:             13 to 350 hr
Typical applications:    Drilling, cutting, deburring, cleaning
Work material:           Non-metals like glass, ceramics and granites; metals and alloys of hard materials like
                         germanium, silicon etc.
Medium:                  Air, CO2, N2

EXPERIMENTAL SET-UP
The nomenclature for abrasive jet machining is shown in Fig. 1, and the experimental set-up is shown
schematically in the figures below.


Fig. 1 Schematic layout of abrasive jet machine


The compressed air from the compressor enters the mixing chamber, which is partly prefilled with fine-grained
abrasive particles. The vortex motion of the air created in the mixing chamber carries the abrasive particles to
the nozzle, through which they are directed onto the workpiece. The nozzle and the workpiece are enclosed in a
working chamber with a perspex sheet on one side for viewing the operation.
The abrasive particles used were SiC (grain sizes 60 microns and 120 microns). The nozzle material was
stainless steel, and the nozzles used were of diameters 1.83 mm and 1.63 mm. This type of set-up has the
advantage of simplicity in design, fabrication and operation. The equipment cost, excluding the compressor, is
low.



Fig. 2 Schematic layout of metal removal by abrasive jet
The abrasive particles (silicon carbide) were mixed with the air stream ahead of the nozzle, and the abrasive
flow rate was kept constant throughout the machining process. The jet nozzle was made of tool steel for high
wear resistance. Drilling of glass sheets was conducted by setting the test rig to the parameters listed in Table 2.
Glass was used as the workpiece material because of its homogeneous properties. The test specimens were cut
into square and rectangular shapes for machining on the AJM unit, with thicknesses of 4 mm, 5 mm, 6 mm,
8 mm and 9 mm. Before machining, the initial weights of the glass specimens were measured with a digital
balance; after machining, the final weights were measured with the same balance to calculate the material
removal rate. In this machine, movement of the specimens in the x-y directions is provided by a cross slide, and
in the z direction by a worm and worm wheel drive. First, the abrasive (alumina, in powder form) was carefully
fed into the hopper. The compressor connections were then checked. The glass specimen was properly clamped
on the cross slide with the help of various clamps. As the compressor was switched on, the hopper gate valve
was opened so that abrasive grains mixed with the air jet coming from the compressor and were focused on the
specimen through the nozzle. Readings were taken using different process parameters on glass specimens of
different thicknesses and all results were tabulated. The results were also compared with theoretical results to
check their validity.
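
Since the MRR here is obtained purely from the weight loss over the machining time, the calculation can be
stated compactly. Below is a minimal Python sketch (the function name is ours, introduced only for
illustration) that reproduces the MRR figures reported in Table 6:

def mrr_mg_per_min(initial_weight_g, final_weight_g, time_s):
    """Material removal rate from weight loss, converted to mg/min."""
    mass_removed_mg = (initial_weight_g - final_weight_g) * 1000.0
    return mass_removed_mg * 60.0 / time_s

# Row 1 of Table 6: 140.90 g -> 140.86 g in 20 s at 5.5 kg/cm2.
# Gives 120 mg/min, matching the tabulated ~120.12 mg/min to within
# the rounding of the weighings; the other rows check out the same way.
print(mrr_mg_per_min(140.90, 140.86, 20))  # 120.0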


TABLE 2
Abrasive Jet Machining Experimental Parameters
S.No   AJM Parameter         Condition
1      Type of abrasive      Silicon carbide
2      Abrasive size         0.20-1.75 mm
3      Jet pressure          6.5-8.5 kg/cm2
4      Nozzle tip distance   7-20 mm


RESULTS

A. Experimental Results
Table 3 shows the relationship between nozzle tip distance and diameter of hole at a set pressure of 6.5 kg/cm2.

Table 3
Relationship between nozzle tip distance and diameter of hole at a set pressure of 6.5 kg/cm2
S.No   Nozzle tip distance (mm)   Top surface diameter (mm)   Bottom surface diameter (mm)
1      7                          7.65                        4.65
2      10                         9.85                        5.85
3      15                         11.45                       6.00
4      20                         12.00                       6.15
5      21                         12.24                       6.18

Table 4
Relationship between nozzle tip distance and diameter of hole at a set pressure of 7.5 kg/cm2
S.No   Nozzle tip distance (mm)   Top surface diameter (mm)   Bottom surface diameter (mm)
1      7                          7.72                        5.01
2      10                         9.95                        5.97
3      15                         11.45                       6.01
4      20                         11.81                       6.81
5      21                         11.97                       7.00

Table 5
Relationship between nozzle tip distance and diameter of hole at a set pressure of 8.5 kg/cm2
S.No   Nozzle tip distance (mm)   Top surface diameter (mm)   Bottom surface diameter (mm)
1      6                          7.82                        5.05
2      12                         10.01                       5.75
3      15                         11.67                       5.96
4      18                         11.9                        6.75
5      20                         12.1                        7.02
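
The top and bottom hole diameters in Tables 3-5 also let one estimate the wall taper of each hole. A minimal
Python sketch, assuming a through-hole in a plate of known thickness (the 8 mm value below is an illustrative
assumption; these tables do not state the specimen thickness used):

import math

def taper_half_angle_deg(d_top_mm, d_bottom_mm, thickness_mm):
    # The hole radius shrinks by (d_top - d_bottom)/2 across the plate
    # thickness, so the wall half-angle is the arctangent of that ratio.
    return math.degrees(math.atan((d_top_mm - d_bottom_mm) / (2.0 * thickness_mm)))

# Row 1 of Table 3 (NTD 7 mm, 6.5 kg/cm2), assuming an 8 mm thick plate:
print(round(taper_half_angle_deg(7.65, 4.65, 8.0), 1))  # ~10.6 degrees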
Table 6
Relationship between pressure and material removal rate (MRR) at thickness 8 mm and NTD 14 mm
S.No   Pressure (kg/cm2)   Initial weight (gm)   Final weight (gm)   Time (sec)   MRR (mg/min)
1      5.5                 140.90                140.86              20           120.12
2      6.5                 141.20                141.13              20           210.21
3      7.5                 137.53                137.43              20           300.30
4      8.5                 134.12                134.01              20           330.33



B. Theoretical Results

In AJM the abrasive flow rate affects the machining process: the higher the abrasive flow rate, the higher the
number of particles involved in the mixing and cutting processes. An increase in abrasive flow rate leads to a
proportional increase in the depth of cut.
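
This proportionality is consistent with the single-particle impact models mentioned in the introduction. As an
illustration, one commonly quoted textbook scaling relation for the volumetric removal rate in brittle materials
(a general AJM model, not derived from the present measurements) is

    \mathrm{MRR} \;\propto\; \frac{M \, v^{3/2}}{\rho^{1/4} \, H_w^{3/4}}

where M is the abrasive mass flow rate, v the mean particle impact velocity, \rho the abrasive density and H_w
the workpiece hardness (flow strength). The linear dependence on M matches the proportional increase in
depth of cut noted above, and the strong dependence on v explains the sensitivity of MRR to jet pressure seen
in the experiments.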



Fig. 3 Effect of pressure on material removal rate (MRR)

Table 8
Effect of pressure on material removal rate (MRR)
S.No   Gas pressure (kg/cm2)   Material removal rate (mg/min)
1      6                       21
2      7                       23
3      8                       26
4      9                       29
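
The theoretical values in Table 8 grow nearly linearly with pressure. As a quick check using only the table's
own four points (the fit itself is ours, not from the source), a least-squares line gives
MRR ~ 2.7 p + 4.5 mg/min over this range:

# Least-squares line through the Table 8 points (pressure, MRR).
pressures = [6.0, 7.0, 8.0, 9.0]   # kg/cm2
mrr = [21.0, 23.0, 26.0, 29.0]     # mg/min

n = len(pressures)
mean_p = sum(pressures) / n
mean_m = sum(mrr) / n
slope = (sum((p - mean_p) * (m - mean_m) for p, m in zip(pressures, mrr))
         / sum((p - mean_p) ** 2 for p in pressures))
intercept = mean_m - slope * mean_p
print(round(slope, 2), round(intercept, 2))  # 2.7 mg/min per kg/cm2, 4.5 mg/min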
Fig. 4 and Table 9 show the effect of nozzle tip distance (NTD) on the diameter of the hole. As the distance
between the nozzle face and the work surface increases, the hole diameter also increases, because a higher
nozzle tip distance allows the jet to expand before impingement, which may also increase its vulnerability to
drag from the surrounding environment. A lower nozzle tip distance is desirable, as it may produce a smoother
surface due to the increased kinetic energy of the impinging particles.
Table 9
Effect of NTD on diameter of hole
S.No   Nozzle tip distance (mm)   Diameter of hole (mm)
1      0.75                       0.5
2      1                          0.7
3      1.5                        1
4      2                          1.5


Fig. 4 Effect of nozzle tip distance (NTD) on diameter of hole
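
A first-order way to picture this expansion is to treat the jet as a cone diverging from the nozzle exit, so that
the hole diameter grows roughly linearly with stand-off distance. This is an illustrative sketch only, not a
model from the source:

    d_{hole} \approx d_{nozzle} + 2 \cdot \mathrm{NTD} \cdot \tan\theta

where \theta is an effective jet divergence half-angle. Fitting a straight line through the Table 9 points gives a
slope of about 0.8 mm of hole diameter per mm of NTD, i.e. an effective half-angle of roughly 20 degrees;
this fitted angle is our illustration, not a measured property of the jet.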
Fig. 5 shows the relationship between NTD and the rate of material removal. The small material removal rate
at a low NTD is due to a reduction in nozzle pressure with decreasing distance, whereas the drop in material
removal rate at a large NTD is due to a reduction in the jet velocity with increasing distance. An optimum value
of NTD must therefore be selected to obtain the maximum material removal rate in the AJM process.



Fig. 5 Effect of nozzle tip distance (NTD) on material removal rate
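
In practice the optimum NTD can be read off a measured MRR-vs-NTD curve such as the one behind Fig. 5;
a minimal Python sketch (the sample points below are hypothetical placeholders, not measurements from this
work):

# Pick the NTD that maximizes MRR from sampled (NTD, MRR) pairs.
# These sample points are hypothetical placeholders for illustration only.
samples = [(6, 180.0), (10, 260.0), (14, 330.0), (18, 290.0)]  # (mm, mg/min)
best_ntd, best_mrr = max(samples, key=lambda pair: pair[1])
print(f"optimum NTD ~ {best_ntd} mm with MRR ~ {best_mrr} mg/min")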

CONCLUSIONS
The experimental work shows that abrasive jet machining with a WC nozzle and SiC abrasives is suitable for
hard and brittle materials such as porcelain, glass, ceramics and granites. The use of stainless steel nozzles,
though they have a comparatively shorter life, is justified by their low cost; where longer nozzle life is needed,
WC nozzles should be used. The changeover of a nozzle after it has been eroded takes not more than half a
minute. It is seen that MRR is higher at larger stand-off distances; thus, a higher stand-off distance would be
preferable where material removal rate is of prime importance. However, in precision work a higher pressure
and a lower stand-off distance may be adopted to attain higher accuracy and penetration rate.

