
VINAYAKA MISSIONS UNIVERSITY

V.M.K.V. ENGINEERING COLLEGE,

Sankari Main Road, Periaseeragapadi,

SALEM – 636 308.

National Conference on

“RECENT TRENDS IN MANUFACTURING TECHNOLOGY”

“RTMT ‘09”

Date: 19th March 2009

Organized by

DEPARTMENT OF MECHANICAL ENGINEERING


INDEX

S.NO   TITLE OF THE PAPER

1. STUDY OF HYDROGEN FUELLED SPARK IGNITION ENGINES
2. OPTIMISATION OF PROJECT MANAGEMENT PARAMETERS
3. OPTIMIZATION OF PROCESS PLANNING FUNCTIONS BY GENETIC ALGORITHMS AND SIMULATED ANNEALING
4. SIMULATION OF SOLENOID ACTUATOR INFLUENCING THE MAGNETIC FORCE
5. DESIGN AND FABRICATION OF WALL CLIMBING ROBOT WITH TWO DEGREES OF FREEDOM HAVING MINIMAL SUCTION CUPS AND ACTUATORS
6. OPTIMISED MASTER PRODUCTION SCHEDULE USING MODIFIED THEORY OF CONSTRAINTS
7. NUMERICAL SIMULATION OF FLOW INSIDE A LID DRIVEN SQUARE CAVITY
8. SMART MATERIALS
9. OPTIMIZATION OF TRUSS STRUCTURE USING LINEAR PROGRAMMING TECHNIQUE
10. ERGONOMIC DESIGN AND DEVELOPMENT OF GRASS CUTTING TOOL IN THE AGRICULTURAL LAND FIELD
11. OPTIMAL DESIGN PARAMETERS OF NATURAL DRAUGHT COOLING TOWERS USING CFD
12. PARAMETERS AFFECTING MACHINING OF NON-CONDUCTIVE MATERIALS IN WEDM
13. LINKING FINITE ELEMENT MODELS WITH EXPERIMENTAL MODAL ANALYSIS USING ORTHOGONAL ARRAY TECHNIQUE
14. ANALYSIS OF DUST RISK AND DESIGN OF SAFE WORK ENVIRONMENT IN CEMENT INDUSTRY
15. TECHNOLOGY FOR HILL RIDING
16. TECHNOLOGY FOR HILL RIDING
17. ELECTRONICALLY SENSED HYDRAULIC CLUTCH
18. ELECTRONICALLY SENSED HYDRAULIC CLUTCH
19. IDENTIFICATION AND COMPENSATORY CONTROL MODEL OF VOLUMETRIC ERRORS FOR CNC MACHINE TOOL
20. PRODUCTION FLOW ANALYSIS FOR SAFETY ASPECTS IN A FIREWORKS INDUSTRY
21. DEVELOPMENT OF DISPERSION MODELLING AND EMERGENCY PREPAREDNESS PLAN FOR CHLORINE GAS
22. DEVELOPING SOFTWARE TOOL FOR SAFETY MANAGEMENT SYSTEM ELEMENT
23. SYSTEMATIC DESIGN OF FIRE SAFETY TRAINING PROGRAM FOR THE WORKERS OF HPCL AT KAPPALUR, MADURAI
24. DESIGN AND IMPLEMENTATION OF QUALITY ILLUMINATION SYSTEM IN PHARMA INDUSTRY
25. DESIGN AND IMPLEMENTATION OF NOISE CONTROL SYSTEM IN CEMENT INDUSTRY
26. IMPACT OF EMOTIONAL INTELLIGENCE IN BEHAVIOUR BASED SAFETY ON REDUCTION OF ACCIDENTS
27. DESIGN OF SAFE PYROTECHNIC COMPOSITION TO CONTROL SO2 EMISSION OF CRACKERS
28. EXPLOSIVITY TESTING OF HIGH ENERGY CHEMICALS
29. ANALYSIS OF HEAT TRANSFER CO-EFFICIENT IN NANOFLUIDS
30. MODELLING OF WELDING FUME PLUME DISPERSION WITHIN THE WORKING ENVIRONMENT
31. DESIGN AND DEVELOPMENT OF POWER GENERATING SHOCK ABSORBER
32. REAL-TIME IMAGE SEGMENTATION ON CELL BASED NETWORK
33. CELLULAR NEURAL NETWORKS
34. QUALITY CONTROL ON ELECTROSURGERY AND ITS EQUIPMENT USING FMEA TECHNIQUE FOR HEALTH CARE INDUSTRY
35. CURRENT TRENDS IN LABORATORY AUTOMATION IN CEMENT PLANTS

STUDY OF HYDROGEN FUELLED SPARK IGNITION ENGINES

Selva Kumaran M.S.1, Christus Jeya Singh V.2

1 Department of Mechanical Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai - 629 807, Kanyakumari. Email: selvanwaits@yahoo.in
2 Department of Mechanical Engineering, St. Xavier's Catholic College of Engineering, Chunkankadai - 629 807, Kanyakumari. Email: christjsingh@yahoo.co.in

ABSTRACT

Legislative restrictions and aspects of the future market drive worldwide research in the field of application-oriented hydrogen technology. Awareness of the CO2 emissions and pollutant burden of current energy systems requires the development of new concepts. The use of hydrogen as a fuel in internal combustion engines offers high efficiency and low pollutant emission. Getting an IC engine to run on hydrogen is not difficult; getting an IC engine to run well, however, is more of a challenge. This paper presents a study of hydrogen in spark ignition engines, the fuel feeding techniques, the comparative power output and mean effective pressure of hydrogen port fuel injection and hydrogen direct injection systems, and the NOx emission.

1. INTRODUCTION

The world's fossil fuel production, which meets about 80% of our energy requirements today, will start to decline in the near future. On the other hand, the demand for energy is ever increasing as the nations of the world try to better their living standards, and research into converting alternative sources into usable forms of energy is accelerating. Since fossil fuels cause great damage to the environment through the greenhouse effect, ozone layer depletion, acid rain, air pollution, oil spills, etc., the research emphasis is on clean energy sources and carriers. A quick look at the currently available alternatives shows that they fall into two main categories: long-term alternatives and short-term alternatives. Liquefied petroleum gas, natural gas, alcohol and many other hydrocarbon fuels are considered among the short-term solutions, since they are finite in nature and are derived from sources that are finite and suffering from overstress and exhaustion. Hydrogen, on the other hand, represents the long-term solution due to its unique properties. It can be produced from a variety of energy sources such as water, solar, nuclear and fossil. It can also be converted to useful forms of energy efficiently and with the least detrimental environmental effect. In spite of the numerous advantages of hydrogen, further research has to be performed to optimize the engine design for hydrogen.

2. HYDROGEN PROPERTIES
Some of the key overall properties of hydrogen that are relevant to its
employment as an engine fuel are listed in Table 1.
Table 1 - H2 properties relevant to ICEs

Property                                   Hydrogen    CNG        Gasoline
Density (kg/m3)                            0.0824      0.72       730 (a)
Flammability limits (volume % in air)      4–75        4.3–15     1.4–7.6
Auto-ignition temperature in air (K)       858         723        550
Minimum ignition energy (mJ) (b)           0.02        0.28       0.24
Flame velocity (m/s) (b)                   1.85        0.38       0.37–0.43
Adiabatic flame temperature (K) (b)        2480        2214       2580
Quenching distance (mm) (b)                0.64        2.1 (c)    ≈ 2
Stoichiometric air/fuel mass ratio         34.48       14.49      14.7
Stoichiometric volume fraction (%)         29.53       9.48       ≈ 2 (d)
Lower heating value (MJ/kg)                119.7       45.8       44.79
Heat of combustion (MJ/kg air) (b)         3.37        2.9        2.83

((a) Liquid at 0 °C; (b) at stoichiometric; (c) methane; (d) vapour; (e) at 25 °C and 1 atm.)
3. AIR FUEL RATIO
The stoichiometric or chemically correct A/F ratio for the complete combustion
of hydrogen in air is about 34:1 by mass. This means that for complete combustion,
34 pounds of air are required for every pound of hydrogen. This is much higher than
the 14.7:1 A/F ratio required for gasoline. Since hydrogen is a gaseous fuel at
ambient conditions it displaces more of the combustion chamber than a liquid fuel.
Consequently less of the combustion chamber can be occupied by air. At
stoichiometric conditions, hydrogen displaces about 30% of the combustion
chamber, compared to about 1 to 2% for gasoline. Because of hydrogen’s wide
range of flammability, hydrogen engines can run on A/F ratios of anywhere from 34:1
(stoichiometric) to 180:1. The A/F ratio can also be expressed in terms of
equivalence ratio, denoted by phi (Φ). Phi is equal to the stoichiometric A/F ratio
divided by the actual A/F ratio. For a stoichiometric mixture, the actual A/F ratio is
equal to the stoichiometric A/F ratio and thus the phi equals unity (one). For lean A/F
ratios, phi will be a value less than one. For example, a phi of 0.5 means that there is
only enough fuel available in the mixture to oxidize with half of the air available.
Another way of saying this is that there is twice as much air available for combustion
than is theoretically required.
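A small numerical illustration of the equivalence ratio definition above (a minimal Python sketch; the 34:1 stoichiometric ratio is taken from the text, while the other air/fuel ratios are example values only):

```python
# Equivalence ratio: phi = stoichiometric A/F ratio / actual A/F ratio
AF_STOICH_H2 = 34.0   # stoichiometric air/fuel mass ratio for hydrogen (from the text)

def equivalence_ratio(actual_af, stoich_af=AF_STOICH_H2):
    """phi = 1 at stoichiometric, < 1 for lean mixtures."""
    return stoich_af / actual_af

print(equivalence_ratio(34.0))    # 1.0   -> stoichiometric mixture
print(equivalence_ratio(68.0))    # 0.5   -> twice as much air as theoretically required
print(equivalence_ratio(180.0))   # ~0.19 -> very lean, still within hydrogen's flammability range
```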

4. HYDROGEN MIXTURE FORMATION IN SI ENGINES:


As far as the development of a practical hydrogen engine system is concerned, the mode of fuel induction plays a very critical role. Five different fuel induction mechanisms have been experimentally evaluated; these include the following:
• Carburetion.
• Continuous manifold injection (CMI).
• Timed manifold injection (TMI).
• Low-pressure direct cylinder injection (LPDI).
• High-pressure direct cylinder injection (HPDI).

4.1 Hydrogen Port Injection Systems:


The port injection fuel delivery system injects fuel directly into the intake
manifold at each intake port, rather than drawing fuel in at a central point. Typically,
the hydrogen is injected into the manifold after the beginning of the intake stroke. At
this point conditions are much less severe and the probability for premature ignition
is reduced.
In port injection, the air is injected separately at the beginning of the intake stroke to
dilute the hot residual gases and cool any hot spots. Since less gas (hydrogen or air)
is in the manifold at any one time, any pre-ignition is less severe. The inlet supply
pressure for port injection tends to be higher than for carbureted or central injection
systems, but less than for direct injection systems.

Figure 1- Electronic injector (actuated by solenoid).

4.2 Hydrogen direct injection system


More sophisticated hydrogen engines use direct injection into the combustion
cylinder during the compression stroke. In direct injection, the intake valve is closed
when the fuel is injected, completely avoiding premature ignition during the intake
stroke. Consequently the engine cannot backfire into the intake manifold. The power
output of a direct injected hydrogen engine is 20% more than for a gasoline engine
and 42% more than a hydrogen engine using a carburetor. While direct injection
solves the problem of pre-ignition in the intake manifold, it does not necessarily
prevent pre-ignition within the combustion chamber. In addition, due to the reduced
mixing time of the air and fuel in a direct injection engine, the air/fuel mixture can be
non-homogenous. Studies have suggested this can lead to higher NOx emissions
than the non-direct injection systems. Direct injection systems require a higher fuel
rail pressure than the other methods.

Figure 2- hydrogen direct injection system

5. RESULT AND DISCUSSION

5.1. Power Output

Depending on how the fuel is metered, the maximum output for a hydrogen engine can be either 15% higher or 15% lower than that of gasoline if a stoichiometric air/fuel ratio is used. The theoretical maximum power output from a hydrogen engine depends on the air/fuel ratio and fuel injection method used. In a gasoline-fuelled engine, the volume occupied by the fuel is about 1.7% of the mixture, whereas a carbureted hydrogen engine, using gaseous hydrogen, suffers a power output loss of about 15%. Under stoichiometric air/fuel ratio conditions, hydrogen displaces about 29% of the combustion chamber, leaving only 71% for the air. As a result, the energy content of this mixture is less than it would be if the fuel were gasoline. Since both the carbureted and port injection methods mix the fuel and air prior to entry into the combustion chamber, these systems limit the maximum theoretical power obtainable to approximately 85% of that of gasoline engines.
Figure 3- Combustion Chamber Volumetric and Energy Comparison
for Gasoline and Hydrogen Fueled Engines

For direct injection systems, which mix the fuel with the air after the intake valve has
closed (and thus the combustion chamber has 100% air), the maximum output of the
engine can be approximately 15% higher than that for gasoline engines.
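The roughly 15% loss for premixed operation and the gain for direct injection can be reproduced approximately from the Table 1 data. The sketch below is only an order-of-magnitude illustration under simplifying assumptions (equal air density and volumetric efficiency in all cases, mixture energy taken as the heat of combustion per kg of air times the air fraction of the charge); it is not the calculation used in this paper.

```python
# Rough mixture-energy comparison per unit cylinder volume, using Table 1 values.
HOC_H2 = 3.37        # heat of combustion per kg of air, hydrogen (MJ/kg air, Table 1)
HOC_GASOLINE = 2.83  # heat of combustion per kg of air, gasoline (MJ/kg air, Table 1)

air_fraction_gasoline = 0.98   # gasoline vapour displaces roughly 2% of the charge
air_fraction_h2_premix = 0.70  # premixed hydrogen displaces roughly 30% of the charge
air_fraction_h2_di = 1.00      # direct injection: cylinder initially filled with air only

e_gasoline = HOC_GASOLINE * air_fraction_gasoline
e_h2_premix = HOC_H2 * air_fraction_h2_premix
e_h2_di = HOC_H2 * air_fraction_h2_di

print(f"premixed H2 / gasoline: {e_h2_premix / e_gasoline:.2f}")  # ~0.85 -> about 15% power loss
print(f"DI H2 / gasoline:       {e_h2_di / e_gasoline:.2f}")      # > 1, of the same order as the gain quoted above
```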

Figure 4- power output for port Injection of hydrogen and direct injection of
hydrogen at various speeds
5.2. Improved Mean Effective Pressure
Since external injection of hydrogen displaces a noticeable amount of air, the indicated mean effective pressure is lower than that of gasoline operation. Direct injection also provides the means of operating at a higher relative air/fuel ratio. The relative air/fuel ratio can be expressed as

    Relative air/fuel ratio (lambda) = (ma / mf) / (ma / mf)stoich

where ma is the mass flow rate of air, mf is the mass flow rate of fuel, and the subscript "stoich" denotes the stoichiometric value.

At higher loads the relative air/fuel ratio can be richer in direct injection mode because the amount of air remains constant, since injection occurs after the intake valve has closed.

Figure 5- Graphically displays the results of indicated mean effective pressure


for port Injection of hydrogen and gasoline as well as direct injection of
hydrogen.

5.3 Emission
Reducing NOx emissions is a further motivation for the direct injection method. With direct injection the engine-out emissions can be distinctly reduced at high engine loads by delaying the start of injection. Based on its findings, BMW suggests that direct injection of hydrogen into the combustion chamber may provide the means to increase engine efficiency and decrease emissions while maintaining an optimal level of power output.
Figure 6 - NOx Emission Vs Indicated Mean Effective Pressure

6. CONCLUSION
From this study it is expected that hydrogen-fuelled engines of the future will be based on direct cylinder injection (DCI) technology. With this method the power output of a direct-injected hydrogen engine is 15% more than that of a gasoline engine and 35% more than that of a hydrogen port fuel injection engine. It also solves the problem of pre-ignition in the intake manifold and reduces pre-ignition within the combustion chamber during compression. With direct injection the engine-out emissions can be distinctly reduced.
7. REFERENCES

1. Yamin, J.A.A., Gupta, H.N., Bansal, B.B. and Srivastava, O.N., 1998, "Analytical Studies to Optimize the Design and Operating Parameters for Hydrogen-Fuelled 4-Stroke Spark Ignition Engines", paper presented at the Hydrogen Energy Conference, Argentina.

2. Hamori, F. and Watson, H.C., "Hydrogen Assisted Jet Ignition for the Hydrogen Fuelled SI Engine", paper presented at the World Hydrogen Energy Conference No. 15, Lyon, France, May 2006.

3. Dempsey, J., "Module 9: Acts, Codes, Regulations and Guidelines", Energy Technology Training Center, College of the Desert, 2001.

4. Dempsey, J., "Module 3: Hydrogen Use in Internal Combustion Engines", Energy Technology Training Center, College of the Desert, 2001.

5. Das, L.M., "Fuel induction technique for a hydrogen operated engine", Int. J. Hydrogen Energy, 1990, 15(11), 833-842.
ANNA UNIVERSITY, CHENNAI 600 025
COLLEGE OF ENGINEERING, GUINDY CAMPUS

ABSTRACT OF THE PROJECT REPORT

Degree and Branch : Master of Engineering – Computer Integrated Manufacturing

Month and year of submission : June 2008

Title of the Thesis : OPTIMISATION OF PROJECT MANAGEMENT PARAMETERS

Name of the Student : SRIRAM J.

Roll Number : 200525420

Name and Designation of the Guide : Mr A. AZAD, Lecturer (S.G.), Dept of Manufacturing Engineering, College of Engineering Guindy Campus, Anna University, Chennai – 600 025

Earned Value Management (EVM) is a project management technique that


seeks to measure forward progress in an objective manner. EVM is touted as having
a unique ability to combine measurements of technical performance (i.e.,
accomplishment of planned work), schedule performance (i.e., behind/ahead of
schedule), and cost performance (i.e., under/over budget) within a single integrated
methodology. Proponents also claim that it provides an early warning of performance
problems. Additionally, EVM promises to improve the definition of project scope,
prevent scope creep, communicate objective progress to stakeholders, and keep the
project team focused on achieving progress.

EVM emerged as a financial analysis specialty in United States Government


programs in the 1960s, but it has since become a significant branch of project
management and cost engineering. Project management research investigating the
contribution of EVM to project success suggests a moderately strong positive
relationship. Implementations of EVM can be scaled to fit projects of all sizes and
complexity.

The effects of EVM on projects executed by a cement plant equipment manufacturing company are analysed. Two identical projects are carried out, one without EVM and one with EVM, and the effect of EVM on project success is analysed.

Place : Chennai Signature of the student


Date :
(J. SRIRAM)
INTRODUCTION

1.1 PROJECT MANAGEMENT

Project management is the discipline of organizing and managing resources


(e.g. people) in such a way that the project is completed within defined scope,
quality, time and cost constraints. A project is a temporary and one-time endeavor
undertaken to create a unique product or service, which brings about beneficial
change or added value. This property of being a temporary and one-time
undertaking contrasts with processes, or operations, which are permanent or semi-
permanent ongoing functional work to create the same product or service over and
over again. The management of these two systems is often very different and
requires varying technical skills and philosophy, hence requiring the development of
project management.

The first challenge of project management is to make sure that a project is


delivered within defined constraints. The second, more ambitious challenge is the
optimized allocation and integration of inputs needed to meet pre-defined objectives.
A project is a carefully defined set of activities that use resources (money, people,
materials, energy, space, provisions, communication, etc.) to meet the pre-defined
objectives.

1.2 EARNED VALUE MANAGEMENT

Earned Value Management (EVM) is a project management technique that


seeks to measure forward progress in an objective manner. EVM is touted as having
a unique ability to combine measurements of technical performance (i.e.,
accomplishment of planned work), schedule performance (i.e., behind/ahead of
schedule), and cost performance (i.e., under/over budget) within a single integrated
methodology. Proponents also claim that it provides an early warning of performance
problems. Additionally, EVM promises to improve the definition of project scope,
prevent scope creep, communicate objective progress to stakeholders, and keep the
project team focused on achieving progress.
Essential features of any EVM implementation include (1) a project plan that
identifies work to be accomplished, (2) a valuation of planned work, called planned
value (PV), and (3) pre-defined “earning rules” (also called metrics) to quantify the
accomplishment of work, called Earned Value (EV). EVM implementations for large
or complex projects include many more features, such as indicators and forecasts of
cost performance (over/under budget) and schedule performance (behind/ahead of
schedule). The most basic requirement of an EVM system, however, is that it
quantifies progress using PV and EV.

1.2.1 Project Tracking without EVM

It is helpful to see an example of project tracking that does not include earned
value performance management. Consider a project that has been planned in detail,
including a time-phased spend plan for all elements of work. Figure 1.1 shows the cumulative budget for this project as a function of time (labeled PV). It also shows the cumulative actual cost of the project through week 8. To those unfamiliar with
EVM, it might appear that this project was over budget through week 4 and then
under budget from week 6 through week 8. However, what is missing from this chart
is any understanding of how much work has been accomplished during the project. If
the project was actually completed at week 8, then the project would actually be well
under budget and well ahead of schedule. If, on the other hand, the project is only
10% complete at week 8, the project is significantly over budget and behind
schedule. A method is needed to measure technical performance objectively and
quantitatively, and that is what EVM accomplishes.

1.2.2 Project Tracking with EVM

Consider the same project, except this time the project plan includes pre-defined methods of quantifying the accomplishment of work. At the end of each week, the project manager identifies every detailed element of work that has been completed, and sums the PV for each of these completed elements. This accumulation is called earned value (EV), and it can be computed monthly, weekly, or as progress is made. Earned value is also commonly calculated as Percent Complete times Budget at Completion (BAC).

Figure 1.2 shows the EV curve along with the PV curve from Figure 1.1. The
chart indicates that technical performance (i.e., progress) started more rapidly than
planned, but slowed significantly and fell behind schedule at week 7 and 8. This
chart illustrates the schedule performance aspect of EVM. It is complementary to
critical path or critical chain schedule management.
Figure 1.3 shows the same EV curve with the actual cost data from
Figure 1.1. It can be seen that the project was actually under budget, relative to the
amount of work accomplished, since the start of the project. This is a much better
conclusion than might be derived from Figure 1.1.
Figure 1.4 shows all three curves together – which is a typical EVM line chart.
The best way to read these three-line charts is to identify the EV curve first, then
compare it to PV (for schedule performance) and AC (for cost performance). It can
be seen from this illustration that a true understanding of cost performance and
schedule performance relies first on measuring technical performance objectively.
This is the foundational principle of EVM.

Figure 1.1 Project Tracking without EVM


Figure 1.2 Project Tracking with EVM

Figure 1.3 EVM with Earned value and Actual cost

Figure 1.4 EVM with Planned value, Earned value and Actual cost

Figure 1.5 Simple implementation
METHODOLOGY

EVM is a project management system that combines schedule performance


and cost performance to answer the question, “What did we get for the money we
spent?”
• All project steps “earn” value as work is completed.
• The Earned Value (EV) can then be compared to actual costs and
planned costs to determine project performance and predict future
performance trends.
Physical progress is measured in monetary terms, so schedule performance and cost performance can be analyzed in the same terms.
In a typical spend plan analysis, physical progress is not taken into account
when analyzing cost performance. Instead, a project’s actual costs to date are simply
compared to planned costs, often with misleading results.
Building Blocks of Earned Value Analysis
In addition to more accurate project status assessment, EVM makes it easy
for a project manager to analyze both schedule and cost performance in a variety of
ways. Using a limited set of basic task information, it is possible not only to
determine how a project has been performing, but to predict future performance
trends as well.
Basis for Earned Value Analysis

• Budget at Completion (BAC) = Overall approved budget for a task.


• Actual Costs (AC) = Total amount spent on a task up to the current
date.
• Percent Complete = Task progress, related as either EV/BAC, or
simply physical progress shown by the fill of the task bar.

Figure 3.4 Earned value Report


Once these three measurements have been established, the following
calculations can be performed:

• Earned Value (EV) = BAC x Percent Complete.


The budgeted cost of completed work as of the current date.
• Planned Value (PV) = The point along the time-phased budget that crosses the
current date. Shows the budgeted cost of scheduled work as of the current date.

Figure 3.5 Budgeted cost of the scheduled work as of current date


Alternatively, an easy-to-read DataGraph can be used for at-a-glance visual analysis of project trends.

Figure 3.6 Visual analysis of project trend


Performance Indices and Variance

Once Earned Value and Planned Value are known, they can then be used to
determine schedule and cost variance, and calculate performance efficiency.

Variance Calculations
• Schedule Variance (SV) = Earned Value – Planned Value.
The difference between what was planned to be completed and what has
actually been completed as of the current date.
• Cost Variance (CV) = Earned Value – Actual Costs.
The difference between the work that has been accomplished (in Rupees)
and how much was spent to accomplish it.

In the graph below, the project shown has a negative Schedule Variance,
because it has “earned” less value than was planned, as of the current date.
However, it has a positive Cost Variance, because the Earned Value is greater than
the Actual Costs accrued:

Figure 3.7 Schedule and cost variance


Performance Indices

• Schedule Performance Index (SPI) = Earned Value / Planned Value.


Schedule variance related as a ratio instead of a Rupee amount. A ratio less than 1.0 indicates that work is being completed slower than planned.
• Cost Performance Index (CPI) = Earned Value / Actual Costs.
Cost variance related as a ratio instead of a Rupee amount. A ratio less than
1.0 indicates that the value of the work that has been accomplished is less than the
amount of money spent.
In the schedule below, Project A has a CPI greater than 1.00. This shows us
that the project has been earning value faster than it has been accruing costs:

Figure 3.8 Schedule and cost performance index


However, Project A also has a SPI value that is less than 1.00. Although
Actual Costs are low, Task 1 is behind schedule, so the project has not earned as
much value as was planned.
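These variance and index definitions can be checked in a few lines of code. The following minimal sketch (an illustration, not part of the original report) uses the "Pre pilot plan" figures that appear later in Table 3.1 as input:

```python
def evm_metrics(pv, ev, ac):
    """Basic earned value metrics from Planned Value, Earned Value and Actual Cost."""
    return {
        "SV": ev - pv,             # Schedule Variance: negative = behind schedule
        "CV": ev - ac,             # Cost Variance: negative = over budget
        "SPI": round(ev / pv, 2),  # Schedule Performance Index: < 1.0 = slower than planned
        "CPI": round(ev / ac, 2),  # Cost Performance Index: < 1.0 = over budget
    }

# "Pre pilot plan" row of Table 3.1: PV = 63,000, EV = 58,000, AC = 62,500 (Rs.)
print(evm_metrics(pv=63000, ev=58000, ac=62500))
# {'SV': -5000, 'CV': -4500, 'SPI': 0.92, 'CPI': 0.93}
```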

Forecasting Future Performance Trends

The Schedule Performance and Cost Performance Indices not only monitor
current project performance, they can also be used to predict future performance
trends.

• To-Complete Performance Index (TCPI) = (BAC-EV) / (BAC-AC).


Indicates the CPI required throughout the remainder of the project to stay
within the stated budget.

• Estimate at Completion (EAC) = AC + ((BAC-EV)/CPI).


A forecast of total costs that will be accrued by project completion based on
past cost performance trends.

• Variance at Completion (VAC) = EAC – BAC.


The difference between the new Estimate at Completion and the original
Budget at Completion.
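A minimal sketch of the forecasting formulas above, using illustrative numbers only (note that VAC follows the EAC – BAC definition given here, whereas Table 4.2 later states it as BAC – EAC):

```python
def evm_forecast(bac, ev, ac):
    """Forecasting metrics built from the index-based formulas above."""
    cpi = ev / ac
    tcpi = (bac - ev) / (bac - ac)   # CPI required on remaining work to finish within BAC
    eac = ac + (bac - ev) / cpi      # index-based Estimate at Completion
    vac = eac - bac                  # Variance at Completion as defined above (EAC - BAC)
    return {"CPI": round(cpi, 2), "TCPI": round(tcpi, 2),
            "EAC": round(eac), "VAC": round(vac)}

# Illustrative numbers only (not project data from this study):
print(evm_forecast(bac=100000, ev=40000, ac=45000))
# {'CPI': 0.89, 'TCPI': 1.09, 'EAC': 112500, 'VAC': 12500}
```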

Table 3.1 Real time Project Data

Sl.  WBS element             Planned    Earned     Actual     Cost variance          Schedule variance       CPI       SPI
No.                          value PV   value EV   cost AC    (EV-AC)     % (CV/EV)  (EV-PV)      % (SV/PV)  (EV/AC)   (EV/PV)
                             (Rs.)      (Rs.)      (Rs.)      (Rs.)                  (Rs.)
1    Pre pilot plan          63,000     58,000     62,500     (-) 4,500   (-) 7.8    (-) 5,000    (-) 7.9    0.93      0.92
2    Checklists              64,000     48,000     46,800     1,200       2.5        (-) 16,000   (-) 25.0   1.03      0.75
3    Curriculum              23,000     20,000     23,500     (-) 3,500   (-) 17.5   (-) 3,000    (-) 13.0   0.85      0.87
4    Mid-term Evaluation     68,000     68,000     72,500     (-) 4,500   (-) 6.6    0            0.0        0.94      1.00
5    Implementation Support  12,000     10,000     10,000     0           0.0        (-) 2,000    (-) 16.7   1.00      0.83
6    Manual of practice      7,000      6,200      6,000      200         3.2        (-) 800      (-) 11.4   1.03      0.89
CHAPTER 4

RESULTS AND DISCUSSION

Table 4.1 Earned Value Management Terms

Term         Description               Interpretation
PV (BCWS)    Planned Value             What is the estimated value of the work planned to be done?
EV (BCWP)    Earned Value              What is the estimated value of the work actually accomplished?
AC (ACWP)    Actual Cost               What is the actual cost incurred?
BAC          Budget at Completion      How much did you BUDGET for the TOTAL JOB?
EAC          Estimate at Completion    What do we currently expect the TOTAL project to cost?
ETC          Estimate to Complete      From this point on, how much MORE do we expect it to cost to finish the job?
VAC          Variance at Completion    How much over or under budget do we expect to be?

Table 4.2 Earned Value Management Formulas and Interpretation

Name                                Formula                  Interpretation
Cost Variance (CV)                  EV – AC                  NEGATIVE is over budget; POSITIVE is under budget
Schedule Variance (SV)              EV – PV                  NEGATIVE is behind schedule; POSITIVE is ahead of schedule
Cost Performance Index (CPI)        EV / AC                  I am (only) getting ___ paise out of every Re 1
Schedule Performance Index (SPI)    EV / PV                  I am (only) progressing at ___ % of the rate originally planned
Estimate at Completion (EAC)                                 As of now, how much do we expect the total project to cost (Rs)?
                                    BAC / CPI                Used if no variances from the BAC have occurred
                                    AC + ETC                 Actual plus a new estimate for remaining work; used when the original estimate was fundamentally flawed
                                    AC + BAC – EV            Actual to date plus remaining budget; used when current variances are atypical
                                    AC + (BAC – EV) / CPI    Actual to date plus remaining budget modified by performance; used when current variances are typical
Estimate to Complete (ETC)          EAC – AC                 How much more will the project cost?
Variance at Completion (VAC)        BAC – EAC                How much over or under budget will we be at the end of the project?
4.1 BENEFITS OF EVM

Following are some of the benefits of EVM

1. It is a single management control system that provides reliable data

2. It integrates work, schedule and cost using a work breakdown structure


(WBS).

3. The associated database of completed projects is useful for comparative


analysis.

4. The cumulative cost performance index (CPI) provides an early warning


signal.

5. The schedule performance index (SPI) provides an early warning signal.

6. The CPI is a predictor for the final cost of the project.

7. It uses an index-based method to forecast the final cost of the project.

8. The “to-complete” performance index allows evaluation of the forecasted


final cost.

9. The periodic (e.g. weekly or monthly) CPI is a benchmark.

10. The management by exception principle can reduce information overload.

4.2 LIMITATIONS OF EVM

If the implementation of EVM is not scaled to match the size and complexity of
the project at hand, it may be either too lightweight (e.g. not standard-compliant) or
too costly. The benefits of any implementation should far outweigh its cost of
implementation and maintenance. Thus, EVM is a project management discipline
that should pay for itself many times over.
EVM has no provision to measure project quality, so it is possible for EVM to
indicate a project is under budget, ahead of schedule and scope fully executed, but
still have unhappy clients and ultimately unsuccessful results. In other words, EVM is
only one tool in the project manager's toolbox.
The use of EVM presumes that stakeholders care about measuring progress
objectively. If a project team does not want to measure performance objectively, or if
the organization is performing EVM just to fulfill a customer requirement, EVM is
unlikely to help.

Since EVM requires quantification of a project plan, it is often perceived to be


inapplicable to discovery-driven or Agile software development projects. For
example, it may be impossible to plan certain research projects far in advance, since
research itself uncovers some opportunities (research paths) and actively eliminates
others. However, another school of thought holds that all work can be planned, even
if in weekly timeboxes or other short increments. Thus, the challenge is to create
agile or discovery-driven implementations of the EVM principle, and not simply to
reject the notion of measuring technical performance objectively. Applying EVM in
fast-changing work environments is, in fact, an area of project management
research.

Traditional EVM is not intended for non-discrete (continuous) effort. In


traditional EVM standards, non-discrete effort is called “level of effort" (LOE). If a
project plan contains a significant portion of LOE, and the LOE is intermixed with
discrete effort, EVM results will be contaminated. This is another area of EVM
research.

Traditional definitions of EVM typically assume that project accounting and


project network schedule management are prerequisites to achieving any benefit
from EVM. Many small projects don't satisfy either of these prerequisites, but they
too can benefit from EVM, as described for simple implementations, above. Other
projects can be planned with a project network, but do not have access to true and
timely actual cost data. In practice, the collection of true and timely actual cost data
can be the most difficult aspect of EVM. Such projects can benefit from EVM, as
described for intermediate implementations, above, and Earned Schedule.

CHAPTER 5

CONCLUSION
Earned Value Analysis is a better method of program/project management because it integrates cost, schedule and scope and can be used to forecast future performance and project completion dates. It is an "early warning" program/project management tool that enables managers to identify and control problems before they become insurmountable. It allows projects to be managed better – on time, on budget.

Optimization of Process Planning Functions by Genetic Algorithms and Simulated Annealing

K. Venkateshwaran1, K. Venkatesh Raja2, A. Karthikeyan3, K. Krishnamoorthy4

1 Post Graduate Student, K.S.Rangasamy College of Technology, Tiruchengode
2 Lecturer, K.S.R. College of Engineering, Tiruchengode
3 Asst. Professor, VSB College of Engineering
4 Professor, K.S.Rangasamy College of Technology, Tiruchengode
Abstract
This paper presents a new computer-aided process planning (CAPP) system built by combining two heuristic search techniques, genetic algorithms and simulated annealing, for sequencing the operations of machined components having various operations. This paper also investigates the possibility of mixing two or more techniques to reduce the computing time. The hybrid technique reduces the computing time by about 60% compared with solving the problem by a single search method. The primary objectives of this work also include developing an intelligent CAPP system that can be used by an average operator and producing globally optimized results. For this purpose, an attempt has been made in this work to apply hybrid techniques to process planning applications.

Introduction

Although more than 150 CAPP systems have been reported in the literature, only a few have considered the optimization of operation sequencing and alternative sequences of operations. These systems have used a precedence matrix for operation sequencing of prismatic components after analyzing the technological and feasibility constraints, and have optimized the sequence for minimum cutting tool-change and tool-travel times. Two important issues mentioned in the latter work are the elimination of infeasible machining operation sequences and the use of a tree structure for enumerating all the paths so as to weed out the infeasible sequences. The need for using heuristic approaches for randomly generating alternative sequences, and thereby alternative process plans, is stressed in the research literature. As the operation sequencing problem involves a large number of interacting constraints, it is very difficult to formulate and solve it using dedicated search techniques such as integer programming, branch and bound, and dynamic programming. Different search methods are represented in Figure 1.
Figure 1: Search Techniques

Genetic Algorithms

Genetic Algorithms are a family of computational models inspired by


evolution. These algorithms encode a potential solution to a specific problem on a
simple chromosome-like data structure and apply recombination (crossover)
operators to these structures so as to preserve critical information.

Figure 2: Gene, Chromosome, Population

The basic components of a GA, gene, chromosome and population, are illustrated in Figure 2. Usually the chromosome is represented as a binary string. The real trick of a GA lies in the encoding of the problem domain and the selection of the next generation. Figure 3 shows the working principle of a GA.
[Figure 3 flowchart: start with a population of candidate solutions; evaluate the fitness of individuals; select parents; apply crossover and mutation to produce offspring; replace members of the population; repeat until the goal is reached.]
Figure 3: Working principle of GA’s

How does GA work?


1. Initialize a population of chromosomes.
2. Evaluate each chromosome in the population.
3. Create new chromosomes by mating current chromosomes. Apply Mutation and
Recombination as the parent chromosomes mate.
4. Delete members of the population to make room for the new chromosomes (new generations).
5. Evaluate the new chromosomes and insert them into the population.
6. If the generation is up, stop and return the best chromosome; if not, go to step 3.
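The six steps above can be written as a compact, generic GA loop. This is only a minimal illustrative sketch (binary strings, one-point crossover, bit-flip mutation and a toy fitness function), not the operation-sequencing GA developed later in this paper:

```python
import random

def genetic_algorithm(fitness, n_bits=8, pop_size=20, generations=50, p_mut=0.05):
    # 1. Initialize a population of chromosomes (binary strings).
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Evaluate each chromosome and keep the better half as parents.
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        # 3-4. Mate parents (one-point crossover + mutation) to replace the worse half.
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        # 5. Insert the new chromosomes into the population.
        pop = parents + children
    # 6. When the generations are up, return the best chromosome found.
    return max(pop, key=fitness)

# Toy example: maximize the number of 1s in the string.
print(genetic_algorithm(fitness=sum))
```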

Simulated Annealing (SA)


Simulated Annealing (SA) is motivated by an analogy to annealing in solids. The idea of SA comes from a paper published by Metropolis et al. in 1953 [Metropolis, 1953]. Their algorithm simulated the cooling of material in a heat bath, a process known as annealing. If a solid is heated past its melting point and then cooled, the structural properties of the solid depend on the rate of cooling. If the liquid is cooled slowly enough, large crystals will be formed; however, if the liquid is cooled quickly (quenched), the crystals will contain imperfections. Metropolis's algorithm modelled the material as a system of particles and simulated the cooling process by gradually lowering the temperature of the system until it converges to a steady, frozen state. The laws of thermodynamics state that at temperature t the probability of an increase in energy of magnitude δE is given by

    P(δE) = exp(-δE / kt)          (1)

where k is a constant known as Boltzmann's constant.

The simulation in the Metropolis algorithm calculates the new energy of the
system. If the energy has decreased then the system moves to this state. If the
energy has increased then the new state is accepted using the probability returned
by the above formula. A certain number of iterations are carried out at each
temperature and then the temperature is decreased. This is repeated until the
system freezes into a steady state. This equation is directly used in simulated
annealing, although it is usual to drop the Boltzmann constant as this was only
introduced into the equation to cope with different materials. Therefore, the
probability of accepting a worse state is given by the acceptance rule

    accept if exp(-c / t) > r          (2)

Where, c is the change in the evaluation function, t is the current temperature, r is a


random number between 0 and 1. The probability of accepting a worse move is a
function of both the temperature of the system and of the change in the cost function.
It can be appreciated that as the temperature of the system decreases the probability
of accepting a worse move is decreased.
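The acceptance rule of Equation (2) and the cooling loop described above translate directly into code. A minimal generic sketch follows (the toy cost function, neighbour move and cooling parameters are assumptions for illustration only, not the settings used in this work):

```python
import math
import random

def simulated_annealing(cost, initial, neighbour, t0=100.0, t_min=1e-3, alpha=0.95, iters=50):
    current, t = initial, t0
    best = current
    while t > t_min:
        for _ in range(iters):                      # several moves at each temperature
            candidate = neighbour(current)
            c = cost(candidate) - cost(current)     # change in the evaluation function
            # Accept better moves always; worse moves with probability exp(-c/t) (Eq. 2).
            if c <= 0 or math.exp(-c / t) > random.random():
                current = candidate
                if cost(current) < cost(best):
                    best = current
        t *= alpha                                  # lower the temperature (cooling schedule)
    return best

# Toy example: minimize (x - 3)^2 by randomly perturbing x.
print(simulated_annealing(cost=lambda x: (x - 3) ** 2,
                          initial=20.0,
                          neighbour=lambda x: x + random.uniform(-1, 1)))
```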

Case Study

A component, shown in Fig. 5, which is to be machined on a CNC machining centre and involves 8 operations, was taken from [2]. It is required to find the optimal production time. Specifications of the required parameters and values of the constants are given in Table 1. The matrix represented in Table 1 is known as the precedence cost matrix (PCM). The operations are listed in Table 2.

Figure 5: Machine Component having 8 operations

Table 1: PCM for 8 operations

1 A1 –DRILLING OF HOLE
2 B1 – ROUGH FACING
3 B2 – FINISH FACING
4 C1 – COUNTER BORING OF HOLE
5 D1 – DRILLING OF HOLE
6 D2 – ROUGH BORING OF HOLE
7 D3 – FINISH BORING OF HOLE
8 E1 - CHAMFERING
Table 2: Operations

Implementation of algorithms in our case study


In GA terminology, a candidate solution is represented by a sequence of
number of character known as a chromosome or string. Each element in the string
is called a gene and represents a process variable. A selected number of strings is
called a population and the population at given time is a generation. A typical GA is
composed of several genetic operators such as crossover, inversion, and mutation.
There are also other types of genetic operators that yield good results. Simple
crossover involves two parents and crossover points. The operations to be done on
the casting are labeled as A1, B1, B2, C1, D1, D2, D3 and E1. For Illustration
purpose, the operations are coded as 1, 2, 3…8 corresponding to A1, B1, B2…E. A
precedence cost matrix (PCM) is generated for each pair of features based on the
relative costs corresponding to the type of attributes like machining parameters

1 2 3 4 5 6 7 8
(A1) 1 __ 100 100 1 100 100 100 100
_
(B1) 2 11 __ 0 100 1 100 100 100
_
(B2) 3 11 100 __ 100 1 100 1 1
_
(C1) 4 100 100 100 __ 100 100 100 100
_
(D1) 5 11 1 100 100 __ 0 100 100
_
(D2) 6 11 1 100 100 100 __ 100 100
_
(D3) 7 11 100 100 100 100 100 __ 100
_
(E1) 8 11 1 100 100 100 100 1 __
_
change, tool change, set-up change and machine change (table: 1). For the present
problem, the string (chromosome) is represented by a collection of eight elements
(genes) corresponding to sixteen features (operation) given in the precedence cost
matrix of the given part as 1,2,3,4,5,6,7,8.
The initial population cannot consist of simple randomly generated strings, as the local precedence of operations/features for each form feature cannot otherwise be guaranteed. To create a valid initial string, an element is generated randomly from the first not-yet-selected operations of the form-feature groups, so that the natural flow of the operations is followed, and the procedure is repeated by selecting elements from the remaining operations of the groups until all the operations are represented in the string. Each string in the population therefore contains eight elements corresponding to the eight operations. The other strings of the population are generated in the same way, keeping their local operation precedence.
The objective of the sequencing problem is to obtain an optimal operation sequence that results in minimum production cost from the given precedence cost matrix (PCM). The objective function is calculated for each string in the population as the sum of the relative costs between pairs of features (operations). The relative costs correspond to the number of tasks that need to be performed in each category of attribute, such as machining parameter change, tool change, set-up change and machine change, and the type of constraints one feature has with respect to the other. The fitness value of each string is calculated and the expected count of each string for the next generation is obtained. This is represented in Table 3.

String No.   Fitness value   Expected count Ei = Umax/Uavg (Umax = M - Umin)   Actual count C
1            100             0.447                                             1
2            188             0.841                                             1
3            387             1.731                                             2
4            288             1.289                                             1
5            298             1.333                                             1
6            100             0.447                                             0
7            0               0                                                 0
8            198             0.886                                             1
9            188             0.289                                             1
10           388             1.736                                             2

Table 3: Fitness function

The actual count of each string is obtained based on the string weightage (survival of the fittest) so that the total count becomes the population size. This genetic operator is used to generate a new population, which has better strings than the old population. The selection of the better strings is based on the actual count arrived at in the earlier step. The reproduced population is called parent 1 and is used for the next genetic operation, i.e. crossover. This population is shown in the first column of Table 4. In this analysis, a new crossover is designed to ensure the local precedence of operations and to generate a feasible offspring. To produce a feasible offspring, two parents are randomly selected from the population. Based on the string length, two crossover sites are randomly generated to select a segment in one parent between these crossover sites. The offspring, child 1, is generated by arranging the elements of the selected segment in this parent according to the order in which they appear in the other parent, with the order of the remaining elements being the same as in the first parent. The roles of these parents are then exchanged in order to generate another offspring, child 2. The crossover operator can be illustrated as follows.

Parent 1     Mate   Parent 2     Sites   Offspring (child 1)
12345867     4      82145673     2,6     12845367
58246713     9      45671823     2,7     58467123
82145673     1      12345867     1,8     81234567
82145673     10     45671823     3,6     82145673
42351867     7      52641378     1,8     45261378
41256873     1      12345867     3,5     41256873
52641378     9      45671823     2,8     52467183
56784213     8      56784213     4,6     56784213
45671823     7      52641378     0,5     56417823

Table 4: Sequence and crossover


Select two strings randomly from the population and denote them parent 1 and parent 2. Consider two random crossover sites, 2 and 6; the segment of parent 1 between the crossover sites is 3,4,5,8. Arranging the selected elements in the order in which they appear in parent 2 results in 8,4,5,3, and the offspring, child 1, is thus generated from parent 1. The process of crossover for the example operation-sequencing problem is depicted in Table 4. The mutation operator makes random changes to one or more elements of the string. Mutation is done with a small probability called the mutation probability (PMUT). This is done to protect against the loss of some potentially useful strings and to avoid getting stuck at a local optimum. The mutation operator proposed here randomly modifies two elements to obtain the resulting population. However, there is a possibility of the string becoming infeasible by violating the local precedence of operations for the form feature groups. Here, a new operator is introduced to check the feasibility of the string elements obtained. If the string is infeasible, its total cost is given a very high value so that it does not survive into the next generations. This process repeats for a specified number of generations. At the end of the generations, the string(s) corresponding to the minimum value is taken as the optimal operation sequence. The optimal sequence was then given as the first sequence in SA. The solution converged very quickly and the best sequence was found to be 56238714, with a cost of 15 units, which is the same as reported in [2]. The computational time was reduced by 60% from the previous results. A C++ code has been written for solving the problem. The program is executed a number of times to obtain optimal solutions having alternative feasible sequences for the same feature.
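The crossover just described can be expressed compactly in code. The sketch below (function name and list representation are only illustrative, and the program used in this work was written in C++, not Python) reproduces the first row of Table 4: parents 12345867 and 82145673 with crossover sites 2 and 6 give child 12845367:

```python
def order_based_crossover(parent1, parent2, site1, site2):
    """Reorder the segment of parent1 between the two crossover sites according to
    the order in which those elements appear in parent2; the remaining elements
    keep their positions from parent1 (precedence-preserving crossover)."""
    segment = parent1[site1:site2]                       # elements to be reordered
    reordered = [gene for gene in parent2 if gene in segment]
    child = list(parent1)
    child[site1:site2] = reordered
    return child

# First row of Table 4, crossover sites 2 and 6:
p1 = [1, 2, 3, 4, 5, 8, 6, 7]
p2 = [8, 2, 1, 4, 5, 6, 7, 3]
print(order_based_crossover(p1, p2, 2, 6))   # -> [1, 2, 8, 4, 5, 3, 6, 7], i.e. 12845367
```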

Conclusion

If there were no limits on execution time, one could always perform a complete search and get the best possible solution; most stochastic algorithms can do the same, given unlimited time. In practice, there are always limits on the execution time, so there is a need for efficient search techniques such as GA and SA. Optimization of the process plan is one of the duties of a CAPP system. Most of the optimization systems related to process planning applications have been developed as off-line systems, so that they cannot be used as integrated modules within process planning packages. Therefore, optimization systems need to be integrated with the CAPP system. The importance of AI techniques for the optimization of CAPP functions has also been demonstrated by this research project. The potential and power of AI are very great, and it is believed that with the exploitation of AI methods it is possible to increase the capabilities of IMSs. GAs have the advantage of rapidly reaching the region which includes the global optimum, due to their parallel structure. However, the most important drawback of the GA is that it is easily trapped in local optima. A mixed methodology can be used to increase the performance of the GA by coupling the parallel computing ability of GAs with the advantages of SA, which attempts to escape local optima. So a hybrid technique has been developed in order to overcome these drawbacks and decrease the computational time.

References

[1] Turkay Dereli, Huseyin Filiz, "Optimization of process planning functions by genetic algorithms", Computers and Industrial Engineering, 36 (1999), 281-308.

[2] S. V. Bhaskara Reddy, M. S. Shanmugam, T. T. Nagendran, "Operation sequencing in CAPP using genetic algorithms", International Journal of Production Research, 1999, Vol. 37, No. 5, 1063-1074.

[3] Damon Cook, New Mexico State University, Computer Science Department, "Evolved and Timed Ants: Optimizing the Parameters of a Time-Based Ant System Approach to the Traveling Salesman Problem using a Genetic Algorithm".

[4] Liangsheng Qu, Ruixiang Sun, Research Institute of Diagnostics and Cybernetics, Xian Jiaotong University, "A synergetic approach to genetic algorithms for traveling salesman problem", International Journal of Information Sciences, 117 (1999), 267-283.

[5] Lawrence Davis, "Genetic Algorithms and Simulated Annealing", Pitman, London / Morgan Kaufmann Publishers.

[6] David E. Goldberg, "Genetic Algorithms".

[7] Emory W. Zimmer, Mikell P. Groover, "CAD/CAM".

SIMULATION OF SOLENOID ACTUATOR INFLUENCING THE MAGNETIC FORCE

S. Palanisamy1, M. Tamil Selvan2*, R. Nandha Kumar2*, S. Suresh Babu2*

1. Lecturer, Department of Mechanical Engineering, SSM College of Engineering, Komarapalayam
2. Students, Department of Mechanical Engineering, SSM College of Engineering, Komarapalayam

* Corresponding Author. Mail: palani_mecad@yahoo.com, Mobile: 9865357377
Abstract

This work describes a numerical study, using design of experiments, applied to a solenoid actuator. A solenoid is a linear motor with a fixed range of travel. Solenoids may be designed for simple ON-OFF applications, acting much like relays; for example, they are used in starters and door locks. The design and construction of the solenoid actuator used for the investigation are described. The parameters of the solenoid actuator, such as flux density, mechanical force, magnetomotive force, magnetic flux, surface energy, average surface potential and line integral of flux density, are discussed. Numerical modelling with "QuickField", which uses the finite element method (FEM), gave the electromagnetic field distribution and the calculation of the magnetic field and the force applied to the plunger.

1. Introduction

QuickField can solve both linear and nonlinear magnetic problems. The magnetic field may be induced by concentrated or distributed currents, or by permanent or external magnets. This problem deals with a nonlinear magnetic field. Solenoid actuators are used in many applications; the major applications include valves, water relays and switches. Specific applications of solenoid actuators are automatic door locks, office equipment, printers, electric locks, photographic, optical and medical instrumentation, and automatic teller machines.

2. Design of Solenoid actuator

A solenoid actuator consists of a coil enclosed in a ferromagnetic core with a plunger. A solenoid valve is an electromechanical device which allows an electrical device to control the flow of gas or liquid. The electrical device causes current to flow through a coil located on the solenoid valve. The current flow in turn produces a magnetic field which causes the displacement of a metal actuator.

Solenoid valves come in various configurations and sizes; they can be of the normally open, normally closed, or two-way valve type.
2.1. Selecting a Solenoid actuator

Many mechanical, thermal, and electrical constraints should be considered


when selecting a solenoid.

 Force requirements
 Electrical requirements (current driving actuator etc.)
 Duty cycle
 Maximum envelope dimensions
 Temperature extremes
 Termination requirements
 Dimensions

2.2. Principle of solenoid actuator

Lorenz's law of electromagnetic induction states that a magnetic flux Φ exists due to a magnetic field. The field strength H and the flux density B are related by the magnetic permeability μ of the substance that the field is in (B = μH). Faraday's law of electromagnetic induction states that a change in the flux linking a conductor induces an electromotive force (emf), or voltage, across the conductor.

Figure 1 Working model of solenoid actuator


An actuator is opened or closed by an electromagnet. This action is achieved
by the movement of a magnetic plunger to seal off or open a port when voltage is
applied.
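For a rough analytical cross-check of a result like this, the attractive force on the plunger can be estimated from the flux density in the working air gap using the Maxwell stress formula F ≈ B²A/(2μ₀). The sketch below is only a simplified estimate under assumed values (uniform flux over an assumed pole-face area); it is not the QuickField FEM calculation used in this paper:

```python
import math

MU_0 = 4 * math.pi * 1e-7     # permeability of free space (H/m)

def plunger_force(b_gap, pole_area):
    """Approximate pull force on a plunger: F = B^2 * A / (2 * mu_0)."""
    return b_gap ** 2 * pole_area / (2 * MU_0)

# Assumed example values (not taken from the paper): 0.113 T gap flux density,
# 4 cm^2 pole-face area.
print(f"{plunger_force(0.113, 4e-4):.2f} N")   # ~2 N for these assumed values
```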

3. Problem formulation

The problem taken is the plunger movement of solenoid actuator. In this case
the plunger movement will be controlled by using the label mover in the “Quick field”.
This is used to calculate the mechanical force, flux density and many other
parameters.

3.1. Methodology

The simulation of solenoid actuator uses the following steps

 Geometry and Meshing


 Material property
 Loading sources
 Boundary conditions
 Post processing results

A. Geometry and Meshing

The geometry of the model is drawn with specified units using Cartesian coordinates, taking the grid as the reference. The drawn geometry is enclosed in a closed loop as shown in Fig. 2. The mesh is then created on the geometry. Labels are assigned to the geometric objects describing the material properties, sources and boundary conditions.

Figure 2 Quick field grid distributions for the enclosed loop array

B. Material Property
The material properties for the geometry of the model are given. Air acts on the outer surface of the model, so for the air, coil and plunger the relative permeability is taken as μ = 1.

C. Loading Source

The two types of loading sources available in the Quick-field software are,

 Field source.
 Conductor connections.
The loading source chosen for this analysis is of the field source type; current density is the loading source available in the field source type. The current densities used for the iron and the plunger are given below.

Iron: i = 1,000,000 A/m2
Plunger: i = 1.5 A/m2

D. Boundary conditions

The various boundary conditions that are available in the Quick-field software
are,

 Magnetic potential.
 Tangential field.
 Zero normal flux.

The magnetic potential boundary condition is used to describe the solenoid movement that is penetrated by the magnetic field. Here the magnetic potential is A = A0, where A0 = 0.

After describing the problem, it is solved and the results are obtained.

E. Post Processing
The output results, such as the mechanical force, magnetomotive force, magnetic flux, surface energy, average surface potential, line integral of flux density, and surface integral of strength, are obtained. The typical solenoid actuator magnetic field is shown in Fig. 3. The flux density of the actuator is shown in Fig. 4. The movement of the plunger is given in Fig. 5.

Figure 3 Example of typical solenoid actuator magnetic field model

[Colour scale for Figure 4: flux density B (T), from 0.0000 to 0.1130.]

Figure 4 Quick field plot for solenoid actuator magnetic flux


Figure 5 Movement of the plunger influencing the magnetic field

4. Results and Discussion

The mechanical force F, flux density B and field strength H for the solenoid valve are obtained and listed in Table 1. The B-H curve for the core and the plunger is shown in Fig. 6. The plot of flux density versus length of plunger is shown in Fig. 7.

Figure.6 B-H curve for the core and the plunger


[Figure 7 axes: flux density B (T), from 0.000 to 0.064, versus plunger length L (cm), from 0 to 50.]

Figure 7 Flux density Vs length of plunger

Table 1. Values of F, B and H

S.No.   Step   Mechanical force F (N)   Strength H (A/m)   Flux density B (Wb)
1       1      0.073                    460                0.80
2       1-2    0.029                    640                0.95
3       1-3    0.061                    720                1.00
4       1-4    0.091                    890                1.10
5       1-5    0.026                    1280               1.25
6       1-6    0.013                    1900               1.40
7       1-7    0.040                    3400               1.55
8       1-8    0.133                    6000               1.65

5. Conclusion

Thus the simulation of the solenoid actuator influencing the magnetic force was carried out successfully with QuickField. The mechanical force, flux density and field strength for the different steps were obtained. The calculated force applied to the plunger is F = 374.1 N.

6. References

[1] D. F. Ostergaard, "Magnetics for static fields", ANSYS Revision 4.3 Tutorials, 1987.

[2] K. Kowalenko, "Saving lives, one landmine at a time", The Institute, IEEE, Vol. 28, No. 1, March 2004.

[3] Cornelis J. Kikkert, "A low cost multifrequency landmine detector", James Cook University, Queensland, Australia, 4811.

[4] QingHui Yuan, "Self calibration of push-pull solenoid actuators in electrohydraulic valves", IMECE 2004-62109, November 2004.

DESIGN AND FABRICATION OF WALL CLIMBING ROBOT WITH TWO DEGREES OF FREEDOM HAVING MINIMAL SUCTION CUPS AND ACTUATORS

K.P. RAMESH, M.E. (MANUF), VMKV ENGG COLLEGE

ABSTRACT

A wall climbing robot intended for painting, inspection and cleaning applications has been developed. The robot has characteristic kinematic design features and is capable of moving a tool at a specific speed over a complex surface. In real field conditions, labour-intensive inspection demands great attention since it is subject to human error and limited reliability. The robot uses only two actuators and four suction cups. This robot, with two degrees of freedom on the wall, is a successful attempt at a semi-autonomous robot for industrial applications.

Submitted by:

D. SUDARSAN, B.E., M.B.A., M.M.M.
III YEAR

M.E. (Manf). – Part time

KLN COLLEGE OF ENGG.

MADURAI.
---------------------------------

Placement Officer

KAMARAJ COLLEGE OF ENGINEERING AND


TECHNOLOGY

VIRUDHUNAGAR

Mobile: 9442325078

9842981838

Email: talk2sudarsan@gmail.com

Guided by:

Dr. A. ASHA, M.E., Ph.D.
Professor and Head of the Department, Mechanical

Optimized Master Production schedule

– Using Modified Theory of Constraints

D.SUDARSAN1
III YEAR M.E. (Manf). – Part time, KLN COLLEGE OF ENGG, Madurai.

Dr.A.ASHA2
H.O.D. – Mechanical, KLN COLLEGE OF ENGG, Madurai.

ABSTRACT

In this competitive business environment, every business needs to make


money (profit) for its survival while overcoming tedious constraints. This may be attained by
using scientific management tools such as CIM, MRP, MRP II, ERP, TOC and TQM. All
these tools focus on eliminating waste and manufacturing products at an
economical cost. To obtain maximum profit, products should be produced in the right
quantity, of the right quality and at economical cost. Nearly 65% of the cost is involved in
inventory management, so profit can be maximized by effectively controlling the
inventory. MRP plays a vital role in controlling the inventory cost. The effectiveness
of MRP is determined by the optimality of the product-mix MPS. An optimal product-mix
MPS is simply the decision of how much of each product should be produced
and sold to make more profit. Traditionally, Integer Linear Programming (ILP) is used
to determine the optimal product-mix MPS. The drawback of ILP is that it needs high-level
expertise to formulate and more time to solve. Theory of Constraints is an alternative
approach to ILP; it gives the best solution and is easy to calculate. This project aims at
obtaining an optimum product mix using a modified Theory of Constraints considering
money, capacity and market constraints, which gives better results than the traditional
Theory of Constraints and is easier than Integer Linear Programming.
INTRODUCTION

If a business suffers from the following

• Poor on-time performance


• Long production lead-times
• High WIP and/or finished goods inventory
• High overtime
• Lots of expediting and rescheduling
• Wandering or stationary bottlenecks
• Reluctance to take on new business

then chances are good that the organization's constraint lies in production (or a
production-like operation), and that managing it well can yield a higher profit.

If this is the case, then you will benefit from investigating and implementing a
constraint - based method of production management.

THEORY OF CONSTRAINTS:

Theory of Constraints is an approach towards continuous improvement of an
organization, primarily developed by E. Goldratt, which asserts that constraints
determine the performance of a system. Goldratt defines a constraint as
anything that limits the performance of a system relative to its goal of making money
now and in the future. TOC considers a constraint to be the focusing point around which a
business can be organized or improved. Every business has at least one constraint;
without a constraint, a business would earn infinite profit.

Product Mix: Companies often need to determine the quantity of each product to
produce on a monthly basis. In its simplest form, the product mix problem involves
how to determine the amount of each product that should be produced during a
month to maximize profits. Product mix must usually adhere to the following
constraints:

• Product mix can’t use more resources than are available.


• There is a limited demand for each product. We can’t produce more of a
product during a month than demand dictates, because the excess production
is a waste (for example, a perishable drug).
• Money constraint for a particular product line or process.

PROBLEM DEFINITION

Determination of the optimal product mix for maximizing the profit of a sequence-
dependent flow-line manufacturing system that manufactures different models of a
single end product to meet market requirements.

RELATED WORK:

1. Richard Luebbe et al. (1992) compared both the ILP and TOC methods for solving
the product-mix problem, and concluded that the TOC methodology produces
better results than the ILP method.
2. B. Ronen et al. (1992) proposed the cost-utilization model to analyse
production lines and material flow. This model combines the Pareto approach
with the TOC approach.
3. Gerhard Plenert (1993) compared both TOC and ILP along with their
limitations. He concluded that ILP is the better planning tool and
comes closer to achieving the goal of maximizing throughput.
4. Godfrey C. Onwubolu (2001) used a tabu-search-based TOC product-mix
heuristic to identify optimal or near-optimal solutions for small to medium-size
problems. He concluded that when there are multiple constraint
resources in the product-mix problem, the tabu-search-based TOC
approach achieves the profit-maximization goal better than the traditional
algorithm.
5. S. Pass et al. (2003) presented a systematic approach for managing the
market-constrained environment using a hi-tech industry case study, and
suggested ways to reduce costs in non-critical areas while stressing the need
for lead-time protective buffers.
6. V. J. Mabin et al. (2003) investigated the product-mix dilemma using a variety of
TOC approaches that complement and extend traditional treatments such as
ILP, spreadsheet and graphical approaches. The product mix found by this
approach, however, does not satisfy the market demand.
Methodology:

Existing TOC Product Mix Methodology

In the existing TOC product mix heuristic, they have considered the
throughput as the difference between the selling price and raw material cost.

Step 1: Identification of the system’s constraint(s) involves the calculation of the


required loads on each resource to produce all the products. The constraint or
bottleneck is the resource whose market demand exceeds its capacity.

Step 2: Decide how to exploit the system’s constraint(s) involves

(a) Calculate the ratio of the throughput to the product’s constraint hour
(TH/CH).
(b) Arranging in descending order of the product’s TH/CH, reserve the
constraint capacity to build the product until the constraint resource’s
capacity is exhausted.
(c) Planning to produce all the products that do not require processing time on
the constraint resource (bottleneck) in the descending order of throughput
ratio.
This paper modifies the above heuristic and considers more factors to
obtain an optimal solution.

Proposed Methodology:

Modified TOC product mix heuristic

In reality profit does not depend upon the unit contributory margin i.e. the

difference between the selling price and raw material cost. Hence in the modified

approach, Profit is calculated as the difference between the Sales Value and

Total cost.

The following are the steps involved in Modified TOC product mix heuristic
Step1: Identify the constraint

a) Calculate the load on each resource. The constraint or bottleneck resource


is the one, whose capacity is not enough to meet the market demand
placed on it.

Step2: Decide how to exploit the constraints

a) Calculate the ratio TH* /CH (Throughput* /Constraint Hour) to each


product. Here the Throughput* is considered as the difference between the
Sales value and Total production cost.

b) Arranging in descending order of the product's TH*/CH ratio, reserve the
constraint capacity to build the product until the constraint resource's
capacity is exhausted.
c) Plan to produce all the products that do not require processing time on the
constraint resource in the descending order (sequence) of their TH*.

Step3: Check for the availability of Money.


a) Find the money needed for the production of all the products.
b) Check whether it is possible to produce all the products within the
available money and resource capacity, as per product mix obtained in
step2. If so go to step4. Otherwise revise the product mix.
Step4: Check for the Satisfaction of Market conditions.
a) Check whether the product mix obtained in step3 is satisfying the market
conditions. (Preference of products in the market).
If it satisfies, select and fix that product mix as the optimal product mix for the
stated conditions. Otherwise revise the product mix to satisfy the market conditions
in consideration with the available resource capacity and money.
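A minimal, self-contained sketch of Steps 1-3 of the modified heuristic is given below. The product data, capacity and money figures are illustrative assumptions only, not the case-study values, and the market check of Step 4 is left as a manual revision of the returned mix.

```python
# Illustrative sketch of the modified TOC product-mix heuristic (Steps 1-3).
# All product, capacity and money figures below are assumed for demonstration.

products = {
    # name: (market demand, sales value, total production cost,
    #        minutes needed on the bottleneck, money tied up per unit)
    "A": (1000, 50.0, 38.0, 4.0, 20.0),
    "B": (800,  45.0, 36.0, 3.0, 18.0),
    "C": (600,  60.0, 44.0, 5.0, 25.0),
}
bottleneck_minutes = 10080.0   # assumed weekly capacity of the constraint
money_available = 30000.0      # assumed weekly working capital


def modified_toc_mix(products, capacity, money):
    # Step 2(a): throughput* = sales value - total cost, per constraint minute
    ranked = sorted(products.items(),
                    key=lambda kv: (kv[1][1] - kv[1][2]) / kv[1][3],
                    reverse=True)
    mix = {}
    for name, (demand, sales, cost, t_con, money_per_unit) in ranked:
        # Step 2(b): reserve constraint capacity in descending TH*/CH order
        units_by_capacity = int(capacity // t_con)
        # Step 3: do not commit more units than the available money allows
        units_by_money = int(money // money_per_unit)
        qty = min(demand, units_by_capacity, units_by_money)
        mix[name] = qty
        capacity -= qty * t_con
        money -= qty * money_per_unit
    return mix


print(modified_toc_mix(products, bottleneck_minutes, money_available))
```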

Case Study:

To emphasize the performance of the Modified TOC product-mix heuristic,
a case study was conducted at M/s. Masanamoorthi Spinning Mills (P) Ltd., Mettilpatti,
Thoothukudi (Dt.), 80 km from Madurai, where four types of yarn, A, B, C and D,
are produced in a flow-line manufacturing system. Each type of yarn is
processed in six different workstations. The fifth workstation consists of six frames and
the other workstations consist of a single machine each. The workstations are arranged
as per the processing sequence, and the sequence of processing operations is
common to all types. The sales value and demand for each model differ in the
market, but the profit ratio is the same for all types of the single end product. Three
shifts per day and 7-day working per week are practised (i.e. 10,080 min per week). The
company needs to carry out the manufacturing with an optimal product mix that will give
the optimum product combination so as to attain maximum profit. The cost of operation
is Rs. 18.00 per hour for all types of products. The necessary data are tabulated in table
4.9. The total money available for manufacturing the yarn per week is Rs. 30,000.00 only.
Loading and unloading time for each model is 2.5 min. The fixed cost is taken to
be thirty-five per cent of the production cost of the product for each model.

Solution:

With the above processing steps of TOC and Modified TOC, Profit is

calculated and the results are tabulated as follows.

Comparison of Product sequence, Product mix And Profit

Constraints considered     Item          ILP solution        TOC heuristic          Modified TOC heuristic
                           Sequence      A-B-C-D             D-C-B-A                D-B-A-C
Capacity alone             Product mix   1252, 800, 1, 1000  1000, 1000, 800, 700   1000, 800, 1252, 0
                           Profit        68586.53            68187.00               68562.80
Capacity & Money           Product mix   1487, 777, 3, 23    1000, 1000, 800, 374   1000, 800, 992, 0
                           Profit        63767.28            55601.60               60658.80
Capacity, Money & Market   Product mix   1498, 39, 1000, 0   1000, 1000, 800, 374   1000, 800, 374, 1000
                           Profit        60183.31            55628.60               55628.60

The profit obtained for the different heuristics and different conditions is
charted in Figure 4.1. In this figure the conditions are taken on the X-axis and the total profit
on the Y-axis. On the X-axis, 1 denotes the product mix under the capacity
limitation alone, while 2 and 3 denote money availability with capacity limitations, and
money availability and market conditions with capacity limitations, respectively.

Figure 4.1 Profit comparison chart: total profit (Rs.) for ILP, TOC and Modified TOC under the different conditions.

Conclusion from the above

• Total profit obtained by the Modified TOC product-mix heuristic is higher than that of the
existing TOC product-mix heuristic and nearer to the ILP profit when capacity
and money constraints are considered.
• Total profit obtained by the Modified TOC product-mix heuristic is lower than the
ILP profit when the market constraint is also considered. This may be due to
the conflict caused by the existence of multiple managerial constraints.
Conclusion:

The modified TOC heuristic performs better than the original TOC product-mix
heuristic. The original TOC heuristic is capable of providing optimal solutions when
only capacity (physical) constraints exist, whereas the modified TOC product-mix heuristic
provides optimal solutions when multiple constraints (physical and managerial) exist.
The profit obtained by the modified TOC product-mix heuristic is higher than that of the
existing TOC product-mix heuristic and nearer to the ILP profit. When multiple managerial
constraints exist, the performance of this heuristic is not appreciable in certain
cases. This may be due to the presence of conflicts caused by the multiple
managerial constraints. The performance can be further improved by using TOC
Thinking Process tools to resolve the conflicts caused by the multiple managerial
constraints.

Hence the above modified approach considers all factors that hinder profit and
gives a better optimum product-mix strategy than the traditional TOC, and it is also
easier than ILP.

REFERENCE:

1. en.wikipedia.org/wiki/Theory_of_Constraints
2. www.goldratt.com
3. www.focusedperformance.com/toc01.html
4. www.dbrmfg.co.nz/
5. Journal of the Brazilian Society of Mechanical Sciences, Print ISSN 0100-7386, J. Braz. Soc. Mech. Sci., Vol. 22, No. 4, Rio de Janeiro, 2000.

NUMERICAL SIMULATION OF FLOW INSIDE A LID DRIVEN

SQUARE CAVITY

1.J.POORNALATHA, 2.K.UMADEVI, 3.K.SUPRADEEPAN.


1 & 2 Final year students, Department of mechanical, Kamaraj College Of Engg &
Technology, Virudhunagar.

3 Project Supervisor, Department of mechanical, Kamaraj College Of Engg &


Technology, Virudhunagar.

ABSTRACT:

The paper focuses on the simulation of flow inside a lid-driven square cavity
containing an incompressible fluid. The governing (Navier-Stokes) equations are solved
numerically. A numerical scheme based on the SOLA algorithm is proposed for
the solution of the 2D Stokes equations. The fundamental solutions of the Stokes
equations are adopted as the sources to obtain flow-field solutions. The present
method is validated against numerical schemes for lid-driven flows in a square cavity.
Different cases are considered, in which the Reynolds number of the flow is varied.
The objective is to choose a numerical scheme, analyse the vortex
formation and draw the velocity profiles at the mid horizontal and vertical sections.

Keywords: Numerical solution of the 2D Navier-Stokes equations, SOLA code, Lid-driven flow, Square cavity, Vortex formation, Velocity profile

Introduction:

It is generally accepted that the Navier-Stokes equations provide a complete
description of a wide variety of fluid flow problems. This implies that if the various
initial and boundary conditions are properly described and if the equations are solved
accurately, the important flow phenomena and their effects will be automatically
predicted.

The N-S equations provide a complete description of a wide variety of fluid
flow problems and describe the flow of a viscous fluid most accurately. The
N-S equations are a system of nonlinear partial differential equations. Except in a very
few special cases, no analytic method is available to solve them, so scientists and
engineers have looked for approximate solutions, which gave rise to various
approximate models. Although, strictly speaking, all fluids are viscous, under certain
conditions it is possible to introduce the approximate model of an inviscid fluid. At low
speeds all fluids behave as an incompressible fluid. Computational methods
developed for computing solutions of the compressible N-S equations are, in general,
not applicable to problems of incompressible flow. Several approximate models, like
the thin-layer model or the parabolised N-S equations, commonly classified
under the category of the reduced N-S equations [1], were investigated
during the seventies and eighties. None of these models is satisfactory for
problems with regions of reversed flow or for problems with large areas of
separation. Moreover, since the N-S equations may be solved for laminar flow in a
reasonable amount of time (at least in the 2-D case) on easily available
computational equipment such as personal computers, the modern trend is to go for
the solution of the N-S equations without introducing any approximation in the
equations or in the boundary conditions.

A large number of codes have been developed for incompressible flows. It is quite
difficult to categorize the various codes developed for incompressible flows; however,
these codes may differ in one or more aspects over a total of eight parameters.

PROBLEM DEFINITION.

Consider a square cavity with walls on three sides filled with incompressible
viscous fluid. The lid of the cavity moves to the right with uniform speed, parallel to
itself. This movement sets the fluid inside in motion. This problem has been used as
a test case for comparing different numerical methods for solving the incompressible
N-S Equations.

SOLA CODE:

The type of algorithm chosen for solving the Navier-Stokes equations is a SOLA-based
algorithm. The SOLA code is based on a finite difference scheme using an explicit
algorithm; the code was developed at the Los Alamos Laboratory by Hirt et al. (1975). In
this method, velocities are computed by solving the momentum equations in an
explicit manner using the velocity and pressure fields of the previous time step.
The updated velocity field, however, does not satisfy the equation of continuity.
These velocities (and the pressure) are then adjusted to satisfy the continuity equation in
an iterative manner.
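A highly condensed sketch of this correction step is given below. It is a schematic illustration on a MAC-staggered grid with simplified boundary handling, written for this summary and not taken from the original SOLA code.

```python
# Schematic illustration (not the original SOLA code) of the SOLA-type
# correction: provisional velocities from the explicit momentum step are
# adjusted cell by cell, together with the pressure, until each cell
# satisfies the continuity equation.
import numpy as np


def sola_pressure_correction(u, v, p, dx, dy, dt, beta=1.2, tol=1e-4, iters=500):
    """u: (nx+1, ny) face velocities in x, v: (nx, ny+1) face velocities in y,
    p: (nx, ny) cell-centred pressures on a staggered (MAC) grid."""
    nx, ny = p.shape
    omega = beta / (2.0 * dt * (1.0 / dx**2 + 1.0 / dy**2))  # relaxation factor
    for _ in range(iters):
        max_div = 0.0
        for i in range(nx):
            for j in range(ny):
                # divergence of the provisional velocity field in cell (i, j)
                div = (u[i + 1, j] - u[i, j]) / dx + (v[i, j + 1] - v[i, j]) / dy
                max_div = max(max_div, abs(div))
                dp = -omega * div          # pressure change that removes it
                p[i, j] += dp
                # push the correction back onto the four faces of the cell
                u[i + 1, j] += dt * dp / dx
                u[i, j] -= dt * dp / dx
                v[i, j + 1] += dt * dp / dy
                v[i, j] -= dt * dp / dy
        if max_div < tol:                  # continuity satisfied everywhere
            break
    return u, v, p


# minimal usage on a tiny grid with a provisional field (illustrative only)
u = np.zeros((5, 4)); v = np.zeros((4, 5)); p = np.zeros((4, 4))
u[2, 1] = 1.0                               # a local imbalance to be removed
u, v, p = sola_pressure_correction(u, v, p, dx=0.25, dy=0.25, dt=0.01)
```

In the full algorithm this correction follows the explicit momentum update at every time step, and the lid and wall velocities are re-imposed after each sweep.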

FORM OF EQUATION:

We frequently need the dimensionless form of the basic equations for
incompressible flow, which we present here for 2-D flow. We consider these equations
without external body forces or heat/energy supply. The governing continuity and
momentum equations in rectangular Cartesian coordinates in non-dimensional form
are discussed in two forms: the Navier-Stokes equations may be written in conservative
or non-conservative form. The Navier-Stokes equations in primitive variables are
given in non-conservative form as

x-momentum:

\[ \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) \]

y-momentum:

\[ \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) \]

Here u, v denote the velocity components along the x and y axial directions, p and ρ the
dimensionless pressure and density, and ν = 1/Re, where Re is the Reynolds number. For
incompressible flows, governing equations may be used in either conservative or
non-conservative form. For compressible flows, on the other hand, it is desirable to
use equations in conservative form to ensure conservation of mass, momentum and
energy across the shock.

It may be observed in the non-conservative form of the Navier-Stokes


equations that, in each equation, the highest-order derivatives of one of the basic
variables ρ, v and e with respect to the space variables are present. These
equations are non-linear and coupled. It may also be noted that to each equation one
basic variable is naturally associated, the time and material derivatives of
which are given explicitly by that equation: ρ for the continuity
equation, the velocity vector v for the vectorial momentum equation, and e for the energy
equation. Furthermore, the space derivatives of the highest order, i.e. second order,
are solely derivatives of the basic variable associated with the equation. Thus it is
possible to gain some insight into the mathematical nature of these equations by
looking at each of them separately as an equation for the determination of the
associated basic variable, assuming the other basic variables in that equation to be
known quantities.

The governing continuity equation is

\[ \frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0 \]

The continuity equation, considered as an equation for unknown ρ , is a first-


order equation whose characteristic curves are the trajectories of fluid particles. This
equation is of hyperbolic character for steady as well as for unsteady flows [2],[3].

Boundary conditions:
The number and type of boundary conditions to be imposed depend on
the mathematical nature of the governing equations. The main boundary conditions
are inlet and outlet boundaries and solid boundaries. We take the problem as the two-
dimensional driven cavity, so the bottom, right and left sides are treated as walls and
the top as the inlet and outlet boundary. The lid of the cavity moves to the right with
uniform speed u = 1, parallel to itself. This movement sets the fluid inside in motion.

(Schematic: moving lid at the TOP; LEFT, RIGHT and BOTTOM faces are stationary walls.)

RESULTS:

Stokes flow in a square cavity with the top lid moving with a unit velocity in the
horizontal x-direction is considered. The predicted results for the velocity profile
graphs are shown below.

(Figures: velocity profile and flow field for Re = 150, 250, 500 and 750.)

CONCLUSION:

The proposed numerical scheme is validated by solving the 2D Stokes equations
in a lid-driven square cavity. Flow in a square cavity with the top lid moving
with a unit velocity in the horizontal x-direction is considered as the problem for
which the solution is given. This helps to bring out a basic numerical
solution of the Navier-Stokes equations. With its help we can also solve a
rectangular cavity, a sudden enlargement, a sudden contraction physical domain, and
so on. The predicted results for the u-y and x-v plots are compared with solutions in
the literature. It is observed that accurate numerical results are obtained in the flow
field.

REFERENCES:

[1]. C. A. J. Fletcher, Computational Techniques for Fluid Dynamics, Vol. I, Springer-Verlag, Berlin, Heidelberg, New York, 1988.

[2]. R. Courant and D. Hilbert, Methods of Mathematical Physics, Vol. II, Wiley, New York, 1953.

[3]. P. Prasad and R. Ravindran, Partial Differential Equations, Wiley Eastern Ltd., New Delhi, 1985.

[4]. Journal of Computational Physics, Volume 227, Issue 4, 1 February 2008.
SMART MATERIALS

P.G.GURUSAMYPANDIAN
Assistant Professor
Department of Mechanical
engg,
Kalasalingam University.
Email:
akitiwelcomes@yahoo.com
---------------------------------------------------------------------------------------------------

ABSTRACT

Nowadays, smart materials have found an important place in modern
engineering applications. Smart materials or intelligent material systems integrate
sensors, actuators and control with a material or structural component, and
possess intelligent, life-like features. The development of smart materials is inspired
by biological structural systems and their basic characteristics of functionality,
efficiency, precision, self-repair and durability. Smart materials are not only singular
materials but also hybrid composites or integrated systems of materials.

Shape Memory Alloys are one of the major categories of smart materials
which after being strained at certain temperature revert back to the original shape
because of unique properties such as Shape Memory effect, Pseudo elasticity and
high damping capacity. These properties in smart hybrid composites provide them
the tremendous potential for creating new paradigms for material-structural
interactions and demonstrate various successes in engineering applications like
Aeronautical engineering, in medical fields like Vascular stents and Osteosynthesis
etc., and in commercial fields also.
The main advantages of shape memory alloys are that they are bio-compatible,
strong and highly corrosion resistant. They generally have a high power-to-weight
ratio, can withstand a large amount of recoverable strain and, when heated above the
transition temperature, can exert high recovery stresses of 700 MPa which can
be used to perform work.

Smart materials have the potential to change engineering, technology and


design principles completely. Smartness describes self-adaptability, self-sensing,
memory and multiple functionality of the materials or structures. The paper shows
that these characteristics provide numerous possible applications for these materials
and structures in aerospace, military, sports, automobile, civil infrastructure systems
and biomechanics. They will soon be in everything from computers to concrete
bridges.

The smart materials covered in this paper are primarily piezoelectric, shape
memory alloys, electro strictive, optical fibers and magneto strictive. It also deals
with emerging market for smart materials, India’s development in this field & future
perspectives.

1. INTRODUCTION
What is a smart material? We could define it as one whose properties or shape
may change in response to some stimulus from the environment. What makes a
material smart is that changes like this happen by design. Typically they might
respond to stimuli that would leave most materials unchanged, such as exposure to
a particular chemical reagent or to light. Typically the magnitude of their response is
large.

The concept of smart materials may be new, but smart materials themselves go
back a long way. Piezoelectrics produce an electrical signal when squeezed; some
natural minerals, such as quartz, are piezoelectric. Smart materials have the potential
to change engineering, technology and design principles completely. They do away
with mechanical machines as such, and give us a new breed of device for which we
don't yet have a proper word. Smart materials are particularly attractive for doing
engineering on the nano scale. It is possible now to make machines like this with
moving parts too small to see with the naked eye.
The materials, which have the ability to perform sensing and actuating functions
and therefore are capable of imitating living systems are called “smart” materials.
The “I.Q.” of smart materials is measured in terms of their “responsiveness” to
environmental stimuli and their “agility.” The first criterion requires a large amplitude
change, whereas the second assigns faster response materials with higher “I.Q.”

Today the drive to innovation is stronger than ever. Novel technologies and
applications are spreading in all fields of science. Consequently, expectations and
needs for engineering applications have increased tremendously, and the prospects
of smart technologies to achieve them are very promising.

2. CLASSIFICATION OF SMART MATERIALS

Smart materials can be grouped into the following categories:

A. Piezoelectric:

When subjected to an electric charge or a variation in voltage,


piezoelectric material will undergo some mechanical change, and vice versa. These
events are called the direct and converse effects.

B. Electrostrictive:

This material has the same properties as piezoelectric material, but


the mechanical change is proportional to the square of the electric field. This
characteristic will always produce displacements in the same direction.

C. Magnetostrictive :

When subjected to a magnetic field, and vice versa (direct and


converse effects), this material will undergo an induced mechanical strain.
Consequently, it can be used as sensors and/or actuators. (Example: Terfenol-D)

D. Shape Memory Alloy:


When subjected to a thermal field, this material will undergo phase
transformations, which will produce shape changes. It deforms to its martensitic
condition with low temperature, and regains its original shape in its ‘austenite’
condition when heated (high temperature). (Example: Nitinol TiNi.)

E. Electrorheological and Magnetorheological Fluids (Active Fluids)

Active fluids respond to an electric or a magnetic field with a change in


viscosity. Magnetorheological fluids are active fluids that respond to magnetic fields,
whereas electrorheological fluids respond to electric fields. Active fluids can adapt
and respond almost instantly and have been used in damper, valve, clutch, and
brake applications with few or no moving parts.

F. Soft Smart Materials:

Many soft smart materials are polymer hydrogels of cross-linked polymers
that swell and shrink reversibly in water. These "volume transitions" can be very
abrupt, like freezing or melting transitions, and can be induced in some gels by
changes in environmental conditions: temperature, pH, electric fields, light, or the
presence of some chemical substance.

3. SMART STRUCTURE

A smart structure is a system that incorporates particular functions of sensing


and actuation to perform smart actions in an ingenious way. The five basic
components are:

1 Data Acquisition (tactile sensing):

The aim of this component is to collect the required raw data


needed for an appropriate sensing and monitoring of the structure.

2 Data Transmission (sensory nerves):

The purpose of this part is to forward the raw data to the local
and/or central command and control units.

3 Command and Control Unit (brain):

The role of this unit is to manage and control the whole system by
analyzing the data, reaching the appropriate conclusion, and determining the actions
required.

4 Data Instructions (motor nerves):

The function of this part is to transmit the decisions and the

associated instructions back to the members of the structure.

5 Action Devices (muscles):

The purpose of this part is to take action by triggering the


controlling devices/units.

4. SIGNIFICANCE

Smart materials and systems open up new possibilities, such as clothes that
can interact with a mobile phone or structures that can repair themselves. They also
allow existing technology to be improved. Using a smart material instead of
conventional mechanisms to sense and respond can simplify devices, reducing
weight and the chance of failure. Smart materials research is of long standing, but
commercial exploitation has been slow. The Foresight report concluded that "smart
materials technology provides an excellent opportunity for the UK. However, despite
significant progress over the last five years, supported by various government
programmes, it [the UK] remains relatively poorly positioned worldwide".

5. APPLICATIONS

A. Structural Health monitoring

Embedding sensors within structures to monitor stress and damage can reduce
maintenance costs and increase lifespan. This is already used in over forty bridges
worldwide.

B. Reducing Food Wastage

Food makes up approximately one fifth of the UK's waste. One third of food grown
for consumption in the UK is thrown away, much of which is food that has reached its
best-before date without being eaten.6,7 These dates are conservative estimates
and actual product life may be longer. Manufacturers are now looking for ways to
extend product life with packaging, often using smart materials. As food becomes
less fresh, chemical reactions take place within the packaging and bacteria build up.
Smart labels have been developed that change colour to indicate the presence of an
increased level of a chemical or bacteria. A ripeness sensor for pears is currently
being trialled by Tesco. Storage temperature has a much greater effect than time on
the degradation of most products. Some companies have developed 'time-
temperature indicators' that change colour over time at a speed dependent on
temperature, such as the Onvu™ from Ciba Speciality Chemicals and TRACEO® by
Cryolog. The French supermarket Monoprix has
been using time-temperature indicators for many years, but they are not yet
sufficiently accurate or convenient for more widespread introduction.

C. Vibration reduction in sporting goods

A new generation of tennis rackets, golf clubs, baseball bats (Figure 4)


and ski boards have been introduced to reduce the vibration in these sporting goods,
increasing the user’s comfort and reducing injuries.
D. Noise reduction in vehicles

Filaments of piezoelectric ceramic fibre shaped into various geometries are


used in conventional fabric or material processing to counter noise in vehicles,
neutralize shaking in helicopter rotor blades, or nullify or at least diminish vibrations
in air conditioner fans and automobile dashboards.

E. Smart shock absorber

Current research is focused on vibration suppression in automobiles using


smart shock absorbers. Inside the smart shock absorber developed by Toyota (fig.5)
is a multilayer piezoelectric ceramic that has about five layers for sensing road
vibrations.
Fig.5 Smart Shock Absorbers

F. Military applications

Smart Skin - In battle, soldiers could wear a T-shirt made of a special tactile
material that can detect a variety of signals from the human body, such as the
detection of hits by bullets.

Autonomous Smart System- The carriage systems, whether manned or unmanned,


and equipped with sensors, actuators and sophisticated controls, will improve
surveillance and target identification and improve battlefield awareness.

Smart Aircraft- Figure 6 presents a few potential locations for the use of smart
materials and structures in aircraft.
6. FUTURE BENEFITS

The potential future benefits of smart materials, structures and systems are
amazing in their scope. This technology gives promise of optimum responses to
highly complex problem areas by, for example, providing early warning of the
problems or adapting the response to cope with unforeseen conditions, thus
enhancing the survivability of the system and improving its life cycle. Moreover,
enhancements to many products could provide better control by minimizing distortion
and increasing precision. Another possible benefit is enhanced preventative
maintenance of systems and thus better performance of their functions. By its
nature, the technology of smart materials and structures is a highly interdisciplinary
field, encompassing the basic sciences physics, chemistry, mechanics, computing
and electronics as well as the applied sciences and engineering such as aeronautics
and mechanical engineering. This may explain the slow

progress of the application of smart structures in engineering systems, even if the


science of smart materials is moving very fast.

7. RESEARCH WORLDWIDE


The US is the world leader in smart materials research, mainly because of the
large defence research and development budget. The US Defense Advanced
Research Projects Agency has had an in-house programme of smart materials and
structures research since the early 1990s, in contrast to the UK.3 However, the UK is
strong in many areas and is at the forefront of research into structures that can repair
themselves. Other countries have other strengths - Japanese research is very strong
in electronics and packaging; Germany has a lead in biomimetics (science that
imitates nature) and France is active in packaging research and development. The
EU funds some smart materials and systems research through the Seventh
Framework Programme.

8. PROGRAMME RELATED TO SMART MATERIALS

India embarked on the national programme on smart materials in 2000 with a


total investment of Rs 750 million over five years. As part of the programme,
research facilities in the country, including those under Defence Research and
Development Organization (DRDO), would develop smart materials and systems for
applications in defence , aerospace, civil engineering, telecom and biomedical fields.

9. ENVIRONMENTAL RISKS

Smart materials and systems are hugely varied and are applied in a wide
range of fields. It is hard to make generalisations about their environmental impact,
as this depends on the specific materials and applications. However, recyclability is
not an issue that most researchers are addressing. They believe that smart materials
are either too early in their development or used in such small quantities that this is
not yet an issue.

10. CONCLUSION

Today, the most promising technologies for lifetime efficiency and improved
reliability include the use of smart materials and structures. Understanding and
controlling the composition and microstructure of any new material are the ultimate
objectives of research in this field, and are crucial to the production of good smart
materials. The insights gained by gathering data on the behaviour of a material's
inner crystal structure as it heats and cools, deforms and changes will speed the
development of new materials for use in different applications. Structural ceramics,
superconducting wires and nanostructured materials are good examples of the
complex materials that will fashion nanotechnology. New or advanced materials that
reduce weight, eliminate sound, reflect more light, dampen vibration and handle
more heat will lead to smart structures and systems, which will definitely enhance
our quality of life.

11. REFERENCES

1. S. C. Pradhan, T. Y. Ng, K. Y. Lam and J. N. Reddy (2001), "Control of Laminated Composite Plates using Magnetostrictive Layers", Smart Materials and Structures, Vol. 10, pp. 657-667.

2. R. Balasubramaniam (2007), Callister's Materials Science and Engineering, WSE Wiley Publications.

OPTIMIZATION OF TRUSS STRUCTURE USING LINEAR PROGRAMMING


TECHNIQUE

Rajan N.

Lecturer in Department of Mechanical Engg.,

Vinayaka Missions Kirupananda Variyar Engg College, Salem.


Email ID: nrajaned@gmail.com,

Mobile: 09360373102

ABSTRACT

This paper attempts to reduce the weight of a total truss structure by using an
optimization technique. The linear programming technique is used to formulate the
objective function and the constraint variables. The objective of the truss structure
problem is to reduce the weight of the total structure by a certain value, so that the
cost of the structure is reduced. The forces acting in the members, the allowable
stress of the members, the buckling load of the members and the deflection of the
members are taken as the constraints. The problem dealt with is that of
minimizing the weight of the truss structure. The most important contribution of this
model is that it can be operated by any individual who has little knowledge
of truss structures. This is achieved by running the optimization model on a
user-friendly personal computer system and by using the Solver tools to analyze the
problem.

Key Words: Truss structure, Linear Programming, Solver – MS Excel

1.1. Introduction:

The truss structure is used in many civil engineering applications like bridges,
buildings and roofs. In today's complex environment, design engineers are faced with
thousands of daily decisions and must rely on a myriad of processes and
conflicting data to meet the industry's needs. These are major decisions, and if
any one piece of data is not correct then the entire decision can lead to significant
consequences.

1.2. Introduction of Truss structure (Frames):


The truss structure is used in many civil engineering applications like bridges,
buildings and roofs. There are two types of optimization problems in a truss structure
design.

1. The topology of the truss structure: to determine the optimal connectivity of


the elements in a truss (number of joints) structure. The objective is to
minimize cost of materials and construction.
2. Optimal layout of truss: to determine the optimal cross-section of all elements
in order to achieve a minimum cost of materials and construction.

Although both these problems attempt to achieve the same objective, the search
space and the optimization algorithm required to solve each problem are different,
hence, we discuss the latter problem.

A structure made up of several bars or members riveted or welded together is known
as a frame. If the frame is composed of just enough members to
keep it in equilibrium when supporting an external load, then
the frame is known as a perfect frame. Though in actual practice the members are
welded or riveted together at their joints, for calculation purposes the joints are
assumed to be hinged or pin-jointed.

The condition for the perfect frame:

n = 2j – 3

where,

n – Number of members

j – Number of joints

The conditions for an imperfect frame:

1. If n < 2j - 3, the frame is a deficient frame.

2. If n > 2j - 3, the frame is a redundant frame.
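As a trivial illustration of the above condition (added here for clarity, not taken from the paper), the check can be expressed as follows; by the formula, a five-member, four-joint truss such as the one analysed later is a perfect frame.

```python
# Quick check of the frame condition n = 2j - 3 (illustrative only).
def classify_frame(n_members, n_joints):
    ideal = 2 * n_joints - 3
    if n_members == ideal:
        return "perfect"
    return "deficient" if n_members < ideal else "redundant"


print(classify_frame(5, 4))   # a five-member, four-joint truss -> "perfect"
```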
The analysis of the forces in the members of the frame consists of two steps

a. Determination of the reactions at the supports


b. Determinations of the forces in the members of the frame (by the method
of joints)
Buckling Load:

The load at which a long column fails due to buckling (or bending) is known as
the buckling, crippling or critical load. When the force in a member is compressive in
nature, the buckling condition is applied to that member.

The Euler buckling load for a member with hinged ends is

\[ P = \frac{\pi^2 E I}{L^2} \]

Deflection:

A member which is subjected to load will undergo deflection. The
deflection in the truss is calculated using the virtual load (unit load) method:

\[ \delta = \sum \frac{P U L}{A E} \]

1.3. Technology to Support Decision-Making

Technology can play a variety of roles in supporting design engineers' decision-
making processes. One role is facilitating collaboration: many decisions ought to be
made on the basis of information that is held by different owners in the processes.
A technology capability in increasing demand is the ability to recommend
specific actions based on the optimization of mathematical models of the decision
problem, for example, deciding the optimal layout of the truss structure, i.e.
determining the optimal cross-section of all elements in order to achieve a minimum
cost of materials and construction.

2.1. Optimization
Merriam-Webster defines optimization as “the mathematical procedures involved in
the act, process or methodology of making something as fully perfect or effective as
possible”.

This definition makes clear the appeal of optimization to an engineer. Using scientific
or mathematical procedures to arrive at the perfect or most effective decision offers
the possibility of dramatically improving performance. In practice, optimization has
come to mean packaged software applications that postulate a model for the optimal
layout of the truss structure, estimate the various parameters that govern the behaviour
of the truss structure model for each specific instance, and then apply a mathematical
technique to determine the best cross-sectional values in order to reduce the cost of
the material. Techniques include linear and non-linear programming, dynamic
programming, etc.

2.2. Mathematical Optimization:

The strengths of mathematical optimization include the ability to consistently
evaluate far more alternatives than a human can, and the ability to highlight and
account for the complex trade-offs that are inherent in many decisions. It also
provides the decision-maker access to thousands of human-years of academic
research and development spent in the service of making better decisions.

When mathematical optimization is implemented in software technology, its greatest
weakness is that it requires the decision problem to be highly structured and the
relationships among the moving parts of the problem to be quantified.

There are other decision problems where mathematical optimization may be
valuable, but where there are multiple objectives, or constraints and trade-offs that
are difficult to make explicit or quantify. For these decision problems, the
optimization approach may still be employed, but it is important to understand its
limitations and couple it with other decision-support methods.
2.3. Linear programming:

Linear programming applies to optimization models in which the objective and


constraint functions are strictly linear.

Constrained optimization models consist of three major components,

a. Decision variable
b. Objective function
c. Constraints

Decision variables: They are the physical quantities that an operations manager can
control. The optimal values of these variables will be determined after solving the
problem through a constrained optimization problem.

Objective Function: It is a mathematical function of the decision variable and it states


what is to be maximized or minimized.

Constraints: The practical limitations that restrict the choice of the decision variables
of a problem are stated as constraints. These constraints can be mathematically
represented by less than (<), greater than (>), less than or equal to (<=), equal to (=)
or greater than or equal to (>=) relations.
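As a small, self-contained illustration of these three components (not part of the original paper), an arbitrary linear program can be stated and solved with SciPy's linprog; all numbers below are made up.

```python
# Minimal linear-programming illustration: two decision variables, a linear
# objective and two linear resource constraints (all values are arbitrary).
from scipy.optimize import linprog

# Objective: maximise 3*x1 + 5*x2, i.e. minimise -(3*x1 + 5*x2)
c = [-3.0, -5.0]

# Resource constraints written as A_ub @ x <= b_ub
A_ub = [[1.0, 2.0],   # usage of resource 1 per unit of x1, x2
        [3.0, 1.0]]   # usage of resource 2 per unit of x1, x2
b_ub = [14.0, 15.0]   # available amounts of the two resources

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("decision variables:", res.x)
print("maximum objective value:", -res.fun)
```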

3. Application of the Solver Tool in Linear Programming:

Solvers, or optimizers, are software tools that help users find the best way to allocate
scarce resources. The resources may be raw materials, machine time or people
time, money, or anything else in limited supply. The "best" or optimal solution may
mean maximizing profits, minimizing costs, or achieving the best possible quality.
An almost infinite variety of problems can be tackled this way, but here are some
typical examples:

1. Finance and Investment

2. Design and Manufacturing

3. Distribution and Networks

3.1. To use a solver, you must build a model that specifies:


1. The resources to be used, using decision variables,

2. The limits on resource usage, called constraints, and

3. The measure to optimize, called the objective.

The solver will find values for the decision variables that satisfy the constraints while
optimizing (maximizing or minimizing) the objective.

3.2. Using Spreadsheets:

Spreadsheets such as Microsoft Excel provide a convenient way to build such a


model. Anyone who has used a spreadsheet is already familiar with the process:
Cells on a worksheet can hold numbers, labels, or formulas that calculate new
values -- such as the objective of an optimization. Constraints are simply limits
(specified with <=, = or >= relations) on formula cells. And the decision variables are
simply input cells containing numbers. Frontline's Premium Solver products provide
powerful tools for solving, or optimizing, such models.

Linear programming problems -- where all of the relationships are linear, and hence
convex -- can be solved up to hundreds of thousands of variables and constraints,
given enough memory and time. Models with tens of thousands of variables and
constraints can be solved in minutes (sometimes in seconds) on modern PCs.

3.3. Essential Steps

To define an optimization model, the following are the essential steps:

1. Choose a spreadsheet cell, or a variable in your program, to hold the value of


each decision variable in your model.

2. Create a spreadsheet formula in a cell, or an assignment statement in a


program function, that calculates the objective function in the model.

3. Similarly, create formulas in cells, or assignment statements in a program


function, to calculate the left hand sides of your constraints.

4. Use the dialogs in Excel, or function calls in the program, to tell the Solver
about the decision variables, objective and constraint calculations, and
desired bounds on constraints and variables.

5. Click Solve in Excel, or call optimize () in the program, to find the optimal
solution.

Within this overall structure, there is a great deal of flexibility, either in a spreadsheet or
in a custom program, in how to choose cells or variables to hold the model's decision
variables and constraints, and which formulas and built-in functions to use. Since
decision variables and constraints usually come in groups, it is best to use cell
ranges in the spreadsheet, or arrays in the program, to represent them.

4. Analysis:

Consider the five-bar truss structure, which carries a load of 1 kN as shown in the
figure.

(Figure: five-bar truss, 7.5 m span and 5 m height, with member inclinations of 30° and 60°, carrying a 1 kN load.)

Objective Function:

Once the connectivity of the truss is given, the cross-sectional area and the material
properties of the members are the design parameters. Let us choose the cross-
sectional area of the members as the design variables. There are five design
variables, each specifying the cross-section of a member (A1 to A5). This completes
the first task of the optimization.

Constraints:

(a). Force in the members:


The next step is to formulate the constraints. In order for the truss to carry the given
load P = 1 kN, the tensile and compressive stresses generated in each member must
not be more than the corresponding allowable strengths Syt and Syc of the material.

Let us assume that,

1. The material strength for all elements is Syt = Syc = 500 MPa, and
2. The modulus of elasticity E = 200 GPa.
The force in the members is found out using the method of joints.

Member   Magnitude of Force   Nature of Force
1        0.666 kN             Compressive
2        0.5767 kN            Tensile
3        1.1547 kN            Tensile
4        1.334 kN             Compressive
5        1.155 kN             Tensile

Thus the first set of constraints can be written as

\[ \frac{0.666}{A_1} \le S_{yc}, \quad \frac{0.5767}{A_2} \le S_{yt}, \quad \frac{1.1547}{A_3} \le S_{yt}, \quad \frac{1.334}{A_4} \le S_{yc}, \quad \frac{1.155}{A_5} \le S_{yt} \]

The other set of constraints arises from the stability consideration of the compression
members 1 and 3. Realizing that each of these members is connected by pin joints,
we can write the Euler buckling condition for the axial load in these members as
follows,

\[ 0.666 \le \frac{\pi E A_1^2}{18.75}, \qquad 1.1547 \le \frac{\pi E A_2^2}{6.25} \]

In most structures, deflection is a major consideration. In the above truss structure,
let us assume that the maximum vertical deflection at the point of application of the
load is

\[ \delta_{max} = 2 \text{ mm} \]

By using Castigliano's theorem and the virtual load method, the deflection
constraint is obtained as follows,

\[ \frac{2.884}{E A_1} + \frac{2.884}{E A_2} + \frac{2.8868}{E A_3} + \frac{5.776}{E A_4} + \frac{2.8875}{E A_5} \le \delta_{max} \]

The variable bounds are set as follows

\[ 10 \times 10^{-6} \le A_1, A_2, A_3, A_4, A_5 \le 500 \times 10^{-6} \]


In the following, the above truss structure problem is presented in Non-Linear
Programming (NLP) form, which is suitable for solving by an optimization algorithm.

Minimize: \( 4.33 A_1 + 5 A_2 + 2.5 A_3 + 4.33 A_4 + 2.5 A_5 \)

Subject to

\[ S_{yc} - \frac{0.666}{A_1} \ge 0, \quad S_{yt} - \frac{0.5767}{A_2} \ge 0, \quad S_{yt} - \frac{1.1547}{A_3} \ge 0, \quad S_{yc} - \frac{1.334}{A_4} \ge 0, \quad S_{yt} - \frac{1.155}{A_5} \ge 0 \]

\[ \frac{\pi E A_1^2}{18.75} - 0.666 \ge 0, \qquad \frac{\pi E A_2^2}{6.25} - 1.1547 \ge 0 \]

\[ \delta_{max} - \left( \frac{2.884}{E A_1} + \frac{2.884}{E A_2} + \frac{2.8868}{E A_3} + \frac{5.776}{E A_4} + \frac{2.8875}{E A_5} \right) \ge 0 \]

\[ 10 \times 10^{-6} \le A_1, A_2, A_3, A_4, A_5 \le 500 \times 10^{-6} \]

This completes the formulation of the truss structure problem. The above NLP is solved
using the Solver tool in an MS Excel sheet.
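For readers without access to Excel Solver, a sketch of the same NLP in SciPy is shown below. It follows the formulation above with consistent units (forces in kN, E and the strengths in kN/m², areas in m², deflection in m); it is offered only as one possible way to solve the problem, not as a reproduction of the paper's Solver model or its results.

```python
# Illustrative SciPy solution of the truss-sizing NLP formulated above.
# Units: forces in kN, E and allowable strengths in kN/m^2, areas in m^2.
import numpy as np
from scipy.optimize import minimize

E = 200.0e6              # 200 GPa expressed in kN/m^2
SYT = SYC = 500.0e3      # 500 MPa expressed in kN/m^2
DEFLECTION_MAX = 2.0e-3  # 2 mm expressed in m

lengths = np.array([4.33, 5.0, 2.5, 4.33, 2.5])            # member lengths, m
forces = np.array([0.666, 0.5767, 1.1547, 1.334, 1.155])   # member forces, kN
allowable = np.array([SYC, SYT, SYT, SYC, SYT])            # allowable stresses


def weight(A):
    """Objective: total member volume (proportional to weight)."""
    return float(lengths @ A)


def constraints(A):
    g = list(allowable - forces / A)                        # stress constraints
    g.append(np.pi * E * A[0] ** 2 / 18.75 - 0.666)         # buckling, member 1
    g.append(np.pi * E * A[1] ** 2 / 6.25 - 1.1547)         # buckling, as formulated
    coeff = np.array([2.884, 2.884, 2.8868, 5.776, 2.8875])
    g.append(DEFLECTION_MAX - np.sum(coeff / (E * A)))      # deflection constraint
    return np.array(g)


bounds = [(10e-6, 500e-6)] * 5
A0 = np.full(5, 200e-6)                                     # starting areas
result = minimize(weight, A0, method="SLSQP", bounds=bounds,
                  constraints={"type": "ineq", "fun": constraints})
print("areas (m^2):", result.x)
print("objective (m^3):", weight(result.x))
```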

5. Conclusion:

Mathematical optimization is a powerful and valuable method for supporting some
decisions. However, the temptation to apply optimization methods to unstructured
and only marginally quantified decisions should be resisted. Optimization should be
viewed as one of several means to reach the end of high-quality decision making.
Optimization-based decision support is a key weapon in a design engineer's arsenal.

“ERGONOMIC DESIGN AND DEVELOPMENT OF GRASS CUTTING TOOL IN

THE AGRICULTURE LAND FIELD”

Senthil Kumar .N *

*Lecturer, Dept. of Mechanical Engineering, V.M.K.V Engineering College,

senthilnatarajanpdd@yahoo.co.in, 9994322766

ABSTRACT
The main objective of this work is the design and development of a new ergonomic
grass-cutting tool for agricultural land fields. On agricultural land, grass cutting is
essential because grass affects the growth of the crops. Existing grass-cutting tools
are not very user-friendly for farm workers because they have to bend their backs
to cut the grass, which reduces the workers' efficiency. To increase efficiency and
reduce the time taken to cut the grass, a new concept tool has been taken up in this
work. The new tool reduces the time and increases the efficiency: the workers need
not bend their backs during grass cutting. The new tool has sharp edges at the
bottom and a long handle, so grass cutting is more viable.

The tool has the shape of an 'L', with the edge being curved. A handle is
placed on the top for a complete grip of the tool. The handle is made of
fibre material, so that the stress on the human hand is much reduced. The material
required is low as compared with other standard models, and the fabrication process is
simple compared with other models. The tool is formed by a bending
operation: a steel rod is formed to the required shape using a 'V' bend in a
mechanical or hydraulic press. The tip of the tool is sharpened at the two side edges
and at the tip as well. The cost of the tool is low compared to other models. The
application of the tool for agricultural purposes is easy compared with other
tools because the worker does not bend while using the tool, which increases worker
productivity.

Since all these edges are sharpened and backed up with enough load, the
output (grass cutting) is increased. The tool can be handled easily by all persons
without any difficulty.
KEYWORDS: ANTHROPOMETRY, DESIGN, CONCEPT GENERATION, AND
WORKERS PRODUCTIVITY

Optimal Design Parameters of


Natural Draught Cooling Towers using CFD
P. SELLAMUTHUa, PROF. R. VIJAYANb

a II M.E. Thermal Engineering, Government College of Engineering, Salem
b Professor, Mechanical Engineering Department, Government College of Engineering, Salem

(Corresponding email: selsrikanth29@rediffmail.com, phone: +91 9940982025)

ABSTRACT
The effect of windbreak walls on the thermal performance of
natural draft wet cooling towers (NDWCT) under crosswind has been investigated
numerically. The three-dimensional CFD model has utilized the standard k-ε
turbulence model as the turbulence closure to quantify the effects of the locations
and porosities of the wall on the NDWCT thermal performance. Moreover, the
improvement in the NDWCT thermal performance due to windbreak walls has
been examined at different crosswind directions. Results from the current
investigation have demonstrated that installing solid impermeable walls in the rain
zone results in degrading the performance of the NDWCT. However, installing
solid walls at the inlet of the NDWCT has optimized the natural draught cooling
tower performance at all of the investigated crosswind velocities. Similarly,
installing walls with low porosity has shown improvement in the performance of the
NDWCT. A reduction of 0.5–1 K in the temperature of the cooling water coming
from the tower to the condenser has been achieved at all of the investigated
crosswind velocities by installing porous walls both inside and outside the rain
zone.

1. INTRODUCTION
A natural draft wet cooling tower (NDWCT) is the
cornerstone of the cooling system in use in large modern thermal power plants. In
an NDWCT, a combination of heat and mass transfer effects is used to cool the water
coming from the turbine’s condenser. The hot water, coming from the condenser,
is sprayed on top of splash bars or film fills in order to expose a very large portion
of water surface to the cooling ambient air. The moisture content of the cooling air
is less than the moisture content of saturated air at the hot water temperature,
which results in evaporating an amount of water. The energy required for
evaporation is extracted from the remaining water, hence reducing its
temperature. The cooled water is then collected at the basin of the NDWCT and
pumped back into the condenser, completing its circuit.

As the heat of the water is transferred to the air passing through


the tower, the warmed air tends to rise and draw in fresh air at the base of the tower,
which makes the cooling process dependent on crosswind conditions. Inefficiency in
the cooling process of these towers results in a continuous loss of power generation.
Even the loss of a few megawatts, representing a fraction of a percent of the total
plant generation, may amount to millions of dollars per year. This continuous power
loss, however, may be insignificant in comparison to the load reductions that may be
required to achieve an internal temperature limit during extremely hot meteorological
conditions. The degradation in thermal performance of cooling towers after
installation has highlighted the importance of crosswinds. Although crosswind
effects on the performance of cooling towers are well known, the corresponding
amount of research is still very small.

2. EXPERIMENTAL APPROACH

Experimental approaches conducted for a full scale cooling tower


would be costly and time consuming. It would be difficult to obtain an accurate
measurement of the air distribution and flow resistance within the tower’s harsh
environment. Scale modeling of these transport processes within an entire tower, on
the other hand, would be virtually impossible. This is because not all the necessary
conditions of similarity, including two phase flow, could be fulfilled adequately.
Analytical solutions of these processes in a cooling tower would also be difficult to
obtain but could be achieved using numerical modeling. Recent advances in
computer technology and computational fluid dynamics (CFD) have led to the
development of fast and reliable numerical codes, which allow optimum design of
cooling towers to be obtained. The effects of windbreak walls on the thermal
performance of natural draft dry cooling towers (NDDCTs) have been investigated
by researchers who utilized CFD techniques.
The results from these investigations have highlighted the
improvement in the thermal performance of NDDCTs due to windbreak walls. One of
the few publications on the effect of windbreaks on the thermal performance of wet
cooling towers belongs to Bender et al. They have investigated the effect of
crosswinds and windbreak walls on a double cell mechanical induced cooling tower.
They reported that the location and porosity of the wall were the dominant
parameters that affect the tower’s intake flow rates, whereas the wall’s height was
less important. The importance of windbreak walls in reducing the negative effect of crosswinds on the performance of cooling towers has been demonstrated in this early research. However, the effect of windbreak walls on the performance of NDWCTs has not yet been reported. The current investigation focuses on conducting numerical experiments using CFD techniques in an effort to understand the effect of crosswinds on the thermal performance of NDWCTs more clearly. Furthermore, it focuses on developing curative devices capable of reducing the negative effect of crosswinds on NDWCTs.

GOVERNING EQUATIONS
In FLUENT, the air flow is solved as a continuous phase using the Eulerian approach, whereas the droplet trajectories are solved as a dispersed phase using the Lagrangian approach. The air flow equations that describe heat, mass and momentum transfer can be written as a general transport equation of the form

∇·(ρ_ma u φ) − ∇·(Γ_φ ∇φ) = S_φ + S_pφ

where ρ_ma is the moist air density, u is the velocity vector, φ is the transported scalar quantity (u, v, w, T, Y_v, k and ε), Γ_φ is the diffusion coefficient, S_φ is the source term for the air phase and S_pφ is the additional source due to the interaction between the air and the water droplets. According to the Lagrangian reference frame, the equation of motion relates the water droplet velocity to its trajectory.

BOUNDARY CONDITIONS

The cylindrical numerical domain has a height and a radius of 500 m.


The NDWCT under investigation is 129.8 m high with a base diameter of 95.2 m and an inlet height of 8.6 m. The numerical domain consists of 600 thousand
structured and unstructured (hybrid) mesh elements. The number of mesh elements
has been kept constant for all cases under investigation. In addition, the mesh
element size has been smoothly stretched to resolve the high gradient regions and
to ensure an accurate resolution of both the temperature and velocity fields.

FILL ZONE:

The main characteristics of any film fill are the heat and mass transfer
in addition to the pressure drop within it. The heat and mass transfer are presented
via heat and mass transfer coefficients. The pressure drop, on the other hand, is
presented via a pressure loss coefficient. Because of limitations in the current CFD code, the water flow at the fill zone has been approximated by droplet flow instead of film flow.

PRESSURE LOSSES
As the air flows through the NDWCT, it suffers pressure losses that can be expressed in terms of a pressure loss coefficient, the air density and the perpendicular velocity component across the surface, as defined in Eq. (10).
The main pressure losses throughout the NDWCT are caused by the shell supports,
fill, water distribution pipes and drift eliminators. Pressure losses due to the drag
force from water droplets at both the rain and spray zones are calculated internally
by FLUENT.
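As a rough illustration of how such a loss coefficient is applied (this sketch is not part of the paper, and the numerical values in it are hypothetical), the pressure drop across a component can be computed as dp = K · ρ · v_n² / 2:

# Illustrative sketch: pressure drop from a loss coefficient K, the moist-air
# density rho and the velocity component v_n perpendicular to the surface.
def pressure_loss(K: float, rho: float, v_n: float) -> float:
    """Pressure drop in Pa for loss coefficient K, density rho (kg/m^3)
    and perpendicular velocity v_n (m/s)."""
    return 0.5 * K * rho * v_n ** 2

if __name__ == "__main__":
    # Hypothetical values: loss coefficient 11.0, air density 1.15 kg/m^3,
    # face velocity 1.8 m/s (illustrative only, not taken from the paper).
    print(f"dp = {pressure_loss(11.0, 1.15, 1.8):.1f} Pa")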

WINDBREAK WALLS

Windbreak walls have been used for centuries to reduce wind speed,
to control heat and moisture transfer and to improve climate and environment.
However, only within the last few decades have systematic studies considered the aerodynamics and shelter mechanisms of shelterbelts and windbreak walls. The primary effect of any windbreak wall is to reduce the wind speed. Throughout the current
paper, different windbreak walls have been examined.

RESULTS AND DISCUSSION

Windbreak walls have been installed both inside and outside the NDWCT. The dimensions and the geometry of both the windbreak walls and the NDWCT are listed in Table 1. In the following sections, the effects of wall location, porosity and wind direction on the thermal performance of the NDWCT, represented by the change in water temperature due to crosswind (ΔTwo), are investigated.

TABLE 1:

SUMMARY OF WINDBREAK WALL CHARACTERISTICS

            Inside wall          Outside wall
Case        α        KL          α        KL
NO CD       1.00     0.00        1.00     0.00
CD_1        1.00     0.00        0.00     ∞
CD_2        0.00     ∞           1.00     0.00
CD_3        0.00     ∞           0.00     ∞
CD_4        0.53     11.0        0.53     11.0
CD_5        0.53     11.0        0.60     5.6
CD_6        0.53     11.0        0.70     2.2

(α = wall porosity, KL = pressure loss coefficient)

CONCLUSION:

It has been found that crosswinds have a significant effect on the thermal performance of the NDWCT. At velocities higher than 7.5 m/s, the crosswind has been found to enhance the thermal performance of the NDWCT; at velocities lower than 7.5 m/s, however, crosswinds degrade it. The highest thermal performance has resulted from walls with a porosity of 53% for the outside wall and a porosity of 70% for the inside wall. Finally, the installation of windbreak walls around the inlet of the natural draught cooling tower is a simple means of optimizing its thermal performance.

REFERENCES

 Busch, D., Harte, R. and Niemann, H.-J., "Study of a proposed 200 m high natural draught cooling tower at power plant Frimmersdorf/Germany".
 "Optimization of cooling tower shells using a simple genetic algorithm", Institute of Structural Mechanics, Department of Environmental Engineering, Cracow Academy of Agriculture, Cracow, Poland.
 Williamson, N., Behnia, M. and Armfield, S. W., "Thermal optimization of a natural draft wet cooling tower".
 Harte, R., Krätzig, W. B. and Montag, U., "Shape Optimization, Design and Construction of the 200 m Niederaussem Cooling Tower Shell", Section 26, Chapter 2 (doi 10.1061/40558(2001)53).

PARAMETER AFFECTING FACTORS MACHINING OF NON CONDUCTIVE MATERIALS IN WEDM

M.NARASIMHARAJAN

*Lecturer, Dept. of Mechanical Engineering, V.M.K.V Engineering College,

ABSTRACT

Micro wire electrical discharge machining (micro-WEDM) has proved to be a versatile micro-machining technology for producing complex parts. This paper deals with a new method of machining an insulating material, epoxy, by WEDM. In this method, a metal plate or metal mesh is arranged on the surface of the insulator as an assisting electrode. The epoxy can then be machined very easily with a wire electrode in WEDM using kerosene as the working fluid. Electrically conductive compounds involving cracked carbon from the working oil are generated on the surface of the epoxy, which maintains electrical conductivity on the surface of the work piece during the machining. Some examples of products machined with this method are presented. The mechanism of machining insulating epoxy is discussed with reference to the principle of the surface modification technique by EDM, which has been developed in recent years.
LINKING FINITE ELEMENT MODELS WITH EXPERIMENTAL MODAL ANALYSIS
USING ORTHOGONAL ARRAY TECHNIQUE

K. Senthilkumar a, B. Raja Mohamed Rabi b

a II M.E., CAD/CAM, Mepco Schlenk Engineering College, Sivakasi
b Senior Lecturer, Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi

(Corresponding email: senrajmalli@yahoo.co.in Phone: +91 9942779769)

Abstract

To validate Finite Element models, test data, e.g. from an experimental modal
analysis, may be utilized. An important requirement in dynamic analysis is to
establish an analytical model capable of reproducing the experimental results. For
this purpose, experimental modal analysis and finite element models that describe
the behaviours of the structure in terms of frequencies and mode shapes were
compared. Many model updating methods [4] have been developed, but model updating by artificial neural networks has emerged only in the last decades. One unique feature of neural networks is that they have to be trained before they can perform their function. In developing an iterative neural network methodology, it has been shown that the number of training samples required increases as the number of parameters to be updated increases, and training the neural network using these samples becomes a time-consuming task. To reduce the number of training samples and to obtain a well trained neural model, the orthogonal array method is adopted [1, 5].

In this paper, we investigate the use of orthogonal arrays for the sample selection.
The results indicate that the orthogonal arrays method can significantly reduce the
number of training samples without affecting too much the accuracy of the neural
network prediction.

Key words: Orthogonal array, Model updating, Neural networks.

1. INTRODUCTION:

1.1 Experimental Modal Analysis


The fidelity of structural mechanical Finite Element analyses (FEA) can be evaluated
by using data from static or dynamic tests. Especially, eigenfrequencies and
eigenvectors are employed, which can be identified from vibration tests by means of
experimental modal analysis (EMA) [6, 7]. Experimental modal analysis is the
process of determining the modal parameters (frequencies, damping factors, modal
vectors and modal scaling) of a linear, time invariant system by way of an
experimental approach. The modal parameters may be determined by analytical
means, such as finite element analysis, and one of the common reasons for
experimental modal analysis is the verification/correction of the results of the
analytical approach (model updating). Often, though, an analytical model does not
exist and the modal parameters determined experimentally serve as the model for
future evaluations such as structural modifications. Predominantly, experimental
modal analysis is used to explain a dynamics problem, vibration or acoustic that is
not obvious from intuition, analytical models, or previous similar experience. It is
important to remember that most vibration and/or acoustic problems are a function of
both the forcing functions (and initial conditions) and the system characteristics
described by the modal parameters. Modal analysis alone is not the answer to the
whole problem but is often an important part of the process. Likewise, many vibration
and/or acoustic problems fall outside of the assumptions associated with modal
analysis (linear superposition, for example). For these situations, modal analysis
may not be the right approach and an analysis that focuses on the specific
characteristics of the problem will be more useful.

1.2 Model Updating


Model updating is the process of correcting the numerical values of individual
parameters in a finite element model using data obtained from an associated
experimental model such that the updated model more correctly describes the
dynamic properties of the subject structure [2].

1.3 Need for Model Updating


The discrepancy between FEA and experimental results may be due to the assumptions made in defining inappropriate boundary conditions or element material and geometrical properties (for example, modelling non-linear behaviour with linear FEM theory).
These ‘errors’ are in practice rather due to lack of information than plain modeling
errors. Their effects on the FEA results should be analyzed and improvements must
usually be made to reduce the errors associated with the FE model. Model updating
has become the popular name for using measured structural data to correct the
errors in FE models.

Model updating is done by modifying the mass, stiffness, and damping parameters of
the FE model until an improved agreement between FEA data and test data is
achieved. Unlike direct methods, producing a mathematical model capable of
reproducing a given state, the goal of FE model updating is to achieve an improved
match between model and test data by making physically meaningful changes to
model parameters which correct inaccurate modeling assumptions. Theoretically, an
updated FE model can be used to model other loadings, boundary conditions, or
configurations without any additional experimental testing. Such models can be used
to predict operational displacements and stresses due to simulated loads.

In choosing updating parameters, the following parameters are widely used for
updating the model based on the sensitivities of the total parameters of the structure
or by pre-known assumed parameter by the analyst:
(a) Material Properties – Young’s modulus (isotropic or orthotropic), Poisson’s
ratio, shear modulus and mass density.
(b) Geometrical Element Properties – Spring stiffness, plate thickness and beam
cross-sectional properties.
(c) Lumped Properties – Lumped stiffness (boundary conditions) and lumped
masses.
(d) Damping Properties – Modal damping, Rayleigh damping coefficients, viscous
and structural damper values.

2 NEURAL NETWORKS
Fortunately, Artificial Neural Networks (ANN) offer solutions to problems that are
very difficult to solve using traditional algorithmic decomposition techniques. The
potential benefits of neural nets are:
 Learning from the interaction with the environment
 Few restrictions on the functional relationships
 An inherent ability to generalize training information to similar situations
 Inherently, they ensure parallel design and load distribution.
Neural networks have the ability to derive relations from complicated or imprecise data and can detect trends that are too complex for humans to recognize by any other computing technique. A neural network uses a training rule in which the weights and biases of the connections are adjusted based on the outcome. A trained NN is an expert on the information it analyses. One of the inherent strengths of a NN is its ability to forecast or predict an outcome.

The MATLAB neural network toolbox [4] was used to perform the network analysis. The toolbox contains the necessary functions for generating the network algorithm; these functions perform the network generation, network training, pre-processing of data into the NN, and post-processing of data coming out of the network.

2.1 Training algorithm


The back propagation algorithm is used to train the network. A multilayer feed-forward network is commonly used with the back propagation algorithm.
Figure A : Multilayer perceptron = Multivariate Multiple Nonlinear Regression
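A minimal sketch of such a multilayer feed-forward network trained by back propagation is given below. It is not the authors' MATLAB implementation; the network size, learning rate and the synthetic frequency/parameter data are assumptions chosen only to keep the example self-contained.

# Sketch: one-hidden-layer perceptron trained by back propagation, mapping
# "modal" inputs X (e.g. frequencies) to structural parameters Y.
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in, n_hidden, n_out):
    """Random initial weights and biases for a 1-hidden-layer perceptron."""
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out)), "b2": np.zeros(n_out)}

def forward(net, X):
    """Forward pass: tanh hidden layer, linear output layer."""
    H = np.tanh(X @ net["W1"] + net["b1"])
    return H, H @ net["W2"] + net["b2"]

def train(net, X, Y, lr=0.05, epochs=2000):
    """Plain gradient-descent back propagation on the mean squared error."""
    for _ in range(epochs):
        H, Y_hat = forward(net, X)
        err = Y_hat - Y                        # output error
        dW2 = H.T @ err / len(X)
        db2 = err.mean(axis=0)
        dH = (err @ net["W2"].T) * (1 - H**2)  # back-propagated through tanh
        dW1 = X.T @ dH / len(X)
        db1 = dH.mean(axis=0)
        net["W1"] -= lr * dW1; net["b1"] -= lr * db1
        net["W2"] -= lr * dW2; net["b2"] -= lr * db2
    return net

if __name__ == "__main__":
    # Synthetic training samples: assumed structural parameters Y and the
    # pseudo "frequencies" X they would produce (purely illustrative).
    Y = rng.uniform(0.8, 1.2, (50, 2))
    X = np.column_stack([np.sqrt(Y[:, 0]), np.sqrt(Y.sum(axis=1))])
    net = train(init_net(2, 8, 2), X, Y)
    _, pred = forward(net, X[:3])
    print(np.round(pred, 3), np.round(Y[:3], 3))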

2.2 Updating Using Neural Networks


The objective of the trained network is to predict the structural parameters by simulating it for the required modal parameters. Mathematically, the NN model represents a nonlinear mapping between the inputs and outputs. Figure (1) shows the pre-training process of the NN model.
2.3 Training of Neural Network

This process, as shown in figure (2), begins by feeding the measured dynamic characteristics Xm into an NN model which is trained beforehand. The outputs of the NN model are the identified structural parameters Yi. These identified structural parameters are then fed into the finite element (FE) model to produce a set of calculated dynamic characteristics Xc. A comparison between the calculated dynamic characteristics Xc and the measured dynamic characteristics Xm is made. If these two sets of parameters differ significantly, the NN model is retrained on-line using adjusted training samples that contain Xc and Yi. The retrained NN model is then used to identify the structural parameters again by feeding in the measured dynamic characteristics Xm. This identification and on-line retraining procedure is repeated until the difference between Xc and Xm becomes insignificantly small or until Yi converges (a sketch of this loop is given after Fig. (2)). At the end of the iteration, the final identified parameters are guaranteed to produce dynamic characteristics that are very close to the measured ones. When compared to the original design, these structural parameters can be used to infer the location and the extent of damage in the structure.

Fig (2) Iterative NN process
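The identification and on-line retraining loop of Fig. (2) can be summarised by the following sketch; nn_predict, nn_retrain and fe_model are hypothetical stand-ins for the trained network, its retraining routine and the finite element solver, not functions from the paper.

# Sketch of the iterative identification / on-line retraining loop.
import numpy as np

def update_parameters(x_measured, nn_predict, nn_retrain, fe_model,
                      tol=1e-3, max_iter=50):
    """Iterate: NN identifies parameters, the FE model checks them, the NN is retrained."""
    for _ in range(max_iter):
        y_identified = nn_predict(x_measured)      # identified structural parameters Yi
        x_calculated = fe_model(y_identified)      # calculated dynamic characteristics Xc
        if np.linalg.norm(x_calculated - x_measured) < tol:
            return y_identified                    # Xc is close enough to Xm
        nn_retrain(x_calculated, y_identified)     # adjust training samples and retrain
    return y_identified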

3. ORTHOGONAL ARRAYS
Orthogonal Arrays (often referred to Taguchi Methods) are often employed in
industrial experiments to study the effect of several control factors.

Popularized by G. Taguchi. Other Taguchi contributions include:


 Model of the Engineering Design Process
 Robust Design Principle
 Efforts to push quality upstream into the engineering design process.
The aim of the OA method is to provide a systematic way of studying the effects of the individual factors on the outcome as well as how these factors interact. These factors come with several levels of parametric variation and may have interaction effects, which means that two or more factors together produce a result different from their separate effects. In the following, the notation OA(N, k, s, t) is used to represent an OA that has N experimental runs, k factors with s levels each and a strength of t. The strength represents the number of columns in which all the level combinations can be seen an equal number of times.

Table 1
Orthogonal array OA (4, 3, 2, 2)

Test   A   B   C   Response (result)
1      0   0   0   R000
2      0   1   1   R011
3      1   0   1   R101
4      1   1   0   R110

As an example, Table 1 shows the orthogonal array OA(4, 3, 2, 2) that outlines four
experiment runs for three 2-level factors (A, B, and C) with strength 2. The response
or the results of the experiments are also attached in the last column of the table.
The levels of factors are indicated by 0 (for low level) and 1 (for high level). This OA
has four rows and three columns (excluding the response column). Each row
represents a test setup with specified factor levels. It can be seen that each column
(factor) contains two level-0 and two level-1 conditions. Note that any two columns in this OA contain each of the level combinations (0,0), (0,1), (1,0) and (1,1) exactly once. Thus, the three columns in this OA are orthogonal to each other. This orthogonality provides a fully balanced experimental arrangement which is comprehensive in terms of test results and efficient in terms of the number of tests required.
For instance, after performing these four experiments, the response for the low level of factor A, RA0, and the response for the high level of factor C, RC1, can be found, respectively, as

RA0 = (R000 + R011)/2, RC1 = (R011 + R101)/2.
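This level averaging can be expressed compactly as in the sketch below; the array follows Table 1, while the response values are hypothetical placeholders for measured results.

# Sketch: level averages from the OA(4, 3, 2, 2) of Table 1.
import numpy as np

oa = np.array([[0, 0, 0],     # factor levels for A, B, C in each run
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
responses = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical results R000..R110

def level_mean(factor_col, level):
    """Average of the responses of all runs where the factor is at the given level."""
    return responses[oa[:, factor_col] == level].mean()

print("RA0 =", level_mean(0, 0))   # (R000 + R011) / 2
print("RC1 =", level_mean(2, 1))   # (R011 + R101) / 2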
3.1 EXAMPLE

A four-step procedure for the determination of an appropriate OA


 Define the number of factors and their levels
 Determine the degrees of freedom
 Select an orthogonal array
 Consider any interactions.
 The degree of freedom determines the minimum number of experimental runs. For a test condition that involves k factors each with s levels, the degree of freedom is k(s – 1) + 1.
Let’s look at an example where orthogonal arrays have been employed

Example 1:
Consider the process of mixing concrete; we have a choice of different mixtures of
sand, cement and water and we do not know which to choose. We decide to try two
different levels of each, as listed below:

C1 = 1Kg of cement
C2 = 1.5Kg of cement
S1 = 500g of sand
S2 = 750g of sand
W1 = 1 litre of water
W2 = 2 litres of water
We can try every combination of sand, cement and water and test each different
combination to see which is the hardest. If we do this, there will be a total of eight
combinations.

Taguchi experiments reduce the number of experiments required to find the best
levels for each factor. The method works by calculating the statistical properties of
orthogonal arrays.

We can draw up a table for the cement mixing example, with 3 factors (cement, sand
and water) and 2 levels (for each) in an orthogonal array.
Table 2:

Trial number   C   S   W
Y1             1   1   1
Y2             1   2   2
Y3             2   1   2
Y4             2   2   1
Where 1 and 2 are the levels of each factor. For example, in trial 1, we make our
mixture with all the ingredients at level 1.

A full set of experiments for this process would require eight different experiments (2^3 = 8), as opposed to the four which are needed for the Taguchi version of the experiment. The saving involved in using the Taguchi method becomes more significant as the number of levels or factors increases [1].

To analyze the results, we must have a way of finding which experiment produced
the best answer. In our example, we would have to measure the hardness of the
cement. Assume that a lower result indicates harder cement. (In Neural Network
terms, we would find the error associated with each experiment. The lower the error
is, the better the result.)

So having undertaken the experiments and obtained the results, we can now
calculate the best levels to use with each factor. Let us assume, for example, that
the results obtained are as shown below:

Table 3:

Experiment number   Result (hardness)
Y1                  11
Y2                  20
Y3                  5
Y4                  7

We can find the effect of each level in each factor by averaging the results which
contain that level and that factor.
C1 = (Y1 + Y2) / 2 = (11+20) / 2 = 15.5
C2 = (Y3 + Y4) / 2 = (5+7) / 2 = 6
S1 = (Y1 + Y3) / 2 = (11+5) / 2 = 8
S2 = (Y2 + Y4) / 2 = (20+7) / 2 = 13.5
W1 = (Y1 + Y4) / 2 = (11+7) / 2 = 9
W2 = (Y2 + Y3) / 2 = (20+5) / 2 = 12.5
Combination of factors: C2, S1, W1. These are the factors which produce the
lowest results and hence the hardest mixture.

Example 2:

Example taken from students of Alice Agogino at UC-Berkeley

Airplane Taguchi Experiment

This experiment has 4 variables, each at 3 different settings. A full factorial experiment would require 3^4 = 81 experiments. We conducted a Taguchi experiment with an L9 (3^4) orthogonal array (9 tests, 4 variables, 3 levels). The experiment design is shown below.

Table 4: The Experimental Design Values

EXPERIMENT   WEIGHT (A)   STABILIZER (B)   NOSE (C)   WING (D)
1            A1           B1               C1         D1
2            A1           B2               C2         D2
3            A1           B3               C3         D3
4            A2           B1               C2         D3
5            A2           B2               C3         D1
6            A2           B3               C1         D2
7            A3           B1               C3         D2
8            A3           B2               C1         D3
9            A3           B3               C2         D1

Table 5: L9 Standard Array

(Refer: Taguchi Techniques for Quality Engineering by Phillip J. Ross, Page No:
279)
TRIAL NO   1   2   3   4
1          1   1   1   1
2          1   2   2   2
3          1   3   3   3
4          2   1   2   3
5          2   2   3   1
6          2   3   1   2
7          3   1   3   2
8          3   2   1   3
9          3   3   2   1

Fig. 3: Mean response at each level of Wt. Distri, Stabilizer, Nose Leng and Wing Angle

Fig. 4: Variance at each level of Wt. Distri, Stabilizer, Nose Leng and Wing Angle

Fig. 5: S/N ratios at each level of Wt. Distri, Stabilizer, Nose Leng and Wing Angle


The students who performed this experiment suggest that the S/N ratio graph should be critically examined to select the desired variable levels.

Variable Levels: A2/B1/C3/D1
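For completeness, the sketch below shows one way the per-level mean, variance and larger-the-better S/N ratio summarised in Figs. 3-5 could be computed from the nine L9 runs; the flight results used are hypothetical, and only the column assignment follows Table 5.

# Sketch: per-level mean, variance and larger-the-better S/N ratio for an L9 design.
import numpy as np

L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])
factors = ["Wt. Distri", "Stabilizer", "Nose Leng", "Wing Angle"]
results = np.array([24., 30., 35., 28., 40., 33., 26., 37., 31.])  # hypothetical

for j, name in enumerate(factors):
    for level in (1, 2, 3):
        y = results[L9[:, j] == level]
        sn = -10 * np.log10(np.mean(1.0 / y**2))   # larger-the-better S/N ratio
        print(f"{name} level {level}: mean={y.mean():.1f}, var={y.var(ddof=1):.1f}, S/N={sn:.1f} dB")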

4.0 CONCLUSION

In developing an iterative neural network technique for model updating of structures, it has been shown that the number of training samples required increases exponentially as the number of parameters to be updated increases. It is noted that the selection of training samples for NN models resembles the design of experiments which involve several factors varying over several levels. Orthogonal arrays have been developed and adopted by experimentalists for laying out a minimal number of tests while retaining all the necessary information. In this study, we investigated the use of orthogonal arrays for the sample selection for training NN models.

It is concluded that the use of the orthogonal arrays method can significantly reduce the number of training samples without affecting too much the accuracy of the neural network prediction.

ACKNOWLEDGMENT

The author is grateful to the management, Principal and HOD, Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi, for their constant encouragement and for offering facilities to carry out this research work.

REFERENCES

Journals/Periodicals:

1. C. C. Chang, T. Y. P. Chang and Y. G. Xu, "Selection of training samples for model updating using neural networks".
2. M. J. Atalla and D. J. Inman (1998), "On model updating using neural networks", Journal of Sound and Vibration, 12(1), pp. 135-61.
3. M. I. Friswell and J. E. Mottershead (1995), "Finite Element Model Updating in Structural Dynamics", Kluwer Academic Publishers, Vol. 62, pp. 81-95.
4. E. Dascotte, "Practical application of finite element model tuning using experimental modal data", Dynamic Engineering, Inc., U.S.A.
5. C. C. Chang, T. Y. P. Chang and Y. G. Xu, "Adaptive neural networks for model updating of structures", Hong Kong University of Science and Technology, Smart Mater. Struct. 9 (2000), pp. 59-68.
6. R. J. Allemang, "Vibrations: Experimental Modal Analysis", Structural Dynamics Research Laboratory, University of Cincinnati.
7. D. J. Ewins, "Modal Testing: Theory and Practice", Research Studies Press Ltd., Taunton, Somerset, England, 1995.

Conference Proceedings:

1. Peter Avitabile, Lecture Notes, Modal Analysis I & II, University of Massachusetts Lowell.
2. Peter Avitabile, Seminar Presentation Notes.

Books:

1. Phillip J. Ross, "Taguchi Techniques for Quality Engineering".
2. V. P. Singh, "Mechanical Vibrations".
3. Laurene Fausett, "Fundamentals of Neural Networks".
4. Howard Demuth and Mark Beale, "Neural Network Toolbox".

ANALYSE A DUST RISK AND DESIGN OF SAFE WORK ENVIRONMENT IN CEMENT INDUSTRY

N. Vasiraja a, K. Alaguraja b

a Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi
b II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi

(Corresponding email: alaguraja.royal@gmail.com phone: +91 9787016961)

ABSTRACT
This paper describes cement dust exposure, its health effects and the control of cement dust in the cement industry (packing plant section). Exposure to cement dust has long been associated with the prevalence of respiratory symptoms and varying degrees of airway obstruction in man. Apart from respiratory diseases, it was also found to cause lung problems, gastrointestinal tumours and dermatitis. An optimum portable fabric bag filter has therefore been designed and fabricated to collect the cement dust. In these filters, the flow of gas and dust passes through the pores of the filter fabric and the dust is captured by being retained on the bag. Afterwards, as dust builds up on the bag, the filter is shaken so that the collected dust falls into the exit hopper. In order to obtain better operation, the operating mechanism has been arranged to allow easy maintenance, and the pressure in various parts of the filter system is controlled. Also, by installing a shaking system, the shaking periods of the bags were increased in order to improve the dust cake layer, the dedusting performance and the life time of the bags. The fabric filter bags are made from cotton cloth. The fabric filter captures fine micron-sized particles with considerable efficiency.

Key words: Bag filter, cement dust, shaking system, dust collection.

1. INTRODUCTION:
Cement is widely used in construction. It is manufactured by the combination of calcium, silicon, iron and aluminium compounds in the form of limestone and clay. In a cement production plant, dust is produced due to corrosion, grinding, discharge, replacement, the baking of materials in the furnace, their movement inside the furnace, and so on. Dust originates in different sections of the cement production process, such as the preparation of raw materials, raw material grinding, the clinker cooler, final milling, and the packing and loading sections. The acute respiratory symptoms caused by cement dust are cough, shortness of breath, wheezing, stuffy nose, runny nose and sneezing. The chronic respiratory symptoms are chronic cough, chronic sputum production, dyspnoea and chronic bronchitis. In the cement packing plant, the cement bags are loaded onto a rotary packing machine manually, so the workers are exposed to the cement dust. This cement dust risk has been controlled by the design of a safe work environment with a fabric bag filter. The bag filter absorbs the cement dust from the packing plant atmosphere; the tubular bags filter the cement dust from the air, and the collected dust is reloaded into the rotary cement packing machine.

2. OCCUPATIONAL DUST EXPOSURE:


Exposure to cement dust is likely to vary in the different stages of the production process. Workers who are in close contact with the production processes have been reported to have high exposure to total dust (11-230 mg/m3) and respirable dust (2-46 mg/m3) (Fairhurst et al. 1997). Cement can cause ill health mainly by (1) inhalation of dust and (2) skin contact.

2.1. RESPIRATORY DISORDER


The cement dust irritates the mucous membranes of the respiratory airway, which might lead to acute post-shift reductions in ventilatory function among exposed workers. None of the studies that assessed acute post-shift impairment of ventilatory function among cement workers have investigated the magnitude of exposure-response relationships. Repeated and prolonged inhalation of cement dust is associated with
chronic respiratory symptoms and impairment of lung function. However, some
studies have not found such relationships. Furthermore, none of the previous studies
have examined the relationship between cumulative cement dust exposure and the
chronic respiratory health effects. Cumulative dust exposure is regarded to be a
better measure of prolonged dust exposure as it takes into consideration the
employment duration in the dusty areas.

2.2. SKIN DERMATITIES


Dermatitis means inflammation of the skin. Symptoms include red,
swollen, tender,
hot, sore or itchy skin. Over time the skin can become cracked and blistered, or a
rash may develop.

Irritant dermatitis is caused by the physical properties of cement that irritate the
skin mechanically. The fine particles of cement, often mixed with sand or other
aggregates to make mortar or concrete, can abrade the skin and cause irritation
resulting in dermatitis. With treatment, irritant dermatitis will usually clear up. But if
exposure continues over a longer period the condition will get worse and the
individual is then more susceptible to allergic dermatitis.

Allergic dermatitis is caused by sensitisation to the hexavalent chromium


(chromate) present in cement.

Fig 1.
3. MATERIALS AND METHODS:
FABRIC BAG FILTER
Bag filters, commonly known as baghouses or fabric collectors, use filtration to separate dust particulates from dusty gases. They are one of the most efficient and cost effective types of dust collectors available and can achieve a collection efficiency of more than 99% for very fine particulates. In the design of a fabric bag filter, one of the important factors is the rate of filtration, i.e. the velocity of the polluted air guided to the filter fabric, which depends on parameters such as the type of dust, its application, temperature, dust size and density. Dust-laden gases enter the baghouse and pass through fabric bags that act as filters. The high efficiency of these collectors is due to the dust cake formed on the surfaces of the bags.
The fabric primarily provides a surface on which dust particulates
collect through the following four mechanisms,
1. Inertial collection - Dust particles strike the fibers placed perpendicular to the gas-
flow direction instead of changing direction with the gas stream.
2. Interception - Particles that do not cross the fluid streamlines come in contact with
fibers because of the fiber size.
3. Brownian movement – Submicrometre particles are diffused, increasing the
probability of contact between the particles and collecting surfaces.
4. Electrostatic forces - The presence of an electrostatic charge on the particles and
the filter can increase dust capture.

Fig 2.

4. RESULT:
The top ends of the fabric filter bags are mounted on springs. The shaker mechanism is arranged to reciprocate vertically, which makes the shaking more effective. The bag filter contains 49 bags of about 5 cm diameter and 100 cm length, and the dusty gas passes through the bottom of the bags.
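A rough sizing sketch based on these bag dimensions is given below; the gas flow rate is an assumed value, not a measurement from the plant, and the calculation only illustrates how the cloth area and the air-to-cloth ratio (filtration velocity) are related.

# Sketch: cloth area of the 49 bags and the resulting air-to-cloth ratio.
import math

n_bags, diameter, length = 49, 0.05, 1.0            # bag count, m, m
cloth_area = n_bags * math.pi * diameter * length   # cylindrical surface area, m^2

gas_flow = 0.1                                       # assumed gas flow, m^3/s (hypothetical)
filtration_velocity = gas_flow / cloth_area          # air-to-cloth ratio, m/s

print(f"cloth area = {cloth_area:.2f} m^2")
print(f"filtration velocity = {filtration_velocity * 60:.2f} m/min")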
For continuous improvement of the fabric filters of the plant, the following recommendations should be followed:
1. Execute a continuous plan of repairs and preventive maintenance.
2. Periodically measure the static pressure at specified points and control pressure faults within the set limits.
3. Periodically measure the output dust from the fabric filters.
4. Periodically review the fan position to provide suitable pressure for drawing the polluted air into the filter.

5. CONCLUSION:
The equipment was found to be effective in reducing the respirable dust in the cement packing plant work environment, thereby reducing the risk of dust inhalation by workers. This equipment is expected to reduce occupational diseases in the future from the present level.

6. Acknowledgement

The author is grateful to the management, Principal and HOD, Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi, for their constant encouragement and for offering facilities to carry out this project work.

References:
1. J. Mwaiselage and B. Moen, "Dust Exposure and Respiratory Health Effects in the Cement Industry".
2. F. Mohsenzadeh, K. Naddafi, J. Nouri and A. A. Babaie, "Optimization of Bag Filter in Cement Factory in Order to Increase Dust Collection Efficiency".
3. A. L. Calistus Jude, K. Sasikala, R. Ashok Kumar, S. Sudha and J. Raichel, "Haematological and Cytogenetic Studies in Workers Occupationally Exposed to Cement Dust".
4. R. Mirzaee, A. Kebriaei, S. R. Hashemi, M. Sadeghi and M. Shahrakipour, "Effects of Exposure to Portland Cement Dust on Lung Function in Portland Cement Factory Workers in Khash, Iran".
K.S.R.COLLEGE OF ENGINEERING

TIRUCHENGODE.
PAPER PRESENTED ON

TECHNOLOGY FOR HILL RIDING

Submitted by

M.P.VENKATESH J.KARTHIKEYAN

Pre-final Mech Pre-final Mech

venki2sky@yahoo.co.in girisalem@gmail.com

9003656082 9894082785

ABSTRACT

Driving in the mountains can be a wonderful, exhilarating experience, but it can also be tiring and cause extra wear and tear on your vehicle. This can lead to many accidents due to loss of control of the vehicle. This paper deals with technology which is helpful for vehicles moving in mountain areas.

This paper consists of two technologies:

1. Reverse lock system

This system is based on a ratchet mechanism. The ratchet is fixed firmly to the rear wheel or rotating axle. A plunger is fixed to a stationary part of the vehicle so that it makes contact with the ratchet. If the vehicle stops on a steep slope and starts moving downwards, the plunger locks the ratchet and stops the vehicle immediately.

2. Compression based speed reducer


Most vehicle accidents occur during downhill riding, due to overspeed or because some people ride with the engine shut off. This system reduces the speed of the vehicle during downhill riding without using the engine's power. The system consists of a piston-cylinder arrangement with an inlet valve and a pressure relief valve at the top of the cylinder; this arrangement is coupled to the crank shaft and, with the help of gears, to the wheel or rotating shaft. During rotation of the crank, the piston compresses the air it has sucked in from the atmosphere, and the pressure relief valve opens once a certain pressure is reached. Because of the compression, most of the power is absorbed and the speed of the wheel comes down. This allows the vehicle to move slowly during downhill riding.

REVERSE LOCK SYSTEM:

This system is based on ratchet mechanism

It consists of following parts:

1. Ratchet

2. Spring actuated plunger

3. Electromagnet or lever arrangement.

4. Speed sensor and circuit.

Construction:

The ratchet wheel is attached to the rear wheel or to the rotating axle of the vehicle. The rear wheel is chosen because, while moving uphill, the weight of the vehicle acts towards the rear wheel, so it has good contact with the road. The lightly tensioned, spring actuated plunger with its electromagnet is fixed to a stationary part of the vehicle near the wheel. The electromagnet is connected to the switch and battery. The speed sensor is placed at the wheel and its circuit is connected to the electromagnet.

Figure: Ratchet mechanism with electromagnet and spring actuated plunger (labels: plunger, electromagnet, spring)

Figure: Spring and lever actuated plunger (labels: lever, plunger, cable)


Working:

When the switch is on, the electromagnet is demagnetized and the plunger is released by the spring force, making contact with the ratchet, which is attached firmly to the rear wheel or rotating axle. The ratchet is designed so that it allows forward motion only.

During forward motion, the plunger moves upwards over the ratchet teeth due to the design of the ratchet. When reverse motion takes place, a tooth of the ratchet hits the plunger, which does not allow the ratchet to move in the reverse direction and in turn arrests the wheel motion.

Once the vehicle attains a certain speed, the speed sensor attached to the wheel signals the circuit, which allows current to flow to the electromagnet; the electromagnet is magnetized and pulls the plunger back, so the noise produced by contact between the ratchet and the plunger at speed is avoided.

When the vehicle needs to move in the reverse direction, the switch is kept in the off position so that the electromagnet is magnetized; it pulls the plunger and allows the wheel to move in the reverse direction.

Wire arrangements can also be used to hold the plunger continuously when it is not in use.

This system is useful for both two wheelers and four wheelers.

In the case of two wheelers, the ratchet is attached firmly to the rear wheel, and the spring actuated plunger operated by the electromagnet is attached to a stationary part near the wheel so that it can make contact with the ratchet.

In four wheelers, the ratchet is attached to the rear wheel's rotating axle. Similarly, the plunger arrangement is attached to a stationary part.

For heavy load vehicles, multiple plungers can be used to withstand the loads.

Advantages:
1. Simple in design.

2. Suitable for both two and four wheelers. Especially in two wheelers people
can balance the vehicles easily in the steep slopes.

3. Wear and tear of the engine is reduced: if the vehicle moves backwards on a slope, the engine has to exert more torque on the wheels to overcome it; with the help of this system the vehicle is held at that position, since the ratchet locks it.

4. There is no need to hold the brake on the slopes; this reduces the wear of the brake shoes and increases their life.

BLOCK DIAGRAM OF REVERSE LOCK SYSTEM

The battery and switch supply the electromagnet (or lever arrangement). The speed sensor feeds the electronic circuit, which also controls the electromagnet. The electromagnet or lever actuates the plunger, and the plunger engages the ratchet mounted on the wheel or rotating axle.

Compression based speed reducer:


This system consist of

1. Piston, connecting rod, crank shaft.

2. Cylinder with inlet and pressure relief valve at outlet.

3. Gear arrangement and lever assembly.

Construction:

First, gear 1 is attached firmly to the front wheel or to the front wheel's rotating axle. Another gear, gear 2, is attached to one end of the crank shaft as shown below. The piston is connected to the crank shaft with the help of a connecting rod.

A lever assembly is provided in order to engage or disengage gear 2 with gear 1. The cylinder is fitted to a stationary part so that the piston can reciprocate inside it. The inlet valve is open to the atmosphere and a pressure relief valve is fixed at the outlet end. This arrangement acts as a reciprocating compressor which is operated by utilizing the energy available at the wheel.

Figure: Assembly in a two wheeler (labels: pressure relief valve, piston and cylinder, connecting rod, crank shaft, gear 1, gear 2)

Figure: Assembly in a four wheeler (labels: front wheel, gear 1, gear 2)

Working:

When the wheel rotates, gear 1 attached to the front wheel or the wheel's rotating axle also rotates. The front wheel is chosen because, during downhill riding, the weight of the vehicle acts towards the front wheel, so it has good contact with the road. With the help of the lever assembly, gear 2 is engaged with gear 1, which in turn rotates the crank shaft and moves the piston up and down. During the downward motion of the piston, the inlet valve opens and atmospheric air is sucked in and occupies the space in the cylinder. When the piston starts moving upwards, the inlet valve closes and the air inside the cylinder is compressed; when the maximum pressure is reached, the pressure relief valve fitted at the outlet opens and allows the air to flow into the atmosphere. Because of the compression, the piston motion is resisted, which in turn resists the motion of gear 1 attached to the wheel or rotating axle, and so the speed of the wheel is reduced. (An illustrative estimate of the power absorbed by the compression is sketched after the figure below.)

During the suction process the piston moves easily, while during compression it moves slowly. This would create an uneven reduction in speed, i.e. the wheel rotates faster during suction and slower during compression. In order to avoid this, two pistons and cylinders are attached to the same crank shaft, so that while one piston performs suction the other performs compression and the speed is reduced uniformly at the wheel.
Lever assembly to engage and disengage gears
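As an illustrative estimate (not a design calculation from this paper), the power absorbed by one piston acting as a reciprocating compressor can be approximated with the standard polytropic compression work per cycle; all the numbers below are assumptions.

# Sketch: polytropic compression work per cycle and the absorbed power,
#   W = n/(n-1) * p1 * V1 * ((p2/p1)^((n-1)/n) - 1)
import math

bore, stroke = 0.05, 0.06            # assumed cylinder bore and stroke, m
p1, p2 = 101_325.0, 3 * 101_325.0    # suction pressure and relief-valve setting, Pa
n = 1.3                              # assumed polytropic index for air
crank_speed = 600 / 60               # assumed crank speed, rev/s

swept_volume = math.pi / 4 * bore**2 * stroke   # V1 per cycle, m^3
work_per_cycle = (n / (n - 1)) * p1 * swept_volume * ((p2 / p1) ** ((n - 1) / n) - 1)
absorbed_power = work_per_cycle * crank_speed    # W, one compression per revolution

print(f"work per cycle = {work_per_cycle:.1f} J, absorbed power = {absorbed_power:.0f} W")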

Advantages:

1. During downhill riding there is no need to keep the engine running, so the wear and tear of the engine is avoided.

2. The vehicle can be ridden in the neutral position and is easy to handle.

3. Fuel is conserved because the engine's power is not needed.

4. Speed is reduced drastically without the help of the gear box and engine.

Of course, the most important technique for mountain driving is to stay relaxed and have fun. At the same time, life is precious and we must save it and enjoy it.

Always remember
IF SAFETY IS NOT PRACTICED,

IT WON’T BE USED

SAFETY DOES NOT COST; IT PAYS!

Reference:

1. www.thomasnet.com

2. www.wikipedia.com

K.S.R COLLEGE OF ENGINEERING

TIRUCHENGODE - 637 209.

ELECTRONICALLY SENSED HYDRAULIC CLUTCH


Presented By,

M.GIRIDHARAN R.VENNANGKODI

Pre-final Mech Pre-final Mech

girisalem@gmail.com vennangkodi@gmail.com

9894082785 9791336410

ABSTRACT:

Nowadays, automobile manufacturers intend to perfect their vehicles with the utmost comfort. Introducing intelligent systems, power assisted mechanisms, on-board diagnostics and smart devices in automobiles has attracted people who look for fatigue-free driving.

This paper, "ELECTRONICALLY SENSED HYDRAULIC CLUTCH", describes such a system, which operates the clutch with ease and comfort. A handy switch provided on the gear shifting lever enables the driver to operate the clutch without using a pedal.

FUTURE DEVELOPMENT:

Everything needs continuous development to stay in competition. Our project can be improved in the following areas, which will make it more effective and reliable and will also help to improve the overall efficiency of the system.

So far we have implemented our project using a relay switch and a D.C. motor. In future, for exact operation, the engagement and disengagement should be controlled using a microcontroller.

Using the 89c51 microcontroller we could make the relay circuit for first and reverse gear operation, and for the other gears we could use a speed control circuit. Here we could use a stepper motor as the actuator, which would get its signal from the microcontroller. The stepper motor would then actuate the master cylinder and slave cylinder.

INTRODUCTION:

Modern cars have all possible features to make driving easy. Power assisted control mechanisms are generally used in cars at the cost of fuel efficiency. Electronically controlled devices are already provided in cars such as the Maruti for steering; this enhances comfort without affecting fuel performance. In this spirit, the system described in this paper has been proposed and devised to be retrofitted to any car.

A handy switch provided on the gear shift lever operates a motor to control the functions of the hydraulic cylinders by means of a relay and two sensor switches. The fluid pressure produced in the hydraulic cylinders forces the piston against the spring force, directing the clutch to engage and disengage, for which the fluid line is short-circuited.

This mechanism is unique in nature: trouble free, cost effective and smart in its operation.
INTRODUCTION OF CLUTCH:

The power developed inside the engine cylinder is ultimately aimed at turning the wheels so that the motor vehicle can move on the road. The reciprocating motion of the piston turns the crankshaft, rotating the flywheel, through the connecting rod. The circular motion of the crankshaft is then to be transmitted to the rear wheels. It is transmitted through the clutch, gearbox, universal joints, propeller shaft or drive shaft, differential and axles extending to the wheels. The application of engine power to the driving wheels through all these parts is called power transmission. The power transmission system is usually the same on all modern passenger cars and trucks, but its arrangement may vary according to the method of drive and type of transmission units.

The figure shows the power transmission system of an automobile. The motion of the crankshaft is transmitted through the clutch to the gear box or transmission, which consists of a set of gears to change the speed. From the gearbox, the motion is transmitted to the propeller shaft through a universal joint and then to the differential through another universal joint. A universal joint is used where two rotating shafts are connected at an angle for power transmission. Finally, the power is transmitted to the rear wheels, even while the vehicle is taking a turn. Thus, the power developed inside the cylinder is transmitted to the rear wheels through the transmission system.

Vehicles which have front wheel drive in addition to rear wheel drive include a second set of propeller shafts, universal joints, final drives and differentials for the front units.

WORKING PRINCIPLE OF THE ELECTRONICALLY SENSED HYDRAULIC CLUTCH:

The electronically sensed hydraulic clutch is operated mainly by means of hydraulic fluid pressure, and the whole unit is controlled by means of a handy switch provided on the gear shift lever.

DISENGAGEMENT OF CLUTCH:

While changing gear, the driver has to switch on the handy switch to energize the relay. The energized relay operates the motor, which in turn pulls the lever attached to the pushrod of the master cylinder.

The fluid coming out from the master cylinder reaches the slave cylinder under a certain pressure. The pressurized fluid pushes the slave cylinder pushrod, which is connected to the clutch release fork lever, and so the disengagement of the clutch takes place.

ENGAGEMENT OF CLUTCH:

After changing the gear, the driver has to switch off the handy switch, so that the relay changes its polarity (i.e. is reversed) and the motor rotates in the opposite direction. Meanwhile, the clutch release fork is released by the spring force, and the fluid from the slave cylinder and master cylinder returns to the reservoir, so the engagement of the clutch takes place. Sensor switches near the master cylinder push rod control the motor ON/OFF condition, and the battery supplies power for the whole unit.
WORKING:

SYSTEM CONFIGURATION:

Our paper consists of mainly

 Master cylinder
 Slave cylinder
 Reservoir tank
 Stepped motor
 Relay unit
 Sensor switches
 Battery

SET-UP:

A handy switch is provided on the gear shift lever and is connected to the relay unit. The relay unit controls the motor. A master cylinder is connected to the motor unit by means of its pushrod lever, and the slave cylinder is connected to the clutch unit by means of its pushrod lever. Both the master and slave cylinders are fitted near the clutch unit and are supplied with fluid from a reservoir tank. Two sensor switches are provided near the master cylinder push rod.

RELAY:

A relay is an electromagnetic device. It acts as a switch and consists of a magnetic coil and contacts. There are two types of contacts: normally open and normally closed. It is operated by means of a 12 V battery. Several terminals are used in the relay: two terminals are connected to the relay coil, and the other terminals are connected to the battery, motor, diode, etc.

RELAY CIRCUIT DIAGRAM:


HYDRAULIC OPERATION:

In heavy-duty mechanically operated clutches with high clutch-spring pressure, the force required by the driver to release the clutch becomes excessive. This can be remedied by the use of hydraulic operation. This type of operation is also suitable for vehicles in which the clutch has to be located far away from the driver. A hydraulically operated clutch may be either of the single plate type or the more modern multi-plate type. Both are described below.
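As a rough illustration of why hydraulic operation eases the release effort, the force delivered at the slave cylinder scales with the piston area ratio (Pascal's law); the bore and force values in the sketch below are assumptions, not dimensions from this paper.

# Sketch: force multiplication in a master/slave cylinder pair,
#   F_slave = F_master * A_slave / A_master (Pascal's law).
import math

def piston_area(bore_m: float) -> float:
    return math.pi / 4 * bore_m ** 2

master_bore, slave_bore = 0.016, 0.022   # assumed bores, m
input_force = 120.0                      # assumed force on the master push rod, N

pressure = input_force / piston_area(master_bore)   # line pressure, Pa
output_force = pressure * piston_area(slave_bore)   # force on the release fork, N

print(f"line pressure = {pressure / 1e5:.1f} bar, slave force = {output_force:.0f} N")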

HYDRAULIC SINGLE PLATE CLUTCH:


When the master cylinder push rod is operated, the fluid under pressure from the master cylinder reaches the slave cylinder, which is mounted on the clutch itself. The fluid under pressure actuates the slave cylinder push rod, which in turn operates the clutch release fork to disengage the clutch.

CLUTCH MASTER CYLINDER:

The detailed construction of the master cylinder is shown in the figure. In the engaged condition, when the clutch fork is in the released position, the push rod rests against its stop due to the pedal return spring. Also, the pressure of the master cylinder spring keeps the plunger in its back position. The flange at the end of the valve shank contacts the spring retainer. As the plunger has moved to its rear position, the valve seal is lifted from its seat and the seal spring is compressed. Hydraulic fluid can then flow past the three distance pieces and the valve seal in either direction. This means that the pressure in the slave cylinder is then atmospheric and the clutch remains in its engaged position.

However, when the release fork is to be operated to disengage the clutch, the initial movement of the pushrod and plunger permits the seal spring to press the valve shank and seal against its seat. This disconnects the cylinder from the reservoir.

CLUTCH SLAVE CYLLINDER:


Further movement of the plunger displaces fluid through the pipeline to the slave cylinder and disengages the clutch. The construction of the slave cylinder is made clear by the figure. The return spring in the slave cylinder maintains some pressure on the release fork so that the thrust bearing is always in contact with the release levers. Moreover, in case of wear of the clutch facing, the return spring and the piston move out automatically to take up the tilt of the release fork lever.

Unlike cables, hydraulic operation does not involve frictional wear, especially
when subjected to large forces. Due to this reason hydraulic operation is particularly
suitable for heavy duty application, i.e., on large vehicles.

SENSOR SWITCH:

The sensor switches are used to control the switching of the motor on or off. The switches are provided in the path of the motor movement.

STEPPER MOTOR:

A stepper motor is a machine which converts electrical energy into mechanical energy. The motor works on a D.C. circuit, with a capacity of 7 A, 12 V and 200 W, and a fibre worm and wheel gear for the power output. The motor and gear box unit is usually located near the master cylinder in the hydraulic circuit.

DIODES:

A diode is formed by combining an N-type and a P-type semiconductor material. The boundary between the two materials is called the junction. If a battery is connected across a diode in such a way that the positive terminal is connected to the P-type material and the negative terminal is connected to the N-type material, the excess electrons leaving the N-type material flow into the P-type material to fill the holes there and are quickly replaced by electrons from the battery. Therefore a current flow through the diode is maintained; such a condition is termed forward bias.

If the connections are reversed, the electrons are attracted towards the positive terminal of the battery, away from the diode junction; obviously there would not be any electron (i.e. current) flow through the diode in this condition, which is called reverse bias.

APPLICATIONS:

Light duty vehicle

 Car
 Jeep
 Van
Heavy duty vehicles

 Bus
 Lorry
 Trucks

ADVANTAGES:

 Simple in design
 More effective
 Smooth in operation
 Required less manpower
 Comfort driving
 Easy maintenance
 Very useful for physically unable person
CONCLUSION:

By implementing our project in vehicles, it becomes very useful for physically challenged persons. It will be very convenient for them to drive the vehicle and to change gears with ease. Taking this as a base and analyzing it, the brake and accelerator can also be controlled by electronic means.

Identification and Compensatory Control Model


of Volumetric Errors for CNC Machine Tool
C.S.Verma , R.Purohit , A.V.Muley

Mechanical Engineering Department I.I.T.Delhi

Manufacturing Process and Automation Engineering Department,

N.S.I.T., Sec -3, Dwarka, Delhi-78

----------------------------------------------------------------------------------------------------------------

ABSTRACT: Accuracy of machine components is one of the most critical considerations for any manufacturer. The general approach towards building accurate machine tools is to apply error avoidance techniques during the design and manufacturing stages so that the sources of inaccuracy are kept to a minimum. However, this approach involves a high degree of investment, as machine costs rise exponentially with the level of accuracy involved. Such machines also tend to be frequently over-designed. The other technique is error compensation, which gives a more accurate machine at lower cost. For the 3-axis milling machine, the development of expressions for volumetric error models accounting for geometric errors, thermal errors, cutting-force-induced errors, fixture-dependent errors, etc. of the machining centre is presented to improve the accuracy of the final product, which is generally required in precision applications such as rockets, missiles, nuclear reactors and research machines. This paper presents a brief review and further research directions on the identification and compensatory control model of volumetric errors for CNC machine tools.
Keywords: volumetric errors, laser interferometer, machine tool, error
compensation.
----------------------------------------------------------------------------------------------------------------
1. INTRODUCTION

Volumetric positional error, the relative error created between the cutting tool and the work piece, constitutes a large portion of the total machine tool error during machining. The extent of error in a machine gives a measure of its accuracy. In principle, there are two strategies to improve the accuracy of a multi-axis machine [6].
1. Error Avoidance
- By increasing the precision in manufacturing, and
- By specific design improvements.
2. Software error compensation
- By using software correction for systematic geometric errors, and
- By on-line computational corrections for changing geometric and
thermally induced errors.
The general approach towards building accurate machine tools is to apply error avoidance techniques during the design and manufacturing stages so that the sources of inaccuracy are kept to a minimum. However, this approach involves a high degree of investment, as machine costs rise exponentially with the level of accuracy involved. Such machines also tend to be frequently over-designed. The other technique is error compensation, which gives a more accurate machine at lower cost [7]. In the master part tracking approach, the machine probe is used to track a master component such as a circular disc or a ball bar, instead of measuring the individual errors and generating the mathematical representation; this is a quick way to assess the machine volumetric error [5]. By using D-H homogeneous transformation matrices, the direct volumetric error can be evaluated for a multi-axis machine [9]. An automatic NC code converting software was developed so that the developed system could be applied to practical CNC machining [1]. An on-line error compensation method using a back-propagation neural network was proposed by Chana et al. [14]. Software developed for error correction has been successfully demonstrated in a machine tool laboratory for 20 years to check its durability, Christopher D. et al. [15]. This paper is organized into five sections. The first section is the introduction, the second section discusses the identification methods of errors, the third section gives an overview of the different errors, the fourth section discusses the model for volumetric error, and finally the volumetric compensation techniques are discussed.

2. ERROR IDENTIFICATION METHODS:


There are five different error identification methods employed
according to the type of error to be monitored. They are as follows:
1) Grid calibration method: - this method calibrates the error at discrete grid points
of the working volume and interpolates the estimated error for the actual tool
position. It is commonly used for geometric error modeling.
2) Error synthesis method :- commonly used for geometric and thermal error
modeling, this method obtains the tool error in terms of individual error
components
3) Designed artifact method: - used to model local geometric and thermal errors,
this method measures dimensions of specially designed objects instead of direct
measurement of the errors.
4) Metrology frame method: - this method is employed to measure partial geometric
and thermal errors using optical systems mounted on the machine so as to
measure errors on-line thereby eliminating some off-line measurements.
5) Finite element method: - this method is used to estimate thermal errors through
thermo elastic deformation and heat transfer analysis of the machine structure.
No experiments are involved in this method of error estimation [8].
3. OVERVIEW OF MACHINE TOOL ERROR

3.1 GEOMETRIC ERROR: Geometric errors are the machine tool errors that exist
under cold-start conditions; they are caused by mechanical-geometric imperfections
and misalignment of the machine tool elements, and they change gradually due to
wear. They manifest themselves as position and orientation errors of the tool with
respect to the workpiece. On the assumption that the CNC milling machine consists
of rigid bodies, six degrees of freedom must be specified for each of the three
carriages (tool post, bed and column movements): three translational and three
rotational errors. These errors depend only on the position of the single carriage
concerned and not on the positions of the other carriages (rigid-body assumption).

For a single carriage the translational errors are the positional error p (ypy)
and the straightness errors t (in the two directions perpendicular to the moving
axis of the carriage, ytx and ytz in Fig. 1); the rotational errors r are the pitch,
yaw and roll motions yrx, yrz and yry respectively, also shown in Fig. 1. They are
therefore measured separately for each carriage. Altogether there are 18
position-dependent errors and, additionally, three squareness errors between the
three moving axes, so that a total of 21 geometric errors can be determined.

3.1.1 GEOMETRIC ERROR MODEL

For reasons of clarity, it is assumed that only one carriage, here the Y-movement,
is affected by errors. Each slide linkage can be considered a rigid body moving on
a designated joint, and each linkage has its own error components. Since the whole
machine system is a chain of moving linkages, the tool position can be obtained by
multiplying the linkage error transformation matrices. There are various approaches
to modeling the geometric errors, such as analytic geometry, vector representation,
error matrices and homogeneous transformation matrices, all based on the
rigid-body-kinematics assumption. Homogeneous transformation matrices (D-H
matrices) have the potential to give a simple error-model formulation for an
arbitrary configuration. The basic D-H matrix relates an arbitrary vector in frame
(i) to a vector in frame (i+1). By successive application of the homogeneous
transformation matrices of neighbouring links in the kinematic chain of a
multi-axis machine, one may express the position of a point in the last (tool)
frame with respect to the first (global) frame. Under the assumption of rigid-body
motion of the elements, link-geometry-related and link-motion-related errors of
first, second and even higher order can be expressed. The three translational
errors (linear error and straightness errors) and the three rotational errors
(roll, pitch and yaw) of a typical carriage are described by 4x4 transformation
matrices Tx, Ty and Tz. Similarly, the squareness errors are represented by the
4x4 transformation matrices Txz and Tyz. The three-dimensional positioning error
due to the xy-table and spindle movement is the sum of the positioning errors due
to the linear and squareness errors. The total position error E due to the
carriage movements can therefore be represented by

E = Tx + Ty + Tz + Txz + Tyz (1)

For the 3-axis milling machine there are 21 error components. The geometric error
model is constructed using a rigid-body model with the small-angle approximation.
The geometric error model of Chana Raksiri et al. [14] is given as:

Px = δxx + δxy + δxz - εzx·y + εyx·z + εyy·z + Sxy·y - Sxz·z - δyy·εzx - δyz·εzx - δyz·εzy + δzy·εyx + δzz·εyx + δzz·εyy + εxy·εzx·z + εzx·Syz·z + εzy·Syz·z (2)

Py = δyx + δyy - δzy·εxx - δzz·εxx - δzz·εxy + δxy·εzx - εzx·Sxy·y + δxz·εzx - εzx·Sxz·z + δxz·εzy - εzy·Sxz·z + δyz - Syz·z - εxx·z - εxy·z + εyy·εzx·z (3)

Pz = δzx + δzy + δzz + εxx·y + δyy·εxx - δxy·εyx - δxz·εyx - δxz·εyy + δyz·εxx + δyz·εxy - εxx·εxy·z - εyx·εyy·z + εyx·Sxy·y - εxx·Sxz·z + εxz·Syz·z - εxy·δyz·z + εyx·Sxz·z (4)

where x, y, z are the nominal positions; δxx, δyy, δzz are the positional errors
along the x, y and z directions respectively; δzx, δzy, δxy, δxz, δyz, δyx are the
straightness errors, where the first subscript refers to the error direction and
the second to the moving direction; εxx, εxy, εxz, εyy, εyz, εyx, εzz, εzx, εzy are
the angular errors, where the first subscript refers to the axis of the rotation
error and the second to the moving direction; and Sxy, Sxz, Syz are the squareness
errors between each pair of axes.
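For illustration only, the following Python sketch shows the general HTM composition technique described above, under the rigid-body and small-angle assumptions. The numerical error components and the chain ordering are assumptions for demonstration; a real model would use the 21 measured error components as functions of axis position, following equations (2)-(4).

# Minimal sketch of the homogeneous-transformation-matrix (HTM) approach.
import numpy as np

def error_htm(dx, dy, dz, ex, ey, ez):
    """4x4 HTM for small translational (d*) and rotational (e*) errors."""
    return np.array([[1.0, -ez,  ey, dx],
                     [ ez, 1.0, -ex, dy],
                     [-ey,  ex, 1.0, dz],
                     [0.0, 0.0, 0.0, 1.0]])

def nominal_htm(x, y, z):
    """4x4 HTM for an ideal translation along the machine axes."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

# Nominal commanded position of the tool tip (mm)
x_cmd, y_cmd, z_cmd = 200.0, 150.0, 50.0

# Placeholder error components for each carriage (mm and rad), assumed values
Tx_err = error_htm(0.004, 0.002, -0.001, 1e-5, 2e-5, -1e-5)
Ty_err = error_htm(-0.001, 0.003, 0.002, -2e-5, 1e-5, 3e-5)
Tz_err = error_htm(0.002, -0.002, 0.005, 1e-5, -1e-5, 2e-5)

# Chain the nominal motions with their error HTMs (one possible carriage ordering)
actual = (nominal_htm(x_cmd, 0, 0) @ Tx_err @
          nominal_htm(0, y_cmd, 0) @ Ty_err @
          nominal_htm(0, 0, z_cmd) @ Tz_err)

tool_nominal = np.array([x_cmd, y_cmd, z_cmd])
tool_actual = actual[:3, 3]
volumetric_error = tool_actual - tool_nominal
print("Ex, Ey, Ez =", volumetric_error)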

3.1.2 GEOMETRIC ERROR MEASUREMENT METHODS:

A laser interferometer is commonly used to measure the various error components,
with the exception of angular roll, which is measured using an electronic level.
Other measuring equipment includes step gauges, gauge blocks, differential
interferometers, precise artifacts, autocollimators and reference flats with
different Abbe offsets. Accurate measurement of the errors is an essential part of
error compensation. Using telescoping ball-bars, two measurement methods
characterized by low cost, simplicity of set-up and quick measurement have been
developed to identify directly the total position errors at the tool tip of a
multi-axis machine tool without the use of an error model: the first uses the
well-known triangulation principle, which requires three reference points, while
the second, referred to as the single-socket method, utilizes a single reference
point [4]. A displacement method is also used to shorten the measurement time and
simplify the measurement; by measuring the positioning errors along 15 lines in
the machine work zone, all 21 geometric error components can be determined [2].
Fan et al. [11] developed a software package for the analysis and calibration of
the positioning accuracy of NC machine tools, using an HP 5526 laser interferometer
as the measuring tool.

3.2 THERMAL ERROR:

Thermal error occurs due to the continuous usage of a machine tool. When errors
due to the rise in temperature of the machine elements are to be appraised, only
those thermal deformations that lead to a relative displacement at the cutting
point, and thus influence the accuracy of the work being produced, are considered.
The effect of temperature on the change in shape of the machine components may be
determined by measuring the geometric/kinematic behaviour, with the temperature
distribution over the whole machine as a parameter. Thermal errors are generated by
environmental temperature changes (the heating and cooling influence of the room,
the effect of people, thermal memory from any previous environment, and heating
and cooling provided by the cooling system), by local sources of heat such as drive
motors, friction in bearings, gear trains and other transmission devices, and by
heat generated by the cutting process. They cause expansion, contraction and
deformation of the machine tool structure and generate positional errors between
the cutting tool and the workpiece. The machine tool elements particularly affected
by self-generated thermal distortion are the spindles and ball screws. As heat
generation at contact points is unavoidable, this source of error is one of the
most difficult to eliminate completely. Of late, new solutions that are gaining
ground are high-speed machining and grinding techniques that divert the heat into
the chip instead of the workpiece. One important suggestion for addressing thermal
error is the use of temperature-controlled enclosures, designed to contain the
machine and provide a controlled atmosphere; they are a better option than
temperature-controlled rooms, which are costlier and more difficult to maintain.
3.2.1 MODELING OF THERMAL ERROR:

In order to achieve the final objective of minimizing thermal error in the machine
tool, the behaviour of the machine structure is first modeled through FEM
(thermo-elastic modeling) techniques or empirical models. Neural networks are a
popular method for developing empirical models between discrete temperature
measurements and thermal errors. The thermal error model includes both a
first-order and a second-order correction model. The first-order correction model
allows for the length variations of the displacement transducers caused by uniform
temperature variations. It uses the well-known relation for the thermally induced
length variation Δl of a body,

Δl = α · L · ΔT (5)

where α is the linear coefficient of thermal expansion, L the length of the body
and ΔT the temperature variation from the reference state. The second-order model
depends on various factors such as the linear expansion coefficient α, the
temperature, the slope and the component of the temperature gradient effective in
the respective projection plane.
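As a small illustrative sketch of the first-order correction of Eq. (5), the Python snippet below evaluates Δl = α·L·ΔT. The expansion coefficient, length and temperature rise are assumed values for a steel ball screw, not measurements from this study.

# Hedged sketch of the first-order thermal correction, Δl = α · L · ΔT.
ALPHA_STEEL = 11.7e-6     # assumed linear expansion coefficient, 1/K (typical steel)

def thermal_expansion(length_mm, delta_t_k, alpha=ALPHA_STEEL):
    """Length change (mm) of a body of given length for a uniform temperature rise ΔT."""
    return alpha * length_mm * delta_t_k

# Example: an assumed 800 mm ball screw warming 6 K above the reference state
delta_l = thermal_expansion(800.0, 6.0)
print(f"First-order thermal error: {delta_l*1000:.1f} micrometres")
# -> roughly 56 micrometres, which the controller could subtract from the
#    commanded position along that axis as a first-order compensation.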

3.2.2 THERMAL ERROR MEASUREMENT:

As far as instrumentation is concerned, researchers have used thermocouples,
platinum resistance thermometers and thermistors to measure the temperature
variation of the different elements of the machine tool. An innovative technique
used for measuring the temperature of a rotating workpiece was to float a
thermocouple bead on the hydrodynamic oil film adhering to the workpiece. The
thermocouples are mostly of foil-type construction, either T-type or J-type. The
sensors are pasted onto the surface of the heat source and the data monitored
periodically. R. Ramesh et al. [12] suggested the need to mount 80 thermocouples
at various locations on turning centres.

3.3 CUTTING FORCE INDUCED ERROR, ITS MODEL AND MEASUREMENT:

The dynamic stiffness of all the components of the machine tool (the bed, column,
etc.) that lie within the force-flux flow of the machine is responsible for the
error caused by the cutting action. As a result of the cutting forces, the position
of the tool tip with respect to the workpiece varies on account of the distortion
of the various elements of the machine. Depending on the stiffness of the structure
under the particular cutting conditions, the accuracy of the machine tool will
vary; for a machine of given stiffness, a heavy cut will generally produce more
inaccurate components than a light cut. Most current error compensation research
has not considered the error generated by cutting forces, the argument being that
in finish machining the cutting force is small and the resultant deflection can be
neglected. Modern machining techniques, however, involve machining hardened steel
directly to its final form without the customary grinding operations; in such cases
the cutting forces can be very large, making it impossible to neglect them. Force
sensors play a major role in the elimination of cutting-force-induced errors;
piezoelectric sensors or strain gauges are used for this purpose. A cutting force
sensor has been developed and applied to measure the cutting forces, and the
resulting machining error is measured by a camera. The cutting force is responsible
for elastic strain in the machine tool structure. In the case of a turning centre,
the sensors are usually mounted in the spindle assembly; once mounted, they need to
be calibrated in order to record the forces properly. HTMs (homogeneous
transformation matrices) are used to combine all the error components and thus
derive the error synthesis model.

3.4 FIXTURE DEPENDENT ERRORS:

Where the workpiece is restrained by only a small area of contact with the fixture,
deformation at the contact region or lift-off/slip of the workpiece can cause
significant errors. Workpiece displacement depends on several factors such as the
position of the fixturing elements, the clamping sequence, the clamping intensity
and the type of contact surface, and can therefore be a significant source of
machining error. If the workpiece is insufficiently restrained, or if the fixture
is weak in comparison with the cutting force, slip or deformation respectively are
bound to occur at the fixture-workpiece interface. Proper fixture design is
therefore required; in the set-up used, the workpiece should be placed in contact
with the locators.

3.5 OTHER DEPENDENT ERRORS:

Other errors include tool wear and load-induced errors. Three types of force are
present during the machining process: (1) the workpiece weight, (2) the forces
resulting from the cutting process and (3) the gravity forces resulting from the
displacement of the masses of the machine components. They all cause elastic strain
in the machine tool structure.

4. COMBINED POSITIONAL ERROR AT THE TIP OF THE MACHINE TOOL:

If EX, EY and EZ are the volumetric error compensation components in the X, Y and
Z directions respectively, the resultant volumetric error can be determined from
the following equation [1]:

ERV = (EX² + EY² + EZ²)½ (6)

5. ERROR COMPENSATION TECHNIQUES

There are both on-line and off-line error compensation techniques. With on-line
compensation, all geometric errors are measured and compensated in real time; this
makes it unnecessary to measure temperature and correct separately for
temperature-induced errors. A multiple-degree-of-freedom laser system (MDFLS)
allows the simultaneous measurement of several machine kinematic errors. Similarly,
recursive software was developed by assuming a non-rigid structure, as shown in
Fig. 2, by Shih-Ming Wang [12]. In addition, automatic NC-code converting software
was developed so that the system could be applied to practical machining on CNC
multi-axis machines [1]. Application of this method showed that the average
machining error improved from -273 to -8 micrometres; thus a significant
improvement in the accuracy of the machine tool can be observed as a result of the
compensation. A PC-based compensation controller (as shown in Fig. 3) was used for
real-time error compensation.

The error correction vector RPCorrection with respect to the reference coordinate
frame can be obtained from the following matrix equation [1]:

RPCorrection = RPtool - RPwork (7)
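As a short illustrative sketch only, the Python snippet below evaluates Eq. (6) and Eq. (7): the resultant volumetric error from its axis components, and the correction vector as the difference between the tool and workpiece positions in the reference frame. The numerical values are placeholders.

# Hedged sketch combining Eq. (6) and Eq. (7).
import math

def resultant_error(ex, ey, ez):
    """E_RV = sqrt(Ex^2 + Ey^2 + Ez^2), Eq. (6)."""
    return math.sqrt(ex**2 + ey**2 + ez**2)

def correction_vector(p_tool, p_work):
    """R_P_correction = R_P_tool - R_P_work, Eq. (7)."""
    return tuple(t - w for t, w in zip(p_tool, p_work))

print(resultant_error(0.008, -0.005, 0.012))            # example: about 0.015 mm
print(correction_vector((200.004, 149.995, 50.012),     # assumed measured tool tip
                        (200.000, 150.000, 50.000)))    # desired point on the work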

CONCLUSION:

This paper reviews how error compensation is a powerful and economical way to
upgrade the accuracy of multi-axis machine tools. Obtaining such improvement
requires a correct geometric model, a correct thermal model, a cutting-force-induced
error model, a fixture-dependent error model, a tool-wear-dependent error model and
careful machine calibration. Particular attention is still required for squareness
and angular errors, tool-wear-dependent errors and thermal behaviour. If all errors
can be determined accurately, the CNC G-code commands can be modified to obtain
accurate products.

REFERENCES

1. A.C. Okafor, Derivation of machine tool error models and error compensation
procedure for three axes vertical machining center using rigid body kinematics,
International Journal of Machine Tools and Manufacture 40 (2000) 1199-1213.
2. Chana Raksiri, Manukid Parnichkun, Geometric and force errors compensation in a
3-axis CNC milling machine, International Journal of Machine Tools and Manufacture
44 (2004) 1283-1294.
3. Christopher D. Mize, Durability evaluation of software error correction on a
machining center, International Journal of Machine Tools and Manufacture 40 (2000)
1527-1534.
4. Christopher D. Mize, Durability evaluation of software error correction on a
machining center, International Journal of Machine Tools and Manufacture 40 (2000)
1527-1534.
5. Guiquan Chen, Jingxia Yuan, A displacement measurement approach for machine
geometric error assessment, International Journal of Machine Tools and Manufacture
41 (2001) 149-161.
6. John A. Bosch, Coordinate Measuring Machines and Systems, Giddings & Lewis
Dayton, Ohio, Marcel Dekker, Inc., New York, pp. 279-299, 1991.
7. K.F. Eman, B.T. Wu, A generalized geometric error model for multi-axis machines,
Annals of the CIRP, Vol. 36/1/1987.
8. Mahbubur Rahman, Jouko Heikkala, Modeling, measurement and error compensation of
multi-axis machine tools, Part 1: Theory, International Journal of Machine Tools
and Manufacture 40 (2000) 1535-1546.
9. P.D. Lin, Kornel F. Ehmann, Direct volumetric error evaluation for multi-axis
machines, International Journal of Machine Tools and Manufacture 33 (1993) 675-693.
10. R. Ramesh, M.A. Mannan, A.N. Poo, Error compensation in machine tools - a
review, Part 1: geometric, cutting-force induced and fixture-dependent errors,
International Journal of Machine Tools and Manufacture 40 (2000) 1210-1256.
11. R. Ramesh, M.A. Mannan, A.N. Poo, Error compensation in machine tools - a
review, Part 2: thermal errors, International Journal of Machine Tools and
Manufacture 40 (2000) 1257-1284.
12. Shih-Ming Wang, Kornel F. Ehmann, Measurement methods for the position errors
of a multi-axis machine, Part 1: principles and sensitivity analysis, International
Journal of Machine Tools and Manufacture 39 (1999) 951-964.
13. Shih-Ming Wang, Kornel F. Ehmann, Measurement methods for the position errors
of a multi-axis machine, Part 2: applications and experimental results,
International Journal of Machine Tools and Manufacture 39 (1999) 1485-1505.
14. Shih-Ming Wang, Yuan-Liang Liu, An efficient error compensation system for CNC
multi-axis machines, International Journal of Machine Tools and Manufacture 42
(2002) 1235-1245.
15. V.S.B. Kiridena, P.M. Ferreira, Computational approaches to compensating
quasi-static errors of three-axis machining centers, International Journal of
Machine Tools and Manufacture, Vol. 34, No. 1, pp. 127-145, 1994.

Fig. 1. Errors of the Y-carriage and squareness errors: positioning error ypy
(δyy); straightness errors, horizontal ytx (δxy) and vertical ytz (δzy); rotational
errors, about the moving axis yry (εyy), the horizontal axis yrx (εxy) and the
vertical axis yrz (εzy); squareness errors in plane XY xwy (Syx), plane XZ xwz
(Szx) and plane YZ ywz (Szy).

Fig. 2. Concept of the software compensation scheme (NC codes passed through the
CNC controller and recursive software with inverse kinematic and ANN data-bank
models, and the NC codes rewritten with the compensation).

Fig. 3. Block diagram of the error compensation scheme (thermocouple and encoder
feedback acquired through A/D and digital I/O boards by a PC-based compensation
controller driving the machine servo system).

C.S. Verma, Asst. Professor, N.S.I.T., Delhi 75, csvnsitd@yahoo.co.in




Title : “PRODUCTION FLOW ANALYSIS FOR SAFETY ASPECTS IN A FIRE WORKS
INDUSTRY”

Author: Mr. Ganesan¹, P.G. Research Scholar; Dr. R. Maheswaran², Ph.D., Professor, Mechanical Engineering,

Mepco Schlenk Engineering College, Sivakasi

Abstract

Fireworks have become a symbol of happiness at festivals and joyful occasions.
Fancy fireworks are nowadays quite common at wedding functions, which provides the
fireworks industry with a good global market round the year. Almost 35 percent of
global demand is met by Indian fireworks industries through export.

Manufacturing fireworks requires a highly safe environment, which may affect
productivity and make the operation costly. Firework mixtures are sensitive to
friction, shock, impact, sunshine, moisture and electricity, and workers handle
such explosive chemicals directly while preparing the fireworks. Their safety is
the most important factor to be considered while producing fireworks. A good plant
layout and a safe environment have to be provided to the workers, which may in
turn increase the production cost.

This project aims at manufacturing fireworks in a safe environment at high
productivity and economical cost, without compromising quality. A production flow
analysis is carried out in a fireworks industry and unnecessary interruptions in
the production flow are eliminated.

INTRODUCTION

Any festival or occasion for celebration is incomplete without fireworks. Almost
90% of the country's requirement of fireworks is met by industries in Sivakasi;
there are more than 650 factories producing fireworks and allied materials in and
around Sivakasi. As a safety measure, electricity is not permitted as a source of
energy, so fireworks industries use manual methods for all production activities.
The Department of Explosives has laid down rules and regulations governing the
working of the industry, and there is a growing need for adopting better safety
measures.

The integrated approach to obtaining good productivity also includes Quality and
Safety, i.e. P, Q and S form the three sides of the productivity triangle.

Fig.: The productivity triangle (Productivity, Quality, Safety).
When due consideration is given to safety, it automatically promotes productivity.
Safety refers not only to fire protection, but also involves:

i. good plant layout,
ii. good material handling systems, and
iii. good ergonomics.

Project Objective:

The goal of this project is to collect information, assess the overall material
flow, noting the constraints and incorporating the safety measures to obtain the
following:

1. Minimize the Material Flow for both Raw Material and Work-in-Progress.
2. Eliminate the Constraints in material flow.
3. Improve Material Handling Systems
4. And Ensure high level of safety in all aspects economically.

Constraints:

1. Mixing of chemicals to be done outdoors only in humid weather (Relative


humidity above 50%); isolated, away from people & building.
2. Only cotton clothing to be worn when mixing.
3. Jewels / metal (like belt buckles) not to be worn.
4. Anti – Static Spray (aerosol cans) to be sprayed. This eliminates the chances
of static electricity.
5. The chemicals should be screened separately to avoid the risk of friction
ignition.
6. In chemical mixtures involving titanium, it is to be added last (because chance
for static charge is high).
7. Wear a dust respirator when mixing or charging.
8. Clean up any chemical spills immediately.
9. Don’t mix in plastic bags & don’t store in plastic containers (static charge).
10. Use only wooden or aluminium tools.
11. Chemicals with potassium chlorate should not be mixed with sulphur,
antimony sulphide or titanium.
12. Mixing should not be done indoors where aluminium dust is suspended in the air,
as it can be ignited by the electric spark of appliances or light switches.
13. Too many workers should not be exposed to mixing operation. Number of
workers to be limited to only those necessary (usually 1 or 2) to do the job.
The operations should be carried in separate sheds or, one work room for one
operation at a time.
Proposed Methodology:

The following methodology is applied during the study and analysis:

Survey of existing operations:

A survey of the existing operations is done to assess the current process flow for
work-in-process and raw materials. Future needs are identified to assess the space
and equipment requirements. This information is gathered through observation and
direct interviews with the employees.

Analysis of overall layout:

• The information gathered and the physical building constraints are combined
with safety and legal issues.
• The one-way distance trips for work-in-process and raw material flow among all
departments are analysed (a small sketch of such an analysis follows this list).
• This analysis improves the part flow with the largest one-way distance trips
through the departments, and identifies potential areas of improvement using
optional handling equipment.
• Based on this analysis, the compounding area is suggested to be reallocated,
reducing the distance moved by 30% for all production lines.
• Improving the efficiency of access to the packaging area reduces congestion and
crossings of work-in-process.
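For illustration only, the Python sketch below shows the kind of one-way distance-trip analysis referred to above: total daily travel is the sum of trips per day multiplied by the one-way distance over all department pairs. The department names, distances and trip counts are assumed values, not data from the plant studied.

# Hedged sketch of a from-to distance analysis for layout improvement.
trips_per_day = {
    ("stores", "compounding"): 12,
    ("compounding", "filling"): 20,
    ("filling", "drying"): 20,
    ("drying", "packaging"): 16,
}
distance_m = {
    ("stores", "compounding"): 60,
    ("compounding", "filling"): 45,
    ("filling", "drying"): 30,
    ("drying", "packaging"): 55,
}

def total_travel(trips, dist):
    """Total one-way travel distance per day (metres)."""
    return sum(n * dist[route] for route, n in trips.items())

before = total_travel(trips_per_day, distance_m)

# Re-allocating the compounding area would be modelled by editing its distances
# and re-running the total (the new distances below are assumptions).
distance_m[("stores", "compounding")] = 35
distance_m[("compounding", "filling")] = 30
after = total_travel(trips_per_day, distance_m)

print(f"Travel before: {before} m/day, after: {after} m/day, "
      f"saving {(before - after) / before:.0%}")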
Facilities layout suggestions:

After the completion of this initial effort, the company's employees are involved
in a review to analyse and identify the impact of these changes on the current
methods for all the operating procedures of the manufacturing departments. Flow
diagrams are used as a basis for understanding these operating procedures, and
material handling equipment is recommended to improve the efficiency of material
flow from the production areas.

Point – of - Use:

Based on the physical constraints of space and the raw material and work-in-process
requirements, a point-of-use (POU) system can be introduced, with demand-pull
delivery directly to the point of use of the work-in-process and materials. This
is complicated, however, because there are multiple work centres.

Potential common storage locations are identified that allow a reduction in
carrying costs through the pooling of safety stocks, provided this does not involve
any additional material handling. Optimum packaging shape, size and quantity are
also to be considered.

Major improvements:

1. The material handling was restricted to one movement from receiving dock to
storage; and from finished goods to stocking place.
2. The material handling for receiving and shipping is decentralized to avoid
congestion and unnecessary moves.
3. Floor locations are identified and painted on the shop floor to avoid
unnecessary moves and improve visibility.

CONCLUSION:
• A safety-oriented production flow has many benefits and represents an important
opportunity for improvement in the organization.
• Simulation tools can be adopted to assess changes in the production flow, giving
a substantial reduction in cycle time. This also allows greater inventory control
with less investment, and cost reductions in material handling with a minimal need
for quality control.

References:

1. Fireworks Safety: Proceedings of the National Seminar held on July 18th & 19th,
1999.
2. Fireworks Safety Manual, B&C Associates, 1991, U.S.A.
************

DEVELOPMENT OF DISPERSION MODELLING AND EMERGENCY


PREPAREDNESS PLAN FOR CHLORINE GAS
I.A.SATHISH a S.SAMBATH b

a- Mepco Schlenk Engineering College, Mechanical Dept, ME-ISE, Student.


iasathishme@gmail.com

b- Mepco schlenk engineering college, Mechanical Dept, Asst.Professor.

Abstract

Atmospheric dispersion modelling of dense gases and estimation of the downwind
extension of vulnerable zones due to accidental releases of toxic chemicals in an
industry form an integral part of framing an emergency management plan for an
industrial estate. In an industrial estate, various types of toxic, flammable and
explosive chemicals are stored and used in the process. An accidental release of
any such chemical would have impact zones of various extents, depending not only
on the pre-release process conditions of the chemical, but also on the
meteorological and topographical features of the downwind area. The present work
undertakes the development and validation of a conceptually simple and
computationally efficient dense gas dispersion model for a chlorine tonner which
could be used for emergency response.

Key words: Dense gas dispersion, Wind speed profile, downwind impact distance.

1. INTRODUCTION
Dangerous materials, and in particular toxic gases such as ammonia and chlorine,
are often used in industry. It is therefore necessary to pay particular attention
to these compounds in order to improve the safety of plants and of the storage and
transportation of such products. This paper deals with a possible leakage from a
container and the subsequent dispersion of chlorine. Chlorine is frequently used as
a basic raw material, especially to avoid algal formation in the ICW sump.

Its frequent use increases the probability of a potential incident. Accidents in
storage vessels can occur for various reasons, including transport accidents or
human failure. An accident involving chlorine can cause injuries, either severe or
fatal; even relatively small releases of chlorine can lead to fatalities.

Accident analysis reports give limited information on the dispersion of the gas;
the absence of real-time concentration measurements makes expertise and experience
feedback difficult. Toxic effects on the people injured can be an indicator of the
likely concentrations released, but the relationships between toxicological effects
and concentrations are too imprecise to evaluate the concentrations with sufficient
accuracy.

As a heavy gas (vapour density 2.48), chlorine remains close to the ground and is
therefore more likely to be dangerous to people, so it is of great importance to
study its dispersion. This paper deals with a release of about 950 kg of chlorine.

2. MODEL DESCRIPTION

Dispersion is a term that includes both moving and spreading. A dispersing vapour
cloud will generally move in the downwind direction and spread in the crosswind and
vertical directions. A cloud of gas that is denser or heavier than air (called a
heavy gas) can also spread upwind to a small extent.

The software ALOHA (Areal Locations of Hazardous Atmospheres) was used to evaluate
concentrations. It contains an integral model suited for dense gas concentration
predictions, based on the DEGADIS (DEnse GAs DISpersion) model. ALOHA models the
dispersion of a cloud of pollutant gas in the atmosphere and displays a diagram
showing an overhead view of the regions, or threat zones, in which it predicts that
key hazard levels (LOCs) will be exceeded; this diagram is called a threat zone
plot. There are two separate dispersion models in ALOHA: Gaussian and heavy gas.
2.1 Heavy gas

When a gas that is heavier than air is released, it initially behaves very
differently from a neutrally buoyant gas. The heavy gas will first "slump", or
sink, because it is heavier than the surrounding air. As the gas cloud moves
downwind, gravity makes it spread; this can cause some of the vapour to travel
upwind of its release point (Fig. 2.1). Farther downwind, as the cloud becomes more
diluted and its density approaches that of air, it begins behaving like a neutrally
buoyant gas. This takes place when the concentration of the heavy gas in the
surrounding air drops below about 1 percent (10,000 parts per million). For many
small releases this occurs within the first few metres; for large releases it may
happen much further downwind.

Fig. 2.1. Cloud spread as a result of gravity.

The heavy gas dispersion calculations used in ALOHA are based on those of the
DEGADIS model (Spicer and Havens, 1989), one of several well-known heavy gas
models. This model was selected because of its general acceptance and the extensive
testing carried out by its authors.

2.2 Classification of heavy gases

A gas that has a molecular weight greater than that of air (the average molecular
weight of air is about 29 kilograms per kilomole) will form a heavy gas cloud if
enough gas is released. Gases that are lighter than air at room temperature, but
are stored in a cryogenic (low-temperature) state, can also form heavy gas clouds.
If the density of a gas cloud is substantially greater than the density of air
(about 1.1 kilograms per cubic metre), ALOHA considers the gas to be heavy.
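For illustration only, the Python sketch below combines two ideas from this section: a simple heavy-gas screening check based on the gas density relative to air, and a neutrally buoyant Gaussian-plume estimate of downwind concentration. ALOHA's heavy-gas calculations (DEGADIS) are far more involved; the release rate, the stability-class dispersion coefficients (Briggs rural, class D) and the screening factor are assumptions, not this paper's scenario data.

# Hedged sketch: heavy-gas density check and a simple Gaussian plume estimate.
import math

R = 8.314          # J/(mol K)
M_AIR = 0.029      # kg/mol (about 29 kg/kmol, as noted in the text)
M_CL2 = 0.0709     # kg/mol, chlorine

def gas_density(molar_mass, temp_k=298.0, pressure_pa=101325.0):
    """Ideal-gas density in kg/m3."""
    return pressure_pa * molar_mass / (R * temp_k)

rho_air = gas_density(M_AIR)
rho_cl2 = gas_density(M_CL2)
# The 1.5 factor below is an assumed screening threshold for "substantially denser".
print(f"Chlorine density {rho_cl2:.2f} vs air {rho_air:.2f} kg/m3 "
      f"-> treated as a heavy gas: {rho_cl2 > 1.5 * rho_air}")

def gaussian_centreline(q_kg_s, u_m_s, x_m):
    """Ground-level centreline concentration (kg/m3) for a continuous ground
    release, using Briggs rural class-D sigmas as an assumed example."""
    sigma_y = 0.08 * x_m / math.sqrt(1 + 0.0001 * x_m)
    sigma_z = 0.06 * x_m / math.sqrt(1 + 0.0015 * x_m)
    return q_kg_s / (math.pi * u_m_s * sigma_y * sigma_z)

# Assumed continuous release of 0.26 kg/s in a 2.4 m/s wind (~8.5 km/h mean speed)
for x in (100, 300, 1000):
    c = gaussian_centreline(0.26, 2.4, x)
    print(f"x = {x:4d} m : about {c*1e6:.0f} mg/m3")

A dense, ground-hugging chlorine cloud would in practice be handled by the heavy-gas model; the Gaussian estimate above only illustrates the neutrally buoyant limit mentioned in the text.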
3. METEOROLOGICAL AND TOPOGRAPHICAL MEASUREMENTS

3.1 Meteorological data

Temperature                              Max         Min
Mean monthly during summer               37.5 ºC     23.7 ºC
Mean monthly during winter               31.3 ºC     19.8 ºC
Absolute temperature                     42.8 ºC     11.1 ºC

Relative humidity                        76.2%       30.0%

Wind velocity: max 12.1 km/hr, min 4.3 km/hr, mean 8.5 km/hr

Wind direction: October-March, North-East; April-September, South-West

3.2 Topographical data

Latitude: 11º 35ˈ to 11º 40ˈ N
Longitude: 78º 0ˈ to 78º 5ˈ E

4. CHLORINE TONNER SPECIFICATIONS

Shape - cylinder

Chemical state - liquid

Diameter - 780mm

Length - 2080mm

Pressure - 10bar

Gross Weight - 1567kg

WC - 768kg

TW - 653kg
Copper tube - 10mm

Outflow - 3.5 - 4 kg/hr

4.1 Copper Tube Connection

5. SIMULATION AND ITS RESULTS

5.1 ALOHA Text summary


5.2 Source Strength
5.3 Threat Zone

6. MISSION AND RESPONSE PLAN OF THE EMERGENCY RESPONSE SQUAD

Development of a comprehensive ERP requires a systematic review of the on-site
hazards and the assumption of worst-case scenarios. According to a previous
incident analysis, when a leak occurred, staff in the process plant could not deal
promptly with the upset situation because a suitable plan detailing the responses
had not been established. Therefore, a complete ERP must be effectively developed
and distributed to the appropriate workers to prevent delays in corrective action.

6.1 Structure and responsibilities of emergency response


6.2 Job Duty of Each Related-Staff During An

Incident in a Process Plant

1. Incident commander actions

 Executing and planning the emergency response.


 Realizing the hazard potential of the upset situation and coordinating the
teams.
 Initiating the evacuation order to the staff.
 Assigning manpower resources.
 Providing a budget for the rescue process.

2. Coordinator

 Coordinating the rescue team and offering the response measures.


 Bridging between the incident commander and rescue team for assisting to
dispatch each task.
 Coordinating the task on the scene of the chemical disaster.

3. Government liaisons
 Contacting and reporting information to related governmental agencies
 Contacting the department of toxic response center to request safety and
health equipment for other departments to use to control the upset situation

4. Rescue team

 Protecting the staff, dealing with the toxic materials, stopping the leaks,
repairing damage, and controlling fires.
 Requesting and getting the necessary resources for executing emergency
rescues.

5. Information Team

 Providing and checking out the safety and health equipment.


 Recording rescue information.
 Assisting the incident analysis.
 Reinforcing the role of technical members.
 Environmental monitoring.

6. Safety and security team

 Guiding and evacuating the staff and vehicles.


 Safely guiding the support-personnel into the plant.
 Crowd control and keeping order inside the plant.
 Maintaining security inside the plant.
 Evacuating visitors and onlookers to a safe location.

7. Medical team

 Providing first aid and transporting the injured to a hospital


 Alerting the nearby hospital of potential patients.
8. Spokesperson

 Issuing and explaining the incident information.


 Explaining the status of the emergency response process.
 Setting up and participating in a press conference.

6.3 Emergency Response Steps for a Chlorine Gas Incident Occurring In a


Typical Process Plant.

7. CONCLUSION

The role of the vertical variation of wind speed within the atmospheric boundary
layer on the extension of the vulnerable zone in the downwind direction, for
various surface characteristics, has been studied using the specialized software
ALOHA developed by the EPA. A failure scenario of a chlorine tonner holding 950 kg
of liquid chlorine has been considered. The surface characteristics, in terms of
the roughness parameter, and the atmospheric stability conditions with varying
surface wind speeds have been taken into account in finding the impact distances
traversed by the chlorine vapour cloud in the downwind direction. These results
have an important bearing on the extent of the vulnerable zone in the downwind
direction.

While chlorine gas is leaking, the staff must lessen the degree of hazard
(classified as AL) within the limited time specified by this ERP. All of the
emergency response teams also need to comply with their designated
responsibilities once the ERP has been initiated by the incident commander.

References

1. Small scale field experiments of chlorine dispersion, Aurélia Dandrieux, Gilles
Dusserre, James.

2. Modelling and simulation of heavy gas dispersion on the basis of modifications
in plume path theory, Faisal I. Khan, S.A. Abbasi.

3. Modelling and control of the dispersion of hazardous heavy gases, Faisal I.
Khan, S.A. Abbasi.

4. Influence of wind speed profile and roughness parameters on the downwind
extension of vulnerable zones during dispersion of toxic dense gases, Asit Kumar
Patra.


DEVELOPING SOFTWARE TOOL FOR SAFETY MANAGEMENT SYSTEM
ELEMENT

P.Subashª., R.Malkiya Rasalin Princeª¹.

ª2nd Year M.E., Industrial safety engineering, Mepco Schlenk engineering


College, Sivakasi.

ª¹Lecturer, Mechanical Department, Mepco Schlenk engineering College,


Sivakasi.

(Corresponding Email id: safetysubash@yahoo.com Phone no: 9944906003)

ABSTRACT

The present safety management system is manual, which makes safety hard to manage
and prone to lapses. Hence the system has been developed as software, which can
function more accurately and be managed easily. The system can retrieve data
whenever needed, time consumption is very low compared to the manual system, data
can be updated at any instant, malpractice by employees can be reduced, and large
quantities of data can be stored.

Key Words: safety management software, work permit, accident analysis, IOW,
contractor clearance certificate, cost of accident.

1. Introduction:

In recent years a great deal of effort has been devoted to understanding how
accidents happen in industries. It is now generally accepted that most accidents
result from human error. It would be easy to conclude that these human errors
indicate carelessness or incompetence on the job, but that would not be accurate.
Investigators are finding that the human is only the last link in a chain that
leads to an accident. We will not prevent accidents by changing people; we will
only prevent accidents when we address the underlying causal factors.

A safety management system is a businesslike approach to safety. It is a
systematic, explicit and comprehensive process for managing safety risks. As with
all management systems, a safety management system provides for goal setting,
planning and measuring performance. A safety management system is woven into the
fabric of an organization; it becomes part of the culture, the way people do their
jobs.

The safety management system includes the work permit system, IOW, accident
investigation, accident analysis, contractor clearance certificate, cost of
accident, incident analysis, safety induction forms and height pass.

1.1 WORK PERMIT SYSTEM

A safe work permit system enables every operation to enhance its safety
procedures. This section provides information on the requirements associated with
safe work permit systems, including:
1. the authority to issue safe work permits,
2. the situations where a permit is required,
3. things to be considered prior to the issue of a permit,
4. the conduct of the work in accordance with the permit, and
5. the closure of the permit.

Work permits provide a system of identification, control and review of hazards
within any work environment and can be a valuable tool. Examples where a safe work
permit is required include:

1. entry into confined spaces,
2. work in or around confined spaces,
3. working at height,
4. hot work, and
5. cold work.

1.2 GENERAL PERMIT

 Allow the maintenance work to be carried out only after verifying the work
permit (operation).
 Allow the maintenance work to be carried out only after verifying the work
permit (maintenance).
 In case the validity of a work permit is to be extended, get approval from the
safety department (maintenance).
 In case of emergency, inform the respective department.
 Inform the fire brigade station, safety department, plant control and HOD about
the emergency condition.

1.3 PLANT SAFETY INSPECTION

 Many inspections are to be carried out in the plant.
 The safety officer will raise an NCR during an inspection.
 The plant in-charge has to clear that NCR within the allocated time period.
1.4 ACCIDENT / INCIDENT REPORTING

Accident reporting and investigation is an important part of occupational health
and safety management. Investigations allow the identification of unsafe practices
and unsafe acts, which may need to be controlled in order to prevent a recurrence.
In addition, accident information is required from a litigation point of view: it
may be several years before a civil or criminal action is taken against an employee
or the employer, and therefore an accurate record of the accident, and of any
action taken as a result of it, needs to be documented and kept on file. Although
the investigation of accidents is important, equally important are the
notification, recording and investigation of near-miss incidents. Statistically,
for every serious accident that occurs in the workplace, approximately 300
near-misses occur.

Gathering information about these is therefore extremely valuable in terms of
controlling risk in the workplace.

There are two main items of legislation relating to accident reporting and
investigation: the Reporting of Injuries, Diseases and Dangerous Occurrences
Regulations 1995 (RIDDOR), and the Management of Health and Safety at Work
Regulations 1999.

1.5 HEIGHT PASS

A height pass is a document issued to a worker for working at height after the
necessary training has been imparted.

1.5.1 Safety requirements/precautions

Before starting work at height, all the safety requirements (such as safety belts,
protective helmets and safety nets) shall be decided as per the needs of the
area/site by the executing agency in collaboration with the safety officer and the
contractor, and these shall be documented.

1.5.2 Requirements of workers for working at height

Every worker to be deployed for working at height shall be examined and certified
by a registered medical practitioner.

1.6 SAFETY IN CONTRACT WORK

Safety is the responsibility of the contractor and of the staff/employees/workmen
engaged or deployed for execution of the work under the contract, individually and
collectively. For this purpose, the contractor's staff means and includes all its
associates, sub-contractors, vendors, sub-vendors and their
staff/employees/workmen deployed for execution of the work covered under the
contract. The contractor shall ensure that his workmen participate in the safety
awareness, healthcare and safety training programmes organized by the employer or
the contractor.

The contractor's scope of work shall include, but not be limited to, execution of
the work/contract and adequate safety arrangements for the men, machines and
materials engaged during the execution of the contract.

While executing the contract, the contractor and his supervisor have to ensure the
safety of the surroundings with regard to the employer's workplace/site and other
contractors' men, machines, materials and systems.

1.6.1 Detailed Procedure:

Before starting work, a safe work procedure/protocol shall be prepared and signed
jointly by the executing department, a representative of the safety department and
the contractor or his representative. This procedure/protocol shall be prepared by
breaking the whole job into small elements and listing them separately in
sequence. Against these elements, the agency responsible for carrying them out is
mentioned, and any other details about these elements may be noted in the remarks
column.

2. SOFTWARE USED

 Technology : J2EE
 Front-End : JSP
 Frame Work : STRUTS
 DBMS : MySql

2.1TESTING METHOD

 BLACK BOX TEST


 WHITE BOX TEST

3. SIMILAR OUTPUT

3.1 General Information:

3.2 Loss Information

3.3 Analysis:
3.4 Recommendations:

4. Result and Discussion:

Input such as employee details, nature of injury, work permit system, accident
data, incident data and contractor data is given to the software, and the output is
produced in a printable format; data can be stored and retrieved at any time.
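Purely as an illustration of the store-and-retrieve idea described above (the tool itself was built with J2EE/JSP/Struts and MySQL), the sketch below uses Python's built-in sqlite3 so that it runs stand-alone. The table and field names are assumptions, not the actual schema of the system.

# Hedged sketch: storing and retrieving a work-permit record.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE work_permit (
                    permit_no   INTEGER PRIMARY KEY,
                    permit_type TEXT,      -- e.g. hot work, confined space
                    issued_to   TEXT,
                    department  TEXT,
                    valid_till  TEXT,
                    status      TEXT)""")

conn.execute("INSERT INTO work_permit VALUES (?,?,?,?,?,?)",
             (1001, "hot work", "A. Kumar", "maintenance",
              "2009-03-19 18:00", "open"))
conn.commit()

# Retrieve all open permits for review, in the spirit of the tool's reports
for row in conn.execute(
        "SELECT permit_no, permit_type, issued_to FROM work_permit "
        "WHERE status = 'open'"):
    print(row)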
6. Conclusion:

Multiple users can work in this software at the same time. The software makes the
safety management system more effective: time consumption is much lower, and
malfunctioning of the safety management system is minimized.

7. Acknowledgement:

Author is grateful to the management, principal and HOD, department


of mechanical engineering, Mepco Schlenk Engineering College, sivakasi, for
their constant encouragement for offering facilities to carry out this research
work

8. References:

1. Plant safety management by “Don Petersen”, Tata McGraw-Hill.

2. Plant safety manual by “Salem steel plant”.

3. J2EE manual by “jack duckless”.

SYSTEMATIC DESIGN OF FIRE SAFETY TRAINING PROGRAM FOR THE


WORKERS OF HPCL AT KAPPALUR, MADURAI.
M.Anandhana, P.S.Balaji b

a
Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering

College Sivakasi

b
II M.E., Industrial safety engineering , Mepco Schlenk Engineering College

Sivakasi.

(Corresponding email: balaji.ps@honeywell.com phone: +91 9620073765)

1. Introduction:

Hindustan Petroleum Corporation Limited runs an LPG loading and unloading unit,
which fills LPG into bottles (domestic cylinders). Loading and unloading of LPG in
domestic cylinders is a hazardous process; therefore, the employees working for
HPCL require good skills, knowledge and attitude with regard to safety measures.
The employees who are lacking in safety measures should be identified and improved
with proper training.

The basic purpose of the project is to study and measure the knowledge level using
parameters like need analysis, entry behaviour analysis and job safety analysis. In
addition, to improve safety, quality and productivity, which lead to a better
working environment, a study on the fire safety training programme gains its
importance.

2. Objectives:
2.1 To identify the target population who are in need of Training
Program on Industrial Safety.

This objective is achieved by dividing the questions into four groups as


Special, High, Medium and Low focus questions.

2.2 To determine the safe job sequences for selected job.

This objective is achieved by mere observation of work


activities, identification of the hazards and giving safe practices.

2.3 To identify the hazards in the jobs and to evolve safe practices for the
selected jobs.

This objective is achieved by instructing the employees on safe


practices while training.

2.4 To find the trainees pre-course expectations and detailed expectation of


the training content.

This objective is achieved by providing a set of questions asked


to the trainees.

2.5 To test the knowledge, skills and attitudes of the workers by way of
pre-assessment and post-assessment questionnaires.
This objective is achieved by giving a set of questions to the
employees based on parameters like Fire protection systems, Gas,
Earthing and Personal protective equipments etc.,
2.6 To identify the topics to be covered in the training courses.

This objective is achieved by taking the replies given by the


employees on pre assessment questionnaire.

2.7 To design training modules for the selected topics.

This objective is achieved by selecting topics on training methods and modules.

2.8 To design methods, strategies and lesson plans for all the topics to be
covered in the training courses.

This objective is achieved by collecting information from


various resources like Fire safety consultants.

2.9 To design criterion tests to check the trainee’s attainment of the


objective for some selected topics.

This objective is achieved by post assessment questionnaire.

2.10 To evaluate the training course from the results obtained.

This objective is achieved by comparing the pre-assessment


and post assessment questionnaire results.
3. Need analysis:

A need is a discrepancy or deficiency between what is and what ought to be. If
there is a need, there must be some sort of discrepancy or deficiency in the job
performance, and need analysis is simply a systematic process of measuring needs
and deciding on priorities for action.

How to measure the needs?

 Perception discrepancy survey.

 Data discrepancy survey.

4. Task analysis:

Task analysis is the process of breaking down or analyzing the task into
smaller and more detailed constituent units and of then sequencing these units of
analysis in an order of priority based on their importance in the learning.

There are three main types of task analysis

 Job analysis

 Topic analysis

 Skill analysis

5. Analysis of entry behavior:


The main aim of doing this is to find what the trainees know, what skills they
have and what their attitudes are when they begin their training.

5.1 Methods:

The different methods adopted for analyzing entry behavior are:

 Questionnaire

 Tests

 Discussions and interviews

6. Analysis of resources and constraints on training:

Analysis of resources gives a good idea of the resources which can be


available for training. In resource analysis certain needs and problems are identified
with some suggestions for suitable remedial or resolving action.

7. Aims and objective analysis:

Aims and objectives analysis is one of the most significant of all the steps
involved in the systematic design of training program.

8. Building up criterion test:

Building up of criterion test is the beginning of system activity of synthesis.


9. Building up the content of the training:

Based on design system sources like need analysis, task analysis, aims and
objectives analysis the content is designed.

It includes,

 Developing a lesson plan

 Types and selection of instructional media

 Media selection

 Selection procedure

10. Building up methods and strategies of training:

There are basically two approaches in training. They are

 Trainer-centered strategy

 Trainee centered strategy

11. Assessment of the trainees and evaluation of the training:

It includes,

 Assessment of the trainees

 Evaluation of the training

 Methods of summative evaluation

 Methods of formative evaluation


12. Reviewing, revising and improving the training:

The system function of improvement involves two basic processes namely


reviewing and revising.

 Where the design system and the training are not working well.

 Where the design system and the training are working adequately.

 Where the design system and the training are working well.

13. Conclusion:

 Employee’s knowledge, skill and attitude can be improved by effective


training.
 Trainee’s demands and expectations can be fulfilled by making the training
program highly personalized one.
 Accidents can be reduced by educating the employees in the lacking area.
 Effective training programs can increase productivity and quality.

14. Suggestions:

 The company may recruit a permanent trainer to help the employees on


safety.
 The company may retain or promote experienced persons to improve the
working condition.
 A systematic study conducted at regular intervals will help the employees to
gain knowledge in various safety areas.
 This study can be conducted by taking all employees with more
parameters for better results.

15. Limitations:

 It was difficult for me to approach the illiterate employees.


 It was difficult for me to meet the employees due to their busy schedule.

16. Acknowledgement:

 Author is grateful to the management, principal and HOD, department of


mechanical engineering, Mepco Schlenk Engineering College, Sivakasi, for
their constant encouragement for offering facilities to carry out this research
work.

References:

 Training requirements in OSHA standards and training guidelines by OSHA


 Introduction to safety training by OSHA
 System approach to training and development by A.K.SAH
 Handbook of training and development by JOHN PRIOR
DESIGN AND IMPLEMENTATION OF QUALITY ILLUMINATION SYSTEM
IN PHARMA INDUSTRY

J.Karthika, R.B.Jeen Robertb


a
II M.E, Industrial safety engineering, Mepco Schlenk Engineering College, Sivakasi.
b
Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College
Sivakasi

(Corresponding email: safetykarthik@gmail.com phone: +91 9886990568)

ABSTRACT

The objective of this paper is to make the visual task easy to see and to create a
good visual environment by careful planning of the brightness and colour pattern
within both the work area and its surroundings. This includes screening for
inadequately lit areas through an illumination survey, with measurements compared
against standards such as IS 3646:1966 and NBC 2005. Direct and reflected glare
from light sources is then controlled to eliminate visual discomfort, by
calculating parameters such as the adaptation luminance, veiling luminance and
solid angle; the obtained value is compared with the glare-index study. Based on
this comparison, the number of light fittings required for a glare-free work
environment is designed.

Key words: Illumination survey, Adaptation luminance, Veiling luminance, Solid
angle

1. Introduction:

Good lighting is one of the important factors in safeguarding and conserving the
health and life of the workers. The effects of good light, both natural and
artificial, and of bright and cheerful interior surroundings include the following.

2. Detailed illumination studies:

First, a detailed illumination survey is conducted throughout the pharma industry.
Based on the survey, the areas with insufficient illumination are identified.

2.1 Measurement of illumination:

The illumination at the various working levels is measured using a lux meter.
After measuring the illumination, the adequacy of illumination is compared with the
Factories Act 1948, IS 3646 and the National Building Code 2005.

2.2 Calculation of Uniform Lighting:

A uniform distribution of lighting is desirable for most workrooms. Distribution of
light requires two problems to be solved: (1) uniformity of illumination and (2)
elimination of shadows.

2.3 Assessing glare - study index:

The glare index for any installation may be derived from the basic formula, but
the procedure is lengthy:

G = E (Bs^1.6 · w^0.8 / Bb) · (1/p^1.6)

Based on this calculation we obtain the value of the glare index (a small
computational sketch follows Table I). The following method is also used to find
the glare index.

Table I. Luminance limits and shielding angles for luminaires

Lamp luminance L (cd/m²)        Shielding angle for glare limit B / D / E    Lamp type
L ≤ 2×10⁴                       10º / 0º / 0º                                Fluorescent lamp
2×10⁴ < L ≤ 50×10⁴              15º / 5º / 0º                                HP discharge lamp, LP sodium lamp
L > 50×10⁴                      30º / 15º / 0º                               HP discharge, clear

Luminance limits for luminaires apply at critical angles 45º < γ < 85º.
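As an illustrative sketch only, the Python snippet below evaluates the glare formula quoted above per luminaire, interpreting the leading factor as a summation over the visible light sources (the usual tabulated glare-index form, here an assumption) with Bs the source luminance, Bb the background (adaptation) luminance, w the solid angle and p the position index. The luminaire data and the conversion to a logarithmic index are assumed example values, not measurements from this study.

# Hedged sketch of a glare-index style calculation from the stated formula.
import math

def glare_constant(bs, omega, bb, p):
    """Glare constant of one luminaire as seen from the working position."""
    return (bs ** 1.6) * (omega ** 0.8) / (bb * p ** 1.6)

# Assumed data: (Bs cd/m2, solid angle sr, position index) for each luminaire
luminaires = [(6000.0, 0.002, 1.4),
              (6000.0, 0.002, 2.1),
              (6000.0, 0.002, 3.0)]
bb = 120.0   # assumed adaptation (background) luminance, cd/m2

total = sum(glare_constant(bs, w, bb, p) for bs, w, p in luminaires)
# Conversion to a logarithmic glare-index scale (assumed form for illustration)
glare_index = 10.0 * math.log10(0.5 * total)
print(f"Summed glare constant = {total:.1f}, glare index approx {glare_index:.1f}")

The computed index would then be compared with the maximum allowable value from the standards, as described in the text, to decide whether the lamps need repositioning or shielding.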
2.4 Calculation of Colour Temperature:

Colour temperature, expressed on the Kelvin scale (K), is the colour


appearance of the lamp itself and the light it produces.
2.5 Calculate the Colour rendering index for specific job:

The ability of a light source to render colours of surfaces accurately can be


conveniently quantified by the colour-rendering index. This index is based on the
accuracy with which a set of test colours is reproduced by the lamp of interest
relative to a test lamp, perfect agreement being given a score of 100.

2.6 Design of lighting based on the absorption coefficient of the room and
equipment, and on the colour criticality of the product

One aspect of good lighting is the prudent use of electrical energy. The lighting
industry has a long record of continuous improvement in the efficiency of lamps,
control gear and luminaires. When lamp types are being selected for a new
installation, the following principal characteristics should be taken into
consideration:

1. colour appearance and colour rendering,
2. efficacy and light output, and
3. service period.
2.7 DESIGN OF LIGHTING PROVISIONS:

Step 1: Decide the required illuminance on the work plane, the type of lamp and the
luminaire.

A preliminary assessment must be made of the type of lighting required, a decision
most often made as a function of both aesthetics and economics; the economic
consideration is the important one here.

Step 4: Calculate the utilization factor.

The utilization factor is the ratio of the luminous flux received on the work plane
to the total luminous flux emitted by the source. It accounts for light received
directly from the luminaires as well as light reflected off the room surfaces,
which in turn depends on the proportion of light reflected collectively by all the
surfaces in a room. It is possible to determine the utilization factor for
different light fittings if the reflectances of the walls and ceiling are known.
For a twin-tube fixture, the utilization factor is 0.66, corresponding to a room
index of 2.5.

Reflectance of surface (%)                          Utilization coefficient (%)
Ceiling 65, Walls 40, Floor 12, Furniture 28        29
Ceiling 85, Walls 72, Floor 85, Furniture 50        57

Step 5: Calculate the number of fittings required using the following light loss
factor (a small worked sketch follows Step 6):

LLF = lamp lumen MF × luminaire MF × room surface MF

(typical LLF values are tabulated). For the room considered, 6 twin-tube fixtures
are required, i.e. a total of 12 lamps of 36 W.

Step 6: Space the luminaires to achieve the desired uniformity.

In modern designs incorporating energy efficiency and task lighting, the emerging
concept is to provide a uniformity of 1/3 to 1/10 depending on the task. The
recommended space-to-height ratio for the above luminaires is 1.5; if the actual
ratio is more than the recommended value, the uniformity of lighting will be lower.

Spacing between luminaires = 10/3 = 3.33 metres

Mounting height = 2.0 m

Space-to-height ratio = 3.33/2.0 = 1.66

This is close to the limit specified and is hence accepted.

3. Conclusion:

Based on the survey results, the areas with insufficient and excessive illumination were identified. Veiling luminance and adaptation luminance were calculated to determine the glare index, which was then compared with the maximum allowable glare index given in the illumination standards. In the glare-affected areas, repositioning of the lamps is designed to give a comfortable visual environment. Further work on the design of the required light fittings is in progress.

4. Acknowledgement:

Author is grateful to the management, principal and HOD, Department of


mechanical engineering, Mepco Schlenk Engineering College, Sivakasi, for their
constant encouragement for offering facilities to carry out this research work.

5. Reference:

1. Miles A. Tinker, Illumination Standards.
2. L. B. Marks, Practical Standards for Factory Illumination.
3. Mike Wilson, Visual Comfort and Energy Saving.
4. BIS, Code of Practice for Interior Illumination.


DESIGN AND IMPLEMENTATION OF NOISE CONTROL SYSTEM IN CEMENT
INDUSTRY
K. R. Srinivasan (a), C. S. Sabareesh (b)
(a) Selection Grade Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi.
(b) II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi.
(Corresponding email: sabi_sabari@yahoo.co.in, phone: +919916314384)

ABSTRACT
This paper deals with the reduction of noise levels in cement industry machines using engineering controls. It is advisable to consider noise control measures at the design stage, but if this is not done, other engineering controls such as barriers and enclosures must be adopted to reduce the noise level. The noise levels at various equipment such as the crusher, cement mill, kiln, coal mill and vertical roller mill, and the absorptive effect of the materials presently around each item of equipment, are measured. The sound absorption of various materials is studied on the basis of their absorption coefficients, and a barrier of a suitable material is designed and implemented to reduce the noise level; the noise reduction ratio is then calculated.

Keywords: Barriers, Enclosure, Absorption coefficient


1. Introduction:
Noise is an unwanted sound, generally of random nature, whose spectrum does not exhibit clearly defined frequency components.
Noise control is the technology of obtaining an acceptable noise environment,
consistent with economic and operational considerations. The acceptable
environment may be required for an individual, a group of people, an entire
community, or a piece of equipment whose operation is affected by noise.
The following steps are taken to determine the amount of noise reduction
required:
a. Evaluate the noise environment, under existing or expected condition.
b. Determine what noise level is accepted.

c. The difference between the levels in above two steps gives the noise level to
be reduced.

This paper describes the design of a noise control system for a cement plant. The work involves several steps, which are presented one by one below.

2. Conducting Noise survey in various equipment and locations:

A noise survey takes noise measurements throughout an entire plant to


identify noisy areas. Noise surveys provide very useful information which enables us
to identify:

• Areas where employees are likely to be exposed to harmful levels of noise and
personal dosimeter may be needed,
• Machines and equipment which generate harmful levels of noise,

• Employees who might be exposed to unacceptable noise levels, and

• Noise control options to reduce noise exposure.

Noise survey is conducted in areas where noise exposure is likely to be


hazardous. This survey is conducted in various places in industry like
a. Crusher

b. Rotary kiln

c. Cement mill

d. Coal mill

e. Fan& compressor area

f. DG set area

Noise level refers to the level of sound; it is usually measured with a sound level meter (SLM).

Sound level meter:

The SLM consists of a microphone, electronic circuits and a readout display.


The microphone detects the small air pressure variations associated with
sound and changes them into electrical signals. These signals are then processed
by the electronic circuitry of the instrument. The readout displays the sound level in
decibels.

The current International standard for sound level meter performance is IEC
61672:2003. Based on the survey, we are preparing a noise survey report. This
report includes:

• Location plan of proposed development

• Methodology used including location of noise monitoring locations, Equipment


used, weather conditions

• Deviations from methodology/standard

• Full table of results

• Assessment of results according to standards used

• Recommendations for noise control measures

• Full calculations of the noise reductions expected to support any Suggested


noise control measures.
3. Computation of employee noise exposure:

Noise dose is computed from the equivalent sound exposure level, given by:

Lex,8 = 10 log10 [ Σ (ti / 8) × 10^(SPLi / 10) ]

Where
Lex,8 is the equivalent sound exposure level in 8 hours,
∑ is the sum of the values in the enclosed expression for all activities from i = 1 to
i = n,
i is a discrete activity of a worker exposed to a sound level,
ti is the duration in hours of i,
SPLi is the sound level of i in dBA,
n is the total number of discrete activities in the worker’s total workday.
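A minimal sketch of this computation follows; the activity durations and sound levels used are illustrative assumptions, not survey readings.

```python
import math

# Sketch: equivalent sound exposure level over an 8-hour working day.
# Lex,8 = 10 * log10( sum_i (t_i / 8) * 10^(SPL_i / 10) )
def lex_8(activities):
    """activities: list of (duration_hours, sound_level_dBA) tuples."""
    total = sum((t / 8.0) * 10 ** (spl / 10.0) for t, spl in activities)
    return 10.0 * math.log10(total)

# Example workday (assumed values): 4 h near the cement mill at 95 dBA,
# 2 h near the crusher at 92 dBA, 2 h in the control room at 75 dBA.
print(round(lex_8([(4, 95), (2, 92), (2, 75)]), 1))
```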
4. The survey results are compared with various standards such as the Factories Act / Noise Regulation Rules 2000 and OSHA:
Based on this comparison, the various noise control zones are finalised and the work then proceeds to the next stage. The permissible exposure durations are:
Duration (hours)    Sound level dB(A)
8                   90
6                   92
4                   95
3                   97
2                   100
1                   102
¼ or less           115

5. Calculation of the noise level to be reduced:

The required reduction is obtained by comparing the measured noise level with the relevant standards; the amount by which the measured level exceeds the standard is the noise level to be reduced.
Calculation of sound absorption:

The absorption coefficient is a basic quantity describing how much of the incident energy a material absorbs, i.e. it is a measure of attenuation. The total sound absorption currently present is calculated using the following formula:

A = S1 α1 + S2 α2 +... + Sn αn = ∑ Si αi

Where

A = the absorption of the room (m2 Sabine)

Sn = area of the actual surface (m2)

αn = absorption coefficient of the actual surface

The mean absorption coefficient for the room can be expressed as:

am = A / S

Where

am = mean absorption coefficient

A = the absorption of the room (m2 Sabine)

S = total surface in the room (m2)
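A small sketch of this calculation is given below; the surface areas and absorption coefficients are illustrative assumptions for a machine room, not measured values.

```python
# Sketch: total room absorption A = sum(S_i * alpha_i) and mean coefficient am = A / S.
def room_absorption(surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_area = sum(s for s, _ in surfaces)
    absorption = sum(s * a for s, a in surfaces)   # m^2 Sabine
    return absorption, absorption / total_area

# Example (assumed values): concrete floor, brick walls, sheet-metal roof.
surfaces = [(200.0, 0.02), (180.0, 0.03), (200.0, 0.10)]
A, am = room_absorption(surfaces)
print(round(A, 1), round(am, 3))
```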

6. Acknowledgement
Author is grateful to the management, principal and HOD, department of
mechanical engineering, Mepco Schlenk Engineering College, Sivakasi, for their
constant encouragement for offering facilities to carry out this research work.

7. Conclusion
The noise levels at the noise-producing areas were measured using a sound level meter and the employee noise exposure was calculated. This was compared with standards such as OSHA and the Factories Act. The absorption coefficient of the rooms where the noise level exceeds the limits was calculated. The project is in progress and the remaining steps are yet to be completed.
8. References.
1. Cyril M. Harris, Handbook of Noise Control, second edition, pp. 5-1, 7-11.
2. John M. Handy, Noise Control for Industry, Industrial Acoustical Company, USA, pp. 2-7.
3. Noise Figure Measurement Accuracy – The Y-Factor Method, Agilent Technologies, pp. 5-8.
4. Hearing Protection Manual, OSHA / www.osha.com.
5. Noise Pollution (Regulation and Control) Rules, 2000.

IMPACT OF EMOTIONAL INTELLIGENCE IN BEHAVIOUR BASED SAFETY ON REDUCTION OF ACCIDENTS

Dr. T. Prabaharan & R. Prabhu

Professor, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi
& II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi.

(Corresponding email: prabhugerald@gmail.com, phone: +91 9443473927)

ABSTRACT

Emotional intelligence supports behaviour-based safety coaching, an applied behaviour analysis technique that involves interpersonal interaction to understand and manage the environmental conditions directing and motivating safety-related behaviour. This paper discusses the evidence-based "ability model" of emotional intelligence and its relevance to the interpersonal aspect of the safety coaching process. Results: emotional intelligence has potential for improving safety-related efforts and other aspects of individuals' work and personal lives. Safety researchers and practitioners are therefore encouraged to gain an understanding of emotional intelligence and to conduct and support research applying this construct to accident and injury prevention.

The ability to recognize emotion is one of the hallmarks of emotional intelligence,


an aspect of human intelligence that has been argued to be even more important
than mathematical and verbal intelligences. This paper proposes that machine
intelligence needs to include emotional intelligence and demonstrates results toward
this goal: developing a machine's ability to recognize human affective state given
four physiological signals.

Emotional intelligence refers to the capacity for recognising our own emotions and those of others, for motivating ourselves, and for managing emotions. It is also defined as "the ability to monitor one's own and others' feelings and emotions, to discriminate among them and to use this information to guide one's thinking and actions". Research evidence shows that its application to industrial workers, and its possible effectiveness in enhancing their performance, is as yet largely unexplored. Frames of Mind: The Theory of Multiple Intelligences introduced the idea of multiple intelligences, which included both interpersonal intelligence (the capacity to understand the intentions, motivations and desires of other people) and intrapersonal intelligence (the capacity to understand oneself, and to appreciate one's feelings, fears and motivations).

1. INTRODUCTION:

It has recently been suggested that the experience of work accidents is an important variable to consider as a predictor of workers' perceptions (e.g. causal attributions) and behaviours. Drawing on the literature, this study has two goals: (1) to analyse the relationship among work accident experience, causal attribution of accidents and workers' behaviour; and (2) to test causal attributions as a mediating variable in the relationship between work accident experience and workers' behaviour. To test the stability of the results, the analyses were performed at TVS Srichakra Tyres Ltd (a tyre manufacturing company), an industrial context. In this organisation, the sample is composed of 200 apprentice workers. Results show that work accident experience is positively associated with external attributions and unsafe behaviours, and negatively associated with internal attributions. Moreover, the results reveal a complete mediation by the causal attributions in the industrial organisation.

2. PROBLEM DEFINITION:

(i) A major number of accidents occur due to unsafe human acts and behaviour.

(ii) Unsafe behaviours are observable at almost any given point of time.

(iii) Engineering solutions have been achieved to a great extent in industries, but behavioural engineering in managing safety is still a greater challenge.

(iv) It is difficult to change the behaviour of a worker in the normal way.

Work accidents constitute an extremely serious problem in our society, given the
important Psychological, health, social, economical and organizational
consequences associated with them (International Labour Organization, 2003). This
problem is reinforced by statistics, which reveal worrying numbers. Recent world
data, from 2001 (International Labour Office, 2005), indicates the occurrence of 268
million non-fatal and 351,500 fatal work accidents; in Europe the latest estimates, of
the year 2003, allude to around 4.2 million work accidents resulting in more than 3
days of absence from work (EUROSTAT, 2005).

3.0 CASE STUDY UNDERGONE IN TVS SRI CHAKRA TYRES LTD:

INDUSTRY DETAILS CRITERIA:

3.1 ACCIDENT REPORT IN INDUSTRY:


REPORTABLE ACCIDENT YEAR 2007: FIFTY-ONE

REPORTABLE ACCIDENT FROM JAN’08 TO JUNE’08: SIXTEEN

NON REPORTABLE ACCIDENT YEAR 2007: SEVENTY-NINE

NON REPORTABLE ACCIDENT FROM JAN’08 TO JUNE’08: FIFTY

NOTE: 90% of above said accidents are due to unsafe act & behaviour.

3.2 NATURE OF INJURIES:

• Hand cut
• Bone fracture
• Burn injuries
3.3 LIST OF WORKMEN:

Total number of workers: 2122

Total number of apprentice workers: 930

Note: Only apprentice workers are considered for the emotional intelligence sampling. Their ages range from 18 to 33, with a minimum educational qualification of Std. VIII. This is because apprentice workers were involved in the majority of accidents when compared to permanent workers.

3.4 SHIFT TIMINGS FOR APPRENTICE WORKERS:

First shift: 7 am – 3 pm (frequent report of accidents)

Second shift: 3 pm – 11 pm

Third shift: 11 pm – 7 am (frequent report of accidents)

4.0 METHODOLOGY FOR ANALYZING EMOTIONAL INTELLIGENCE:


1. Assess industry needs.
2. Find behaviour-based hazards in the process through fishbone analysis.
3. Find the population distribution of apprentice workers in each shift.
4. Analyse previous accident data and the types of injury arising from unsafe acts.
5. Prepare questionnaires to find the EI quotient of the worker samples, with the help of a psychiatrist and an industrial trainer.
6. Short-list the questionnaires through motivation analysis by the HR manager.
7. Evaluate the samples using the SPSS 16.0 (Statistical Package for the Social Sciences) software.
8. Provide EI-based training to the workers through the psychiatrist and industrial experts.
9. Analyse the EI quotient again, by questionnaire survey, for those who underwent the training.
10. The aim is to show a post-training rise in the emotional quotient among the workers.

4.1 FISHBONE ANALYSIS (BEHAVIOUR BASED HAZARDS) AT THE INDUSTRY:

The fishbone diagram (sometimes called the Ishikawa diagram) is used to identify and list all the factors that condition the problem at hand.

1) This is primarily a group problem-analysis technique, but it can be used by individuals as well.
2) The process is called fishbone analysis because of the way in which the information gathered is arranged visually, like the skeleton of a fish.
3) It is usually applied in the mobilise stage of the process to identify scale and scope.

5.0 SPSS 16.0 SOFTWARE PACKAGE – INTRODUCTION:

SPSS (originally, Statistical Package for the Social Sciences) was released in its
first version in 1968 after being founded by Norman Nie and C. Hadlai Hull. Nie was
then a political science postgraduate at Stanford University, and now Research
Professor in the Department of Political Science at Stanford and Professor Emeritus
of Political Science at the University of Chicago. SPSS is among the most widely
used programs for statistical analysis in social science. It is used by market
researchers, health researchers, survey companies, government, education
researchers, marketing organizations and others. The original SPSS manual (Nie,
Bent & Hull, 1970) has been described as 'Sociology's most influential book'. In
addition to statistical analysis, data management (case selection, file reshaping,
creating derived data) and data documentation (a metadata dictionary is stored with
the data) are features of the base software.

Statistics included in the base software:

• Descriptive statistics: Cross tabulation, Frequencies, Descriptives, Explore,


Descriptive Ratio Statistics
• Bivariate statistics: Means, t-test, ANOVA, Correlation (bivariate, partial,
distances), Nonparametric tests
• Prediction for numerical outcomes: Linear regression
• Prediction for identifying groups: Factor analysis, cluster analysis (two-step,
K-means, hierarchical), Discriminant

5.1 SAMPLE OUTPUT OF SPSS 16.0 FOR QUESTIONAIRES:


Question sample:

On seeing any accidents on road I imagine same thing happened to me in my


work…

Choices:

1. Never

2. Often

3. Most of time

4. Rarely

6.0 VARIANCES OF EMOTIONAL FEELINGS BY CHI-SQUARE & ANOVA TEST:

Sample chi square variance test for two choices questions:

Objective: To test the emotional feelings of the employees

Observed values:

          Question 1    Question 2    Total
Yes       104           52            156
No        96            148           244
Total     200           200           400

Null Hypothesis:

H0: There is no significant difference between emotional feelings of employees.

Alternate Hypothesis:

H1: There is significant difference between emotional feelings of two employees

Expected values:

          Question 1    Question 2    Total
Yes       78            78            156
No        122           122           244
Total     200           200           400

χ² = ∑ (O − E)² / E

   = (104−78)²/78 + (52−78)²/78 + (96−122)²/122 + (148−122)²/122

   = 28.4

From the chi-square table, for 1 degree of freedom:

χ²(0.05) = 3.841

χ²(0.01) = 6.635

Calculated value > table value.

Result:

The difference between the emotional feelings of the workers is highly significant, so the null hypothesis is rejected.
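For reference, the same test can be reproduced in a few lines of code (a sketch assuming the SciPy library is available; correction=False gives the uncorrected statistic used in the hand calculation above):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sketch: chi-square test of independence on the observed 2x2 table above.
observed = np.array([[104, 52],    # "Yes" responses to Question 1 and Question 2
                     [96, 148]])   # "No" responses
chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(round(chi2, 1), dof, round(p, 4))   # ~28.4 with 1 degree of freedom
print(expected)                           # matches the expected-value table above
```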

Sample ANOVA variance test for multiple-choice questions:

Objective: To test the significance of the difference between the emotional factors of the workers and their satisfaction level.

            Q1     Q2     Q3     Q4     Total
Choice 1    162    40     164    32     398 (V1)
Choice 2    30     132    32     160    354 (V2)
Choice 3    8      28     4      8      48 (V3)
Total       200    200    200    200    800


Null Hypothesis H0:

There is no significant difference between emotional feelings of worker

ANOVA table:

Source of variation    Sum of squares    D.O.F        Mean squares               F-ratio
SSR (between)          18173             (3-1) = 2    MSR = SSR/df = 9086.5      FR = MSR/MSE = 2.808
SSE (error)            29119             9            MSE = SSE/df = 3235.4

The tabulated value of F(2, 9) at the 5% level of significance is 19.4, and the calculated F-ratio Fcal = 2.808 is less than this value.

Result:

So, the Null hypothesis H0 is accepted. There is no significant difference


between emotional feelings of workers.
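A compact sketch of the corresponding one-way ANOVA is shown below (assuming SciPy is available; the three choices are treated as groups with the per-question counts as observations, so the computed F may differ slightly from the hand-worked sums of squares above):

```python
from scipy.stats import f_oneway

# Sketch: one-way ANOVA treating the three answer choices as groups and the
# per-question counts as observations within each group.
choice1 = [162, 40, 164, 32]
choice2 = [30, 132, 32, 160]
choice3 = [8, 28, 4, 8]

f_stat, p_value = f_oneway(choice1, choice2, choice3)
print(round(f_stat, 2), round(p_value, 3))   # compare F with the tabulated F(2, 9)
```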

6.1 POSITIVE & NEGATIVE EMOTIONAL RESPONSE FROM WORKERS:

Overall positive emotional quotient from given questionnaires = 65%

Overall negative emotional quotient from given questionnaires = 35%

7.0 WAYS TO IMPROVE THE POSITIVE EMOTIONAL QUOTIENT AND DECREASE THE VARIATION OF EMOTIONAL FEELINGS BETWEEN WORKERS:
Psychological training:

• To design a counseling package in improvement of Emotional Intelligence.


• Conducting counseling sessions by psychiatrist & Industrial trainers (how to
control emotions).
• Conducting lectures sessions for stress management.

All the above stated things must be included with their regular technical training
sessions

Post-evaluation after training:

• Re-evaluation of the workers' emotional quotient and a repeat variance test.

• A decrease in variance and an increase in the positive emotional quotient are expected as the result of the counselling package.

8.0 CONCLUSION:

Thus emotional intelligence makes its own contribution, and methods are being developed for incorporating this concept into the work culture. Such efforts are needed not only for the study of EI, but for many other constructs and theories being explored in the human dynamics of injury prevention (e.g., people-based safety). The main purpose of applying emotional intelligence is therefore to eliminate industrial accidents due to unsafe acts and to save valuable human lives.

LITERATURE REFERENCE:
• Douglas M. Wiegand, "Exploring the role of emotional intelligence in behavior-based safety coaching", Journal of Safety Research, Volume 16, July 2007, pages 391-928.
• Dr. H. L. Kaila, "Behavior based safety in organizations", Industrial Safety Chronicle, Dec 2006, pages 83-88.
• Pedroso Goncalves, "Impact of work accidents experience on causal attributions and work behaviour", Journal of Safety Science, Nov 2007, pages 992-1001.
• www.eiconsortium.org
• P. Felvia Shanthi, Psychology of Teaching and Learning.
• Richard C. Bell, "Factorial validity of emotional intelligence", Journal of Individual Differences, Feb 2007, pages 487-500.
• Harald, "Testing and validating the trait EI questionnaire", Journal of Individual Differences, Feb 2008, pages 1-6.
DESIGN OF SAFE PYROTECHNIC COMPOSITION TO CONTROL SO2 EMISSION OF CRACKERS

K. Lakshmanaperumal (a), M. Anandhan (b), M. Jinnah Sheik Mohamad (c)

(a) II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi.
(b) Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi.
(c) Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi.

(Corresponding email: laxi_lax@yahoo.com, phone: +91 9659435841)

ABSTRACT

Theatrical pyrotechnics are potentially capable of creating ear-damaging sound, eye-damaging light, and airborne toxic chemicals. While damage to the ears and eyes can be dramatic and obvious, potential health problems from inhalation of the smoke are not usually addressed. This paper presents the results of a study on cracker performance characteristics and the emission of sulphur dioxide from crackers. The sulphur dioxide is minimised by changing the composition ratios while the cracker performance is kept constant during the experiments.

Key words: pyrotechnics, sulphur dioxide, emission of crackers.

1. Introduction:

In recent years, concern about both the short-term and long-term effects of air pollution has increased. One of the more unusual sources of atmospheric pollution is the display of fireworks to celebrate festivals worldwide as well as specific events. The burning of fireworks is a large source of gaseous pollutants such as ozone, sulphur dioxide and nitrogen oxides, as well as suspended particles. The aerosol particles emitted by fireworks are generally composed of metals (potassium, magnesium, barium, copper), and the complex nature of the particles emitted during fireworks may cause adverse health effects.

An additional effect of fireworks is the reduction in visibility due to the generation of a dense cloud of smoke that drifts downwind and slowly disperses.
2. Methodology:

• The cracker sound level is measured with a noise level meter.

• The crackers are ignited in the test setup.

• The flue gases are collected using a vacuum pump, balloon and hood.

• The sulphur dioxide in the flue gases is measured by iodometric titration.

• The composition is changed and the composition ratio is varied.

• The new composition ratio is tested.

• Finally, the sulphur dioxide in the new composition ratio is analysed.

3. Apparatus used

• Noise Level Meter

Fig (1) Locations of noise level meter


3.1. Experimental

A noise level meter is used to find the performance characteristics of the crackers, i.e. to obtain numerical (sound level) values for each cracker.

Initially the cracker was placed near the noise level meters, which were positioned 4 metres away from the cracker in the north, south, east and west directions. After the meters were set, the crackers were fired and the noise levels were recorded simultaneously.

The readings of the measured crackers are:

East:
dBA Lmax    dBC Peak    dBA Lmax    dBC Peak
119.2       142.5       118.4       142.3
121.5       145.8       117.2       139.5

West:
dBA Lmax    dBC Peak    dBA Lmax    dBC Peak
117.8       141.6       105.7       130.7
118.8       142.3       121.0       145.4

4. Collecting Device Setup

For finding the composition of the gases, a vacuum pump was used to collect the gases when the crackers were fired. The procedure is as follows.

An initial setup was prepared to collect the gases from the crackers. Make sure that the following components are readily available:

• Electric heater
• Hood (made of steel)
• Transparent tubes
• Balloons

4.1 Procedure:

Initially a steel plate was placed on the electric heater and the heater was switched on. The chemical composition of the cracker (1 gram) was placed on the steel plate. The hood was then placed over the steel plate so that it covered the chemical composition of the cracker. One end of a transparent tube was connected to the hood and the other end to the vacuum pump. Another transparent tube was connected to the outlet of the vacuum pump so that the gases could be collected when the cracker's chemical composition was fired. Finally, the flue gases were collected in the balloons.

5. Analysis of sulphur dioxide

5.1 Principle

The amount of gas required to react 10ml of iodine is estimated.

The amount of gas required = the amount of water displaced

Apparatus

• 500ml Gas wash bottle.

• 10ml Micro pipette.

5.2 Procedure:

Initial requirements:

1 litre of water
Potassium iodide (40 grams)
Iodine (13 grams)
Starch (2 ml)

N/10 iodine solution = water + potassium iodide + iodine

Initially all the apparatus is cleaned. 100 ml of distilled water is poured into the gas wash bottle and 2 ml of starch is added and mixed. 10 ml of N/10 iodine solution is then added to the mixture. The collected flue gases are passed into the gas wash bottle containing the iodine mixture. Initially the mixture has a thick blue colour; it becomes colourless as the flue gases are added. The volume of water collected in the gas wash bottle is then measured.

5.3 Calculation

Using N/10 iodine:

Percentage of SO2 = 11.20 × 100 / (11.20 + volume of water)

Volume of water = 108 ml

% of sulphur dioxide = 11.20 × 100 / (11.20 + 108) = 9.3%, i.e. 2.65 ppm.
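A one-line check of this arithmetic is shown below (a sketch of the stated formula only; the constant 11.20 and the measured water volume are taken directly from the text above):

```python
# Sketch: percentage of SO2 from the displaced-water formula stated above.
def so2_percentage(volume_of_water_ml, constant=11.20):
    return constant * 100.0 / (constant + volume_of_water_ml)

print(round(so2_percentage(108), 1))   # ~9.4 % for the 108 ml reading reported
```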

6. Conclusion

In this paper we investigated the performance characteristics of the cracker and then measured the SO2 level in the tested cracker while keeping the performance characteristics constant. The measured SO2 value is 2.65 ppm and the measured sound level of the cracker is 120 dB.

7. Acknowledgement

Author is grateful to the management, principal and HOD, department of


mechanical engineering, Mepco Schlenk Engineering College, Sivakasi, for their
constant encouragement for offering facilities to carry out this research work.

References
1. Roberta Vecchi, The impact of fireworks on airborne particles.
2. Jarvis, The combustion reactions of a pyrotechnic white smoke composition.
3. K. N. Ghosh (1987), The Principles of Fireworks.
4. Vijay Malik, Indian Explosives Act 1884.
EXPLOSIVITY TESTING OF HIGH ENERGY CHEMICALS

P. Karlmarx (a), Azhagurajan (b)

(a) II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi.
(b) Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi.

(Corresponding email: karlmarxsafety@ymail.com, phone: +919865019369)

ABSTRACT
The Mechanical and thermal sensitivity of pyrotechnic compositions consisting
of mixtures of potassium nitrate (KNO3 ), sulphur (S) and charcoal (C) were found by
varying different compositions of KNO3, S, C and changing the fuels and oxidizers.
This indicates that all the compositions were found to be sensitive. Impact
sensitiveness of pyrotechnic compositions is analyzed using equipment similar to
BAM (fall hammer) equipment. Results indicate that an increase in the sulphur
content of the mixture raises its sensitivity to impact. The limiting impact energy falls
in the range of 11 to 12 Joules for the compositions studied.

Key words: explosivity, impact sensitivity, friction sensitivity, high-energy chemicals

1. Introduction:
Pyrotechnic mixtures are energetic compounds susceptible to explosive degradation on ignition, impact and friction. Several accidents have been reported in Indian fireworks manufacturing units during processing, storage and transportation. An analysis of accident data recorded during the past ten years in Tamilnadu, India, has shown that the main cause is inadequate knowledge of the thermal, mechanical and electrostatic sensitiveness of fireworks mixtures.
Most fireworks mixtures consist of an oxidizer, a fuel, a colour
enhancing chemical and a binder. The chemicals employed and their compositions
vary depending upon the type of fireworks being produced. The fireworks
effectiveness depends not only on the mixture composition, but also on the factors
such as particle size, moisture content, packing density and purity of the chemicals.

As per the Indian explosives act, 1884, the use of chlorate and sulphur
mixtures is prohibited due to its ease of ignition and sensitiveness to undergo
explosive decompositions. Alternate mixtures have been widely used in the fireworks
industry.

But still accidents occur, and the main reason is the poor understanding of the
explosive nature and lack of mechanical and thermal sensitivity data for mixtures
containing nitrate and sulphur compounds. In the past researchers have studied the
thermal stability and mechanical sensitivity of sulphur and chlorate mixtures.
However, the impact sensitivity of mixtures containing potassium nitrate (KNO3),
sulphur (S), charcoal (C) has not yet been reported.

The present study has multiple objectives; the first is the classification of
mixture. The other objectives are: to study the impact sensitiveness of mixtures
containing KNO3, S, and Al using the statistical tool mixture design.

2. Chemistry and Mechanism of Gunpowder:

A gunpowder composition consists of an oxidizer, commonly potassium chlorate or barium nitrate, together with charcoal. Some companies use potassium nitrate as the oxidizer, so this paper also examines gunpowder containing potassium nitrate. Sulphur acts as the ignition source, and charcoal acts as a fuel that is oxidised by the potassium nitrate. When gunpowder is ignited, the sulphur melts first. During melting, the interaction between atoms increases; this results in more atoms with energies exceeding the activation energy that can come into contact and react. As the reaction rate increases, the rate of energy release increases, which leads to thermal runaway at a lower temperature, and the gunpowder explodes.

3. Experimental
3.1 Materials
The chemicals used for the preparation of the gunpowder were obtained
from fireworks manufacturing company situated in the southern state of Tamilnadu,
India. The purity and assay of the chemicals were: KNO3-91.6%, S-99.84%, and C-
99.71%. The chemicals were passed through a 100-mesh brass sieve. The samples
were stored in an airtight container and kept away from light and moisture.

3.2 Apparatus Required:


3.2.1 Friction sensitivity tester

The diagram of the equipment used in this study for friction sensitiveness measurement is shown in Figure 1. The friction sensitivity was determined with a friction tester following the common test method of the BAM friction apparatus. To set the friction tester into the starting position, turn the hand wheel on the top of the motor so that the two marks at the side of the table and the base line up. When the start button is pressed, the table has to move one time backwards and then one time forward.

• After setting the machine in the start position, a porcelain plate is placed into the holding assembly with the "sponge marks" in the direction opposite to the motion.

• Switch on the main power switch (no. 7).

• The main power light (no. 11) flashes (see Fig. 1).

• About 10 mm³ (approximately 10 to 15 mg) of sample is placed in front of and under the porcelain pin. Care must be taken that sufficient material lies ahead of the pin and is subjected to friction when the plate moves.

• Load the bar with a weight and push the start button (no. 9).

• The table with the porcelain plate moves to and fro over a distance of about 10 mm at a speed of 141 r/min.

When starting a test with an unknown material, a weight approximately in the middle of the loading range is chosen and the test is started. If two reactions are detected, the load is decreased; if no reactions occur, the load is increased.

Figure (1) - Friction sensitivity test setup

Friction sensitivity is a relative measurement, reported in kg, at which inflammation or explosion occurs only once in six repetitions. High values indicate low friction sensitivity, allowing the pyrotechnic mixture to be considered safe for transport.

3.2.2 Impact sensitivity tester

The diagram of the equipment used in this study for impact sensitiveness measurement is shown in Figure 2. (Labelled parts in the figure: fixed plate, supporting column, guide rod, solenoid-controlled releasing device, sliding plate, AC 230 V supply, clamping screw, drop weight, top anvil, locating ring, spark sensor, sample, LED, bottom plate, bottom anvil; half-sectional front view and top view at section A-A.) The design and principle of the equipment are similar to those of the drop fall hammer equipment of the BAM standard. For each test a 0.1 g sample was placed in the anvil and a weight of mass 2 kg (standard weight) was allowed to drop from different heights.

Figure (2) - Impact Test set up

The dropping weight was controlled remotely. On triggering the remote, the weight falls onto the sample through the guides fixed to the column, so that the weight drops directly on the striking head of the anvil without rebound or distortion. Ignition of the mixture was observed using an optical sensor. The impact sensitiveness was measured in terms of the limiting impact energy (LIE), calculated using equation (1):

LIE = mgh ........... eqn (1)

Where
LIE - limiting impact energy in joules (J)
m - mass of the drop weight in kilograms (kg)
g - acceleration due to gravity (9.81 m/s²)
h - fall height in metres (m)
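A small numeric sketch of equation (1) is given below; the 2 kg drop mass is the standard weight quoted above, while the example fall heights are assumptions chosen only to illustrate the order of the reported limiting energies.

```python
# Sketch: limiting impact energy LIE = m * g * h (equation 1).
G = 9.81  # acceleration due to gravity, m/s^2

def limiting_impact_energy(mass_kg, fall_height_m):
    return mass_kg * G * fall_height_m

# Example with the 2 kg standard drop weight and assumed fall heights.
for h in (0.50, 0.58, 0.60):
    print(h, "m ->", round(limiting_impact_energy(2.0, h), 2), "J")
```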

The impact sensitivity measurements were carried out


according to the procedure outlined in the United Nations (UN)
recommendations on the transport of dangerous goods.

Drying Condition:

Humidity - 69%
Temperature – 39◦C
Time - 1hr

Test data for impact sensitivity and friction sensitivity:

Exp. No.   KNO3 (wt %)   S (wt %)   C (wt %)   Condition        LIE (J)   Friction limiting load (N)
1          75            10         15         With drying      0         0
2          75            10         15         Without drying   0         0
3          72.5          15         12.5       With drying      11.37     0
4          72.5          15         12.5       Without drying   11.72     0

4. Results and Discussion

Impact sensitivity testing of the two compositions under wet and dry conditions shows that they are impact sensitive, with limiting impact energies (LIE) in the range of 10.4 to 13.73 J. It was observed that the impact energy varied when any one of the component concentrations of the mixture was changed; this behaviour is due to the sensitivity and reactivity of each component. Varying the quantity of potassium nitrate in the mixture had only a minimal effect on impact sensitivity, whereas increasing the concentration of sulphur had a marked influence.

No friction sensitivity was observed for any of the particle sizes.


5. Conclusion

Impact and friction sensitivity testing of gunpowder with varying compositions indicated that all the compositions are sensitive to impact and less sensitive to friction.

6. Acknowledgements

Author is grateful to the management, principal and HOD ,department of


mechanical engineering, Mepco Schlenk Engineering College, Sivakasi, for their
constant encouragement for offering facilities to carry out this project.

References

1. S. P. Sivaprakasam and M. Surianarayanan, "Effect of particle size on the mechanical sensitivity and thermal stability of pyrotechnic flash composition", Journal of Pyrotechnics, Issue 23, Summer 2006, pp. 39-49.
2. S. P. Sivaprakasam and M. Surianarayanan, "Impact sensitiveness analysis of pyrotechnic flash compositions", Journal of Pyrotechnics, Issue 21, Summer 2005, pp. 51-57.
3. Jeya Rajendran and T. L. Thanulingam, "A new formula for environment friendly high energy pyrotechnic mixture".
4. J. D. Blackwood and F. P. Bowden, "The initiation, burning and thermal decomposition of gunpowder", Department of Physical Chemistry, Vol. 213A, 8 July 1952.
5. Vijay Malik, Indian Explosives Act 1884.

ANALYSIS OF HEAT TRANSFER CO-EFFICIENT IN NANO FLUIDS

N. V. Kamalesh (a), M. Raja (b)

(a) II M.E., Thermal Engineering, Government College of Engineering, Salem.
(b) Lecturer, Mechanical Engineering Department, Government College of Engineering, Salem.

(Corresponding email: nivath_mech@yahoo.com, phone: +91 9994879676)

ABSTRACT

Heat transfer enhancement is a very important area, and several efforts and many research studies have been carried out to improve heat transfer rates. Traditional coolants such as water, ethylene glycol, engine oil and acetone have poor heat transfer properties compared to those of most solids. In view of the need to develop enhanced cooling technology, in this project nano particles are suspended in the base fluid to improve the heat transfer rate of conventional fluids. Two media are selected, namely water and ethylene glycol, and nano particles of aluminium oxide are chosen to improve their heat transfer rate. We consider the problem of forced convection flow of fluid inside a uniformly heated tube subjected to a constant and uniform heat flux at the wall. The heat transfer coefficient was analysed for Reynolds numbers of 10000, 20000 and 30000. Finally, we show that the heat transfer coefficient of conventional fluids such as water and ethylene glycol improves when they are mixed with the nano particles.

INTRODUCTION

The thermal properties of heating or cooling fluids play a vital role in the development of new energy-efficient heat transfer equipment. However, conventional heat transfer fluids such as water, ethylene glycol, engine oil and acetone have poor heat transfer properties compared to those of most solids. In spite of considerable research to date, major improvements in heat transfer capability are still lacking; as a result, there is an important need to develop new strategies to improve the effective heat transfer behaviour of conventional heat transfer fluids. To improve the heat transfer rate of conventional fluids, nano particles are suspended in the base fluid. In this context two fluid media are selected, namely water and ethylene glycol. To enhance the heat transfer rate of these media, nano particles of aluminium oxide were chosen for the following reasons:

• They are easy to produce.

• They are chemically stable.

In order to analyse the heat transfer rate of the base fluid and the nano fluid, the CFD software FLUENT is used. This project mainly deals with the heat transfer coefficient analysis of the fluid medium under the following two cases:

• Analysis of the heat transfer coefficient of the conventional fluid [water]

• Analysis of the heat transfer coefficient of the nano fluid [water + Al2O3 nano particles]
LITERATURE SURVEY:

EASTMAN ET AL. showed that 10 nm copper particles in ethylene glycol could enhance the conductivity by 40% with a small particle loading fraction. With cupric oxide the enhancement was 20% for a volume fraction of 4%. These results clearly show the effect of particle size on the conductivity enhancement.

DAS ET AL. measured the conductivities of alumina and cupric oxide suspensions at temperatures ranging from 20°C to 50°C and found a linear increase in the conductivity ratio with temperature. For the same loading fraction, however, the increase was higher for cupric oxide than for alumina.

KIM ET AL. conducted experiments on several oxide nano particles over a wide range of experimental conditions. They also demonstrated that high-power laser irradiation can result in a significant increase in effective thermal conductivity even at small volume fractions.

PAK AND CHO studied the heat transfer performance of Al2O3 and TiO2 nano particles suspended in water and reported that the convective heat transfer coefficient is 12% smaller than that of pure water at a 3% volume fraction.
PROBLEM IDENTIFICATION:

DETAILS OF TEST SECTION:

The test section chosen for this work is a straight brass tube of 10 mm inner diameter and 800 mm length, subjected to a uniform heat flux of 3.5 kW. The test section is created using the GAMBIT software and then exported to the FLUENT software to analyse the heat transfer rate. The heat transfer rates are analysed for Reynolds numbers of 10000, 20000 and 30000, and the results are obtained in the CFD software (FLUENT). FLUENT is supporting software for CFX. CFX is a commercial computational fluid dynamics (CFD) program used to simulate fluid flow in a variety of applications. The ANSYS CFX product allows engineers to test systems in a virtual environment. The scalable program has been applied to the simulation of water flowing past ship hulls, gas turbine engines (including compressors, combustion chambers, turbines and afterburners), aircraft aerodynamics, pumps, fans, HVAC systems, mixing vessels, hydrocyclones, vacuum cleaners, and more.

THERMAL AND PHYSICAL PROPERTIES OF NANO FLUIDS:

Density (ρ):               ρ = (1 − φ) ρ0 + φ ρs

Specific heat (Cp):        Cp = (1 − φ) (Cp)0 + φ (Cp)s

Dynamic viscosity (μ):     μ = μ0 (123 φ² + 7.3 φ + 1)

Thermal conductivity (k):  k = k0 · [ks + 2k0 + 2(ks − k0)(1 + β)³ φ] / [ks + 2k0 − 2(ks − k0)(1 + β)³ φ]

where φ is the particle volume fraction, the subscript 0 denotes the base fluid and the subscript s denotes the solid nano particles.

The nano particles are added to the water in different percentages to form the nano fluid. The nano fluid is then allowed to pass through a uniformly heated pipe (brass tube) and the heat transfer rate is analysed for Reynolds numbers of 10000, 20000 and 30000. The above properties are used to analyse the heat transfer rate of the nano fluid.
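The sketch below evaluates the property correlations listed above for a water/Al2O3 mixture (a sketch only: the base-fluid and particle property values and the volume fraction are typical textbook numbers assumed for illustration, and β, appearing in the conductivity correlation, is taken as zero for simplicity):

```python
# Sketch: nanofluid properties from the mixture correlations quoted above.
def nanofluid_properties(phi, base, particle, beta=0.0):
    """phi: particle volume fraction; base/particle: dicts with rho, cp, k; base also has mu."""
    rho = (1 - phi) * base["rho"] + phi * particle["rho"]
    cp = (1 - phi) * base["cp"] + phi * particle["cp"]
    mu = base["mu"] * (123 * phi**2 + 7.3 * phi + 1)
    k0, ks = base["k"], particle["k"]
    factor = (1 + beta) ** 3 * phi
    k = k0 * (ks + 2 * k0 + 2 * (ks - k0) * factor) / (ks + 2 * k0 - 2 * (ks - k0) * factor)
    return rho, cp, mu, k

# Assumed property values: water at about 300 K and Al2O3 particles, 4 % volume fraction.
water = {"rho": 997.0, "cp": 4179.0, "k": 0.613, "mu": 0.000855}
alumina = {"rho": 3970.0, "cp": 765.0, "k": 40.0}
print(nanofluid_properties(0.04, water, alumina))
```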

The addition of small particles to a fluid can sometimes provide heat transfer enhancement. However, earlier work in this area, based on suspensions of micro- to macro-sized particles, suffers from the following major disadvantages:

• The particles settle rapidly, forming a layer on the surface and reducing the heat transfer capacity of the fluid.

• If the circulation rate of the fluid is increased to reduce sedimentation, the erosion of the heat transfer devices, pipelines, etc. increases rapidly.

• The large size of the particles tends to clog the flow channels, particularly if the cooling channels are narrow.

• The pressure drop in the fluid increases considerably.

Nanomaterials are materials with morphological features smaller than one tenth of a micrometre in at least one dimension. Although there is no consensus on the minimum or maximum size of nanomaterials, with some authors restricting the size to as low as 1 to ~30 nm, a logical definition would situate the nanoscale between the microscale (0.1 micrometre) and the atomic/molecular scale (about 0.2 nanometres). In nanotechnology, a particle is defined as a small object that behaves as a whole unit in terms of its transport and properties. Particles are further classified according to size: in terms of diameter, fine particles cover a range between 100 and 2500 nanometres, while ultrafine particles are sized between 1 and 100 nanometres. Similarly to ultrafine particles, nanoparticles are sized between 1 and 100 nanometres, though the size limitation can be restricted to two dimensions. Nanoparticles may or may not exhibit size-related properties that differ significantly from those observed in fine particles or bulk materials.

Conventional fluids with suspended nano particles are called nano fluids. The attractive features that favour nano particle suspensions in fluids are:

• A large surface area

• High mobility

• The nano-sized particles can be properly dispersed

The nano particles most widely used for heat transfer applications are aluminium oxide, aluminium, copper, copper oxide, silver, etc. The following methods are used for the production of nano particles: plasma arcing, chemical vapour deposition, electro-deposition, ball milling, etc.

The electrodes can be made of other materials, but they must be able to conduct electricity. An interesting variation is to make the electrodes from a mixture of conducting and non-conducting materials. Plasma arcing can be used to make deposits on surfaces rather than new structures; in this way it resembles chemical vapour deposition, except that the species involved are ionised. As a surface deposit, the nanomaterial can be as little as a few atoms in depth. It is not a nanomaterial unless at least one dimension of the bulk particles of the surface deposit is of nanometre scale; otherwise it is a thin film and not a nanomaterial. Each particle must be nano-sized and independent, interacting with the others only through hydrogen bonding. Sol-gel is a useful self-assembly process for nanomaterial formation. Solutions are clear because molecules of nanometre size are dispersed and move around randomly. In colloids the molecules are much larger and range from 20 µm to 100 µm in diameter; colloids are suspensions of these sized molecules in a solvent.

Plasma is an ionised gas. Plasma is achieved by making a gas conduct electricity by providing a potential difference across two electrodes, so that the gas yields up its electrons and ionises. A typical plasma arcing device consists of two electrodes. An arc passes from one electrode to the other. The first electrode vaporises as electrons are taken from it by the potential difference. To make carbon nanotubes, carbon electrodes are used. Atomic carbon cations are produced; these positively charged ions pass to the other electrode, pick up electrons and are deposited to form nanotubes. An object to be coated is allowed to stand in the presence of the chemical vapour. The first layer of molecules or atoms deposited may or may not react with the surface. However, these first-formed deposited species can act as a template on which materials are often aligned, because the way in which atoms and molecules are deposited is influenced by their neighbours. During deposition, a site for crystallisation may form along the depositional axis so that aligned structures grow vertically.

ANALYSIS OF HEAT TRANSFER CO-EFFICIENT OF CONVENTIONAL FLUID [WATER]

The results are obtained in the FLUENT software.

Reynolds number (Re) vs heat transfer coefficient (h):

Reynolds number    Heat transfer coefficient
10000              6892.77
20000              7739.3
30000              8303.12

ANALYSIS OF HEAT TRANSFER CO-EFFICIENT OF NANO FLUID [WATER + AL2O3 NP]

Reynolds number (Re) vs heat transfer coefficient (h):

Reynolds number    Heat transfer coefficient
10000              7048.02
20000              7887.27
30000              8440.58

CONCLUSION
The convective heat transfer behaviour of water and of the nano fluid in a tube was analysed. The suspended nano particles remarkably enhance the heat transfer process, and the nano fluid has a larger heat transfer coefficient than the original base fluid at the same Reynolds number. The heat transfer enhancement of a nanofluid increases with the volume fraction of nano particles. We therefore conclude that the heat transfer properties of conventional fluids, which are otherwise poor, can be enhanced by mixing nano particles with the base fluids.

REFERENCES

1. Xuan, Y. and Li, "Investigation on convective heat transfer and flow features of nanofluids", ASME Journal of Heat Transfer, Vol. 125, No. 1, pp. 151-155, 2003.
2. Wen, D. and Ding, Y., "Experimental investigation into convective heat transfer of nanofluids at the entrance region under laminar flow conditions", International Journal of Heat and Mass Transfer, Vol. 47, pp. 5181-5188, 2004.
3. Pak, B. and Cho, Y. I., "Hydrodynamic and heat transfer study of dispersed fluids with submicron metallic oxide particles", Experimental Heat Transfer, Vol. 11, pp. 151-170, 1998.
4. Choi, S. U., "Enhancing thermal conductivity of fluids with nanoparticles", pp. 99-105, American Society of Mechanical Engineers, New York, 1995.
5. Xuan, Y. and Roetzel, W., "Conceptions for heat transfer correlation of nanofluids", International Journal of Heat and Mass Transfer, Vol. 43, pp. 3701-3707, 2000.
6. Yang, Y., Zhang, Z. G., Grulke, E. A., Anderson, W. B., "Heat transfer properties of nanoparticle-in-fluid dispersions (nanofluids) in laminar flow", International Journal of Heat and Mass Transfer, Vol. 48, pp. 1107-1116, 2005.
7. Maiga, S. E., Nguyen, C. T., Galanis, N. and Roy, G., "Heat transfer behaviours of nanofluids in a uniformly heated tube", Superlattices and Microstructures, Vol. 35, pp. 543-557, 2004.
8. Boargirino, J., "Convective heat transfer enhancement in nanofluids", Proc. 18th National and 7th ISHMT-ASME Heat and Mass Transfer Conference, IIT Guwahati, India, pp. 2417-2423, Jan. 2006.

MODELLING OF WELDING FUME PLUME DISPERSION WITHIN THE WORKING ENVIRONMENT

K. Deepan (a), R. Ayyappan (b), Kalpana Balakrishnan (b), M. Anandhan (c)

(a) II M.E., Industrial Safety Engineering, Mepco Schlenk Engineering College, Sivakasi.
(b) Department of Environmental Health Engineering, Sri Ramachandra University, Porur, Chennai.
(c) Senior Lecturer, Mechanical Engineering Department, Mepco Schlenk Engineering College, Sivakasi.

(Corresponding email: deepanbm@yahoo.co.in, phone: +919176099774)

ABSTRACT
Measurement of the breathing-zone fume concentration and of the individual particulate constituents generated from the base metal and weld electrode is vital for taking appropriate measures to eliminate them at the source. In order to select control measures, the most significant parameters, such as current, voltage, electrode diameter, stick-out distance and welding speed, must be identified; an ER70S6 MIG wire and an E6013 SMAW welding electrode were taken for this study. The study further assessed the breathing-zone concentration with a personal air sampler and the individual particulate constituents with an inductively coupled plasma analyser. A statistical model using ANOVA was developed to determine the plume dispersion within the environment with respect to the various input parameters.

Key words: Breathing Zone Concentration, Personal Air Sampler, Inductively


Coupled Plasma Analyser, ANOVA

1. Introduction:
Welding is one of the most widely used metal fabrication methods. Of all the welding processes, manual metal arc welding and metal inert gas welding account for 60-70 percent of welding activity in industry. Workers are exposed to emissions such as fumes and gases arising during the welding process unless these are effectively controlled. The fumes consist of individual constituents such as chromium, magnesium and nickel that may cause respiratory disorders, including bronchitis, airway irritation, lung function changes and a possible increase in the incidence of lung cancer, if the exposure exceeds the threshold limit value (5 mg/m³).

Welding fumes have posed a threat to health since the first coated electrodes were introduced; the earliest cases of welders being affected by the noxious fumes were reported when operators were found to exhibit signs of pneumoconiosis.

The fumes from welding consist of the various chemical constituents of the materials used in the process and are transported in an airborne plume of fine particles. Methods of controlling the fume at the source by optimising welding parameters and carefully selecting consumables have been developed in this work to address the problem.

2. Experimental

2.1 Breathing Zone Sampling:

It is the preferred method of evaluating worker exposure to airborne


particulate matters and gases. The worker wears a sampling device that collects an
air sample. The sampling device is placed as close as possible to the breathing zone
of the worker (defined as a hemisphere in front of the shoulders with a radius of 6-9
in.). So the data collected closely approximate the concentration inhaled.

2.2 Personal Air Sampler:

The Air Sampler consists of the following

• Sampling pump

• Cassette

• Filter paper
• Tube connections

Fig 1. Personal Air Sampler

2.2.1 Description of sampler:

Make and Model: SKC Universal Sample Pump

Flow range : 5 to 5000 ml/min

Weight : 936 grams

Run time : Maximum 6.8 days

Power supply : 6V

2.3 Calculation of Concentration:

Volume of air sample = elapsed time × flow rate

Concentration of dust = weight of the dust / volume of air sample
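A small sketch of this calculation is given below; the sampling time, flow rate and filter weights are illustrative assumptions in the same format as the tabulated data, not actual readings.

```python
# Sketch: breathing-zone dust concentration from personal-sampler data.
def dust_concentration_mg_m3(weight_before_g, weight_after_g,
                             flow_l_per_min, elapsed_min):
    collected_mg = (weight_after_g - weight_before_g) * 1000.0   # g -> mg
    volume_m3 = flow_l_per_min * elapsed_min / 1000.0            # litres -> m^3
    return collected_mg / volume_m3

# Example with assumed values: 2 L/min for 80 minutes, 0.4 mg collected on the filter.
print(round(dust_concentration_mg_m3(0.0449, 0.0453, 2.0, 80), 2))   # mg/m^3
```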

2.4 Analysis and Prediction of MIG ER-70S6 wire:


Experiments were conducted on a MIG welding setup manufactured by Lincoln (Power Wave 455, a semi-mechanised MIG welding station). An extensive set of tests was conducted over a wide range of conditions. All the tests were conducted on carbon steel using ER70S6 MIG welding wire in an automotive industry in Chennai. Carbon steel is representative of the typical behaviour of many carbon and alloy steels, so that the rules developed can reasonably be applied to many steels.
The chosen input parameters in this study are welding current, welding voltage, welding speed and stick-out distance (at constant gas flow rate), and the response considered is fume generation. Three levels are considered for each of the four input parameters (refer Table 1), and 24 combinations of the input process parameters are considered for modelling.

Table 1. Input Parameters and their Levels


SI.No. Parameters Units Level 1 Level 2 Level 3
1 Current Amps 180 230 280
2 Voltage Volts 18 24 30
3 Welding speed Mm/min 250 375 500
4 Stick out distance Mm 16 18 20

Table 2. Output Responses


SI.No. Parameters Units
1 Fume Generation Mg/m3

2.4.1 Experimental Design and Output:

Table 3. Experimental design and output of MIG ER70S6 wire

E.No.  Current (A)  Voltage (V)  Welding speed (mm/min)  Stick-out (mm)  Filter weight before (g)  Filter weight after (g)  Difference (g)  Fume generation (mg/m³)
1      230          24           375                     18              0.04490                   0.04530                  0.00040         2.500
2      180          30           500                     16              0.04490                   0.04506                  0.00016         1.000
3      280          18           375                     20              0.04470                   0.04518                  0.00048         3.000
4      180          30           250                     20              0.04470                   0.04485                  0.00015         0.937
5      230          30           500                     20              0.04490                   0.04531                  0.00041         2.563
6      180          18           500                     20              0.04490                   0.04502                  0.00012         0.750
7      180          18           250                     16              0.04440                   0.04450                  0.00010         0.600
8      180          30           375                     16              0.04420                   0.04433                  0.00013         0.812
9      180          30           250                     16              0.04400                   0.04411                  0.00011         0.687
10     230          18           250                     20              0.04420                   0.04459                  0.00039         2.438
11     180          30           250                     20              0.04450                   0.04465                  0.00015         0.937
12     230          24           375                     18              0.04430                   0.04472                  0.00042         2.600
13     280          30           250                     16              0.04450                   0.04515                  0.00065         4.062
14     280          30           500                     16              0.04490                   0.04570                  0.00080         5.000
15     180          24           500                     16              0.04420                   0.04437                  0.00017         1.060
16     230          30           500                     20              0.04400                   0.04441                  0.00041         2.563
17     280          24           500                     20              0.04420                   0.04471                  0.00051         3.187
18     180          30           500                     18              0.04440                   0.04459                  0.00019         1.200
19     180          30           500                     18              0.04440                   0.04460                  0.00020         1.250
20     280          18           500                     16              0.04470                   0.04530                  0.00060         3.750
21     280          30           250                     20              0.04460                   0.04528                  0.00068         4.250
22     230          30           250                     16              0.04480                   0.04520                  0.00040         2.500
23     180          18           500                     16              0.04480                   0.04491                  0.00011         0.687
24     280          18           250                     16              0.04470                   0.04500                  0.00030         1.875
In future work, the same model will be applied to the manual welding process with different parameters to observe the variation in the concentration of welding fumes.

2.5 Analysis and Prediction of SMAW E6013 electrode:


Experiments were conducted on an SMAW welding setup manufactured by Lincoln (Power Wave 455). An extensive set of tests was conducted over a wide range of conditions. All the tests were conducted on carbon steel using E6013 SMAW welding electrodes in an automotive industry. Carbon steel is representative of the typical behaviour of many carbon and alloy steels, so that the rules developed can reasonably be applied to many steels.
The chosen input parameters in this study are welding current (at constant voltage) and electrode diameter, and the response considered is fume generation. Several levels are considered for each input parameter (refer Table 4), giving 21 combinations of the input process parameters for modelling.

Table 4. Input parameters and their levels

Sl. No.   Parameter   Units   Level 1               Level 2                   Level 3
1         Current     A       80, 85, 90, 95, 100   100, 105, 110, 115, 120   130, 135, 140, 145, 150
2         Diameter    mm      2.5                   3.15                      4

Table 5. Output responses

Sl. No.   Parameter           Units
1         Fume generation     mg/m³
Table 6. Experimental design and output of SMAW E6013 electrode

E.No.  Current (A)  Diameter (mm)  Filter weight before (g)  Filter weight after (g)  Difference (g)  Fume conc. (mg/m³)
1      80           2.5            0.0462                    0.0472                   0.0010          1.052
2      80           2.5            0.0463                    0.0475                   0.0012          1.212
3      85           2.5            0.0464                    0.0477                   0.0013          1.354
4      90           2.5            0.0471                    0.0485                   0.0014          1.458
5      95           2.5            0.0470                    0.0485                   0.0015          1.563
6      100          2.5            0.0465                    0.0482                   0.0017          1.771
7      100          2.5            0.0463                    0.0480                   0.0017          1.771
8      100          3.15           0.0463                    0.0487                   0.0024          2.512
9      100          3.15           0.0461                    0.0486                   0.0025          2.610
10     105          3.15           0.0469                    0.0496                   0.0027          2.813
11     110          3.15           0.0459                    0.0487                   0.0028          2.917
12     115          3.15           0.0470                    0.0499                   0.0029          3.021
13     120          3.15           0.0464                    0.0495                   0.0031          3.229
14     120          3.15           0.0465                    0.0496                   0.0031          3.229
15     130          4              0.0459                    0.0493                   0.0034          3.592
16     130          4              0.0466                    0.0500                   0.0034          3.592
17     135          4              0.0465                    0.0501                   0.0036          3.750
18     140          4              0.0463                    0.0501                   0.0038          3.958
19     145          4              0.0457                    0.0497                   0.0040          4.212
20     150          4              0.0463                    0.0506                   0.0043          4.432
21     150          4              0.0462                    0.0505                   0.0043          4.432

3.Modelling of Fume Plume Dispersion

The contribution of each input parameter to the variance in fume concentration can be determined through statistical modelling using ANOVA. This identifies the input parameters on which the fume concentration depends most strongly, and hence the conditions under which it may exceed the Occupational Exposure Limit.
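
As a hedged illustration of this step, the sketch below fits a linear model to a subset of the SMAW data of Table 6 and decomposes the variance with ANOVA using the statsmodels package; the use of a type-2 ANOVA on a linear model is an assumption made for illustration, not the exact procedure of the study.

# Minimal ANOVA sketch on a subset of the SMAW E6013 data (Table 6).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "current":  [80, 90, 100, 110, 120, 130, 140, 150],              # welding current, A
    "diameter": [2.5, 2.5, 3.15, 3.15, 3.15, 4.0, 4.0, 4.0],         # electrode diameter, mm
    "fume":     [1.052, 1.458, 2.512, 2.917, 3.229, 3.592, 3.958, 4.432],  # fume conc., mg/m3
})

model = ols("fume ~ current + diameter", data=data).fit()  # linear model of fume concentration
print(sm.stats.anova_lm(model, typ=2))                     # variance contribution of each factor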
4. Results and Discussion

An extensive set of tests was conducted over a wide range of conditions for the MIG welding wire ER-70S6 using a semi-mechanized welding station and for the SMAW welding electrode E6013, and the results have been tabulated. A comparative study with the results of the manual welding process shows the difference in fume concentration. This can be attributed to the fact that the temperature of the plume is reduced considerably due to its interaction with the environment. Statistical modelling using ANOVA also provides a clear view of the plume concentration within the working environment. These data would be useful for designing an efficient ventilation system.

5. Conclusion

The experimental results lead to conclusions about the variation of the response parameter in terms of the independent parameters within the specified range. Voltage, current, welding speed and the dimension of the welding rod are the most significant factors for all responses.

From this assessment it can be observed that the fume concentration is approaching the TLV. If workers are exposed to such a condition for a long period of time they may develop occupational diseases. However, the study considers the worst-case condition (no mechanical ventilation); in an open atmosphere this model will be ineffective.

6. Acknowledgement

The author is grateful to the Faculty and the HOD, Environmental Health Engineering Department, Sri Ramachandra University, Chennai, and to the Management, Principal and HOD, Department of Mechanical Engineering, Mepco Schlenk Engineering College, Sivakasi, for their constant encouragement and for offering the facilities to carry out this project.

References:
10. J. Norrish, G. Slater, P. Cooper, "Particulate Fume Plume Distribution and Breathing Zone Exposure in Gas Metal Arc Welding", University of Wollongong, New South Wales 2522, Australia.
11. N.T. Jenkins and T.W. Eagar, "Chemical Analysis of Welding Fume Particles", Welding Journal, American Welding Society and the Welding Research Council, June 2005.
12. N.T. Jenkins, P.F. Mendez, T.W. Eagar, "Effect of Arc Welding Electrode Temperature on Vapour and Fume Composition", Massachusetts Institute of Technology, USA.
13. S. Dilip Srinivas, K. Mukund, M. Arun, "Computational Modelling and Simulation of Buoyant Plume Dynamics", Department of Production Engineering, PSG College of Technology, Coimbatore.
14. Tom Plikas, Jennifer Woloshyn and Dale Johnson, "Application of CFD Modelling to the Design of Fume Control Systems in the Steel Industry", Hatch Ltd.
DESIGN AND DEVELOPMENT OF POWER GENERATING SHOCK ABSORBER
M. KALAIMANI*
*
Lecturer, Department of Mechanical Engineering, S.S.M. College of Engineering,
Komarapalayam.
Kalaimani_pdd@yahoo.co.in
Abstract

Shock absorbers are widely used in automobiles to absorb shocks. The basic construction of a shock absorber is a spring and damper combination, which absorbs shocks while the vehicle is running. In the present work, the linear motion of the shock absorber during its working stroke is converted into electrical energy. A linear generator according to the present design generates a voltage proportional to the speed of a movable permanent magnet. The magnet is surrounded by a copper wire coil; as the magnet moves back and forth through the coil, an electric current is generated. One of the advantages of this approach is that the current is produced directly, without the need of a separate rotary generator. An electromagnetic analysis has been performed to analyze the overall generator design.
Keywords: linear generator, electromagnetic, finite element model in QuickField.
1. Introduction

Shock absorbers are used to damp oscillations by absorbing the energy contained in the springs or torsion bars when the wheels of an automobile move up and down. Conventional shock absorbers do not support the vehicle weight. They reduce the dynamic wheel-load variations, prevent the wheels from lifting off the road surface except on extremely rough surfaces, and make much more precise steering and braking possible. The proposed shock absorber turns the kinetic energy of suspension motion into electrical energy. Linear generators have lately been suggested as suitable energy converters in shock absorbers. With a linear generator it is possible to couple the motion of the shock absorber directly to the generator. The generator consists of a stator with copper coils and a linear translator, which carries permanent magnets of alternating polarity mounted on the movable translator rod. The permanent magnets are arranged in tight contact with each other, and the polarity of each permanent magnet is opposite to that of its neighbours. The particular magnet type is chosen for this application because it should have the highest magnetic properties to produce the current. Although this requirement satisfies the generator design, the maximum operating temperature of the permanent magnet should be observed to maintain its physical, mechanical and magnetic properties.
2. Design of Permanent Magnet Linear Synchronous Generator

A linear generator can be excited either by the field winding or by permanent


magnets. In the case of a linear generator with permanent magnets, its operating conditions depend not only on the permanent magnets but also on the entire magnetic circuit. They also depend on how the magnets are installed and on whether the circuit is magnetized before or after the installation. It is fundamental to determine the size of the magnets. First, it is necessary to choose a specific type of permanent magnet, because each kind of magnet has unique characteristics.

3. Permanent Magnet Characteristics

3.1. Demagnetization Curve and Magnetization Parameters

A permanent magnet can be described by its B-H curve, which usually has a wide hysteresis loop (Fig. 4.1). For permanent magnets, the essential part of the B-H curve is the second quadrant, called the demagnetization curve. There are two significant points on this curve: one at H = 0, where the magnetic flux density equals Br (remanent magnetic flux density, or remanence), and another, Hc, at B = 0, where a reverse magnetic field intensity is applied to a magnetized permanent magnet (coercive force, or coercivity).

The saturation magnetic flux density Bsat corresponds to high values of magnetic field intensity, where a further increase in the applied magnetic field produces no significant effect on the magnetic flux density. The maximum magnetic energy per unit volume produced by a PM is the maximum energy density

w_max = (BH)_max / 2
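
For a magnet with an approximately linear demagnetization curve (as for rare-earth magnets), the maximum energy product can be estimated from the remanence and coercivity; the short sketch below uses illustrative Nd-Fe-B figures, not the catalogue data of the magnets used in this design.

# Hedged sketch: maximum energy product of a magnet with a linear demagnetization curve.
# For a straight-line curve from (-Hc, 0) to (0, Br), the product B*H is maximised at
# B = Br/2, giving (BH)max = Br*Hc/4. Br and Hc below are assumed, illustrative values.
Br = 1.2          # remanence, T (assumed)
Hc = 900e3        # coercivity, A/m (assumed)

BH_max = Br * Hc / 4.0        # maximum energy product, J/m3
w_max = BH_max / 2.0          # maximum energy density produced by the PM, J/m3
print(f"(BH)max = {BH_max / 1000:.0f} kJ/m3, w_max = {w_max / 1000:.0f} kJ/m3")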

In electric machines technology, the following PM materials are used:
− Alnico (Al, Ni, Co, Fe);
− Ferrites (ceramics), e.g., barium ferrite BaO·6Fe2O3 and strontium ferrite SrO·6Fe2O3;
− Rare-earth materials, i.e., samarium-cobalt SmCo and neodymium-iron-boron Nd-Fe-B.

Demagnetization curves for different permanent magnet materials


Alnico magnets were used by the PM machines industry from the mid 1940s to about 1970. They have a high remanent magnetic flux density and low temperature coefficients, but a low coercive force and an extremely non-linear demagnetization curve. Ferrites were invented in the 1950s. They have a higher coercive force than Alnico but a lower remanent magnetic flux density. They have a low cost and very high electric resistance, so there are no eddy-current losses in the PM volume. Rare-earth PMs have been developed during the last three decades, with great progress in the available energy density. The first generation of rare-earth PMs is based on the composition SmCo5, with a high remanent flux density, high coercive force, high energy product, linear demagnetization curve and low temperature coefficient. The only disadvantage is the high cost, due to the limited supply of Sm and Co.

To reduce the cost, a second generation of rare-earth magnets was developed with neodymium (Nd) and iron; Nd is much more abundant than Sm. Nd-Fe-B magnets have better properties than SmCo5, but the disadvantage is that their demagnetization curves depend on the temperature and they are also susceptible to corrosion. Coating Nd-Fe-B magnets with metallic (Sn or Ni) or organic (electro-painting) layers is the best method of protection against corrosion.

1.1. Parameters of permanent magnet materials

If the temperature increases, there is some degradation of the properties of the permanent magnets, and they vanish completely at the Curie temperature. Table 1.2 shows these changes. The temperature coefficients of Br and Hc show, in percent, the reversible changes in remanence and coercive force.

1.2. Temperature influence on permanent magnet materials

The material recommended for the model in this project is Nd-Fe-B, since it considerably improves the performance-to-cost ratio. Ferrites are not used because they would increase the size of the shock absorber, and SmCo5 would increase the cost.

3.2. Determination of the Operating Point of the Permanent Magnets

If B is the magnetic flux density, the total magnetic flux φ can be expressed as

φ = ∫A B dA

where the integral is taken over an area A. So, if the magnetic flux density through the transversal section of a core is uniform:

φ = B · A

where: φ - core flux; B - flux density in the core; A - transversal section area of the core.
The relation between the magnetomotive force (mmf) F and the magnetic field intensity H for magnetic circuits is given by

F = H · l

where: H - average magnitude of the field intensity in the core; l - average length of the core.
The relation between the magnetic field intensity H and the magnetic flux density B depends on the material in which the field exists:

B = μ · H

where: μ - magnetic permeability.
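
A minimal numerical sketch of these relations, with assumed core dimensions and an assumed relative permeability (not the actual design values), is given below.

import math

# Hedged sketch of the magnetic-circuit relations phi = B*A, F = H*l and B = mu*H.
mu0 = 4.0e-7 * math.pi      # permeability of free space, H/m
mu_r = 1000.0               # assumed relative permeability of the core
B = 1.2                     # flux density in the core, T (assumed)
A = 4.0e-4                  # transversal section area of the core, m2 (assumed)
l = 0.15                    # average length of the core, m (assumed)

phi = B * A                 # core flux, Wb
H = B / (mu0 * mu_r)        # magnetic field intensity, A/m
mmf = H * l                 # magnetomotive force, ampere-turns
print(f"phi = {phi:.2e} Wb, H = {H:.1f} A/m, mmf = {mmf:.1f} A-turns")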

4. Design of Linear Generator

4.1. Stator and Rotor Cores

Linear generator is excited by permanent magnets attached to the secondary


ferromagnetic core that can be made of solid iron. The iron core is rigidly attached to
the outer cylinder, which is also made of the solid iron. Another part of the generator
(primary part) is attached to the bottom cover (Fig 3.1.). During the oscillation the
magnets are moving with respect to the winding, inducing in it an ac voltage. To
determine the dimensions of the magnetic circuit, only a section of the generator
equal to the length of pole pitch is considered

If the pole pitch τ of the generator is given, the magnet width is taken as a fraction of it:

b_m = K_m · τ, where K_m = 0.65 – 0.75

The tooth pitch:

t_1 = τ / (m · q)

where: m - number of phases; q - number of slots per pole per phase.
To determine the tooth width, it is assumed that the entire magnetic flux crossing the air-gap passes through the teeth. That means:

B_g · A_g = B_t · A_t

where: B_g - magnetic flux density in the air-gap; B_t - magnetic flux density in the teeth; A_g - cross-section area on the magnet surface along a half pole-pitch; A_t - cross-section area of the tooth; D - outer diameter of the secondary.

The slot opening is calculated as the difference between the tooth pitch and the tooth width.

Due to the primary part slots, the magnet flux experiences an increase of the real air-gap; thus a new equivalent air-gap is used.

4.2. Voltage Drop

Referring to the simplified equivalent circuit of the generator shown in Fig. 4.9, the output voltage is the induced EMF less the voltage drop across the phase resistance Rφ:

V_out = E − I · Rφ

4.3. Power Losses

In a generator there are three kinds of losses:

a) Core losses, due to the change of magnetic field; these losses take place in the
stator steel, and they consist of the hysteresis losses and the losses due to the eddy
currents.
b) Copper losses; they are resistive losses in the coil windings
c) Mechanical losses due to friction and ventilation.
The copper (resistive) losses, which are the only losses considered in this application, appear in the conductor with electrical resistance Rφ carrying a current I:

P_Cu = I² · Rφ

The output power is the power delivered to the load, and the generator efficiency is the ratio of the output power to the sum of the output power and the losses.
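
As a rough numerical illustration of these relations, the sketch below models the generator as an EMF proportional to the translator speed feeding a resistive load through the phase resistance; the EMF constant, phase resistance and load resistance are assumed values, not the design data.

# Hedged sketch: copper losses, output power, efficiency and damping force of the
# linear generator for a resistive load. All numeric values are illustrative assumptions.
k_e = 25.0       # EMF constant, V per m/s (assumed)
R_phi = 2.0      # phase (winding) resistance, ohm (assumed)
R_load = 10.0    # external load resistance, ohm (assumed)

for v in (0.1, 0.3, 0.5, 1.0):            # relative speed of the translator, m/s
    emf = k_e * v                         # induced voltage
    current = emf / (R_phi + R_load)      # steady-state current
    p_cu = current ** 2 * R_phi           # copper (resistive) losses
    p_out = current ** 2 * R_load         # power delivered to the load
    efficiency = p_out / (p_out + p_cu)   # core and mechanical losses neglected
    force = emf * current / v             # electromagnetic (damping) force
    print(f"v = {v:.1f} m/s: I = {current:.2f} A, Pout = {p_out:.1f} W, "
          f"eta = {efficiency:.2f}, F = {force:.1f} N")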

5. Steady State Characteristics of Electric Shock Absorber

A computer program (steady.m) was written as an m-file using QuickField. The force-speed characteristics of the electric shock absorber obtained for the indicated parameters are shown in the figure below.

Damping force-speed characteristics

The steady-state program plots the efficiency-speed characteristic for two different values of the source voltage (V_S = 12 V and 24 V). The efficiency-speed characteristics of the electric shock absorber obtained for the indicated parameters are shown in the figures below.
Efficiency-speed characteristics
Relative speed of the generator with the modified circuit

Generator output current with the modified circuit

Electromagnetic (damping) force of the generator with the modified circuit.


Displacement of the secondary with respect to the primary of the
PMLSG with the modified circuit.

6. CONCLUSION

The shock absorber has been designed and analyzed for use in two wheelers. It consists of a permanent magnet linear synchronous generator, a spring, and an electric accumulator. The electric accumulator consists of a controlled rectifier and a battery, and it was not evaluated in the present project. In the design calculations, the dimensions and performance parameters of currently used mechanical shock absorbers were used as the reference. For this purpose, these shock absorbers were described first.

The results obtained from the dynamic simulation of the electric shock
absorber with the modified output electric circuit show that the oscillations attenuate
to zero after disturbance appears. Therefore, the electric shock absorber works
properly under the modified circuit.
7. REFERENCES

[1] Reimpell, J., Stoll, H., and Betzler, J., “The automotive Chassis, Engineering
Principles”, Second Edition, 2001, pp. 347-385

[2] Crouse, W. and Anglin, D., “Automotive: Chassis and Body,” Fifth Edition,
McGraw-Hill Book Company, 1976, pp. 48-54

[3] Mendrela, E. and Drzewoski, R., “Electric Shock Absorber for Electric
Vehicles,” Conference, Proc. of BASSIN’ 2000, Lodz, Poland 2000.

[4] Gieras, J. and Wing, M. “Permanent Magnet Motor Technology, Design and
applications,” Second Edition, Eastern Hemisphere Distribution, 2002. pp. 51-52

[6] Boldea, I. and Nasar, S., “Linear Electric Actuators and Generators,” Cambridge
University Press, 1977, pp. 46

[7] Mendrela, E., Handouts on Leakage Inductance of the generator winding.

[9] Danielson, O., “Design of a Linear Generator for Wave Energy Plant,” Master
Degree Project, Uppsala University School of Engineering, UPTEC F03 003,
January 2003

[10] Vacuumschmelze – Rare – Earth Permanent Magnets. VACODYMVACOMAX


Catalog

[11] “Shock Absorbers,” 2003. Available at:


http://www.monroe.com/tech_support/tec_shockabsorbers.asp

[12] “Handbook of Mechanical Spring Design,” Associated Spring Corporation,


General Offices, Bristol, Connecticut 06012, 1964, pp. 50-51

[13] Walsh, R., “Electromechanical design handbook,” Second Edition, McGraw-Hill,


Inc., 1995, pp. 7.1-7.45

[14] “Design and Engineers Resources,” 2004. Available at:


http://www.engineersedge.com/
REAL-TIME IMAGE SEGMENTATION ON
CELL BASED NETWORK

S.KARTHICK, Lecturer, Department of ECE.


V.M.K.V engineering college

Image segmentation of still images and real-time video signals is an important initial task for higher-level image processing such as object recognition or object tracking. Hardware realization is essential for achieving very high-speed segmentation, on the order of tens of microseconds for a color image and hundreds of nanoseconds for a gray image. A hardware architecture that can segment both color and gray images would therefore be very useful for the image segmentation field.

The aim of this paper is the realization of a digital algorithm for gray-scale/color image segmentation. The implemented algorithm is adaptable to both gray-scale and color image segmentation, so only slight modifications are needed to perform both with the same chip. Since only the preprocessing unit differs between the two architectures, the time difference between segmentation of gray and color images is significantly reduced.
CELLULAR NEURAL NETWORKS

C.ARUN KUMAR MADHUVAPPAN, Lecturer, Department of ECE.


V.M.K.V engineering college
R.RAMANI , Lecturer, Department of ECE.
V.M.K.V engineering college
A neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive.

Similar to neural networks, CNNs are a parallel computing paradigm, with the difference that communication is allowed between neighbouring units only. Applications include image processing, analysis of 3D surfaces, solution of partial differential equations, and modelling of sensory-motor organs.

CNN processors are systems of a finite, fixed number of cells with fixed locations and a fixed topology, which are locally interconnected.

The topology and dynamics of CNN processors closely resemble those of continuous automata (CA). It is conceivable that CNN processors that are large compared to the resolution of the input and the output can be modelled as continuous spatial automata.

Back-propagation algorithms tend to be faster, but genetic algorithms are useful because they provide a mechanism to find a solution in a discontinuous, noisy search space.

CNN processors have been implemented and are currently available as semiconductors, and there are plans to migrate CNN processors to emerging technologies in the future.

QUALITY CONTROL ON ELECTOSURGERY AND ITS EQUIPMENTS USING FMEA


TECHNIQUE FOR HEALTH CARE INDUSTRY

Prof.S.DURAI THILAGAR* P.GANESAN**

*Professor, Department Of Mechanical Engineering. VMKV Engineering College,


Salem-308
** Final Year-ME-Manufacturing Engineering. VMKV Engineering College, Salem-
308
ABSTRACT

Quality plays an inevitable role not only in manufacturing industries but also in healthcare industries (hospitals), retail business, banking and all service-based jobs. In this paper, factors to improve quality in healthcare industries are presented. Interface matrices are created and a ranking is given for each factor, with priority given to life-saving activities. A pie chart is created from the interface matrices. Minimization of human and technical errors in surgical equipments is selected as the highest priority using the interface matrices. As per the Association of periOperative Registered Nurses (AORN), the electrosurgery generator is considered high-risk equipment. In this project, the electrosurgery process and its equipments are taken up for quality control activities in the healthcare industry. Electrosurgery is the application of a high-frequency electric current to biological tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. The electrosurgery generator and its processes are studied in detail and the possibilities of errors are found out. The severity and occurrence of each error, and its impact on patient life, are tabulated. Corrective actions are recommended to minimize the errors.
Keywords: Quality Control Improvement; FMEA; Electro surgery Generator;

1. INTRODUCTION

Quality Control [11] is the ongoing effort to maintain the integrity of a process so as to maintain the reliability of achieving an outcome. As a process performance improvement methodology, QC is viewed today as a disciplined, systematic, measurement-based and data-driven approach to reduce process variation. There are many methods for quality control, covering product improvement, process control and people-based improvement. Methods of quality management and techniques that incorporate and drive quality control improvement include ISO 9004:2000, ISO 15504-4:2005, QFD, Kaizen, Zero Defect Program, Six Sigma (6σ), PDCA, quality circles, Taguchi methods, the Toyota Production System, Lean Manufacturing and Kansei Engineering; Six Sigma combines established methods such as Statistical Process Control, Design of Experiments and FMEA in an overall framework. Quality Improvement (QI) as a powerful business strategy has been around for almost twenty years and has grown exponentially in the healthcare industry during the past five years. In manufacturing, it is quite possible to reduce or even eliminate (in some cases) most of the human variability through automation. In the healthcare industry, the delivery of patient care is largely a human process, and hence the causes of variability are often difficult to identify and quantify. In this project, factors necessary for quality improvement in the healthcare industry are presented. A Pareto chart is used to find the critical factor which affects human life directly. Minimization of human errors and technical errors in surgery is found to be the most critical one. Here, the electrosurgery process is taken up for quality improvement. Electrosurgery is the application of a high-frequency electric current to biological tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. Electrosurgery is performed using an electrosurgical generator (also referred to as a power supply or waveform generator) and a handpiece including one or several electrodes, sometimes referred to as an RF knife. The FMEA technique [12] is used for quality improvement in the electrosurgery process, especially in relation to the electrosurgery generator.

2.1. Potential areas in Healthcare Industry for quality improvement

Potential areas in healthcare industries and FMEA tool is briefly explained in


the following section.

Quality Improvement projects in health care industry are focused on direct


care delivery, technical errors, administrative support and financial administration.

The following factors need to be considered for quality improvement:

• Minimizing the errors in surgical equipments


• Increasing capacity in X-ray room
• Reducing turn around time in preparing medical reports
• Improving patient satisfaction at ER
• Reducing bottle necks in emergency department
• Reducing cycle time in various inpatient and outpatient diagnostic areas.
• Reducing the number of medical errors and hence enhancing patient safety
• Increasing the accuracy of laboratory results
• Increasing the accuracy of billing processes and thereby reducing the number
of billing errors
• Improving bed availability across various departments in hospitals
• Reducing the number of post-operative wound infections and related wound
problems

• Increasing surgical capacity


• Reducing length of stay in ER
• Reducing inventory levels
• Improving patient registration accuracy

2.3. IDENTIFICATION OF CRITICAL FACTOR:


The common factors for quality control were presented in the literature survey. Among those factors, a unique one needs to be selected for this project. Priority is given to life-saving activities. To identify this, interface matrices with rankings have been created. Based on the ranking, a pie chart is created. Minimization of errors in surgical equipments is considered the critical factor for life-saving activity.

Criticality = (Basic need x 2) + (Economic x 1) + (Comfort x 1) + (Life Saving


activity x 3)

Sl.No. | Factors | Basic Needs | Economic | Comfort | Life Saving Activity | Criticality in %
1 | Increasing capacity in X-ray room | 0 | 7 | 7 | 3 | 23
2 | Reducing turn around time in preparing medical reports | 0 | 7 | 7 | 0 | 14
3 | Improving patient satisfaction at ER | 3 | 7 | 7 | 0 | 20
4 | Minimizing the errors in surgical equipments | 7 | 7 | 7 | 7 | 49
5 | Reducing cycle time in various inpatient and outpatient diagnostic areas | 3 | 7 | 7 | 3 | 29
6 | Increasing the accuracy of laboratory results | 7 | 7 | 7 | 3 | 37
7 | Increasing the accuracy of billing processes and thereby reducing the number of billing errors | 3 | 7 | 7 | 0 | 20
8 | Improving bed availability across various departments in hospitals | 3 | 7 | 7 | 0 | 20
9 | Reducing the number of post-operative wound infections and related wound problems | 7 | 0 | 7 | 3 | 30
10 | Increasing surgical capacity | 3 | 7 | 7 | 0 | 20
11 | Reducing length of stay in ER | 3 | 3 | 7 | 0 | 16
12 | Reducing inventory levels | 3 | 7 | 0 | 0 | 13
13 | Improving patient registration accuracy | 7 | 3 | 3 | 0 | 20

Pie chart of criticality for the thirteen factors (factor 4, minimizing the errors in surgical equipments, lies in the most critical zone)
From the pie chart, it is clear that minimizing the errors in surgical equipments is the most important area where quality improvement is required to save patient life.
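
A minimal sketch of this weighted ranking is shown below; it reuses the weights of the criticality formula and a few rows of the interface matrix, purely to illustrate how the critical factor is identified.

# Hedged sketch of the weighted criticality ranking
# (weights: basic need x 2, economic x 1, comfort x 1, life-saving activity x 3).
factors = {
    # factor: (basic needs, economic, comfort, life-saving activity)
    "Increasing capacity in X-ray room":             (0, 7, 7, 3),
    "Minimizing the errors in surgical equipments":  (7, 7, 7, 7),
    "Increasing the accuracy of laboratory results": (7, 7, 7, 3),
    "Reducing inventory levels":                     (3, 7, 0, 0),
}

def criticality(basic, economic, comfort, life_saving):
    return 2 * basic + economic + comfort + 3 * life_saving

for name, scores in sorted(factors.items(), key=lambda kv: criticality(*kv[1]), reverse=True):
    print(f"{criticality(*scores):3d}  {name}")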

Surgical instruments today span a wide range of devices, from the "low tech" end of a simple sharp knife to the "high tech" end of nanosecond pulsed surgical laser systems. With the advent of the high-energy surgical devices now available - such as electrosurgery, cavitational ultrasonic aspirators, harmonic (ultrasonic) knives, cryosurgery, various laser systems and endocoagulators - it is useful to view these devices simply as different means of delivering energy to tissue. Even a simple scalpel may be viewed as delivering mechanical energy to tissue at a concentrated pressure point (the blade edge) to incise tissue. No one particular system is inherently better than the others for all surgical purposes. Each may have advantages in certain situations, and user preference is frequently only a personal bias, influenced by past familiarity and training with a system. As per the Association of periOperative Registered Nurses (AORN), the electrosurgery generator is considered high-risk equipment.

Electro surgery and its equipments are explained in the next section.

3.1. ELECTRO SURGERY:

Electrosurgery [1] is the application of a high-frequency electric current to biological tissue as a means to cut, coagulate, desiccate, or fulgurate tissue. Its benefits include the ability to make precise cuts with limited blood loss. Electrosurgical devices are frequently used during surgical operations, helping to prevent blood loss in hospital operating rooms or in outpatient procedures [8].

Electrosurgery is commonly used in dermatological, gynecological, cardiac, plastic,


ocular, spine, ENT, orthopedic, urological, neuro- and general surgical procedures.

3.1.1 Working Principle

Electricity [2] is the attraction of two oppositely charged particles, arbitrarily referred to as positive and negative. When an electrical connection (such as an electrode on tissue) is made between the two poles, an electrical current will flow between them. This is an exchange of electrons along the pathway. Electricity must have two poles in order to flow. In electrosurgical units, when these two poles are localized in one instrument or probe, it is referred to as a bipolar unit, since both poles are contained within the one instrument. When one of the poles is an instrument and the other a remotely located ground pad (dispersive electrode), it is referred to as a monopolar, or unipolar, instrument, since the instrument is only one of the two poles. Alternating current (AC) takes the concept of positive and negative one step further by quickly reversing the polarity, or order of positive and negative, back and forth. At home, the AC circuit reverses about 60 times per second, or 60 hertz (Hz). This frequency of AC electricity can directly interfere with our own biological electrical frequencies and result in shocks or stopping of the heart. The ability of electricity to create this type of interference with our own bodies - muscle tetany and contraction, interference with normal heart rhythms, etc. - is termed the Faradic effect of electricity.

Electrosurgical units [3,4] utilize AC electricity, but at significantly faster rates of polarity reversal. ESUs utilize frequencies of around 350,000 to 500,000 cycles per second (kilohertz, kHz); some go up to 3 or 4 megahertz (MHz). This extremely high frequency does not interfere with our own biological processes to any significant degree, so Faradic effects do not apply.

3.1.2. Operating Mode:

The fundamental electrical relationship [5,6] describes the three electrical parameters of voltage (V), current (I) and resistance (R):

Ohm's law: Voltage = Current x Resistance


Remember that voltage (V) and current (I) are factors of the power you have selected on the ESU (Watts = V x I). Resistance is not controlled by the operator but is a function of the tissue. Voltage "drives" or pushes the current through tissue against the tendency of tissue to "resist" this flow. As the flow is resisted, heat is generated in the tissue. Resistance varies in different tissues: it may vary from 100,000 ohms for dry callused palmar skin, to 2,000 ohms for fat, to 400 ohms for muscle. High-electrolyte tissues such as blood and muscle offer low resistance and easily transmit the electrical current; skin and fat have higher resistance. More importantly, as electricity is applied to tissue and it begins to desiccate or char, the tissue resistance changes immediately. A 40-watt setting may remain constant, but the voltage and current are in constant flux as a function of this varying tissue resistance and the distance from electrode to tissue. Changing voltages [7] will then cause fluctuating levels of lateral damage from cut to cut, or even during the same cut.
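
A small numerical sketch of this relationship, using the tissue resistances quoted above and an assumed 40-watt setting, is given below; it simply combines Watts = V x I with Ohm's law.

import math

# Hedged sketch: voltage and current implied by a selected ESU power setting and the
# tissue resistance, from Watts = V * I and V = I * R. Values are illustrative only.
power_setting_w = 40.0
tissue_resistance_ohm = {
    "muscle": 400.0,
    "fat": 2000.0,
    "dry callused palmar skin": 100000.0,
}

for tissue, r in tissue_resistance_ohm.items():
    current = math.sqrt(power_setting_w / r)    # I = sqrt(P / R)
    voltage = math.sqrt(power_setting_w * r)    # V = sqrt(P * R)
    print(f"{tissue}: I = {current * 1000:.1f} mA, V = {voltage:.0f} V")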

Coag Mode: Voltage is the parameter enhanced when choosing the "coag" mode on an ESU.

Cut Mode: Current (amperage) is the parameter enhanced when choosing the "cut" mode.

4.1 FMEA Technique - To Improve the Quality

FMEA[12] is a team-based, systematic way of examining a process to anticipate ways


in which failure can occur and then redesigning the process to eliminate the possible
failure, stop the failure before it reaches an individual, or minimize its consequences.

The FMEA Process[13] (5 Steps)

1. Define the FMEA Topic

2. Assemble the Team

3. Graphically Describe the Process

4. Conduct a Hazard Analysis

5. Actions and Outcome Measures

What is Severity (S)?

Seriousness of a potential failure mode:
- Catastrophic (4): hazardous effect - death or permanent loss of function, etc.
- Major (3): major effect - permanent lessening of body function, increased length of stay/care for 3 or more patients, etc.
- Moderate (2): moderate effect - increased length of stay/care for 1 or 2 patients
- Minor (1): no effect - no injury, no increased length of stay/care

What is Probability (P)?

Frequency of potential failures:
- Frequent (4): failures certain
- Occasional (3): high number of failures likely
- Uncommon (2): occasional failure
- Remote (1): failure unlikely

What is Detectability (D)?

The likelihood of detection of the failure mode:
- High (4): will detect failure
- Moderate (3): most likely will detect failure
- Low (2): might detect failure
- Remote (1): will not detect failure

Rating System

Rating | Severity (S) | Probability (P) | Detectability (D)
4 | Catastrophic | Frequent | Remote
3 | Major | Occasional | Low
2 | Moderate | Uncommon | Moderate
1 | Minor | Remote | High
Sl.No. | Failure mode | Effects | S (severity rating) | Cause(s) | O (occurrence rating) | D (detection rating) | RPN (risk priority number) | Recommended actions
1 | Improper power setting level for different tissues | Stray energy injuries, resulting even in death | 4 | Accidental slip over other tissues | 3 | 3 | 36 | Instant response technology instrument from Valley Lab
2 | EMF interference with other O.R. equipment such as video systems | Affects interpretation of data | 2 | Poor electromagnetic interference shielding | 3 | 4 | 24 | 1. Effective process & inspection report; 2. EMI resistant coating
3 | Verifying all anesthetic circuit connections | Causes explosion/fire hazard | 4 | Improper inspection | 1 | 4 | 16 | Require proper inspection report / fire alarm
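
The RPN values in the worksheet follow directly from RPN = S x O x D; a minimal sketch of the calculation and ranking is given below.

# Minimal sketch of the RPN calculation used in the FMEA worksheet (RPN = S x O x D).
failure_modes = [
    # (failure mode, severity S, occurrence O, detection D)
    ("Improper power setting level for different tissues", 4, 3, 3),
    ("EMF interference with other O.R. equipment",          2, 3, 4),
    ("Anesthetic circuit connections not verified",         4, 1, 4),
]

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

for name, s, o, d in sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True):
    print(f"RPN = {rpn(s, o, d):3d}  {name}")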

5. CONCLUSION:

Factors to improve quality in healthcare industries were listed, and interface matrices were created to find the high-risk factor for saving patient life. Minimizing the errors in surgical instruments was taken as the high-risk factor involved in life-saving activity. The electrosurgery generator was taken up for the quality control process. The electrosurgery process and its equipment were studied in detail. The failures in the electrosurgery generator were found, and ratings for severity, occurrence and detection were assigned. Based on these, the RPN (risk priority number) was calculated. Causes and action items for each failure were given. By adopting the action item for each process, failures can be reduced to a negligible level, which results in saving patient and surgeon life. Thus the implementation of quality measures in healthcare industries paves the way for saving invaluable human life, along with economic benefit, comfort, etc.

References

1. Hainer BL, "Fundamentals of electrosurgery", Journal of the American Board of


Family Practice, 4(6):419-26, 1991 Nov-Dec.
2. Electrosurgery for the Skin, Barry L. Hainer M.D., Richard B. Usatine, M.D.,
American Family Physician (Journal of the American Academy of Family
Physicians), 2002 Oct 1;66(7):1259-66.
3. A Simple Guide to the Hyfrecator 2000 Schuco International (London) Ltd.
4. Boughton RS, Spencer SK, "Electrosurgical fundamentals", J Am Acad Dermatol,
1987 Apr;16 (4):862-7.
5. Bouchier G, "The fundamentals of electro-surgery. High frequency current
generators", Cah Prothese, 1980 Jan; 8 (29):95-106. In French.
6. Oringer MJ, "Fundamentals of electrosurgery", J Oral Surg Anesth Hosp Dent
Serv, 1960 Jan; 18:39-49.
7. Reidenbach HD, "Fundamentals of bipolar high-frequency surgery", Endosc Surg
Allied Technol, 1993 Apr;1(2):85-90.

CURRENT TRENDS IN LABORATORY AUTOMATION IN CEMENT PLANTS

R.Meenashi Sundaram * P.Mouli**

* First Year-ME-Manufacturing.V.M.K.V.Engineering College, Salem-308,


meenashi.sundaram@adityabirla.com

** Firstl Year-ME-Manufacturing.V.M.K.V.Engineering College, Salem-308


mouli.pediredla@adityabirla.com
Abstract

At the end of a decade there is a good tradition of taking stock, summarizing the main events of the past 10 years and making predictions for the next decade. To meet the ever-increasing demands for efficiency and high, consistent analysis quality, more and more production laboratories now base their activities on automated procedures for sampling, sample preparation and analysis. There has been a clear increase in cement industry lab automation over the past decade. Important driving factors behind the introduction of automation include fast data capture for quality-control tasks, data management requirements, demand for high and consistent analysis quality, and company policies on projecting a high-tech profile. This paper is an example of such an analysis in the field of laboratory automation in the cement industry.

1. INTRODUCTION

1.1. Cement Industry

The cement industry is experiencing a boom on account of the overall growth of the
Indian economy. The demand for cement, being a derived demand, depends primarily on the
industrial activity, real estate business, construction activity, and investment in the infrastructure
sector. India is experiencing growth on all these fronts and hence the cement market is flourishing
like never before. Indian cement industry is globally competitive because the industry has witnessed
healthy trends such as cost control and continuous technology up gradation. Global rating agency,
Fitch Ratings, has commented that cement demand in India is expected to grow at 10% annually in
the medium term buoyed by housing, infrastructure and corporate capital expenditures.

1.2. Current Scenario

The Indian cement industry is the second largest producer of quality cement, which meets global standards. The industry comprises 130 large cement plants and more than 300 mini cement plants. The industry's capacity at the beginning of the year 2008-09 was 198.30 million tonnes.

Cement production during April to October 2008-09 was 101.04 million tonnes as compared to 95.05 million tonnes during the same period of the year 2007-08. Despatches were 100.24 million tonnes during April to October 2008-09, compared with 94.33 million tonnes during the same period of the year 2007-08. During April-October 2008-09, cement export was 1.46 million tonnes as compared to 2.16 million tonnes during the same period of the year 2007-08.

1.3. Technological Advancements

Modernization and technology up-gradation is a continuous process for any growing


industry and is equally true for the cement industry. At present, the quality of cement and building materials produced in India meets international standards and benchmarks and can compete in international markets. The productivity parameters are now nearing the theoretical bests. Substantial technological improvements have been brought about and today the industry can legitimately be proud of the state-of-the-art technology and processes incorporated in most of its cement plants. This technology upgradation is resulting in increased capacity and reduction in the cost of production of cement.

1.4. Future Outlook

To compete in the market, every industry has to prove itself through its product by achieving

• Lower cost
• Quality
• Zero complaints

To achieve zero complaints from the customer, stringent quality control norms have to be adopted in the process. For that reason, cement industries are now installing Robo labs to ensure 100% quality parameters.

To meet the ever-increasing demands for efficiency and high, consistent analysis quality, more and more production laboratories now base their activities on automated procedures for sampling, sample preparation and analysis. There has been a clear increase in cement industry lab automation over the past decade. As in the steel industry, the automated central laboratory has become the accepted industrial standard. Important driving factors behind the introduction of automation include fast data capture for quality-control tasks, data management requirements, demand for high and consistent analysis quality, and company policies on projecting a high-tech profile. However, the cost of the laboratory operation has of course been the single most important parameter overall. Labour cost savings are rather simple to account for in an investment justification, but this is not the case with most of the other important potential benefits. Till 2006, cement industries were adopting manual sampling and analysis processes for quality control activities.

2. AUTOMATION CONCEPTS

In the automation of sample Preparation and analysis, a distinction can be made


between two main categories of automation concepts:

2.1. Automated Equipment Systems

Laboratories in which the sample preparation units and the analysis equipment are
automated and then linked together by conventional transport belts or the like. The automation is
provided by dedicated highly specialized equipment units.

2.2. Robotics Systems


Laboratories in which automation is achieved by robotics. The equipment units serviced by the robot(s) may be fully automated, semi-automatic or manual. In the first concept, the robot is a specialized automation component, integrated in the detailed handling of the other equipment components to a degree that makes it very difficult to re-program the robot for modified procedures. In the second concept, the operator is automated rather than the equipment. The main automation element provided by the robot is the transport of samples between the different stations in the robot cell. This concept implies that the robot can easily be re-programmed or set up to service new equipment units.

QCX/RoboLab

A typical configuration consists of standard industrial robot placed in the centre of a circular
arrangement of sample preparation and analytical equipment. Samples normally arrive
automatically from the connected automatic sample transport system, but may also be entered via
operator sample conveyors or special input/output magazines. QCX/RoboLab offers a very high
flexibility in terms of the number and types of equipment handled by the robot. Supported, fully
automated preparation & Analysis disciplines relevant to the cement industry include powder or
fused bead preparation for X-ray analysis, particle sizing by laser or by conventional sieving, color
analysis, Carbon/Sulphur/Moisture combustion analysis, physical testing and collection of shift/daily
composites. For the typical cement lab project a throughput capacity of 10-20 samples will apply;
but higher numbers in one robot cell are achievable.

The QCX computer integrates the system components. It identifies incoming samples,
downloads the relevant sample-handling specification and controls all intelligent devices in the
configuration. Sequence control includes priority handling, intelligent handling of equipment failure
situations and much more.

QCX/RoboLab (and QCX/Auto Prep) provides high quality in sample preparation and
analysis. Quality not only meets the performance of ‘the very best lab technician’, but is highly
consistent over time. Thus, there are no fluctuations from shift to shift in analytical levels due to
small differences in the practical procedures undertaken by human operators.
A Robot is made up of two principal parts

 Controller
 Manipulator
We can communicate with the Robot using a teach pendant and Operator panel located on the
controller.

The teach pendant and the operators panel

Figure shows various Axes movement of the Manipulator


Manipulator IRB 2400

Figure shows various parts of the Controller


S4C Control System

3. CONCLUSION

Eliminating manual sampling with the Robo lab brings a number of benefits: consistent quality of the product, total elimination of human error in sampling, analysis completed in time so that timely corrective action can be taken, considerably reduced standard deviation and variation, and reduced customer complaints. Even though the installation cost is quite high, all new cement plants now prefer this type of Robo lab for achieving consistency in quality.

4. REFERENCE

1. An innovative construction process by Contour Crafting Dooil Hwang, Behrokh


Khoshnevis, University of Southern California, USA

2. An IT Infrastructure and Safe Collaboration in Modern Construction Site M.


Abderrahim, R. Diez, C. Balaguer, J.M Navarro-Suner, S. Boudjabeur
