
Electrical conductivity in materials - Application

Electrical conductivity in materials is important in a number of contexts:

1. Dissipation of static electricity in order to avoid ElectroStatic Discharge (ESD), which is the
most common use of electrically conductive plastics and composites. ESD may cause damage
to sensitive electronics or ignition of flammable products. Two different sets of standards are
associated with these two subareas.
2. Low electrical conductivity required for comfort or by the production process, e.g. handling of plastic film
or powders.
3. Electrical conductivity of metals is of interest in transport of power and energy. We can also
measure conductivity of metals and alloys using the four-point method and a high current.
Conductivity Testing – Electrical Properties Testing

Electrical Conductivity of Solids

For conductivity testing, the standard test method for determining the resistivity of electrical
conductor materials is ASTM B 193-87. Conductivity is calculated from the measured
resistance and dimensions of the specimen. The accuracy and convenience with which
resistance can be measured depends on the actual resistance of the specimen. A long, thin
specimen may be required if the specimen is a very good electrical conductor. The electrical
conductivity of aluminum alloys varies with composition and microstructural state. The
conductivity of aluminum alloys can be correlated with the extent of solutes retained in solid
solution. In metal matrix composites, conductivity is a function of the microstructural state of
the matrix alloy and is inversely proportional to the volume of fibers.

Electrical resistance can be measured directly using an ohmmeter when the resistance of the
specimen is more than one ohm and the contact resistance to the specimen is negligible. A bridge
circuit or potentiometer must be used when the electrical resistance cannot be measured
directly.
Conductance is the reciprocal of the resistance. Conductivity is likewise the reciprocal of the
electrical resistivity. Our current set-up allows resistance measurements between 30 mΩ and
30 MΩ.

Four-Point Probe Method for Conductive Fibers

The electrical resistivity of conductive fibers and small diameter wires can be determined by a
four-point probe method. Four copper wires are stretched across the opening of a Teflon (or
other non-conductive plastic) block, glued in place, parallel, and with precisely known
separation (gage length) between the two inner wires. All four wires are then connected to
individual copper terminal blocks. The two outer leads are connected to a precision current
source and the two inner leads are used to measure the voltage drop. The sample fiber or wire is
laid across the lead wires and may be pasted in place using conductive silver paint. Ohm's law
(V = IR) allows the resistance (R) to be determined. Coupled with the cross-sectional area (A)
measurement of the sample and the known gage length (L), the resistivity is ρ = RA/L.
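To make the calculation above concrete, here is a minimal Python sketch of the four-point probe arithmetic: Ohm's law gives R from the measured voltage and current, and ρ = RA/L follows from the gage length and cross-section. The function name and numerical values are illustrative assumptions, not data from the text.

```python
# Hypothetical illustration of the four-point probe resistivity calculation
# described above (rho = R * A / L). The values are made-up example numbers.
import math

def four_point_resistivity(voltage_v, current_a, gage_length_m, diameter_m):
    """Resistivity of a round fiber/wire from a four-point probe measurement."""
    resistance = voltage_v / current_a            # Ohm's law: R = V / I
    area = math.pi * (diameter_m / 2.0) ** 2      # cross-sectional area A
    return resistance * area / gage_length_m      # rho = R * A / L (ohm.m)

# Example: 5.3 mV drop at 100 mA across a 25 mm gage length on a 0.10 mm wire
rho = four_point_resistivity(5.3e-3, 0.100, 0.025, 0.10e-3)
print(f"resistivity = {rho:.3e} ohm.m, conductivity = {1/rho:.3e} S/m")
# ~1.7e-8 ohm.m, close to the handbook value for copper
```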

Four-Point Probe Method for Conductive Solid Samples

The resistivities of bars, rods, or other constant cross-section samples of larger dimension can
be determined by this method, with one modification. To ensure that a constant potential is
applied across the entire cross-section of the sample, copper plates are affixed to the sample at
its ends with silver paste. The test current is applied through these plates, and the voltage drop is
again measured using fine wire leads or knife-edge probes of precise separation.

Developing Stealth Technology - Lessons from the American XST/F117 programme

Forty years ago the Americans developed a very serviceable stealth aircraft, the world’s first,
in a remarkably short time and at very modest cost. As India discusses its own AMCA
programme, the honesty, simplicity and common sense that guided the American programme
are worthy of review.
The Americans, geographically isolated, have always had a unique requirement of "air
superiority over hostile airspace". The air war over North Vietnam showed that this
requirement could no longer be met by the traditional US combination of "superior" aircraft
and ECM. The North Vietnam war was a static defensive action. The Ramadan/Yom Kippur War
showed that the same carefully co-ordinated defence system, using low technology mobile
AAA, tracked SAMs and aircraft, could also be used for offensive operations. The American
requirement of air superiority over hostile airspace was clearly under threat.
The Americans, despite considerable scepticism, identified stealth as a possible solution to
the challenge. Stealth meant reduction of radar, aural, IR and visual signatures. Since the
threat came mainly from radar and radar guided weapons, the reduction of radar cross section
(RCS) was given primacy. The details of the American programme to develop stealth
technology are of interest because an urgently needed technology was developed to meet an
identified requirement at low cost and to a tight time scale – a management skill that will be of
interest to Indian aerospace if the AMCA programme is pursued.
The Stealth Development Programme: Step 1. Establishing the basics
The bare bones of the American Stealth development programme are as given below. The
focus on short steps, strict timelines, low costs and the verification and securing of each
technology step before taking the next steps is inescapable.
Within a few months of the Yom Kippur/Ramadan War, as soon as the Israeli loss patterns
had been analysed, Ken Parko of DARPA requested submissions from the top five American
fighter manufacturing companies. DARPA focused on just two parameters:
What were the signature thresholds for an aircraft to be essentially undetectable at an
operationally useful radar range?
Did the company have the capability to design and produce an aircraft with the necessary
low signatures?
The approach at this stage was exploratory. Was stealth feasible?
The eventual winner, Lockheed, was not even contacted because Lockheed had not produced
a fighter since the Starfighter of the ‘60s. The emphasis on experience because of the urgency
of the programme is to be noted.
Of the five companies contacted, two declined and one proposed continued emphasis on
sophisticated platforms and advanced ECM. Two companies, Northrop and McDonnell
Douglas, showed a grasp of the problem. In the closing months of 1974, i.e. twelve months
after Yom Kippur, the two companies were funded $100,000 each to conduct further studies.
For the convenience of Indian readers the $ values have been converted to rupees at the then
exchange rates, and the local purchasing power parity (PPP) factor of about 3.5 is also
given because the cost of an Indian engineering man hour is much less than in the US. Thus we
have $100,000 / Rs 12 lacs / Rs 3.5 lacs. The small amounts released, even by 1974 prices, are
noteworthy.
At this stage Ben Rich, President of the Lockheed "Skunk Works" (secret prototype
workshop), noted that Lockheed's earlier SR 71 Blackbird and D 21 drones had shown low
RCS capabilities, and he approached the CIA – which had financed these programmes – for
permission to discuss with DARPA the practical experience that Lockheed had already
gained on RCS. On the strength of these studies Lockheed was permitted to join the
programme and submit a proposal, but without any funding.
In early 1975 Lockheed formed a three man project team comprising the following:
Dick Scherrer – Project Leader
Denys Overholser – Radar Expert
Bill Schroeder – Retired Mathematician
The sub-team of Overholser and Schroeder concluded that a faceted shape based on simple
geometric optics would give the required results at the lowest risk. They collaborated to
set up a mathematical model and asked for six months. This was brought down to three
months, and in fact the team developed RCS prediction software of reasonable accuracy,
ECHO 1, in five weeks.
The simulation software was immediately validated by experiment. The Lockheed Company
funded $25,000 (Rs 3 lacs / Rs 85,000) for two metal-foil-covered 1/3 scale wooden models –
one for RCS anechoic chamber testing and the other as a wind tunnel model. The RCS test model
was checked at the Lockheed radar anechoic chamber in 1975. The anechoic chamber
was fairly simple. It was found that there was a reasonable but not perfect match between the
spikes as predicted by the ECHO 1 software and as seen in the anechoic chamber tests. The
main problem was the diffraction due to the edges. At this stage the ECHO 1 software was
modified to take benefit of the work done a decade earlier by a Russian physicist, Pyotr
Ufimtsev, Chief Scientist at the Moscow Institute of Radio Engineering, whose 1962 work
"Method of Edge Waves in the Physical Theory of Diffraction" permitted the prediction of RCS
using curved surfaces, and therefore a more advanced airframe could also be considered.
Indeed there was a powerful group, among them the formidable legend Kelly Johnson,
who would have liked to see the Ufimtsev equations applied for a more efficient airframe,
but this lobby was firmly overruled in the interest of programme certainty. There was so
much skepticism over the faceted design that the programme was nicknamed, in typical
American manner, the "Hopeless Diamond" – the "Hopeless" being for the predicted/presumed
flying qualities and the "Diamond" referring to the faceted shape.
The model was then sent to the Mojave Desert, where the flat featureless terrain was like a
giant natural anechoic chamber. Amongst the many discoveries that come from an
experiment-driven approach, the engineers found that designing a non-reflecting pole to mount
the model for radar reflection studies was almost as difficult as designing the aircraft shape!
The Mojave tests further boosted confidence in the modified ECHO 1 software.
Based on its own experiments Lockheed submitted its proposal to DARPA for
an experimental survivable test vehicle (XST) in the summer of 1975, i.e. about 18 months
from DARPA's first enquiries.
DARPA now had three proposals from which to select for funding. The two from Lockheed
and Northrop were quite similar. The McDonnell Douglas team were the first to establish
the minimum level of stealth required, but according to reports they were not able to make a
suitable submission – possibly because they were up to their eyebrows in the F 15 programme!
The Lockheed Northrop (LN) XST technology menu
The LN XST focused on maximizing total stealth, i.e. radar, IR, aural and visual.
Everything else was compromised to achieve this end. The logic was that stealth was a
substitute for conventional good performance, which in any case was, and still is, unachievable
with total stealth.
Given below is the menu of stealth features in the programme.
High wing leading edge sweep. The XST had an LE sweep of 72° and the F 117 had an LE
sweep of 67°. By comparison, the highly swept MiG 19 and Su 7 wings had 57° LE sweep.
Modelled on geometrical optics, the extraordinarily high LE sweep reduced radar returns.
Lockheed/Northrop's previous experience in lifting body dynamics gave the teams
confidence. High LE sweep is aerodynamically inefficient and current LE sweep is around
40°.
Sharp-edged LE. Chosen for low frontal area, this feature gives a "sharp" stall. Current
practice is not known to the author.
Faceted aerofoil. Pictures of the F117 show an aerofoil that appears to be made up of three
flattish segments each on the upper and lower surfaces. This would give
poor lift and uncertain handling. Current designs do not appear to use this feature.
Hopeless Diamond faceting. The American team chose this "inferior" technology, despite
knowing its problems of uncertain handling and despite knowing of Ufimtsev's work, in the
interest of timely completion of the project and because it "worked". Current designs,
particularly the B2, use increasing amounts of Ufimtsev's theory.
Inlet meshing. This gave excellent stealth for the inlet, which is a stealth "hot spot". However it is
poor in terms of propulsive efficiency and is prone to blockage in icing conditions. Current
designs use DSI (Diverterless Supersonic Intakes) or prominent "S" intakes.
Fishtail exhaust. Excellent in terms of IR and aural stealth, it is bad propulsion-wise, with a
rumored twenty percent loss in efficiency. This was last used on the F 22 Raptor, and current
designs go for as high a BPR turbofan as possible along with exhaust shielding by the
empennage.
The resulting aircraft, the XST/F117, had excellent stealth characteristics, but the current
thinking is that, like variable geometry, the solution is as bad as the problem! Full stealth is
"mission profile sensitive", i.e. it is good for long range subsonic aircraft but not possible for
fighters.
Step 2. Proving the basics: The XST
DARPA studied the submissions of Lockheed and Northrop and, based on their grasp of the
problems as shown by their proposals, they were asked to collaborate to produce the following
quanta of advance:
1. A large scale ground test model, i.e. a structural test model.
2. The construction of two flight models

The funding for the above was released on 26 April 1976. The amount was $10.4 M / Rs 12.5
crores / Rs 3.56 crores. The aircraft produced would be used to do the following:
Validate in flight the four stealth features designed – radar, IR, aural and visual.
Demonstrate acceptable flying qualities.
Demonstrate that accurate computer modeling of stealth had been achieved.
It will be noted that the task requirement was focused on just stealth, handling and confirming
the maturity of the RCS prediction and stealth engineering capabilities. Confirmation of the
predictions at the earliest opportunity was the persistent aim. The XST was not required to
have ANY operational capabilities. The XST prototypes were practically hand built – in metal
(sic) – stealth being achieved by shape, faceting and RAM. There was no attempt at
"optimization", which would have been a waste at this stage of the development. The aircraft was
"cobbled" together using many standard aggregates off the shelf. For example, the GE J 85-4
engines came from the T 2 Buckeye trainer, the landing gear was salvaged from a written off
Northrop F 5, the flight controls came from Lear Siegler and the cockpit instrumentation
came from an F 16. The time saved by such an approach was reflected in the fact that the
prototype flew on 1st December 1977, i.e. within 18 months of the funding approval which had
come on 26 April 1976. The "cut and try" approach can be seen from the fact that the aircraft
had to be grounded to fit bigger fins, which had been found necessary. This prototype was lost
on 17 January 1978 on its 36th flight. The second prototype was flown on 20 July 1978 and
lost on 20 July 1979 on its 52nd sortie due to a systems related problem. The prototypes'
handling and flying qualities were so bad that it was officially reported that "the
only (bad) thing that the XST does not do is to tip on its tail when parked"! Despite such
adverse comments the programme had generated sufficient confidence and data to go in
for the actual combat version, the F 117, which was a substantially different machine from the
XST technology demonstrator.
Step 3. Converting to a weapon: XST to F 117
Once the project had demonstrated that the limited objectives set out above had been clearly
achieved, the Americans moved on to the next stage. On 1st November 1978 they funded $340
M (Rs 408 crores / Rs 117 crores) for 5 FSD airframes, support equipment etc., with a target
of 21 months to first flight, i.e. July 1980. At this time the design specified that
maximum stealth was to be in the "head on ± 25 degrees" segment, i.e. during the
penetration phase. There was no attempt even at this stage to produce "all aspect stealth". The
IOC was to be March 1982. The F 117 used the "full stealth" treatment already proved on the
XST. There is now increasing evidence that these features were overkill, and later designs
have moved away from "full stealth" in the interest of restoring aerodynamic efficiency.
The XST/F 117 programme management policy.
The management policies followed for the programme laid down fourteen rules covering
programme management, organization, contractor–customer relations, documentation,
customer reporting, specifications, engineering drawings, funding, cost control, sub-contractor
inspection, testing, security and management compensation. The following expansion of some
of the rules shows the results-oriented approach.
1. It is essential that the programme manager have authority in all aspects of managing the
programme.
2. The customer programme manager must have similar authority to the prime contractor's
managers. Incidentally, the customer was an equal financial stakeholder with one third of the
stake.
3. The number of people involved must be reduced to a vicious minimum, i.e. between 10 and
25% of "normal". In development programmes mediocre numbers are no substitute for talent.
You either have the "right stuff" or you only have a high wage bill.
4. Very simple drawing and drawing release systems with great flexibility to make changes.

The People
The profile of Dick Scherrer, the team leader, who was 54 when he took charge of the project,
is typical of the people involved. He had a BS in Engineering, and from 1942 to 1959 he was
at the NASA Ames wind tunnels and had much practical experience in tunnel testing. From
1955 to 1959 he was also working on the side on Disneyland "rides" such as "Dumbo the
Flying Elephant". In 1959 he moved to Lockheed California, making project studies for about
5 or 6 important programmes, including winning Lockheed submissions such as the P3 Orion,
the S2 Viking and the Tristar. Though by our rules he would be "low" on formal
qualifications, the emphasis on hands-on experience is to be noted. The degree of hands-on
experience can be gauged by the fact that when the Lockheed shop floor workers went on
strike during the assembly of the prototype, the engineers were confident enough to do the
assembly themselves.

Analysing the management of the programme


René Descartes (1596–1650) observed: "If you begin with certainties you will end up in
doubts, but if you are content to begin with doubts you will end up in certainties." The
Americans were unconsciously following Descartes.
Their starting point was not an aim or an identified technology but a search for whatever
technology – at that point unidentified – would enable the USAF to survive and dominate
hostile airspace. Having identified stealth as a possible solution, they then tried to establish
the minimum level of stealth required to survive. The technological difficulties were respected,
as only engineers and practical men can respect them, and no impossible goals were set.
Recognizing that "stealth" would have an adverse effect on performance, they focused on getting
to know the stealth technology. The XST was a pure technology demonstrator with no
combat capability, and even the final F 117 was decidedly subsonic and of limited war load
capability.
The Americans, at the start, focused on just two common sense questions:
a) Is it possible to reliably predict RCS /stealth characteristics?
b) If so, would the resulting aircraft be flyable?

In spite of their enviable depth of experience, or perhaps precisely because of it, the
Americans lost no time in backing each theoretical projection with hard data, no matter how
"crude" the data was at that point of time. The Americans, again because of their depth of
experience, avoided the mistake of developing the combat F 117 in one go.
Table 1 shows how different the XST was from the F 117. Translated into India, it would
mean that the LCA TD 1 should have been not KH 001, the LCA prototype, but an HF 24 or
even an HJT 16 or MiG 21 carrying the four technologies – composites, FBW, BVR and
the glass cockpit – needed in the LCA programme. The resulting collapse of "to flight" times
and the refinement that hard data would have brought to the LCA programme can be imagined. In
the case of the AMCA it would mean a subsonic aircraft using aggregates from concurrent
programmes but incorporating all that we have learnt so far on stealth airframes, so that all our
detail design is based on proven, validated technology and not on hypothesis, before we
proceed to the infinitely more difficult job of designing a supersonic stealth aircraft. It is the
problems that we cannot know theoretically that need to be unearthed at the earliest. No
amount of computer studies can solve problems that one cannot foresee and that can only be seen
in flight.
The very short lives of the two XST prototypes – 36 and 52 sorties respectively – show the
emphasis that has to be placed on generating data to validate the theoretical projections. The
prototypes were there to generate data and they were hard driven. It may be mentioned that
the equally skilled Russians had ten crashes in their Su 24 programme – four apparently due to
airframe related problems and six due to engine related problems. Crashes are an occupational
hazard in developmental flying and there is no merit in a crash free programme. The risk-taking,
informal approach can be seen from the fact that the F 117 prototype was test flown after
just 4 taxi tests.

The Americans achieved remarkable fiscal and time control by slicing up the development
into salami slices. At no point was the development target beyond the time horizon. Since
DARPA officials stay on the job for about two years before moving out, most of the horizons
were in months, and we in India can do so too.
Whilst the differences between the American way and our way of doing things are obvious, the
Americans were not doing anything that is impossible to do in India – save for the way we think.
The HJT 36 and the HTT 40 programmes underline that we can put up a prototype in 36
months, and these figures can improve. The only correction needed is to slice up a project, no
matter how complex, into one, two, or at most three year slices. Any project with a
target ten years away will, to use a current phrase, go BVR. Neither the "funder" nor the
"fundee" will be around to answer or pick up the pieces when the time to call to account
comes ten years hence! This is the fundamental weakness in our management.
Lessons for our AMCA programme
Given the above our AMCA programme should be reviewed thus:
1. Stealth technology is important enough for India to work in this field. The caution is that
advances in RAM may radically change the way stealth aircraft are designed. Thus timeliness
of completion is of even more importance than usual.
2. At the proposal stage it is essential to have more than one submission, even if it means
releasing 1-2 crores each to various selected (private sector) vendors. Multiple competitors
generate ideas which can all be included in the final winner. Multiple entries will also avoid
the "only son" performance syndrome which solitary organizations have shown.
3. Developing a stealth AMCA "in one go" over ten years, as is being proposed, is a bleak
prospect.
4. The first step should be a stealth technology demonstrator, an AMCA-ST: a subsonic stealth
aircraft which will feature everything that we think we have learnt about stealth aircraft. Like
the XST, this will maximize the use of aggregates – engines, systems, undercarriage etc. – from
existing programmes, so that the only major design load is the stealth technology airframe.
5. The target time to flight should be thirty months and about three hundred crores. If the team
cannot meet this target, it will also not be able to meet a ten year, ten thousand crore plan.
6. The AMCA XST should complete its flight tests – focusing mainly on stealth signatures
and handling – in about a year.
7. If the programme turns in encouraging results it should lead to a small subsonic nocturnal
intruder – not a supersonic AMCA – as a first aircraft. A detection range of one tenth of the
Canberra's could be a possible target.

Small steps, tight control, continuous Government Interest (which seems now to be
happening) can work wonders. Funding big plans and distant promises will be folly repeated.

Table 1

Parameter       XST                F117
Wing span       6.58               13.21
Length          14.41              20.09
Wing area       35.86              105.9
Empty weight    not known          13,154
MTOW            5,670              23,181
Engines         GE J85-4A,         GE F404-F1D2,
                2 x 13.15 kN       2 x 48.51 kN
Vmax            not known          M 0.9 approx.

The difference between the technology demonstrator and the final product is obvious.
All dimensions in metres, square metres and kilograms.

Density:

Pounds of Selected High-volume Engineering Thermoplastics Sold in the United States, 1981 and 1991

Thermoplastic                          Pounds (millions)      Percentage increase
                                       1981        1991       since 1981
Thermoplastic polyesters               1,230       2,550      107
Acrylonitrile butadiene styrene        968         1,130      17
Nylon                                  286         556        94
Polycarbonate                          242         601        148
Poly(phenylene oxide)-based alloys     132         195        48
Polyacetal                             88          140        59

FIGURE 3.1 U.S. production of thermoplastics by type, 1990. SOURCE: Reprinted with permission from Chemical & Engineering News (1991), p. 54. Copyright © 1991 by the American Chemical Society.

Figure: Schematic of the structure of high-density polyethylene, low-density polyethylene, and linear low-density polyethylene.
Thermal Expansion:

Linear coefficient of thermal expansion for a few common materials

Material                 α (m/m/K)           α (mm/m/K)
Aluminum                 23.8 × 10⁻⁶         0.0238
Concrete                 12.0 × 10⁻⁶         0.011
Copper                   17.6 × 10⁻⁶         0.0176
Brass                    18.5 × 10⁻⁶         0.0185
Steel                    12.0 × 10⁻⁶         0.0115
Timber                   40.0 × 10⁻⁶         0.04
Quartz glass             0.5 × 10⁻⁶          0.0005
Polymeric materials      40-200 × 10⁻⁶       0.040-0.200
Acrylic                  75.0 × 10⁻⁶         0.075
The property of thermal expansion is mainly used in manufacturing techniques such as
casting and setting bushes etc.

Thermal expansion (and contraction) must be taken into account when designing products
with close tolerance fits as these tolerances will change as temperature changes if the
materials used in the design have different coefficients of thermal expansion. It should also
be understood that thermal expansion can cause significant stress in a component if the
design does not allow for expansion and contraction of components. The phenomenon of
thermal expansion can be challenging when designing bridges, buildings, aircraft and
spacecraft, but it can be put to beneficial uses. For example, thermostats and other heat-
sensitive sensors make use of the property of linear expansion.
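As a concrete illustration of the design point above, the short Python sketch below (with assumed example dimensions and the coefficients taken from the table) estimates how a clearance fit changes when a steel shaft sits in an aluminum housing and both are heated by the same amount.

```python
# Illustrative sketch (not from the text): how a clearance fit changes with
# temperature when shaft and housing have different expansion coefficients.
ALPHA_STEEL = 12.0e-6      # 1/K, from the table above
ALPHA_ALUMINUM = 23.8e-6   # 1/K, from the table above

def thermal_expansion(length_mm, alpha_per_k, delta_t_k):
    """Change in length: dL = alpha * L * dT."""
    return alpha_per_k * length_mm * delta_t_k

# Example: 50 mm steel shaft in an aluminum housing, heated by 60 K.
shaft_growth = thermal_expansion(50.0, ALPHA_STEEL, 60.0)
bore_growth = thermal_expansion(50.0, ALPHA_ALUMINUM, 60.0)
print(f"shaft grows {shaft_growth:.4f} mm, bore grows {bore_growth:.4f} mm")
print(f"clearance change: {bore_growth - shaft_growth:+.4f} mm")
```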


Poisson’s Ratio
Poisson's ratio is the ratio of lateral strain to longitudinal strain.
Elastic Moduli – Young’s Modulus

Many experiments show that for a given material, the magnitude of strain produced is the same
regardless of whether the stress is tensile or compressive. Young's modulus (Y) is the ratio of the
tensile/compressive stress (σ) to the longitudinal strain (ε).

Y = σ/ε … (1)
We already know that the magnitude of stress = F/A and the longitudinal strain = ΔL/L. Substituting
these values, we get
Y = (F/A)/(ΔL/L)
∴ Y = (F×L)/(A×ΔL) … (2)
Now, strain is a dimensionless quantity. Hence, the unit of Young's modulus is N/m² or pascal
(Pa), the same as that of stress. Let's look at Young's moduli and yield strengths of some
materials now:

Material             Young's Modulus       Elastic Limit       Tensile Strength
                     Y (10⁹ N/m²)          (10⁷ N/m²)          (10⁷ N/m²)
Aluminum             70                    18                  20
Copper               120                   20                  40
Wrought iron         190                   17                  33
Steel                200                   30                  50
Bone (tensile)       16                    –                   12
Bone (compressive)   9                     –                   12

From the table, you can observe that Young’s moduli for metals are large. This means that
metals require a large force to produce a small change in length. Hence, the force required to
increase the length of a thin wire of steel is much larger than that required for aluminum or
copper. Therefore, steel is more elastic than the other metals in the table.

Determination of Young’s Modulus of the Material of a Wire

The figure below shows an experiment to determine Young’s modulus of a material of wire
under tension.

As can be seen in the diagram above, the setup consists of two long and straight wires having the
same length and equal radius. These wires are suspended side-by-side from a fixed rigid support.
The reference wire (wire A) has a millimeter main scale (M) and a pan to place weight.

The experimental wire (wire B) also has a pan in which we can place weights. Further, a vernier
scale is attached to a pointer at the bottom of wire B and the scale M is fixed to reference wire
A. Now, we place a small weight in both the pans to keep the wires straight and note the vernier
scale reading.

Next, the wire B is slowly loaded with more weights, bringing it under tensile stress and the
vernier reading is noted. The difference between the two readings gives the elongation produced
in the wire. The reference wire A is used to compensate for any change in length due to a change
in the temperature of the room.
Let r and L be the radius and the initial length of wire B, respectively. Therefore, the area of the
cross-section of wire B is A = πr². Now, let M be the mass that produces an elongation of ΔL in
wire B. Therefore, the applied force is F = Mg, where 'g' is the acceleration due to gravity. Hence,
using equations (1) and (2), Young's modulus of the material of wire B is:

Y = σ/ε = (Mg/πr²) × (L/ΔL)
⇒ Y = (Mg×L)/(πr²×ΔL) … (3)
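The following Python sketch simply evaluates equation (3) for assumed example readings; the numbers are illustrative, not measurements from the text.

```python
# Illustrative sketch of equation (3) above, Y = (M*g*L)/(pi*r^2*dL).
# The numbers are made-up example readings, not data from the text.
import math

G_ACCEL = 9.81  # m/s^2, acceleration due to gravity

def youngs_modulus(mass_kg, radius_m, length_m, elongation_m):
    """Young's modulus of a wire from a hanging-mass experiment."""
    force = mass_kg * G_ACCEL                 # F = Mg
    area = math.pi * radius_m ** 2            # A = pi * r^2
    stress = force / area                     # sigma = F/A
    strain = elongation_m / length_m          # epsilon = dL/L
    return stress / strain                    # Y = sigma / epsilon

# Example: 1 kg load on a 2 m wire of 0.25 mm radius, stretching 0.5 mm
Y = youngs_modulus(1.0, 0.25e-3, 2.0, 0.5e-3)
print(f"Y = {Y:.3e} N/m^2")   # ~2e11 N/m^2, consistent with steel in the table
```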
Elastic Moduli – Shear Modulus

Shear modulus (G) is the ratio of shearing stress to the corresponding shearing strain. Another
name for the shear modulus is the modulus of rigidity.

∴ G = shearing stress (σs) / shearing strain
⇒ G = (F/A)/(Δx/L)
= (F×L)/(A×Δx) … (4)
We also know that shearing strain = θ

∴ G = (F/A)/θ
= F/(A×θ) … (5)
Further, the shearing stress σs can also be expressed as

σs = G × θ … (6)

Also, the SI unit of shear modulus is N/m2 or Pa. The shear moduli of a few common materials
are given in the table below.

Material      Shear Modulus G (10⁹ N/m²)
Aluminum      25
Brass         36
Copper        42
Glass         23
Iron          70
Lead          5.6
Nickel        77
Steel         84
Tungsten      150
Wood          10

From the table, you can observe that the shear modulus is less than Young's modulus for the
same materials. Usually, G ≈ Y/3.
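As a quick illustration of equation (4), the Python sketch below computes G for an assumed block sheared by a tangential force; the input numbers are made up for the example and chosen so that the result matches the aluminum entry in the table.

```python
# Illustrative sketch of equation (4), G = (F*L)/(A*dx), with made-up numbers.
def shear_modulus(force_n, face_area_m2, height_m, displacement_m):
    """Shear modulus from tangential force F on face area A, with the top face
    displaced by dx over height L."""
    shear_stress = force_n / face_area_m2       # sigma_s = F/A
    shear_strain = displacement_m / height_m    # theta = dx/L
    return shear_stress / shear_strain          # G = sigma_s / theta

# Example: 5 kN tangential force on a 0.01 m^2 face of a 50 mm tall aluminum
# block, displacing the top face by 0.001 mm
G = shear_modulus(5e3, 0.01, 0.05, 1e-6)
print(f"G = {G:.3e} N/m^2")   # 2.5e10, i.e. 25 x 10^9 N/m^2, matching aluminum
```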
Elastic Moduli – Bulk Modulus

We have already studied that when we submerge a body in a fluid, it undergoes a hydraulic
stress which decreases the volume of the body, leading to a volume strain. Bulk modulus (B) is
the ratio of hydraulic stress to the corresponding hydraulic strain.

B = -p/(ΔV/V) … (7)
The negative sign means that as the pressure increases, the volume decreases. Hence, for any
system in equilibrium, B is always positive. The SI unit of the bulk modulus is N/m2 or Pa. The
bulk moduli of a few common materials are given in the table below.

Material            Bulk Modulus B (10⁹ N/m²)
Aluminum            72
Brass               61
Copper              140
Glass               37
Iron                100
Nickel              260
Steel               160
Liquids
Water               2.2
Ethanol             0.9
Carbon disulfide    1.56
Glycerine           4.76
Mercury             25
Gases
Air (at STP)        1.0 × 10⁻⁴

Compressibility (k) is the reciprocal of the bulk modulus. It is the fractional change in volume
per unit increase in pressure.

∴ k = 1/B = – (1/Δp) × (ΔV/V) … (8)


From the table, you can observe that the bulk modulus for solids is much larger than that for
liquids and gases. Hence, solids are the least compressible while gases are the most
compressible. This is because, in solids, there is a tight coupling between the neighboring atoms.
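A short Python sketch of equations (7) and (8), using bulk moduli from the table above and an assumed pressure increase, makes the comparison concrete: the fractional volume change of water is orders of magnitude larger than that of steel.

```python
# Illustrative sketch of equations (7) and (8): fractional volume change of
# water vs. steel under the same pressure increase. Example numbers only.
B_WATER = 2.2e9    # N/m^2, from the table above
B_STEEL = 160e9    # N/m^2, from the table above

def fractional_volume_change(delta_p_pa, bulk_modulus_pa):
    """dV/V = -dp / B (negative: volume shrinks as pressure rises)."""
    return -delta_p_pa / bulk_modulus_pa

dp = 10e6  # 10 MPa pressure increase (roughly 1 km of water depth)
print(f"water: dV/V = {fractional_volume_change(dp, B_WATER):.2%}")
print(f"steel: dV/V = {fractional_volume_change(dp, B_STEEL):.4%}")
```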

Heavy-duty applications call for superior bearing strength, but how is bearing strength
actually measured? In essence, bearing strength is the maximum stress load that the unit can
"bear", or hold, before the structure fails. But bearing strength can also be measured in terms
of tensile, compressive, and flexural strength, plus bearing hardness. We've got the definitions
you need to help you find the right bearing strength:

How strong is your bearing? Ultimately, it needs to be strong enough to exceed the everyday
operating conditions of your environment. After all, bearings are meant to carry the stress and
load of your application, but best practice is to measure bearing strength conservatively, so as
not to exceed design limits.

Bearing strength is often measured by:

Tensile Strength

Tensile strength measures the ability of a material to withstand load under tension without
failure. Also known as ultimate strength, tensile strength is measured in pounds per square
inch (PSI). The higher the tensile strength, the stronger the bearing material, and the better
ability it has to resist cracking.

Tensile Elongation
Tensile elongation is the increase in length that occurs when a material is stretched, but
before it breaks under tension. It’s indicated as a percentage of the original length of the
material. High-tensile strength and high-elongation are key factors in determining the overall
toughness of a material.

Compressive Strength

Compressive strength refers to the resistance of a material to breaking under compression.


Ultimate compressive strength is closely related to compressive strength.

Flexural Strength

Flexural strength is a material’s ability to resist bending under load (also known as modulus
of rupture or bend strength).

Modulus

Modulus applies to tensile, compressive and flexural loading. It is defined as the ratio of
stress (force per unit area) to strain. The modulus of a material can predict the reaction of a
material, as long as the stress is less than the yield strength of the material.

Durability and Damage Tolerance Properties

Fatigue Strength
Definition - What does Fatigue Strength mean?
Fatigue strength is the highest stress that a material can withstand for a given number of
cycles without breaking. It is affected by environmental factors such as corrosion.
In other words, the maximum stress that can be applied for a certain number of cycles
without fracture is the fatigue strength. The standard fatigue strength for copper alloys is that
reported for 100,000,000 cycles. At stresses above this fatigue strength, fewer cycles can be
accomplished before failure; at lower stresses, the metal will withstand more cycles before
failure.
It is common to estimate fatigue strength as some fraction of ultimate tensile strength that is
specific to a material type (for example, 35% for austenitic stainless steels).
Fatigue strength is as important to the design of parts with high deflection cycles, as yield
strength is to the designer who must obtain requisite contact forces.
Fatigue strength is also known as endurance strength or fatigue limit.
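A minimal Python sketch of the rule-of-thumb estimate mentioned above is given below; apart from the 35% figure for austenitic stainless steel quoted in the text, the fractions and the UTS value are assumptions for illustration only.

```python
# Illustrative sketch of the rule of thumb above: estimating fatigue strength
# as a material-specific fraction of ultimate tensile strength (UTS).
FATIGUE_FRACTION = {
    "austenitic stainless steel": 0.35,  # fraction quoted in the text
    "generic steel (assumed)": 0.50,     # assumed typical value, for illustration
}

def estimated_fatigue_strength(uts_mpa, material):
    """Rough fatigue-strength estimate for a given cycle count (e.g. 1e7)."""
    return FATIGUE_FRACTION[material] * uts_mpa

# Example: assumed UTS of 600 MPa for an austenitic stainless steel
print(estimated_fatigue_strength(600.0, "austenitic stainless steel"))  # 210.0 MPa
```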

Corrosionpedia explains Fatigue Strength


Fatigue strength is used to describe the amplitude (or range) of cyclic stress that can be
applied to the material without causing fatigue failure, or the highest stress that a material can
withstand for a given number of cycles without breaking.
For example, ferrous alloys and titanium alloys have a distinct limit, below which there
appears to be no number of cycles that will cause failure. Other structural metals such as
aluminum and copper do not have a distinct limit and will eventually fail even from small
stress amplitudes. In these cases, a number of cycles (usually 10⁷) is chosen to represent the
fatigue life of the material.
Fatigue occurs because micro-cracks develop on the metal's surface when it is cyclically
stressed. With repeated bending, these cracks propagate through the metal thickness to a
point where the remaining sound structure fails by ordinary rupture because the load can no
longer be supported.
Fatigue strength is somewhat correlated with tensile strength. The stronger tempers of some
metals have lower fatigue strength than their weaker tempers.
Orientation affects the fatigue strength. Data typically compiled and published are for test
specimens with a longitudinal orientation (their length is parallel to the rolling direction). But
fatigue strength can be measurably affected by the manner in which the part is positioned on
the strip for stamping.
The number of cycles that a metal can endure before it breaks is a complex function of:

• Static and cyclic stress values
• Alloy
• Heat-treatment and surface condition of the material
• Hardness profile of the material
• Impurities in the material
• Type of load applied
• Operating temperature
• Several other factors

Notch Sensitivity

Notch sensitivity q is defined by the equation

q = (Kf − 1)/(Kt − 1)

i.e. the actual intensification of stress over the nominal stress, relative to the theoretical
intensification of stress over the nominal stress.

The values of q are between zero and unity. It is evident that if q = 0, then Kf = 1, and the
material has no sensitivity to notches at all. On the other hand, if q = 1, then Kf = Kt, and the
material has full notch sensitivity. In analysis or design work, find Kt first, from the geometry of
the part. Then select or specify the material, find q, and solve for Kf from the equation
Kf = 1 + q(Kt − 1).
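The design procedure just described can be expressed in a few lines of Python; the Kt and q values below are assumed example inputs, not data from the text.

```python
# Illustrative sketch of the procedure above: given the geometric stress
# concentration factor Kt and notch sensitivity q, compute the fatigue
# stress concentration factor Kf = 1 + q*(Kt - 1). Example values assumed.
def fatigue_stress_concentration(kt, q):
    """Kf from theoretical Kt and notch sensitivity q (0 <= q <= 1)."""
    if not 0.0 <= q <= 1.0:
        raise ValueError("notch sensitivity q must lie between 0 and 1")
    return 1.0 + q * (kt - 1.0)

# Example: a shoulder fillet with Kt = 2.2 on a material with q = 0.8
kf = fatigue_stress_concentration(2.2, 0.8)
print(f"Kf = {kf:.2f}")   # 1.96: the part sees ~1.96x the nominal stress in fatigue
```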
Crack Growth
Figure 1. Schematic variation of fatigue-crack propagation rate (da/dN) with applied stress
intensity range (ΔK), for metals, intermetallics and ceramics.

Toughness

The ability of a metal to deform plastically and to absorb energy in the process before
fracture is termed toughness. The emphasis of this definition should be placed on the ability
to absorb energy before fracture. Recall that ductility is a measure of how much something
deforms plastically before fracture, but just because a material is ductile does not make it
tough. The key to toughness is a good combination of strength and ductility. A material with
high strength and high ductility will have more toughness than a material with low strength
and high ductility. Therefore, one way to measure toughness is by calculating the area under
the stress strain curve from a tensile test. This value is simply called “material toughness” and
it has units of energy per volume. Material toughness equates to a slow absorption of energy
by the material.
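As an illustration of the "area under the stress-strain curve" measure, the Python sketch below integrates an assumed, made-up tensile-test curve with the trapezoidal rule.

```python
# Illustrative sketch of "area under the stress-strain curve" as a toughness
# measure, using the trapezoidal rule on made-up tensile-test data points.
def material_toughness(strains, stresses_pa):
    """Approximate area under a stress-strain curve (J/m^3) by trapezoids."""
    area = 0.0
    for i in range(1, len(strains)):
        d_strain = strains[i] - strains[i - 1]
        area += 0.5 * (stresses_pa[i] + stresses_pa[i - 1]) * d_strain
    return area

# Assumed example curve: elastic rise, yield, then plastic flow to fracture
strain = [0.0, 0.002, 0.05, 0.15, 0.25]
stress = [0.0, 400e6, 450e6, 520e6, 480e6]   # Pa
print(f"toughness = {material_toughness(strain, stress):.3e} J/m^3")
```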
There are several variables that have a profound influence on the toughness of a material.
These variables are:

• Strain rate (rate of loading)
• Temperature
• Notch effect

A metal may possess satisfactory toughness under static loads but may fail under dynamic
loads or impact. As a rule, ductility and, therefore, toughness decrease as the rate of loading
increases. Temperature is the second variable to have a major influence on toughness. As
temperature is lowered, the ductility and toughness also decrease. The third variable, termed the
notch effect, has to do with the distribution of stress. A material might display good
toughness when the applied stress is uniaxial; but when a multiaxial stress state is produced
due to the presence of a notch, the material might not withstand the simultaneous elastic and
plastic deformation in the various directions.

There are several standard types of toughness test that generate data for specific loading
conditions and/or component design approaches. Three of the toughness properties that will
be discussed in more detail are 1) impact toughness, 2) notch toughness and 3) fracture
toughness.

• Special Design Factors


Depending upon the industry and type of manufacturing process, DFM practices vary and
are tailored to define checklists for quality and design checks. It has been estimated that by the
time the product design is finalized, about 80% (shown in the graph in Figure 1) of the total
product costs have been committed. These designs determine the manufacturability, which in turn
impacts production costs. DFM has a vital role to play in controlling product costs.

Figure 1: Product cost vs time graph (Source: Design for Manufacturability by Dr.
Anderson)

Looking at the criticality of DFM, it has become an important ingredient of product success.
In this blog I have tried to cover the top 10 contributing factors to DFM.
1. Product Complexity

Complex designs create assembly bottlenecks and make it difficult to meet constrained time
frame requirements. Product complexity can arise in many ways, such as the number of
PCB layers, the number of processors on board, routing and placement of components, heat sink
requirements and form factor. It becomes challenging to maintain product quality while
shrinking production time. A number of intelligent and innovative methods need to be applied
for an effective DFM process and to handle complex product requirements.

2. Product Variants

The huge number of product variants also makes it difficult to have a standard DFM process
in place. It becomes challenging to keep track of material availability and to ensure quality
standards. It is always recommended to have a minimum of product variants to keep manufacturing
processes agile. But there are instances where it is a need of the market to have multiple variants
of a single product to cater to multiple customer needs; an ideal example here is that of mobile
smartphones. The only solution to this challenge is to have DFM frameworks defined
with details of every variant, to ensure that every aspect of variance is taken care of.

3. Component Availability and Price

Material availability and price do contribute to DFM processes. It is recommended to keep
track of every component's availability and to use components with an End of Life (EOL)
approximately equal to or greater than that defined for the Product Life Cycle (PLC). Price,
through its contribution to BOM cost, directly relates to product profitability. It is better to have
a detailed analysis of component prices before stepping into the design phase, so that future
changes are avoided.

4. Reusable Design

Product development investments should be future-proof. Enough research should be done to
have modular product designs, so that future changes are taken care of without scrapping the
design and starting from scratch. DFM should take care of this aspect and should be devised in a
way that accommodates all modular changes. This ensures higher return on investment (ROI) and
also reduces the time to incorporate design as well as production changes.

5. Failure Analysis Techniques

Failure analysis techniques need to be accurate and precise in determining design issues. DFM
should be devised in a way that all inputs from failure analysis are taken care of.

6. Managing Design Costs

Design costs directly impact end product costs. The DFM should be handled efficiently to
minimize design costs by ensuring minimal rework on incorporating any design-related
product changes. DFM needs to be efficient to avoid any design re-spins, as these increase
development costs and ultimately impact profitability.

7. Incorporating Last Stage Design Changes

Critical design changes that come up during prototype testing are often unavoidable.
DFM needs to take care of these last stage changes while ensuring that development timelines
are met, to avoid any delays in the product launch schedule.

8. Production Friendly Design

Product design should take care of production feasibility. DFM should take care of proper
placement and routing of components, and there should be proper space between the placed
components so that soldering failures are minimized. It is recommended to design as per the
EMS/manufacturing facility's process layout so that production assembly
time can be minimized.
9. Product Quality and Regulatory Requirements

Every product needs to comply with industry regulatory requirements. DFM needs to take care
of every certification requirement and ensure that the design is done as per the designated
framework. Regulatory requirements are purely industry-specific and every EMS/production
facility needs to follow them strictly.

10. Quality Standard Framework


Quality standards determine product success. DFM needs to take care of design quality and
ensure that any design changes do not impact product performance negatively.
All the above factors form critical aspects of the DFM process, and they determine product
success both in terms of quality and profitability. To conclude, I suggest that we should
always have a DFM process which can yield maximum output while taking care of all
design changes.
