
Types of Manometers

Manometers
Principle of Operation
Manometers derive pressure from the combination of the height differential of a liquid column
and the density of the fluid within that column. The U-type manometer, which is
considered a primary pressure standard, derives pressure using the following equation:
P = P2 - P1 = hw ρ g
Where:
P = Differential pressure
P1 = Pressure applied to the low pressure connection
P2 = Pressure applied to the high pressure connection
hw = Height differential of the liquid columns between the two legs of the manometer
ρ = Mass density of the fluid within the columns
g = Acceleration of gravity
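The defining relation P = hw ρ g can be checked numerically; a small sketch with illustrative values (a 10-inch water column):

```python
# Differential pressure of a U-tube manometer: P = rho * g * h
# Illustrative values: 10 inches of water column near 4 deg C.
RHO_WATER = 999.97      # kg/m^3, density of water near 4 C
G_STD = 9.80665         # m/s^2, standard gravity
h = 10 * 0.0254         # column height: 10 inches in metres

dP = RHO_WATER * G_STD * h          # pascals
dP_psi = dP / 6894.757              # convert Pa -> PSI

print(round(dP, 1), "Pa")
print(round(dP_psi, 4), "PSI")
```

This reproduces the familiar rule of thumb that one inch of water is about 0.036 PSI.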
5.1 Types of Manometers
5.1.1 U Tube Manometers
The principle of operation of the U type manometer is shown on Figure 5-1. It is simply a
glass tube bent to form the letter U and partially filled with some liquid. With both legs of
the instrument open to atmosphere or subjected to the same pressure, Figure 5-1, the
liquid maintains exactly the same level or zero reference. As illustrated on
Figure 5-2, if a pressure is applied to the left side of the instrument, the fluid recedes in the
left leg and rises in the right leg. The fluid moves until the weight of the fluid column, as
indicated by H, exactly balances the pressure. This is known as hydrostatic balance. The
height of fluid from one surface to the other is the actual height of fluid opposing the
pressure.
The pressure is always the height of fluid from one surface to the other regardless of the
shape or size of the tubes, as illustrated in Figure 5-3.
The left-hand manometer has a uniform tube, the center one has an enlarged leg and the
right one has an irregular leg. The manometers at the top are open to atmosphere on both legs
so the indicating fluid level in both legs is the same. Imposing an identical pressure on the
left leg of each manometer, as shown on Figure 5-4, causes the fluid level in
each manometer to change. Because of the variations in volume of the manometer legs, the
distances moved by the fluid columns are different. However, H, the total distance between
the fluid levels, remains identical in the three manometers.
5.1.2 Inclined Tube Manometers
Many applications require accurate measurements of low pressure such as drafts and very
low differentials. To better handle these applications, the manometer is arranged with the
indicating tube inclined, as in Figure 5-5, providing for better resolution. This arrangement
can allow 12" of scale length to represent 1" of vertical height. With scale subdivisions, a
pressure of 0.00036 PSI (1/100 inch of water) can be read.
Micro manometer
The micro manometer is another variation of the liquid column manometer, based on the principle of
the inclined tube manometer, and is used for the measurement of extremely small pressure differences.
For the condition p1 = p2, the meniscus of the inclined tube is set at a reference level, as shown in the
figure below, viewed through a magnifier provided with a cross-hair line. The adjustment is made by
moving the well up and down with a micrometer. For the condition p1 not equal to p2, the shift in the
meniscus position is restored to zero by raising or lowering the well as before, and the difference between
these two micrometer readings gives the pressure difference in terms of height.

Micro manometer
The manometer is shown above as a static measuring device, but its dynamics cannot always be ignored.
Considering the manometric fluid as a free body, the forces acting on it are:

The weight distributed over the entire fluid.

The drag force due to its motion and the corresponding tube wall shearing stress.

The force due to differential pressure.

Surface tension force at the two ends.

5.1.3 Well Type Manometers


The well type manometer is illustrated on Figure 5-6. In this design, the pressure is applied
to a fluid well attached to a single indicating tube. As the fluid moves down in the well, the
fluid is displaced into the smaller indicating leg of the manometer. This permits direct
reading on a single scale.
The well type manometer utilizes the principle of volume balance wherein the fluid displaced
from the well is equal to the added fluid in the smaller indicating column. The well area and
the internal diameter of the indicating tube must be carefully controlled to ensure the
accuracy of the instrument.
The well type manometer does not fulfill the requirements of a primary standard as
described in paragraph 1.5 and can be considered as one form of a secondary standard.
5.2 Intrinsic Correction Factors
5.2.1 Fluid Density Correction
Manometers indicate the correct pressure at only one temperature. This is because the
indicating fluid density changes with temperature. If water is the indicating fluid, an inch
scale indicates one inch of water at 4 °C only. On the same scale, mercury indicates one inch
of mercury at 0 °C only. If a reading using water or mercury is taken at 20 °C, then the
reading is not accurate. The error introduced is about 0.4% of reading for
mercury and about 0.2% of reading for water. Since manometers are used at temperatures
above and below the standard temperature, corrections are needed. A simple way for
correcting for density changes is to ratio the densities.
ho = ht (ρt / ρo)
Where:
ho = Corrected height of the indicating fluid at standard temperature
ht = Height of the indicating fluid at the temperature when read
ρo = Density of the indicating fluid at standard temperature
ρt = Density of the indicating fluid when read
This method is very accurate when the density-temperature relations are known. Data are
readily available for water and mercury.
Density (g/cm3) as a function of temperature (°C) for mercury:
ρ = 13.556786 [1 - 0.0001818 (T - 15.5556)]
Density (g/cm3) as a function of temperature for water:
ρ = 0.9998395639 + 6.798299989 x 10^-5 (T)
- 9.10602556 x 10^-6 (T^2) + 1.005272999 x 10^-7 (T^3)
- 1.126713526 x 10^-9 (T^4) + 6.591795606 x 10^-12 (T^5)
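The density-ratio correction and the two polynomials above can be evaluated directly; a sketch correcting a mercury reading taken at 20 °C back to the 0 °C reference:

```python
# Temperature correction of manometer readings by density ratio:
#   h_corrected = h_read * (rho_t / rho_0)
# Density fits are those quoted above (T in deg C, rho in g/cm^3).

def rho_mercury(T):
    return 13.556786 * (1 - 0.0001818 * (T - 15.5556))

def rho_water(T):
    return (0.9998395639 + 6.798299989e-5 * T - 9.10602556e-6 * T**2
            + 1.005272999e-7 * T**3 - 1.126713526e-9 * T**4
            + 6.591795606e-12 * T**5)

h_read = 10.0                                   # inches of mercury, read at 20 C
h_corr = h_read * rho_mercury(20.0) / rho_mercury(0.0)
print(round(h_corr, 4))                         # corrected to the 0 C reference
```

The correction is about 0.36% of reading, consistent with the "about 0.4%" figure quoted above for mercury at 20 °C.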
For other fluids, manometer scales and fluid densities may be formulated to read inches of
water or mercury at a set temperature. The manometer still reads correctly at only one
temperature, and for precise work the temperature corrections cannot be overlooked.
5.2.2 Gravity Correction
The need for gravity corrections arises because gravity at the location of the instrument
governs the weight of the liquid column. Like the temperature correction, gravity correction
is a ratio:
ho = ht (gt / go)
Where:
go = International standard gravity (980.665 Gal)
gt = Gravity at the instrument's location (in Gal)
A 10° change in latitude at sea level will introduce approximately 0.1% error in reading.
At the equator (0° latitude) the error is approximately 0.25%. An increase in elevation of
5000 feet (1524 m) will introduce an error of approximately 0.05%.
For precise work you must have the value of the gravity measured at the instrument
location. Gravity values have been determined by the U.S. Coast and Geodetic Survey at
many points in the United States. Using these values, the U.S. Geodetic Survey may
interpolate and obtain a gravity value sufficient for most work. To obtain a gravity report,
the instrument's latitude, longitude and elevation are needed. Similar agencies are available
in countries outside the United States. Contact local authorities for the agency and
procedures to determine local gravity.
Where a high degree of accuracy is not necessary and values of local gravity have not been
determined, calculations for differences in local gravity can be obtained. Gravity at a known
latitude is:
Gx = 980.616 [1 - 0.0026373 cos(2x) + 0.0000059 cos²(2x)]
Where:
Gx = gravity value at latitude x, sea level (cm/sec2)
x = latitude (degrees)
The relationship for inland values of gravity at elevations above sea level is:
Gt = Gx - 0.000094H + 0.00003408(H - H1) (cm/sec²)
Where:
H = Elevation (feet) above mean sea level
H1 = Average elevation (feet) of the general terrain within a radius of 100 miles of the point
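Both gravity formulas above can be evaluated directly; the site coordinates below are illustrative:

```python
import math

# Local gravity from latitude (sea level) and elevation, using the
# formulas quoted above (x in degrees, H and H1 in feet, g in cm/s^2).

def gravity_sea_level(x):
    c = math.cos(math.radians(2 * x))
    return 980.616 * (1 - 0.0026373 * c + 0.0000059 * c**2)

def gravity_local(x, H, H1):
    return gravity_sea_level(x) - 0.000094 * H + 0.00003408 * (H - H1)

g45 = gravity_sea_level(45.0)       # cos(90 deg) = 0, so this is 980.616 cm/s^2
print(round(g45, 3))
print(round(gravity_local(41.5, 800.0, 900.0), 3))  # illustrative inland site
```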
5.2.3 Pressure Medium Head Correction
Commonly, a differential pressure is measured by the height of the fluid column. Actually,
the differential pressure indicated by the fluid height depends on the difference between
the density of the fluid column and the density of an equal height of the pressure medium
above it. The relationship is:
P = hw (ρ - ρpm) g
Where:
ρpm = Density of the pressure medium
The significance of the pressure medium correction effect on the manometer reading varies
with the indicating fluid and pressure medium. Whether this correction is necessary depends
upon the user's accuracy requirements. The most common pressure medium is air. Not
correcting for air over water yields an error of 0.12% (using the density of air as 0.0012
g/cm3). In precise work, air density can be determined exactly by knowing the temperature,
pressure and relative humidity of the air. The correction for air over mercury is extremely
small (0.008% error) and therefore may usually be ignored. Another application, often used
in flow measurement, is water over mercury. The pressure medium correction in this situation
is mandatory: an error of 7.4% is introduced if the correction is not applied. In many
instances, manometer scales can be designed with this correction built in.
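The size of the error from ignoring the pressure-medium head can be checked for the three cases named above; the relative error is ρpm/ρ (nominal densities below are illustrative):

```python
# Relative error from ignoring the pressure-medium head correction:
# the uncorrected reading overstates pressure by rho_pm / rho_if.
# Densities in g/cm^3 (illustrative nominal values near room temperature).

cases = {
    "air over water":     (0.0012, 0.99821),
    "air over mercury":   (0.0012, 13.5459),
    "water over mercury": (0.99821, 13.5459),
}

errors = {name: 100.0 * rho_pm / rho_if
          for name, (rho_pm, rho_if) in cases.items()}
for name, err in errors.items():
    print(f"{name}: {err:.3f}% error if uncorrected")
```

The results agree with the figures in the text: about 0.12% for air over water, well under 0.01% for air over mercury, and roughly 7.4% for water over mercury.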
5.2.4 Scale Corrections
Another factor governing manometer accuracy is the scale. As with indicating fluids,
temperature changes affect the scale. At higher temperatures the scale expands and its
graduations spread further apart; the opposite occurs at lower temperatures. All
Meriam scales are made of aluminum and are fabricated at a temperature of 22 °C (71.6 °F).
A 10 °C shift from that temperature induces an error in the reading of about 0.023%.
ho = ht [1 + α (T - To)]
Where:
α = Coefficient of linear expansion for the scale material (0.0000232/°C for aluminum)
T = Temperature when the manometer was read
To = Temperature when the scale was manufactured
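The scale correction above can be verified numerically:

```python
# Scale temperature correction: h0 = ht * (1 + alpha * (T - T0))
ALPHA_AL = 0.0000232    # per deg C, linear expansion of aluminum
T0 = 22.0               # deg C, temperature at which the scale was made

def scale_corrected(ht, T):
    return ht * (1 + ALPHA_AL * (T - T0))

# A 10 C shift changes the reading by about 0.023%, as stated above:
shift_pct = 100.0 * abs(scale_corrected(10.0, 32.0) - 10.0) / 10.0
print(round(shift_pct, 4))
```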

5.2.5 Compressibility, Absorbed Gases and Capillary Considerations


Compressibility of indicating fluids is negligible except in a few applications. For
compressibility to have an effect, the manometer must be used in measuring high
differential pressures.
At high differential pressures the fluid shrinkage (increase in density) may begin to be
resolvable on the manometer. At 250 PSI the density of water changes by approximately 0.1%.
Depending upon accuracy requirements, compressibility may or may not be significant. The
relationship between pressure and density of water is as follows:
ρ = 0.00000364 p + 0.9999898956
Where:
ρ = Density of water (g/cm3) at 4 °C and pressure p
p = Pressure in PSIA
Since the need to correct is very rare, the compressibilities of other indicating fluids have
not been determined. Mercury's compressibility is negligible.
Absorbed gases are those gases found dissolved in a liquid. The presence of dissolved gases
decreases the density of the liquid. Air is a commonly dissolved gas that is absorbed by
most manometer fluids. The density error of water fully saturated with air is 0.00004% at
20 °C. The effect is variable and requires consideration for each gas in contact with a
particular fluid. Mercury is one exception in which absorbed gases are not found. This makes
mercury an excellent manometer fluid in vacuum and absolute pressure applications.

Capillary effects occur due to the surface tension or wetting characteristics between the
liquid and the glass tube. As a result of surface tension, most fluids wet the glass and form
a concave meniscus. Mercury does not wet the glass and consequently forms a convex
meniscus. For consistent results, you must always observe the fluid meniscus in the same
way, whether convex or concave. To help reduce the effects of surface
tension, manometers should be designed with large bore tubes. This flattens the meniscus,
making it easier to read. A large bore tube also helps fluid drainage. The larger the bore the
smaller the time lag while drainage occurs. Another controlling factor is the accumulation of
corrosion and dirt on the liquid surface. The presence of foreign material changes the shape
of the meniscus. With mercury, it helps to tap or vibrate the tube to reduce error in the
readings. A final note on capillary effects: adding a wetting agent to the manometer
fluid helps in obtaining a symmetrical meniscus.
Parallax (Readability)
In order to achieve consistent results, the level of the meniscus on a manometer must be
read with the eyes level to the meniscus. Placing the eyes level with the meniscus
eliminates reading distortions caused by angle of reading, parallax, etc. If a mirror back is
available, it will aid in placing the operator's eyes in the proper position before taking a
reading.
To duplicate the factory calibration procedure, read the lowest indicated liquid level as
measured by the hairline at which the original zero was set.

Fortin's barometer:
The barometer is an instrument used to measure atmospheric pressure. Fortin's barometer is a modified
form of Torricelli's simple barometer.
Construction
A Fortin's barometer consists of a narrow glass tube about 90 cm long. This tube is closed at one end.
The tube is completely filled with mercury and kept inverted in a cistern filled with dry mercury. Usually,

the glass tube is protected by enclosing it in a brass tube. The upper part of the brass tube has a slit that
enables the level of the mercury in the glass tube to be seen. A scale graduated in millimeters is attached
to the brass tube. This functions as the main scale. For accurate measurement, a vernier scale that can
slide over the main scale is also fixed to the barometer. The vernier scale can be moved up and down using
a screw.
The bottom of the cistern is like a bag made of flexible leather, and the mercury level in the cistern
can be adjusted by means of a screw provided underneath. An ivory pointer is fixed at the top of the
cistern; the tip of this pointer coincides with the zero of the main scale. The screw is adjusted so
that the ivory pointer is exactly at the surface of the mercury in the cistern. The whole apparatus is
fixed in a vertical position.
Working
Any change in the atmospheric pressure is accompanied by an immediate change in the level of the
mercury in the glass tube. As the height of the mercury column in the barometer changes, mercury flows
between the tube and cistern. As a result, the level of the mercury in the cistern also changes. To
determine the length of the mercury column in the barometer, it is necessary to know the position of the
free surface in the cistern as well as in the tube. The first step in measuring atmospheric pressure using
Fortin's barometer is to set the mercury level in the cistern. Using the adjustment screw, set the level of
the mercury in the cistern such that the ivory pointer just touches the mercury. The reading of the top of
the mercury column is then measured using both the main scale and the vernier scale. Before the readings
are noted, the vernier scale needs to be positioned properly. The vernier scale is to be adjusted so that its
edge and the corresponding reading in the main scale just set tangentially to the meniscus. Now, the
readings on the main scale and the vernier scale are noted, and the atmospheric pressure is calculated.
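The column height obtained this way converts to pressure through the same hydrostatic relation used for manometers; a sketch using standard-condition constants:

```python
# Converting a barometer reading (mm of mercury) to pressure:
#   P = rho_Hg * g * h, at standard temperature and gravity.
RHO_HG = 13595.1        # kg/m^3, density of mercury at 0 C
G_STD = 9.80665         # m/s^2, standard gravity

def atm_pressure_pa(h_mm):
    return RHO_HG * G_STD * (h_mm / 1000.0)

print(round(atm_pressure_pa(760.0), 0))   # 760 mm Hg, the standard atmosphere
```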
Advantages
Fortin's barometer is widely used in laboratories and in meteorological departments. The main advantages
of Fortin's barometer are:


It is portable.
It allows the mercury level in the cistern to be set to zero, which makes the reading
more accurate.

Aneroid barometer
An aneroid barometer is an instrument for measuring pressure by a method that does not
involve liquid. Invented in 1844 by French scientist Lucien Vidi, the aneroid barometer uses a small,
flexible metal box called an aneroid cell (capsule), which is made from an alloy of
beryllium and copper. The evacuated capsule (or usually several capsules, stacked to add up their
movements) is prevented from collapsing by a strong spring. Small changes in external air pressure
cause the cell to expand or contract. This expansion and contraction drives mechanical levers such
that the tiny movements of the capsule are amplified and displayed on the face of the aneroid
barometer. Many models include a manually set needle which is used to mark the current
measurement so a change can be seen. In addition, the mechanism is made deliberately "stiff" so
that tapping the barometer reveals whether the pressure is rising or falling as the pointer moves.
This type of barometer is common in homes and in recreational boats, as well as small aircraft. It is
also used in meteorology, mostly in barographs and as a pressure instrument in radiosondes.

Bourdon tube pressure gauges:

Bourdon tube pressure gauges are used for the measurement of relative pressures from 0.6 ... 7,000 bar.
They are classified as mechanical pressure measuring instruments, and thus operate without any
electrical power.

Bourdon tubes are radially formed tubes with an oval cross-section. The pressure of the measuring
medium acts on the inside of the tube and produces a motion at the non-clamped end of the tube. This
motion is the measure of the pressure and is indicated via the gauge movement.
The C-shaped Bourdon tubes, formed into an arc of approx. 250°, can be used for pressures up to 60
bar. For higher pressures, Bourdon tubes with several superimposed windings of the same angular
diameter (helical tubes) or with a spiral coil in one plane (spiral tubes) are used.

MERCURY-IN-GLASS THERMOMETER:
The mercury-in-glass or mercury thermometer was invented by Polish-Dutch physicist Daniel
Gabriel Fahrenheit in Amsterdam (1714). It consists of a bulb containing mercury attached to
a glass tube of narrow diameter; the volume of mercury in the tube is much less than the volume of
the bulb. The volume of mercury changes slightly with temperature; the small change in volume
drives the narrow mercury column a relatively long way up the tube. The space above the mercury
may be filled with nitrogen or it may be at less than atmospheric pressure, a partial vacuum.
In order to calibrate the thermometer, the bulb is made to reach thermal equilibrium with a
temperature standard such as an ice/water mixture, and then with another standard such as
water/vapour, and the tube is divided into regular intervals between the fixed points. In
principle, thermometers made of different materials (e.g., coloured alcohol thermometers) might be
expected to give different intermediate readings due to different expansion properties; in practice the
substances used are chosen to have reasonably linear expansion characteristics as a function of
true thermodynamic temperature, and so give similar results.

Beckmann thermometer:
A Beckmann thermometer is a device used to measure small differences of temperature, but
not absolute temperature values. It was invented by Ernst Otto Beckmann (1853-1923), a
German chemist, for his measurements of colligative properties in 1905.[1] Today its use has largely
been superseded by electronic thermometers.
A Beckmann thermometer's length is usually 40-50 cm. The temperature scale typically covers
about 5 °C and is divided into hundredths of a degree. With a magnifier it is possible to estimate
temperature changes to 0.001 °C. The peculiarity of Beckmann's thermometer design is a reservoir
(R on the diagram) at the upper end of the tube, by means of which the quantity of mercury in the
bulb can be increased or diminished so that the instrument can be set to measure temperature
differences at either high or low temperature values. In contrast, the range of a typical
mercury-in-glass thermometer is fixed, being set by the calibration marks etched on the glass or
the marks on the printed scale.

Calibration
In setting the thermometer, a sufficient amount of mercury must be left in the bulb and stem to give
readings between the required temperatures. First, the thermometer is inverted and gently tapped so
that the mercury in the reservoir lodges in the bend (B) at the end of the stem. Next, the bulb is
heated until the mercury in the stem joins the mercury in the reservoir. The thermometer is then
placed in a bath one or two degrees above the upper limit of temperatures to be measured.
The upper end of the tube is gently tapped with the finger, and the mercury suspended in the upper
part of the reservoir will be jarred down, thus separating it from the thread at the bend (B). The
thermometer will then be set for readings between the required temperatures.

Gas Thermometer

Gas thermometry reduces temperature measurement (from helium temperatures up to 1063 °C)


to measurement of pressure or a gas volume in a closed vessel (under certain conditions)
followed by temperature calculation using the measurement results and the ideal gas laws.
A gas thermometer is a primary instrument for determination of thermodynamic
temperature. Application of exact relations requires design of complicated devices
inconvenient for practical use. In practice, temperature scales are used in which a simple
and convenient secondary thermometer is used and methods of transfer of thermodynamic
temperature from a primary instrument to the secondary thermometer are employed

(see International Temperature Scale). This requires use of precise primary instruments
reproducing thermodynamic temperature, instruments for realization of the temperatures
of phase equilibria of substances (for determination of the constants of the primary
instruments), i.e., representing the so-called fixed points and, of course, the secondary
thermometer itself together with simple and convenient methods for its calibration. The
simplest thermometer is a gas thermometer, which consists of a glass or metallic gas-impermeable
reservoir connected to an arrangement intended for pressure measurement in the reservoir.
A schematic drawing of a gas thermometer is shown in Figure 1: reservoir 3 is immersed
into a medium whose temperature is to be measured; gauge 1 is connected via capillary 2 to
the reservoir; the reservoir and the capillary are filled with a working gas. A gas
thermometer allows the determination of pressure p and volume V of a mass m of ideal
gas with molecular weight μ passing from thermodynamic state 1 to state 2, with the gas
mass m = pVμ/(RT) remaining constant in both states. Depending on the character of the gas
transition from state 1 to state 2, three gas thermometers are distinguished: those of constant
volume, constant pressure and constant temperature. A constant-volume gas thermometer
is used at low temperatures (typically with helium as a working substance) and possesses
the highest sensitivity. At high temperatures, when gas desorption on reservoir walls
becomes pronounced and helium penetrates through the walls, gas thermometers of other
design are used with nitrogen as a working substance. For precise temperature
determination, corrections are made for gas nonideality, thermal expansion of the reservoir,
a "harmful" volume and thermomolecular pressure, as well as for the hydrostatic effect. Since
the reservoir of a gas thermometer is connected with a manometer via a capillary, there
is a "harmful" volume of gas above the manometer mercury and inside the
capillary, whose temperature varies from the value to be measured to room temperature.
With change of the bulb temperature, the amount of gas contained in the "harmful
volume" changes. The difference between the gas temperature in the bulb and in the "harmful
volume" requires appropriate corrections. Such corrections may only be determined
rather approximately; therefore, for low temperatures use is made of a thermometer without
a "harmful volume". Such a thermometer is used in metrological work. Bulb 3 (Figure 1) is
partitioned with an elastic membrane 4. Smooth change of the pressure in the upper part of
3 allows 4 to be maintained in an equilibrium condition, and thus the pressure is measured in the
lower part of the closed volume of 3. For technical measurements, use is made of filled-system
gas thermometers working at temperatures from 150 to 600 °C. At temperatures up to
600 °C nitrogen is used as the working gas, while above 600 °C argon is used. The scale of a
filled-system gas thermometer (T = f(p)) is obtained using a knowledge of the volume of the
instrument components. For this, corrections are made for the nonideal state of the gas, thermal
expansion of the bulb and capillary, temperature variation, etc. This thermometer has considerable
inertia; it cannot follow rapid processes. A schematic drawing of its construction is given
in Figure
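For the constant-volume case described above, with V and m fixed the ideal gas law reduces to T2 = T1 (p2/p1); a sketch (the fixed-point pressure is an assumed illustrative value):

```python
# Constant-volume gas thermometer sketch: with V and m fixed, the ideal
# gas law gives T = T_ref * (p / p_ref).

def temperature_from_pressure(p, p_ref, T_ref):
    """Temperature (K) from measured pressure, constant-volume ideal gas."""
    return T_ref * (p / p_ref)

T_ref = 273.16          # K, triple point of water as the fixed point
p_ref = 50000.0         # Pa, pressure observed at the fixed point (assumed)
print(round(temperature_from_pressure(54600.0, p_ref, T_ref), 2))
```

Real gas thermometry then layers on the corrections listed above (gas nonideality, bulb expansion, "harmful" volume, etc.).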

Bimetallic thermometers

Introduction
Bimetallic thermometers are made up of bimetallic strips formed by joining two different metals having
different thermal expansion coefficients. Basically, a bimetallic strip is a mechanical element which can sense
temperature and transform it into a mechanical displacement. This mechanical action from the bimetallic strip
can be used to activate a switching mechanism for getting an electronic output. It can also be attached to the
pointer of a measuring instrument or a position indicator. Various techniques such as riveting, bolting and
fastening can be used to bond the two layers of dissimilar metals in a bimetallic strip; however, the most
commonly used method is welding. The name derives from the two metals employed in the strip's construction.

Working
The working of a bimetallic strip thermometer is based upon the fact that two dissimilar metals behave in a
different manner when exposed to temperature variations owing to their different thermal expansion rates. One
layer of metal expands or contracts more than the other layer of metal in a bimetallic strip arrangement which
results in bending or curvature change of the strip. The working principle of a bimetallic thermometer is

illustrated in the figure below. One end of a straight bimetallic strip is fixed in place. As the strip is heated, the
other end tends to curve away from the side that has the greater coefficient of linear expansion.
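The curvature described above can be estimated with Timoshenko's classic bimetal result; for two layers of equal thickness and similar elastic moduli it reduces to a simple expression. The material pair and dimensions below are illustrative assumptions:

```python
# Bimetallic strip bending (sketch). For two layers of equal thickness
# and similar elastic moduli, Timoshenko's result reduces to
#   curvature = (3/2) * (a2 - a1) * dT / t
# Tip deflection of a straight cantilevered strip: d ~ curvature * L^2 / 2.

a_brass = 19e-6         # 1/C, thermal expansion of brass (high side)
a_invar = 1.2e-6        # 1/C, thermal expansion of invar (low side)
t = 0.001               # m, total strip thickness
L = 0.05                # m, free length
dT = 50.0               # C, temperature rise

curvature = 1.5 * (a_brass - a_invar) * dT / t      # 1/m
deflection = curvature * L**2 / 2                   # m, small-angle approx
print(round(deflection * 1000, 2), "mm")
```

Even a modest temperature rise produces a tip motion of a millimetre or two, which is why the strip is usually coiled to amplify the indication.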

Main Features
These types of thermometers work best at higher temperatures, since their accuracy and sensitivity
tend to be reduced at low temperatures.


Bimetallic strip thermometers are manufactured in various designs. One of the most popular designs,
the flat spiral, is shown in the figure below. They can also be wound into a single-helix or multiple-helix form.

Bimetallic thermometers can be customized to work as recording thermometers too by affixing a pen to
the pointer. The pen is located in such a way that it can make recordings on a circular chart.
Bimetallic strips are often very long. Hence, they are usually coiled into spirals, which makes
them compact and small in size. This also improves the sensitivity of bimetallic strips to small
temperature variations.
The bimetallic strip can be scaled up or down. On a large scale, it can provide literally tons of force
for mechanical control or other purposes. On a smaller scale, it can provide the force and movement for
micro machine integrated circuits (MMIs).[2]

Applications
Bimetallic strips are one of the oldest techniques to measure temperature. They can be designed to work at
quite high temperatures, up to 500 °F (260 °C). Major application areas of a bimetallic strip thermometer
include:

For various household appliances such as ovens etc.

Thermostat switches

Wall thermometers

Grills

Circuit breakers for electrical heating devices

A thermocouple is an electrical device consisting of two dissimilar conductors forming electrical


junctions at differing temperatures. A thermocouple produces a temperature-dependent voltage as a
result of the thermoelectric effect, and this voltage can be interpreted to measure temperature.
Thermocouples are a widely used type of temperature sensor.
Commercial thermocouples are inexpensive, interchangeable, are supplied with standard
connectors, and can measure a wide range of temperatures. In contrast to most other methods of
temperature measurement, thermocouples are self-powered and require no external form of
excitation. The main limitation with thermocouples is accuracy; system errors of less than one
degree Celsius (°C) can be difficult to achieve.
Thermocouples are widely used in science and industry; applications include temperature
measurement for kilns, gas turbine exhaust, diesel engines, and other industrial processes.
Thermocouples are also used in homes, offices and businesses as the temperature sensors in
thermostats, and also as flame sensors in safety devices for gas-powered major appliances.
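As a rough sketch of the thermoelectric effect described above, over a modest temperature span the output voltage is approximately linear in the junction temperature difference. Treating the type K Seebeck coefficient as a constant 41 µV/°C is an assumption for illustration; real thermocouples use standardized polynomial reference tables:

```python
# Thermocouple sketch: over a modest span the output is roughly
#   V ~ S * (T_hot - T_cold), with S the Seebeck coefficient.
# S ~ 41 uV/C is a commonly quoted nominal figure for type K,
# assumed constant here for illustration only.

S_TYPE_K = 41e-6        # V/C, nominal type K sensitivity

def thermocouple_mv(t_hot, t_cold=0.0):
    return S_TYPE_K * (t_hot - t_cold) * 1000.0     # millivolts

print(round(thermocouple_mv(100.0), 3))   # hot junction at 100 C, reference at 0 C
```

Note the cold-junction temperature appears explicitly: a thermocouple measures a temperature difference, which is why practical instruments add cold-junction compensation.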

LDV - Laser Doppler Velocimetry


Laser Doppler Velocimetry (LDV) is a technique used to measure the instantaneous velocity of a flow
field. This technique, like PIV, is non-intrusive and can measure all three velocity components. The
laser Doppler velocimeter sends a monochromatic laser beam toward the target and collects the
reflected radiation. According to the Doppler effect, the change in wavelength of the reflected radiation
is a function of the targeted object's relative velocity. Thus, the velocity of the object can be obtained
by measuring the change in wavelength of the reflected laser light, which is done by forming an
interference fringe pattern (i.e. superimposing the original and reflected signals). This is the basis for
LDV. A flow is seeded with small, neutrally buoyant particles that scatter light. The particles are
illuminated by a known frequency of laser light. The scattered light is detected by a photomultiplier
tube (PMT), an instrument that generates a current in proportion to absorbed photon energy, and then
amplifies that current. The difference between the incident and scattered light frequencies is called the
Doppler shift. By analyzing the Doppler-equivalent frequency of the laser light scattered (intensity
modulations within the crossed-beam probe volume) by the seeded particles within the flow, the local
velocity of the fluid can be determined.
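In the crossed-beam (dual-beam) arrangement described above, the measured Doppler frequency converts to velocity through the fringe spacing of the probe volume: d_f = λ / (2 sin(θ/2)), and v = f_D · d_f. The wavelength, beam angle and burst frequency below are assumed illustrative values:

```python
import math

# Dual-beam LDV sketch: fringe spacing in the probe volume is
#   d_f = wavelength / (2 * sin(half_angle)),
# and the particle velocity normal to the fringes is v = f_D * d_f.

wavelength = 514.5e-9             # m, argon-ion green line (illustrative)
full_angle = math.radians(10.0)   # beam intersection angle (assumed)
f_doppler = 1.0e6                 # Hz, measured Doppler burst frequency (assumed)

d_f = wavelength / (2 * math.sin(full_angle / 2))
v = f_doppler * d_f
print(round(d_f * 1e6, 3), "um fringe spacing")
print(round(v, 3), "m/s")
```

A Bragg cell frequency shift, listed among the transmission optics below, is what lets this scheme also resolve the sign of the velocity.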

Laser Doppler Optical System


Basic one-component LDV Equipments:
Laser system (Continuous-Wave-CW, single colour for single channel), transmission optics (e.g. Bragg
cell, lenses, beam expanders, beam splitter, mirrors, prisms, fibre cable link with laser beam
manipulator), receiving optics (e.g. lenses, pinhole, interference filter, photomultiplier), signal
processor units (e.g. fringe-counting, spectral analysis, photon-correlation), traversing mechanism
(manual or automated) for transmitting and receiving optics, oscilloscope, seeding generation (solid or
liquid vapour) and computer (large capacity hard disk) with a data acquisition board and data handling
software. The more compact and easy to handle type of LDV system has fiber transmission and
receiving optics.

Pros:
** Non-contacting measurement.
** Very high frequency response.
Cons:
** Sufficient transparency is required between the laser source, the target surface, and the
photodetector (receiver).
** Accuracy is highly dependent on alignment of emitted and reflected beams.
** Expensive; fortunately, prices have dropped as commercial lasers have matured.

Hot-Wire Anemometer

The Hot-Wire Anemometer is the best known thermal anemometer, and measures
fluid velocity by sensing the heat convected away by the fluid. The core of the anemometer is
an exposed hot wire either heated up by a constant current or maintained at a constant
temperature (refer to the schematic below). In either case, the heat lost to fluid convection

is a function of the fluid velocity.

Typical Hot-Wire Anemometer


By measuring the change in wire temperature under constant current or the current
required to maintain a constant wire temperature, the heat lost can be obtained. The heat
lost can then be converted into a fluid velocity in accordance with convective theory.

Further Information

Typically, the anemometer wire is made of platinum or tungsten and is 4 ~ 10 µm (158 ~ 393 µin) in diameter and 1 mm (0.04 in) in length.
Typical commercially available hot-wire anemometers have a flat frequency response
(< 3 dB) up to 17 kHz at the average velocity of 9.1 m/s (30 ft/s), 30 kHz at 30.5 m/s
(100 ft/s), or 50 kHz at 91 m/s (300 ft/s).
Due to the tiny size of the wire, it is fragile and thus suitable only for clean gas flows. In liquid flows or rugged gas flows, a platinum hot-film coated on a 25 ~ 150 µm (0.001 ~ 0.006 in) diameter quartz fiber or hollow glass tube can be used instead, as shown in the schematic below.

Another alternative is a Pyrex glass wedge coated with a thin platinum hot-film at the edge tip, as shown schematically below.

Pros and Cons

Pros:
- Excellent spatial resolution.
- High frequency response, > 10 kHz (up to 400 kHz).

Cons:
- Fragile, can be used only in clean gas flows.
- Needs to be recalibrated frequently due to dust accumulation (unless the flow is very
clean).
- High cost.

Introduction

Consider a wire that is immersed in a fluid flow. Assume that the wire, heated by an electrical current input, is in thermal equilibrium with its environment. The electrical power input is equal to the power lost to convective heat transfer,

I^2 Rw = h Aw (Tw - Tf)

where I is the input current, Rw is the resistance of the wire, Tw and Tf are the temperatures of the wire and fluid respectively, Aw is the projected wire surface area, and h is the heat transfer coefficient of the wire.
The wire resistance Rw is also a function of temperature according to

Rw = RRef [1 + α (Tw - TRef)]

where α is the thermal coefficient of resistance and RRef is the resistance at the reference temperature TRef.
The heat transfer coefficient h is a function of the fluid velocity vf according to King's law,

h = a + b vf^c

where a, b, and c are coefficients obtained from calibration (c ≈ 0.5).


Combining the above three equations allows us to eliminate the heat transfer coefficient h,

I^2 Rw = (a + b vf^c) Aw (Tw - Tf)

Continuing, we can solve for the fluid velocity,

vf = { [ I^2 Rw / (Aw (Tw - Tf)) - a ] / b }^(1/c)
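This inversion is straightforward to sketch in code. The calibration constants a, b, c and the wire geometry below are illustrative assumptions, not values from this text; the round trip simply checks the algebra:

```python
import math

def hotwire_velocity(I, Rw, Aw, Tw, Tf, a, b, c=0.5):
    """Invert the combined relation I^2 Rw = (a + b vf^c) Aw (Tw - Tf)."""
    h = I**2 * Rw / (Aw * (Tw - Tf))   # heat transfer coefficient from the power balance
    return ((h - a) / b) ** (1.0 / c)  # King's law inverted for velocity

# Round-trip check with illustrative (not measured) constants:
a, b, c = 40.0, 30.0, 0.5              # hypothetical King's law calibration
Aw = math.pi * 5e-6 * 1e-3             # surface area of a 5 um diameter, 1 mm long wire
Tw, Tf, Rw = 523.0, 293.0, 6.0         # wire/fluid temperatures (K), wire resistance (ohm)
v_true = 10.0
I = math.sqrt((a + b * v_true**c) * Aw * (Tw - Tf) / Rw)
print(hotwire_velocity(I, Rw, Aw, Tw, Tf, a, b, c))  # recovers ~10.0
```

In practice a, b, and c come from calibrating the probe against a known velocity source, so the numerical values here carry no physical significance.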

Two types of thermal (hot-wire) anemometers are commonly used: constant-temperature and constant-current.
Constant-temperature anemometers are more widely used than constant-current anemometers due to their reduced sensitivity to flow variations. Noting that the wire must be heated well above the fluid temperature to be effective, if the flow were to suddenly slow down, the wire might burn out in a constant-current anemometer. Conversely, if the flow were to suddenly speed up, the wire could be cooled so much that a constant-current unit would be unable to register quality data.

Constant-Temperature Hot-Wire Anemometers

For a hot-wire anemometer powered by an adjustable current so as to maintain a constant temperature, Tw and Rw are constants. The fluid velocity is then a function of the input current and the flow temperature,

vf = { [ I^2 Rw / (Aw (Tw - Tf)) - a ] / b }^(1/c) = f(I, Tf)

Furthermore, the temperature of the flow Tf can be measured. The fluid velocity is then reduced to a function of the input current only.
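Because Tw and Rw are fixed in constant-temperature operation, the current-to-velocity conversion can be tabulated once. A short sketch follows, reusing the same illustrative (hypothetical) calibration constants as above:

```python
import math

# Constant-temperature operation: Tw and Rw are held fixed, so the velocity
# depends only on the measured current I. All constants are illustrative.
a, b, c = 40.0, 30.0, 0.5              # hypothetical King's law calibration
Aw = math.pi * 5e-6 * 1e-3             # 5 um diameter, 1 mm long wire
Tw, Tf, Rw = 523.0, 293.0, 6.0         # fixed wire temperature/resistance, fluid temperature

def v_of_current(I):
    """vf = f(I) once Tw, Rw, and Tf are fixed."""
    h = I**2 * Rw / (Aw * (Tw - Tf))
    return ((h - a) / b) ** (1.0 / c)

for I_mA in (7.0, 8.0, 9.0, 10.0):
    print(f"I = {I_mA} mA -> v = {v_of_current(I_mA * 1e-3):.2f} m/s")
```

The strong, monotonic growth of velocity with current is why the bridge current in a real constant-temperature circuit is such a sensitive velocity signal.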

Constant-Current Hot-Wire Anemometers

For a hot-wire anemometer powered by a constant current I, the velocity of the flow is a function of the temperatures of the wire and the fluid,

vf = { [ I^2 RRef (1 + α (Tw - TRef)) / (Aw (Tw - Tf)) - a ] / b }^(1/c) = f(Tw, Tf)

If the flow temperature is measured independently, the fluid velocity can be reduced to a function of the wire temperature Tw alone. In turn, the wire temperature is related to the measured wire resistance Rw. Therefore, the fluid velocity can be related to the wire resistance.
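That chain (measured resistance → wire temperature → velocity) can be sketched as follows; α is a typical value for tungsten, while the remaining constants are illustrative assumptions only:

```python
import math

def wire_temperature(Rw, R_ref, alpha, T_ref):
    """Invert Rw = R_ref (1 + alpha (Tw - T_ref)) for the wire temperature."""
    return T_ref + (Rw / R_ref - 1.0) / alpha

def cc_velocity(I, Rw, R_ref, alpha, T_ref, Tf, Aw, a, b, c=0.5):
    """Constant-current probe: velocity from the measured wire resistance Rw."""
    Tw = wire_temperature(Rw, R_ref, alpha, T_ref)
    h = I**2 * Rw / (Aw * (Tw - Tf))      # power balance
    return ((h - a) / b) ** (1.0 / c)     # King's law inverted

# Illustrative constants (alpha ~ 0.0045 1/K is typical for tungsten):
R_ref, alpha, T_ref = 5.0, 0.0045, 293.0
a, b, c = 40.0, 30.0, 0.5                 # hypothetical calibration
Aw = math.pi * 5e-6 * 1e-3
Tf = 293.0

# Suppose the measured resistance corresponds to Tw = 400 K:
Rw = R_ref * (1.0 + alpha * (400.0 - T_ref))
print(wire_temperature(Rw, R_ref, alpha, T_ref))  # recovers 400 K (to float precision)

# Round trip: construct the current consistent with v = 15 m/s, then recover it.
v_true = 15.0
I = math.sqrt((a + b * v_true**c) * Aw * (400.0 - Tf) / Rw)
print(cc_velocity(I, Rw, R_ref, alpha, T_ref, Tf, Aw, a, b, c))  # recovers ~15.0
```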

Aerodynamicists use wind tunnels to test models of proposed aircraft and engine components.
During a test, the model is placed in the test section of the tunnel and air is made to flow past the
model. In some wind tunnel tests, the aerodynamic forces on the model are measured. In some
wind tunnel tests, the model is instrumented to provide diagnostic information about the flow of
air around the model. In some wind tunnel tests, flow visualization techniques are used to
provide diagnostic information about the flow around the model. Two of the oldest flow
visualization techniques are the use of smoke and tufting.
The figure shows smoke flow and tufts being used on the NASA Dryden F-18 flight vehicle, but the techniques are used more often in wind tunnel testing. Smoke is used to visualize the flow away from the surface of the model. Smoke can be used to detect vortices and regions of separated flow. In the figure, smoke has been introduced at the corner of the fuselage and leading edge extension (LEX) to visualize the vortex generated by the LEX at angle of attack. In the picture we see that the vortex is well established until the flow encounters the vertical stabilizer of the aircraft. Smoke has the advantage that it is relatively inexpensive to produce. Smoke can be injected from the surface or dispersed with a hollow wand that can be moved through the flow field. The disadvantages of smoke are that it does not work well at higher speeds (greater than about 300 mph), that it must be introduced at the proper location without altering the flow, and that it can leave a residue in the tunnel or on the model, depending on the type of smoke employed.

The figure at the top of the page shows two flow visualization techniques, tufts and surface oil
flows. The low speed inlet shown in the photo uses tufts, which are described on a separate page,
on the yellow external surfaces. Colored oil flows are used on the silver internal surfaces to visualize the flow down the duct. Surface oil is applied as small dabs of oil at some upstream location. The oil is standard 40W oil treated with a fluorescent dye or pigment. The thickness of the oil can be modulated using naphtha or 60-70W oil. As the air flows over the model, the oil is carried downstream in long streaks. A variety of pigments aid in flow visualization. Fluorescent pigment can be illuminated with a black light for greater visibility in photography.
Surface oil flows will indicate the boundary of a flow separation since the oil cannot penetrate
the separation boundary. In the photo, a separation is present inside the inlet at the corner of the
inlet and forebody. Because of the variation in skin friction between a laminar and a turbulent
boundary layer, surface oil treated with naphthalene can be used to determine the transition point on
a model. Oil downstream of the transition point will be swept away. Some skill and experience is
required to properly place the oil dabs, and some clean-up is required when the test is completed.
A wind tunnel test using surface oil flow would proceed as follows. The oil is applied with the
tunnel stopped. The crew then leaves the tunnel, seals the tunnel, turns on the motor, and brings
the tunnel up to the test condition. When the surface oil flow streaks are properly established,
the tunnel is stopped, the tunnel is opened, and the crew quickly photographs the streaks. The oil
must be applied with the correct thickness so that it generates a streak of some meaningful
length, but does not pool when the tunnel is stopped. Again, skill and experience is needed to
obtain meaningful data.
