
CALIBRATION USING PORTABLE DIGITAL PRESSURE INDICATORS
Class #8020.1

Leo J. Buckon
Technical Services Manager
Meriam Process Technologies
10920 Madison Avenue
Cleveland, Ohio 44102

INTRODUCTION

The use of electronic pressure calibrators in the gas industry has added new concerns and issues to pressure measurement. Readings appeared that did not always match the old, reliable standby calibration readings or methods, and terms like sensitivity, accuracy, resolution, stability, and traceability have become common. Technicians began using correction factors to achieve standard conditions, and these correction calculations presented challenges when performing calibrations. They began to see the effects of temperature on their test instruments and how temperature affects the accuracy of gas measurement. More recently, the widespread use of digital field devices such as smart transmitters has continued to change the technician's world, as new tools became necessary to configure and maintain field instrumentation. When using electronic pressure calibration equipment, technicians can make their job easier by identifying and purchasing instruments that are traceable, precise, accurate, sensitive, and repeatable. American Petroleum Institute Chapter 21 gives good advice and recommendations in this area. Communications capability and multifunctionality also help the technician be more productive in the field.

TERMINOLOGY

A key area in the proper selection of an electronic field calibration device is the terminology used to define product performance. Understanding the terminology is necessary to ensure that the calibrator will perform to the required level. In measurement terminology, Traceability is the ability to trace the calibration of a measurement device, either directly or indirectly, to the National Institute of Standards and Technology (NIST). A Certificate of Traceability to NIST lends credibility to the calibrator and gives the user confidence in its readings. Precision is the limit of error the test instrument will produce when the same input is repeated at the same atmospheric conditions at different test points.
Measurement Accuracy is the degree of conformity to a standard that combines traceability and instrument precision. Accuracy is the difference between the true value and the measured value. A portable electronic calibrator can be traceable to a given reference source, yet fail to satisfy the measurement accuracy requirement for a particular application, so careful attention to accuracy specifications is needed. Accuracy specifications are a great aid in comparing electronic calibrators. Accuracy is the closeness of the indication or reading of a measurement device to the true value of the quantity being measured. In an accuracy specification, this uncertainty is generally expressed as a plus or minus (±) percentage of full scale output, span, or reading. This will be explored later in the paper.

Sensitivity of an electronic pressure calibrator is the measured change in output of the calibrator for a change in input. Generally, high sensitivity is desirable in test instruments because a large change in output for a small change in input implies the instrument can sense and indicate small changes. Sensitivity must be evaluated together with the linearity of the output, range, and accuracy. The calibrator's sensitivity may include an output shift related to a change in temperature; ±0.02 percent of full scale per degree Celsius is a typical temperature-related sensitivity specification. This could also be expressed as a percentage per degree Fahrenheit.

Repeatability is the closeness of the results of successive measurements of the same quantity carried out by the same method, by the same person, with the same measuring instrument, at the same location, over a short period

of time. The ability of an electronic pressure calibrator to repeat during a series of consecutive proving runs under constant operating conditions gives the user considerable confidence in the device.

READING AND COMPARING SPECIFICATIONS

Once you have decided to use an electronic pressure calibrator and selected two or three competing designs, you will need to compare and understand the manufacturers' performance specifications and the standards used. Carefully read and understand the wording of the accuracy statements. Then use the specifications to determine the expected potential error (± value) at a point of interest relevant to the conditions you will have in the field. This will go a long way toward identifying the best test instrument for your needs. Manufacturer specifications can be expressed as percent of calibrated span, percent of upper range limit, or percent of reading. These percentages may be degraded further if a temperature specification is given (typically shown as a percentage per degree C or degree F from the calibration temperature). Typically, the accuracy ratings of electronic pressure calibrators fall between ±0.025 percent and ±0.2 percent. Calibrators with lower accuracy values usually provide the most accurate measurements. When looking at accuracy specifications, you should ask the following questions:

- Does the accuracy statement include all sources of error?
- Does the statement specify how linearity, hysteresis, repeatability, temperature, resolution (number of digits), and even power variance affect the accuracy?
- Do you have comparable components of accuracy specified for each of the manufacturers under consideration?

The selection of the best-performing digital calibrator begins by comparing published accuracy ratings. This is not as simple as it sounds. Technicians need to learn the standard format for determining the accuracy at a point of interest using manufacturers' accuracy statements.
Sometimes your company will have standardized comparison procedures that define the standards to use. Other times you may need to devise your own comparison techniques. As an example, consider the effect of temperature on an accuracy statement. This effect is usually defined by specifying the accuracy over a temperature range, but some manufacturers state it as a separate coefficient, and others may simply not mention it. To be sure you identify the correct calibrator, you need to understand the specifications. For discussion, here are three examples of accuracy statements:

Example One: ±0.05 percent of full scale, plus 0.02 percent of full scale per degree Fahrenheit from the laboratory certification temperature of 77°F (25°C). Full scale range is 250 ″WC.

Example Two: ±0.05 percent of full scale throughout the temperature range of 30 to 130°F (-1.11 to 54.4°C). Full scale range is 250 ″WC.

Example Three: ±0.1 percent of full scale, plus 0.01 percent of full scale per degree Celsius from the reference temperature of 15.56°C. Full scale range is 250 ″WC.

Begin by calculating the potential error for each unit. For the purpose of this discussion, evaluate the performance of each unit at an ambient temperature of 86°F (30°C). The first example breaks down as ±0.05 percent times the full scale, plus 0.02 percent of full scale times the temperature difference of (86 − 77)°F: (0.05% × 250) + (0.02% × 250) × (86 − 77) = 0.125 ″WC + 0.45 ″WC = ±0.575 ″WC potential error throughout the 250 ″WC range. The second example works out to 250 × 0.05% = ±0.125 ″WC from 30 to 130°F. The third example shows (250 × 0.1%) + (0.01% × 250) × (30 − 15.56) = 0.25 + 0.361 = ±0.611 ″WC potential error. As you can see, the best performance is provided by example two: once the temperature coefficients are applied, the temperature terms dominate the potential errors of examples one and three. Notice how important the temperature specification is in determining the true accuracy at a desired condition or range of potential operating temperatures.
You must pay careful attention to the wording of the specification and to determining the potential error at conditions relevant to your calibration needs.
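The worked comparison above can be sketched as a short calculation. This assumes the per-degree coefficient is a percent of full scale per degree, as in the typical specification quoted earlier; the function name and defaults are illustrative.

```python
# Sketch: worst-case potential error for a spec of the form
# +/-(base % of full scale) + (% of full scale per degree away from
# a reference temperature). Reading the per-degree term as a percent
# of full scale is an assumption.

def potential_error(full_scale, base_pct, temp_coeff_pct=0.0,
                    ref_temp=77.0, ambient_temp=77.0):
    """Return the +/- potential error in the units of full_scale."""
    base_error = full_scale * base_pct / 100.0
    temp_error = (full_scale * temp_coeff_pct / 100.0
                  * abs(ambient_temp - ref_temp))
    return base_error + temp_error

# Example One at 86 F: +/-0.05% FS + 0.02% FS per deg F from 77 F
print(potential_error(250.0, 0.05, 0.02, ref_temp=77.0, ambient_temp=86.0))
# Example Two: +/-0.05% FS over its whole stated temperature range
print(potential_error(250.0, 0.05))
```

Running the same function over a range of ambient temperatures quickly shows which specification holds up at the field conditions you expect.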

Accuracy statements include conformity, linearity, hysteresis, and dead band. If these components of accuracy are listed separately, they need to be combined in a meaningful way to determine the overall accuracy. The Instrument Society of America (ISA) recommends the root-sum-square method for determining instrument accuracy, and some instrument manufacturers use the similar Total Probable Error method. The Total Probable Error equation is:

TPE = √(A² + B² + C² + …)

This equation allows the technician to determine the total probable error from the component potential (±) errors. Since it is unlikely that all component errors would occur in the same direction (either plus or minus) from their mean, the TPE is a good tool. The root-sum-square (RSS) method determines the total probable error by summing the squares of the individual errors and taking the square root of the total. It is assumed that the accuracy components are independent of each other and that the mean of each error term is zero. When combining the error factors, the units of measurement must be the same. Accuracy can be expressed in several different ways:

- Percent of Span
- Percent of Full Scale
- Percent of Reading
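A minimal sketch of the root-sum-square combination described above; the component error values here are hypothetical, and all components must share the same units.

```python
import math

def root_sum_square(*errors):
    """Combine independent +/- error components (same units) by RSS."""
    return math.sqrt(sum(e * e for e in errors))

# Hypothetical components, all in inches of water column:
# linearity 0.10, hysteresis 0.05, repeatability 0.02
tpe = root_sum_square(0.10, 0.05, 0.02)
print(round(tpe, 4))  # smaller than the straight sum of 0.17
```

Because the components are squared before summing, the RSS result is always tighter than a straight worst-case addition, which reflects the assumption that the errors are independent.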

[Figure: Error (±% of full scale) versus pressure reading, comparing the % Full Scale and % Reading accuracy expressions. Caption: Percent Reading vs. Full Scale.]
Percent of Span and Percent of Full Scale are equivalent terms. The most demanding expression is Percent of Reading: for a given percentage value and full scale, Percent of Reading is a more accurate expression than Percent of Full Scale. Most manufacturers specify % of Full Scale.

EFFECTS OF TEMPERATURE

All pressure measuring instruments are sensitive to temperature. When choosing an electronic pressure calibrator, the manufacturer's specifications should state the effect of temperature over a specified operating range. This important test data determines the temperature coefficient that changes the instrument output with a change in ambient temperature. NOTE: This is the ambient temperature, not the pipeline or gas temperature. The temperature coefficient should be determined over the operating pressure range of the test instrument. The technician should be familiar with two important factors of the temperature coefficient: the linearity and the time constant. As the ambient temperature changes, the temperature of the pressure sensor itself changes, and the sensor's correction for temperature differences will continue to change until the sensor stabilizes at the new temperature.
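The contrast between a Percent of Full Scale and a Percent of Reading specification, described above, can be sketched numerically. The 0.1 percent figure, the 100 PSI full scale, and the function names are illustrative.

```python
def error_full_scale(full_scale, pct):
    """+/- error from a % of full scale spec: constant over the range."""
    return full_scale * pct / 100.0

def error_reading(reading, pct):
    """+/- error from a % of reading spec: shrinks with the reading."""
    return reading * pct / 100.0

FS = 100.0  # hypothetical full scale, PSI
for reading in (10.0, 50.0, 100.0):
    print(reading, error_full_scale(FS, 0.1), error_reading(reading, 0.1))
# At low readings the % of reading error band is much tighter;
# the two expressions only coincide at full scale.
```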

If the ambient temperature changes again, the process repeats for the new temperature. Warm-up time for the electronic calibrator can also be important: most testers need their electrical components at ambient temperature for optimum operation.

TEMPERATURE COMPENSATION

Solid-state pressure sensors can use analog and/or digital compensation techniques. Analog compensation uses electronic components to approximately compensate for temperature effects on the pressure element. Digital compensation, however, can precisely correct the pressure using actual measured data determined in an environmental chamber at the manufacturer's facility. A microprocessor corrects the digital pressure output using an on-board digital temperature signal. Digital temperature compensation can effectively remove the temperature error over a specified temperature range (23°F to 122°F is typical).

FLUID DENSITY REFERENCE TEMPERATURE

The reference temperature a manufacturer uses is the temperature base for the unit of measurement selected. Do not confuse it with the ambient operating temperature or the temperature of the process. The reference temperature provides comparability between measurements regardless of the ambient or process temperature. If you select inches of water or millimeters of mercury, for example, the reference temperature precisely defines the density of the water or mercury. Water at temperatures of 4°C, 20°C, 60°F, or 68°F has different densities. To compare 20 ″WC at 60°F to 20 ″WC at 20°C would require converting one temperature reference to the other; otherwise, measurements of the same pressure by two different testers cannot be directly compared. The densities of water at common reference temperatures are listed in Table 1 below:

Table 1

Temperature    Water Density (g/cm³)
4°C            1.0000
15.56°C        0.99904
20°C           0.99823
23°C           0.9975
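Using the Table 1 densities, re-expressing an inches-of-water value from one reference temperature to another is a simple density ratio, since the same physical pressure corresponds to a taller column of less-dense water. The dictionary keys and function name below are illustrative.

```python
# Water densities (g/cm^3) from Table 1
DENSITY = {"4C": 1.0000, "15.56C": 0.99904, "20C": 0.99823, "23C": 0.9975}

def convert_wc(inches, from_ref, to_ref):
    """Re-express an inches-of-water value from one reference
    temperature to another (same physical pressure: p = rho * g * h)."""
    return inches * DENSITY[from_ref] / DENSITY[to_ref]

# 20 inches WC referenced to 60 F (15.56 C), at a 20 C reference:
print(round(convert_wc(20.0, "15.56C", "20C"), 4))  # -> 20.0162
```

The difference looks small, but it is on the same order as the accuracy specifications discussed earlier, so ignoring the reference temperature can consume the entire error budget.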

This is another important factor to take into account with a differential pressure sensor. The ISA standard temperature for inches of water column pressure measurement is 20°C (68°F). The American Gas Association standard is 15.56°C (60°F), and certain calibrator manufacturers reference 23°C (73.4°F). Calculating the percent difference between 23°C and 60°F, we find that water is 0.148 percent heavier at 60°F than at 23°C. If the technician applies 1 inch of water column pressure to a unit calibrated to a reference temperature of 60°F and to a unit calibrated at 23°C, the latter will show less than one inch of pressure.

RESOLUTION

Resolution is also important to our instrument. Resolution is the smallest increment that can be distinguished or displayed. Resolution may exceed accuracy, but accuracy cannot exceed resolution. Most field calibrators have a 3-1/2 or 4-1/2 digit display. A 3-1/2 digit instrument divides the output signal into 1,999 parts, and a 4-1/2 digit instrument divides it into 19,999 parts. The smallest increment of a 3-1/2 digit display is 1 part in 1,999, or 0.05% of full scale (1/1999 × 100 ≈ 0.05%). This limits the accuracy of the instrument to 0.05% of full scale reading. At 100 PSI, this instrument can display no more than PSI and tenths (100.0 PSI). The calculated resolution potential error is 100 × 0.05% = 0.05 PSI, or 99.95 to 100.05 PSI, but since the hundredths place cannot be displayed, the unit rounds the values to PSI and tenths. Therefore, the unit could display 99.9 to 100.1 PSI when 100 PSI is applied. If this same pressure were applied to a 4-1/2 digit tester (1 part in 19,999, or 0.005%), the unit could display PSI and hundredths, or 99.95 to 100.05 PSI, when 100 PSI is applied. The 4-1/2 digit display enhances the resolution of the instrument above that of the 3-1/2 digit. Be careful, though, because bigger and more expensive isn't always better.
The additional digits could prove meaningless if the measurement accuracy is worse than the resolution. Improvements in display and microprocessor technologies have made it easier for manufacturers to balance accuracy against resolution; however, it is still important to consider when selecting a calibrator.
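The display-resolution arithmetic above can be sketched as follows; the function name is illustrative, with 1,999 counts for a 3-1/2 digit display and 19,999 for a 4-1/2 digit display.

```python
def resolution_pct(counts):
    """Smallest displayable increment as a % of full scale for a
    display spanning the given number of counts."""
    return 100.0 / counts

print(round(resolution_pct(1999), 3))   # 3-1/2 digits -> 0.05
print(round(resolution_pct(19999), 3))  # 4-1/2 digits -> 0.005
```

Comparing this figure against the calibrator's accuracy specification shows immediately whether the extra half digit is carrying real information.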


STABILITY AND RE-CERTIFICATION

Stability is defined as the ability of an instrument to maintain its measurement accuracy over a specified time period. The shorter the stability time period, the more often the instrument needs to be recalibrated. This statement of stability may or may not appear in the manufacturer's specifications; if it is not stated, consult the manufacturer for recommended intervals. All instruments are re-certified after some time to ensure that the accuracy performance has not changed, and electronic calibrators need to be checked against a standard periodically, too. If environmental conditions affect the standard used for the check, they should be taken into account. Temperature (density) and the local value of gravity, for example, need to be considered when deadweight testers are used as the standard. Again, attention to instrumentation specifications is critical. Several electronic pressure calibrators are available with software features that allow the user to adjust the calibrator's pressure sensors in the field. This is a great feature to have because it gives you the ability to get local service from qualified calibration service providers or from your own metrology shop. Most testers allow small adjustments to the original calibration curve; large adjustments are usually not allowed, since the need for a large adjustment probably indicates a problem requiring factory service. Once the electronic pressure calibrator is re-certified, no further corrections are needed. Temperature operating limits and error specifications remain the same as originally specified.

MATERIAL COMPATIBILITY AND PRESSURE LIMITS

When selecting a digital pressure indicator, it is important to consider the fluids it will be in contact with. Many digital calibrators use pressure sensors that are compatible only with clean, dry, non-corrosive gases. The benefit of these sensors is their lower cost.
However, the electronic components used in the pressure measurement can come in direct contact with the process fluids. If corrosive materials or liquids contact the electronic components, the sensor may be destroyed, requiring costly replacement and causing downtime. To prevent this, the technician must isolate the unit to be calibrated from the process and blow down or clean out the unit prior to connecting it to the calibrator. Sensors that are compatible with many fluids are typically made with all-316SS wetted materials: a thin 316SS diaphragm and silicone gel separate the pressure-sensing component from the process. These sensors are more durable, but they are also more costly. The additional cost is usually canceled out by the increased durability and the time saved by not having to blow down the unit under test. In the past, the additional resistance of the metallic diaphragm reduced the accuracy, sensitivity, and repeatability of the sensor, particularly in low pressure measurement. New developments have overcome these restrictions, and all-stainless-steel wetted-material sensors are now available not only for gauge pressure measurement but also for low pressure DP measurements with a full scale of 28 ″H2O. Another benefit of all-stainless wetted materials in a differential pressure application is that this design allows the sensor to measure DP at much higher static pressures. One manufacturer's unit is rated to 1,000 PSIG static pressure with differential pressure measurement as low as 1 PSID.

DIGITAL FIELD INSTRUMENTATION

As the gas industry has moved to more modern field devices, digital communication has become more important. Flow computers are replacing chart recorders, and smart transmitters are replacing mechanical gauges and analog transmitters. Today's field-mounted smart pressure transmitters use digital technology to measure process variables and then convert them to analog outputs to recorders and flow computers.
The technician must be able to configure and maintain the transmitters using handheld communicators with digital protocols. A common misconception is that the configuration of a smart transmitter constitutes a calibration. In fact, most communicators used to configure smart transmitters address only the digital portion of the unit: the communicator reads what the microprocessor believes the pressure input to be and what the analog output should be. This is processed through a D/A converter and leaves the transmitter as an analog signal. To assure accuracy, the technician must measure the applied pressure and the analog output to be certain they match the digital signal of the transmitter. Until recently, this meant that the technician had to carry not only a pressure measurement device but also a handheld communicator. However, electronic pressure calibrators with digital

communications capability have become available. These calibrator/communicators reduce the number of tools the technician must carry to do the job. Because it is not necessary to switch back and forth between devices, the time spent maintaining (re-zeroing, re-spanning, and re-configuring) and calibrating digital transmitters is significantly reduced. These devices are available with modular designs that enable them to perform calibrations on other parameters such as temperature and to perform checks on voltage and current readings and loop tests.

SUMMARY

When the gas industry embraced electronic pressure calibrators, technicians were dealt new challenges in their work. Manometers were discarded, and deadweight testers were, for the most part, relegated to the instrument shop. New evaluation techniques were needed to ensure the selection of the best-performing electronic calibrator for the job. Today, the digital field instruments being installed require new understanding and better tools for the technician. With a patient review of manufacturers' specifications and features, the technician can be sure of spending his calibrator dollar on the unit that will not only meet his accuracy and performance needs but also fill the multi-functional roles of multimeter and digital communicator. Today's electronic pressure calibrators are up to the challenges you face.

