
0412CT_Whitepapers.qxp 11/10/04 10:13 AM Page 1

THE ONLINE RESOURCE OF CONTROL MAGAZINE

CONTROL Special Report:


Ten Steps to Avoid Unnecessary Plant Shutdowns


Do you have nightmares about emergency shutdowns? Are you worried that some of your
important systems and equipment are always on the ragged edge of failure? Welcome to
the club. You aren’t alone. With the downsizing of operations and the loss of much institutional knowledge in process automation, it is a rare plant that has enough manpower and know-how to keep the place running the way it should.

Downtime. Preventive Maintenance. Emergency Shutdowns. All of these cost big money. Not only do repairs cost more after a failure in equipment or instrumentation, but the real cost is that the plant can’t make product. I was once told by a plant manager in the specialty chemicals industry that every hour his plant was down reduced his output by three-quarters of a million dollars. Compared to just one instance of an unplanned shutdown, the cost of maintenance and upgrades is minuscule.

Increasing productivity is the only way we can keep our industrial infrastructure open
in the face of significant cost differentials in other parts of the world. Installing plant
automation has been the road to increasing productivity for decades. Now the return
is less, and we are turning to other ways to increase produc-
tivity. Clearly, one of the most important ways to increase productivity is to keep the
plant operating more of the time. This is called plant availability. Plant availability is
critical to keeping your plant open and your workers employed.

Here are ten steps to increasing plant availability and avoiding unnecessary shutdowns. These are important steps, but they aren’t the only ones you can take. If you do take them, you will see improved productivity and plant availability immediately, and you will also see additional steps you need to take.

Each of these steps has a proven ROI, and these articles have been chosen to illustrate how and why to implement each step, and to help you calculate the ROI from doing so. And
don’t forget to figure out how many unplanned shutdowns you are avoiding and crank
those into the ROI value.
Walt Boyes
Editor in Chief




Ten Steps to Avoid Unnecessary Plant Shutdowns



Step One: Move to Reliability-Centered Maintenance
Center on Reliability – Forget preventive maintenance. Today’s uptime requirements call for an entirely different approach
Rich Merritt, Senior Technical Editor

Step Two: Prevent Failure with Condition Monitoring
Prevent Failure – Emerging sensor and analysis technologies let operations personnel foresee and correct problems before equipment goes down
Dan Hebert, PE, Senior Technical Editor

Step Three: Optimize Redundant Controls
The Best Defense – In the field and on the plant floor, redundancy protects and secures critical processes
Dan Hebert, PE, Senior Technical Editor

Step Four: Monitor Power Quality
How and Why to Monitor Power – Quantity, quality, and reliability of this expensive raw material can make or break your plant’s bottom line
Rich Merritt, Technical Editor

Step Five: Rationalize Your Alarm Systems
How to Perform Alarm Rationalization – This critical part of alarm management is key to improved plant safety, environmental responsibility, and operations excellence
Bill Mostia Jr., PE

Step Six: Upgrade Your Legacy Systems
SPEAK! New Tricks to Get Information Out of Legacy Systems – How to retrofit your system for modern communications
Rich Merritt, Technical Editor

Step Seven: Connect Diagnostics from Your Smart Transmitters
State of the HART – The compelling case for the most used digital communications protocol in the world
Ron Helson

Step Eight: Keep Your Safety Systems Updated and Operating
The Safety Instrumented Function: An S-Word Worth Knowing – Understand the SIF to control confusion, complexity, and cost of Safety Instrumented Systems
Bill Mostia Jr., PE

Step Nine: Do Loop Performance Analysis
TiO2 Facility Finds Hidden Plant – Millennium Inorganic Chemicals increased capacity, improved quality, and reduced maintenance costs with loop performance analysis
Paul Berwanger

Step Ten: Install Better Online Process Analyzers
Steam Cracker Performance Optimized by Process MRA – Magnetic resonance analysis gives BASF AG a consistent, low-maintenance method to accommodate fluctuating feedstocks
John Edwards

Center on Reliability
Forget preventive maintenance. Today’s uptime requirements call for an entirely different approach
RICH MERRITT, SENIOR TECHNICAL EDITOR

Maintenance has come a long way since the “fix it when it breaks” mentality of the 1940s and the preventive maintenance philosophy of the 1970s and 1980s. Today, the new world of reliability-centered maintenance (RCM) calls for computers, software, and sensors to achieve maximum plant availability and reliability at the most effective cost.

A big surprise is that preventive maintenance (PM) can actually be bad for certain systems! Not only is it expensive, but it doesn’t work well with modern, high-tech equipment. So instead of PM, we are entering a whole new world of condition monitoring, loop analysis, and predictive maintenance.

Although RCM often works best with fieldbus architectures and high-level asset management software (because they can more easily obtain and process data), the techniques involved are not beyond the reach of a typical process control user, even those with legacy control systems. This is because it’s not how you acquire the data that’s important; it’s what you do with it.

Military Maintenance

Modern maintenance technology procedures began years ago in the military. The war in Iraq proves beyond a doubt just how effec-
tively these techniques work.

“I used vibration monitoring and maintenance management [15 years ago] in the propulsion power plants of U.S. Navy vessels,” says
Robert Rosenbaum, an automation consulting engineer in American Canyon, Calif. “The use of portable handheld vibration instru-
mentation was so successful that the Navy purchased several permanently installed vibration monitoring systems for its fleet.”

Rosenbaum also says he’s familiar with RCM as once used by United Air Lines to prevent failure in commercial aircraft systems.

Vibration monitoring is just one part of an RCM program. The overall RCM process includes procedures to determine the functions of equipment and the ways it can fail.

Commercial aircraft and process control systems both use similar components: pneumatics, electro-hydraulics, servomotors, control valves, pumps, miles of wire and cables, networks, computers, electronic controls, and flow, temperature, level, and pressure sensors.

Some process industry data seems to confirm just how random failures can be. “The majority of failures in valves and control loops is
not predictable and the probability of failure does not increase with time,” says Lane Desborough, manager of loop management serv-
ices, Honeywell Industry Solutions, Thousand Oaks, Calif. “There is little evidence that valve failure can be predicted reliably based
on accumulated stem travel alone.”

If failures of process equipment are random, so much for preventive maintenance. What do we do now?

Three-Pronged Attack

Preventive maintenance isn’t completely dead, of course. Rosenbaum, who bought into RCM 15 years ago, still believes in PM. “It
heads off trouble before it starts, and the return is well worth the money invested.”

Certain equipment does have a wear-out zone, and prudence dictates that it should be maintained before it breaks.

“We count valve operation cycles automatically, using our data historian, an OSI PI system,” says Don Erb, manager of production
planning and information, Ciba Specialty Chemical, McIntosh, Ala. “When the cycle count reaches a certain trigger value, the valve is
scheduled for maintenance during the next opportunity.”
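Erb’s trigger-value scheme is easy to picture in code. The sketch below is purely illustrative (it is not the OSI PI API); the status values and trigger count are invented.

```python
# Hypothetical sketch of cycle-count-triggered maintenance (not the OSI PI API).
# A valve "cycle" is counted on each open -> closed transition of its status tag.

def count_cycles(status_history):
    """Count open -> closed transitions in a sequence of valve states."""
    cycles = 0
    for prev, curr in zip(status_history, status_history[1:]):
        if prev == "open" and curr == "closed":
            cycles += 1
    return cycles

def needs_service(status_history, trigger=10000):
    """Flag the valve for the next maintenance window once the
    accumulated cycle count reaches the trigger value."""
    return count_cycles(status_history) >= trigger

history = ["closed", "open", "closed", "open", "closed"]
print(count_cycles(history))              # 2
print(needs_service(history, trigger=2))  # True
```

In a real plant the status history would come from historian queries rather than an in-memory list, but the trigger logic is the same.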

Ciba’s valve performance is evaluated based on historical data. “After we have one fail at a number of cycles, we check valves in simi-
lar service next time before they reach the same number, with the objective of service before failure occurs,” explains Erb. “One major
objective of our plant is reliability improvement. Our reliability has been improving over the past year, and although this is certainly
not the only program in place, it is contributing.”

Many process plants have developed similar PM programs for valves, only to find that 30% of the valves that are taken apart for pre-
ventive maintenance have absolutely nothing wrong with them (Chemical Processing, November 2001). As Honeywell’s Desborough
points out, this is probably because these programs drive maintenance actions based on device usage, not on control loop performance
degradation.

Therefore, what we need is a better way to determine when assets actually require maintenance. This requires a three-pronged attack:

1. Sensors, tracking systems, or on-board diagnostics on each asset that help identify the presence of a problem.

2. A data acquisition system to collect asset information.

3. Software to analyze the data, determine that a problem exists, and suggest maintenance procedures to correct the situation.
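The three prongs map naturally onto a small data pipeline: sensors produce readings, a collection layer gathers them, and analysis software flags the outliers. A minimal sketch, with invented readings and a simple three-sigma rule standing in for real analysis software:

```python
# Illustrative three-pronged RCM flow: sense -> collect -> analyze.

from statistics import mean, stdev

def collect(readings_by_asset):
    """Prong 2: a trivial 'data acquisition system' -- here, just a dict
    of asset name -> list of sensor readings."""
    return dict(readings_by_asset)

def analyze(data, sigma_limit=3.0):
    """Prong 3: flag assets whose latest reading drifts more than
    sigma_limit standard deviations from their historical baseline."""
    suspect = []
    for asset, readings in data.items():
        baseline, latest = readings[:-1], readings[-1]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(latest - mu) > sigma_limit * sd:
            suspect.append(asset)
    return suspect

# Prong 1: on-board sensors supply the raw numbers (invented here).
data = collect({
    "pump_101": [4.1, 4.0, 4.2, 4.1, 9.8],   # vibration jumps -- suspect
    "valve_17": [1.0, 1.1, 0.9, 1.0, 1.05],  # steady -- healthy
})
print(analyze(data))  # ['pump_101']
```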

All of the above pieces are readily available on the open market. Plants with fieldbus-based hardware, a frameworks-based control
hierarchy, and asset management software already have the infrastructure in place to do RCM. Those with legacy systems can buy the
necessary hardware and software and install it on their process. As with all things in this industry, you can get the RCM capability you
need by spending anywhere from a few thousand to a few million dollars.

Sensing Problems

In olden days, supervisors would dispatch technicians to the field to check on problems. But not anymore. “The days of having instru-
ment technicians run to the field every time there is a problem are long gone,” says Rami Mitri, director of asset optimization, New
England Controls, Mansfield, Mass. Downsizing and reduced budgets have taken a toll on maintenance operations in many plants, he
says. As staff and budgets decrease, equipment problems increase. “Many customers neglect to link downsizing to reduced mainte-
nance on critical equipment that can either shut down or delay production.”

To overcome problems caused by downsizing and budget cuts, Mitri says end users have to adopt new, enabling technologies for
maintenance. In many cases, this means being able to identify problems before they occur, so maintenance dollars go further.

Several ways exist to determine if a device or system is having problems:

* Manual observation (leaking, making noise, boiling over, etc.).



* Condition sensing (running hot, vibrating, losing pressure, etc.).

* Internal diagnostics (the device itself detects problems).

* Performance analysis (valve sticking, slow control response, hunting, etc.).

PG&E, the giant utility in California, uses manual techniques to check its gas distribution operations, says Brian Steacy, general
manager of DST Controls, Benicia, Calif. DST supplied PG&E with a PDA-based data acquisition system.

“PG&E opted out of fully automating its data acquisition because it would have been cost-prohibitive and, more importantly, not
entirely safe,” says Steacy. “Much of PG&E’s compressor station instrumentation is too far flung to be hardwired, and many of the thou-
sands of gauges that must be read daily are old, mechanical, or otherwise too costly to match up with transducers or hang on a network.”

Using a handheld system provides a regular human presence that keeps an eye on things and helps avoid disasters. Inspections have turned up leaking compressor lubricant, unusual conditions, and graphic evidence that a cat had strayed into a compressor cooling fan. “The fan kept running, so the alarm wasn’t triggered, but visual inspection revealed the necessity to shut the fan down for cleaning, repair, and balancing,” says Steacy.

Wandering cats aside, manual observations are becoming the solution of last resort these days. Therefore, users must seek out ways
to detect problems remotely, or predict them based on operating conditions.

One of the best ways is via condition monitoring, as explained in “Prevent Failure” (see Step Two). That article explains how vibration analyzers and sophisticated data analysis can predict equipment problems in advance.

Condition monitoring, of course, often requires sensors to be installed on equipment to detect the conditions. Fortunately, this is
getting much easier for end users. Many devices now come with HART or fieldbus interfaces, both of which can transmit diagnostic
information.

Manufacturers also are building diagnostics into various devices, such as power supplies. The S8VS power supply from Omron Elec-
tronics, for example, can monitor percent usage and available life remaining.

For devices that do not have embedded diagnostics, users can install the necessary sensors on vital assets. It’s not something you
would want to do on thousands of devices in a typical plant, but condition sensors can be installed on assets of particular interest. If a
certain pump, valve, compressor, or similar device is failing and causing problems, it could be fitted with vibration or voltage sensors
on a permanent or temporary basis until the problem is diagnosed. For example, Allen-Bradley’s MachineAlert relays can be installed
in a control panel to monitor phase, current, temperature, and motor rotation in any motor control application.
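The kind of limit checking such a monitoring relay performs can be sketched generically. This is not Allen-Bradley’s API; the limits and scan values below are invented.

```python
# Generic sketch of a motor-protection check in the spirit of a
# monitoring relay (limits and readings invented, not a vendor API).

MOTOR_LIMITS = {
    "current_amps": 40.0,      # trip above full-load current
    "winding_temp_c": 120.0,   # trip above insulation temperature limit
}

def check_motor(reading):
    """Return a list of violated limits for one scan of motor data."""
    faults = [name for name, limit in MOTOR_LIMITS.items()
              if reading.get(name, 0.0) > limit]
    if reading.get("phase_loss"):
        faults.append("phase_loss")
    return faults

scan = {"current_amps": 43.5, "winding_temp_c": 95.0, "phase_loss": False}
print(check_motor(scan))  # ['current_amps']
```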

It’s also possible to make manual vibration measurements on certain key machines. For example, SKF’s MicroVibe portable vibra-
tion test and measurement instrument can be used with a PDA; this lets a technician run out into the plant periodically to check criti-
cal systems. The Ultraprobe 1000 from UE Systems has its own on-board recording, logging, and application software for ultrasonic
condition analysis locally or later at a computer.

When buying new or replacement equipment, it’s a good idea to seek out devices that have built-in sensors and embedded diagnostics.

“Investing in assets that can communicate when they require attention, such as maintenance or calibration, is critical to proactive
strategies,” says Mark Bitto, product manager of asset optimization products at ABB, Wickliffe, Ohio. “Intelligent field devices, con-
trol systems, workstations, and network hardware all contain a rich set of embedded diagnostic information. Unfortunately, unless the
device is enabled to report these health conditions, the information will go unnoticed for long periods of time.”

This means all that condition sensing and diagnostic data needs to be acquired for further analysis.

Plucking Data by PDA

At one end of the data acquisition cost spectrum, PDAs are rapidly replacing notebooks and clipboards in the maintenance arsenal.
PG&E technicians, for example, use them to record daily readings and make on-the-spot checks of equipment.

Steacy says software in PG&E’s PDAs can check the current reading to see if it is within limits for each device. “If the operator makes
an entry the system deems out of range, DST’s dBehold software will alarm and prompt for data re-entry,” he explains. “This prompts

the technician to make a visual inspection of the meter to determine if it was a transcription error or if the meter is having a problem. If
an equipment fault is discovered, the tech can flag it for maintenance.” Maintenance departments everywhere are using similar hand-
held PDAs and laptop computers.
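The out-of-range prompt Steacy describes amounts to a simple limit check on each manual entry. A hypothetical sketch (this is not DST’s dBehold; the tag name and limits are invented):

```python
# Sketch of the out-of-range check a handheld data-entry tool might apply
# (illustrative only; tag and limits are invented).

GAUGE_LIMITS = {"compressor_oil_psi": (30.0, 70.0)}

def validate_entry(tag, value):
    """Accept the reading if it falls inside the tag's limits;
    otherwise alarm and prompt the technician to re-enter or flag it."""
    lo, hi = GAUGE_LIMITS[tag]
    if lo <= value <= hi:
        return "accepted"
    return "alarm: re-enter or flag for maintenance"

print(validate_entry("compressor_oil_psi", 55.0))  # accepted
print(validate_entry("compressor_oil_psi", 12.0))  # alarm
```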

Many maintenance departments realize the benefits of automated maintenance technology, but simply can’t afford it, so they stick
with their manual systems.

“We are looking at replacing any failed transmitters and new installs with fieldbus transmitters, mainly because of wiring and future
advances in information provided,” says Matt Smith, process control supervisor, Amalgamated Sugar Co., Twin Falls, Idaho. “We
looked at Emerson’s AMS, but couldn’t justify the per-point costs because we have about 2,500 transmitters and 1,000 control ele-
ments. We employ 10 instrumentation technicians and are currently implementing an electronic work-order maintenance manage-
ment system. I guess the bottom line is, we have the labor to do it manually.”

When we asked end users for inputs on how they were acquiring data for maintenance purposes, several agreed with Smith, telling
us they simply could not afford to install fieldbus instrumentation and asset management software.

Few are as lucky as James Loar, engineering group leader at Ciba Specialty Chemicals, Newport, Del. “We are in the process of
installing a system to monitor reliability of process control and instrumentation,” he told us. “We are installing a system with Foundation
fieldbus, DeviceNet, Profibus, and AS-i. A new corporate standard for control systems forced us into the luxury of having this capability.”

Fieldbus, DAQ, and Asset Management

Clearly, the dream solution for an RCM system is fieldbus instrumentation connected to a distributed control system (DCS), an asset
management software package, loop analyzer, performance analyzer, and a computerized maintenance management system (CMMS),
all of which costs only slightly less than one of Saddam’s gold-plated bathrooms.

“Predictive maintenance technologies that integrate with the process automation system offer distinct advantages to users,” says
Stuart Harris, vice president of Emerson Process Management’s Asset Optimization Div. (www.emersonprocess.com). “We see a clear
trend toward operators playing a first-line role in reliability and maintenance. Therefore, the ability to send predictive equipment advi-
sories to operators is very valuable. Emerson accomplishes this goal in its PlantWeb digital plant architecture, which combines process
automation with asset management.”

All the major process control vendors have their own asset management/CMMS software, or they form alliances. Honeywell integrated Asset Manager PKS and its Loop Scout service directly into its Experion PKS automation system. Invensys combined its ArchestrA framework architecture with its Wonderware HMI/SCADA software and Avantis asset management software. ABB has allied with Accenture, and Intergraph uses AM software from Meridium. Suffice it to say, if you have a process control system from a major vendor, you can get everything you need to do RCM.

Much less expensive solutions are readily available. For example, at National Manufacturing Week, we saw a number of vendors--
HMW, InduSoft, Applied Data Systems, Advantech, H-P, and Siemens--combine their products into a data acquisition system just by
plugging them together via Ethernet and .Net, and writing a little software. It took two weeks to set it up, said the InduSoft program-
mer who put it all together.

MTS and National Instruments put together a joint solution that combines MTS’ noise and vibration I-Deas software with NI’s I/O
cards and LabView software. It can sample, process, and analyze up to 5,000 channels of vibration data.

Much of what a maintenance department needs is already contained within a system’s real-time database or its process historian. Temper-
atures, pressures, control signals, and a host of other process data can be used by software to analyze loop performance and detect problems.

You can get the rest of the data you need by installing condition-based monitoring systems on certain assets, or dig diagnostic data
out of your HART and fieldbus instrumentation. If you can find it, that is.

“Last year, every major DCS player introduced HART I/O, either their own or as a reference,” says Louis Szabo, vice president of
Meriam Instrument (www.meriam.com). “There are HART multiplexers to bring HART diagnostic information into existing sys-
tems.” The problem is, HART device descriptions (DDs) supplied to the HART Communication Foundation can’t handle all the capa-
bilities of the devices. “So vendors are coming up with different schemes. The HART Communication Foundation, Fieldbus
Foundation, and Profibus International are promoting enhanced or extended device descriptions (eDDs) while Invensys and Euro-
pean-based vendors are supporting FDT/DTM alternatives.”

In other words, the information is out there, buried inside HART and fieldbus instrumentation, and all you have to do is extract it.
It’s not always easy, especially in mixed legacy systems.

“I’m using Foundation fieldbus [FF] in our process plant, and I have problems with the instruments due to the architecture,” laments Jorge Cano, process control engineer at MetMex Peñoles in Torreón, Coahuila, Mexico. “We have a Rockwell ControlLogix PLC, but all interface to FF is with National Instruments FF Configurator software.” Cano is using Rosemount instrumentation and Rockwell RSView32 software. “The results are bad. The maintenance costs are very high, process improvements are difficult to implement, and failures in our process and equipment occur many times. I have plans to migrate to PlantWeb with our platform.”

With all due respect to the equipment named, such problems are not rare and are not caused by the equipment. We hear from many
engineers that bringing up a fieldbus system of any kind can be a bear. But there has been progress. Perhaps in a year or so there may be
better software available that will let you obtain the necessary maintenance information from HART and fieldbus more easily.

Once you obtain the necessary field data, a host of software packages is available to help you interpret data, analyze conditions, pre-
dict problems, and recommend solutions. These range from CMMS packages that help schedule maintenance procedures to perform-
ance monitoring software that analyzes plant data and looks for loops that are not performing up to snuff.
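Ranking loops that are “not performing up to snuff” can be illustrated with a deliberately crude metric; commercial packages such as Loop Scout and PlantTriage use far more sophisticated statistics. The loop tags and error data here are invented.

```python
# Crude illustration of ranking control loops by performance
# (real loop-analysis packages use much richer statistics).

def loop_score(errors):
    """Score = mean absolute control error; higher is worse."""
    return sum(abs(e) for e in errors) / len(errors)

def worst_loops(loop_errors, top=2):
    """Return the worst-performing loop tags, worst first."""
    ranked = sorted(loop_errors,
                    key=lambda tag: loop_score(loop_errors[tag]),
                    reverse=True)
    return ranked[:top]

loops = {
    "FIC-101": [0.1, -0.1, 0.2, -0.2],   # tight control
    "TIC-205": [2.0, -1.5, 2.5, -2.0],   # oscillating badly
    "LIC-300": [0.5, 0.4, 0.6, 0.5],     # steady offset
}
print(worst_loops(loops))  # ['TIC-205', 'LIC-300']
```

The point of such a ranking is triage: maintenance attention goes first to the loops at the top of the list.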

RCM is such a major change from the old, easily understandable preventive maintenance techniques, it’s no wonder that engineers
are reporting mixed results.

“We use Emerson’s AMS on control system equipment,” says Joe Pittman, principal safety systems specialist at Lyondell/Equistar Chemical, Channelview, Texas. “Other than an automated documentation system, I have seen little benefit on the sensor side. It has provided benefit on the valve side, with the ability to do valve scanning and define which valves need to be pulled and repaired during turnarounds.”

Such systems also require a major change in attitude. “Syncrude has an AMS server from Emerson in parallel with its Honeywell
TDC 3000 system,” says Ian Verhappen, instrument engineer at Syncrude in Fort McMurray, Alberta. “It has not been integrated with
the remainder of the maintenance software system for two reasons: first, bureaucracy; second, buy-in from the maintenance team and
their supervisors, who do not understand these systems require work to get results. As with many engineering projects, the biggest
hurdle is not introducing the technology, but rather the culture change required after the fact to use it effectively.”

In other words, the tools to implement an RCM system are available. You just have to conquer a few minor obstacles--such as fieldbus idiosyncrasies, bureaucracies, old equipment failure theories, politics, and maintenance department mindsets--to make it work.



Prevent Failure
Emerging sensor and analysis technologies let operations personnel
foresee and correct problems before equipment goes down
DAN HEBERT, PE, SENIOR TECHNICAL EDITOR

There is an inverse relationship between maintenance and downtime in most process control plants. The more time and money spent
on maintenance, the less downtime due to unplanned shutdowns. Unfortunately, frequent maintenance performed according to time-
based schedules can be prohibitively expensive and can also fail to prevent equipment failures.

Condition-based monitoring coupled with sophisticated data analysis tools can allow process plants to adjust the
maintenance/downtime relationship in their favor. “We use condition monitoring and other tools to help us shift our process opera-
tions and maintenance from reactive to predictive,” says Leoncio Estevez-Reyes, P. Eng., an engineering specialist in the corporate
process information, diagnostics, and control unit for Weyerhaeuser in Federal Way, Wash. Weyerhaeuser uses hardware and software
systems from Honeywell (www.honeywell.com/imc) to implement condition monitoring at its pulp and paper plants.

Without condition monitoring, users are forced to perform maintenance on an arbitrary and usually incorrect basis. “Condition
monitoring allows companies to save money by determining repairs based on true need and not on a somewhat arbitrary period of
time. It can also help prevent costly process outages by identifying deviations in performance before they get to the point of affecting
production,” adds Estevez-Reyes.

Predictive maintenance based on condition monitoring has been around for a long time, but some fairly recent developments are
making these systems easier and cheaper to implement. First among these developments is smart instruments. It is difficult to predict
valve failure if the only data available from the valve is open/close status. Smart instruments can deliver much more detailed informa-
tion about a valve, and this information can be used to predict valve failure.

Another significant development is low-cost computing and data storage hardware. The most effective condition monitoring sys-
tems analyze large amounts of process data. These data must be collected on a frequent basis and stored on non-volatile media. PCs
with standard operating systems can now perform these data collection and number crunching functions, and storage hardware is
cheap enough to allow retention of sufficient amounts of data for effective analysis.

The third development is sophisticated data analysis software that can automatically examine large amounts of collected data and
present analyses to plant personnel in a concise fashion. This actionable information can be used to schedule maintenance and prevent
unplanned outages.

The chief benefits of condition monitoring are lower maintenance expenses and reduced downtime. “We have reduced maintenance expenses significantly by saving up to 75% in unnecessary control valve repairs,” says Estevez-Reyes. “We have focused maintenance during planned shutdowns, identified and prevented process incidents that could have caused downtime, and reduced the occurrence of process alarms by more than 50%.”

These benefits are real and available, but it is impractical for most users to initially implement sophisticated and expensive condition monitoring systems on a plant-wide basis. Most users instead should first implement these systems in areas of greatest need, and then expand use to other areas if and when the condition monitoring system proves its effectiveness.

Analyze This

In theory, condition monitoring can be used to analyze the operation of virtually any plant component or system. In practice, most condition monitoring systems are used to monitor valves and rotating equipment because these moving parts are continuously subject to wear and tear. More esoteric condition monitoring can examine fixed components (such as motor and cable insulation) subject to deterioration from forces other than friction.

Every process plant contains a number of control valves, and condition monitoring is a natural fit for predicting valve failure. Most
control valves are part of control loops, so it often makes sense to simultaneously implement valve and control loop monitoring.

“Weyerhaeuser and Honeywell co-developed a control loop monitoring system that surveys the performance of every control loop in
the mill on a regular schedule,” reports Anthony Swanda, an R&D process control engineer with Weyerhaeuser. “If a loop has poor
performance and a high probability of a sticky valve, a maintenance work order is automatically generated.”

Before the condition monitoring system was implemented, maintenance was performed on an unscientific basis. “Condition monitoring systems provide a standard and systematic approach to plant maintenance, as opposed to the most persistent operator or superintendent getting the attention,” Swanda adds. “Also, the information provided by the systems (e.g., total valve movement, degree of stiction) can be quite valuable in quickly diagnosing the problem.”
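The survey-then-work-order pattern Swanda describes can be sketched with a toy stiction test: a sticky valve tends to sit still and then jump, rather than travel smoothly. The thresholds and position data below are invented, and real systems use much richer diagnostics.

```python
# Sketch of the monitor-then-work-order pattern (illustrative thresholds;
# not Weyerhaeuser's or Honeywell's actual system).

def stiction_suspected(positions, jump=0.5):
    """A sticky valve sits still, then jumps: look for both long stuck
    intervals and large step changes in the position trace."""
    moves = [abs(b - a) for a, b in zip(positions, positions[1:])]
    jumps = sum(1 for m in moves if m > jump)
    stuck = sum(1 for m in moves if m < 0.01)
    return jumps >= 2 and stuck >= 2

def survey(valve_positions):
    """Generate work orders for valves whose travel looks sticky."""
    return [f"work order: inspect {tag}"
            for tag, pos in valve_positions.items()
            if stiction_suspected(pos)]

data = {
    "FV-12": [10.0, 10.0, 10.0, 12.0, 12.0, 12.0, 14.0],  # stick/slip
    "FV-13": [10.0, 10.2, 10.4, 10.6, 10.8, 11.0, 11.2],  # smooth travel
}
print(survey(data))  # ['work order: inspect FV-12']
```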

Most of the major control system vendors now offer some type of control loop analysis software, and these software tools often iden-
tify control valves as the culprit of poor or degrading loop performance. ExperTune (www.expertune.com) specializes in control loop
tuning and analysis software, and its PlantTriage product can run on a standalone or integrated basis to identify the worst performing
control loops in a plant.

Control valves are one of the primary points of failure for many unplanned shutdowns, and most condition monitoring systems work
well with smart valves and actuators to predict valve malfunctions. The other primary point of failure common to virtually all process
plants is rotating equipment. Pumps, generators, and compressors are found throughout most plants, and a host of sophisticated con-
dition monitoring tools can be used to examine these key components.

If It Rotates, It Will Break

All rotating equipment is subject to mechanical failure, making this equipment a prime candidate for condition monitoring. Rotating
equipment also vibrates, and these vibration patterns change as the equipment deteriorates. This makes vibration analysis a favorite
form of condition monitoring.
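The core idea of vibration analysis is that deterioration shows up as growing energy at characteristic fault frequencies. A minimal, self-contained illustration, with simulated data and a single-bin DFT standing in for a full spectrum analyzer:

```python
# Minimal illustration of vibration-based fault detection: track the
# signal amplitude at a known bearing-fault frequency and alarm when it
# grows. (Real analyzers do full spectral analysis; data is simulated.)

import math

def tone_amplitude(signal, freq_hz, sample_rate_hz):
    """Amplitude of the signal component at freq_hz (single-bin DFT)."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
             for i, s in enumerate(signal))
    return 2 * math.hypot(re, im) / n

# Simulated accelerometer traces: a 50 Hz running-speed tone, plus a
# growing 120 Hz bearing-defect tone in the worn machine.
rate, fault_hz = 1000, 120
t = [i / rate for i in range(1000)]
healthy = [math.sin(2 * math.pi * 50 * x) for x in t]
worn = [math.sin(2 * math.pi * 50 * x)
        + 0.8 * math.sin(2 * math.pi * fault_hz * x) for x in t]

print(round(tone_amplitude(healthy, fault_hz, rate), 2))  # 0.0
print(round(tone_amplitude(worn, fault_hz, rate), 2))     # 0.8
```

Trending that one number over weeks is, in miniature, what a condition monitoring system does across thousands of channels.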

MeadWestvaco Corp. is headquartered in Stamford, Conn., and is a leading global producer of packaging, coated and specialty
papers, and specialty chemicals. The company uses National Instruments (www.ni.com) condition monitoring software and hardware
in all of its plants worldwide to analyze parameters such as the vibration intensity of key machine bearings and the standard deviation
of the high-speed fluctuations in pump pressures.

According to Jim Fortuna, a research fellow with MeadWestvaco Research in Chillicothe, Ohio, one of the chief benefits of condition monitoring is more effective maintenance. “Machine and process condition monitoring is being integrated into predictive maintenance programs,” he observes. “This helps drive activities that focus on preventing mechanical failure and improving process robustness. Closer interactions among the maintenance, operations, and technical groups have been facilitated through process/machine condition monitoring.”

MeadWestvaco uses condition monitoring to trend a number of key variables in addition to vibration and pressure. “Process
flows, machine speeds, critical process performance measures, and other online real-time product data can all be correlated to
process variables,” Fortuna adds. “Online models can be created and trended to relate machine performance to the quality of the
product being produced.”
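
One simple way to relate machine condition to product quality, as Fortuna describes, is to correlate a monitored machine variable against a quality measure over the same period. A minimal sketch follows; the trend data are invented for illustration and are not MeadWestvaco's figures.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Invented trend data: bearing vibration creeping up, reject rate following.
vibration_mm_s = [1.0, 1.1, 1.3, 1.6, 2.0, 2.5]
reject_pct = [0.2, 0.2, 0.3, 0.5, 0.8, 1.2]
print(round(pearson(vibration_mm_s, reject_pct), 2))  # close to +1.0
```

A coefficient near +1 suggests product quality is tracking machine deterioration, which is exactly the kind of evidence that justifies a low-cost machine improvement.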

The ultimate goal of condition monitoring at MeadWestvaco is to use data analysis to improve plant operations. “Process capability can be trended for each key machine component, and this permits maintenance, engineering, and operations to optimize machine improvements using actual data instead of speculation,” Fortuna says. “It is not uncommon for us to find multimillion-dollar productivity improvements by making low-cost machine improvements based on analysis of process/machine diagnostic measurements.”

Most of the condition monitoring systems at MeadWestvaco are permanent installations, but many plants can implement condition
monitoring with less costly temporary set-ups. “Permanent condition monitoring systems can be very expensive, but many items of
equipment can be effectively monitored with low-cost 4-20 mA vibration level sensors and portable temporary systems,” says Ken
Gale, product development director with Expert Monitoring (www.expertmon.com). Expert Monitoring is a systems integrator and
uses National Instruments hardware and software to monitor and analyze rotating equipment.

In a typical application, a user monitors the duty load cycle of the process or the machine to discern how the machine performs for various product batches or quality types. The site engineering team members then use the analysis results to explore variations in machine behavior. This allows the team to build a virtual “machine blueprint” so that deviations can be observed through visual comparison.

Combining condition monitoring with expert remote analysis can increase the effectiveness of preventive maintenance and troubleshooting efforts. Tri-State G&T uses a monitoring system from Bently Nevada (www.bently.com) to measure radial vibration, speed, thrust, differential expansion, and shell expansion on three 411 MW General Electric turbines located at its generating plant in Craig, Colo. The plant has some on-site expertise, but assistance from vendor personnel helps to expedite problem resolution.

“We have used the remote diagnostics inherent to the condition monitoring system twice now on vibration problems,” says
Gary Crisp, Tri-State senior mechanical engineer. “It saves time and money because a field rep does not have to be on-site.
Remote condition monitoring also allows several Bently personnel to review the problem at the same time, which results in better
and faster response.”

Get Your Bearings

Most rotating machinery uses bearings as interface points between the fixed and rotating components of the machine. These bearings are subject to continual frictional forces and must be maintained. Condition monitoring can be used to predict bearing failures, allowing maintenance of these critical components to be performed on a proactive instead of a reactive basis.

SKF (www.skf.com) makes rolling bearings and associated products, including condition monitoring software. Its manufacturing plant in Hanover, Pa., eats its own dog food by using SKF condition monitoring software to monitor machine parameters. “We use condition monitoring throughout our continuous-flow manufacturing cells,” says Jeff Ketterer, Hanover plant manager. “We monitor the key operating characteristics of our production equipment so we can proactively determine if a problem or breakdown is about to occur.”

The plant also uses an SKF decision-support system in conjunction with condition monitoring. The decision-support system automates some of the manual tasks that must be performed when using condition monitoring. “The decision-support software speeds up the analysis process and also informs channel managers in real time about the condition of key assets,” says Tony Clabaugh, a project manager with SKF Reliability Systems.

There are many unpredictable events in the day-to-day operation of a manufacturing plant, but one thing is certain: periodic equipment failure. “We know that all components of equipment (electrical, mechanical, hydraulic) have lifespans. They will fail,” adds Ketterer. “When we can predict their failure using condition monitoring, we can take assets off line for service at our convenience.”

Even if It Doesn’t Move...

Control valves and motors are a natural fit for condition monitoring because these components have moving parts and require frequent maintenance. Many other plant components exhibit wear due to forces other than motion.

The insulation found in most cables and motors is subject to electrical “pressure” in the form of the voltage differential between energized and grounded components. This voltage differential would cause current flows to ground and subsequent equipment shutdowns if not for the presence of insulation.

Stora Enso North America uses a condition monitoring system from Cutler-Hammer (www.cutlerhammer.eaton.com) to measure
the quality of insulation in critical medium and high-voltage motors at its Biron paper mill in Wisconsin Rapids, Wis. The Biron mill
manufactures coated magazine papers.

The company compares current to previous test data to determine if the insulation system is degrading. “If we can catch insulation
failure early enough, we can make plans to repair or replace the motor prior to a major catastrophe or loss of production,” according to
Patrick Zdun, an electrical engineer.
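
The comparison Zdun describes amounts to trending insulation-resistance readings and flagging a motor whose latest value has fallen well below its historical best. The sketch below illustrates that logic only; the 30% drop limit and the megohm readings are invented placeholders, and real acceptance criteria come from IEEE 43-style guidance and site history, not from this example.

```python
def insulation_degraded(readings_megohm, max_drop_pct=30.0):
    """Flag a motor whose latest insulation-resistance reading has fallen
    more than max_drop_pct below the best reading on record.

    The 30% limit is an illustrative placeholder, not a standard criterion.
    """
    best = max(readings_megohm)
    latest = readings_megohm[-1]
    drop_pct = 100.0 * (best - latest) / best
    return drop_pct > max_drop_pct, drop_pct

# Quarterly megohm readings for one motor, newest last:
flagged, drop = insulation_degraded([520.0, 505.0, 480.0, 310.0])
print(flagged, round(drop, 1))  # True 40.4
```

A flagged motor goes on the watch list for more frequent readings or planned replacement, exactly the proactive posture the article describes.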

Prior to the installation of the continuous condition monitoring system, Stora Enso had to test motor insulation off-line. “We no longer have to shut down motors and perform destructive high-potential tests. This gives our engineering and maintenance departments control over when and how often a motor will be tested. We can also monitor data more frequently and keep a closer eye on suspect motors,” adds Zdun.

Some process plant components are susceptible to wear and subsequent performance degradation due to corrosion and other forms
of chemical decay. These components can also be good candidates for condition monitoring. A major West Coast refiner uses Invensys
condition monitoring software to determine when to clean crude unit heat exchangers.

According to the process engineering and optimization department, condition monitoring lets them determine the optimum time interval between cleanings for each of the heat exchanger bundles. The condition monitoring software is also used to monitor tray efficiencies in the distillation columns. The plant uses this information to drive turnaround planning.
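
Fouling monitoring of this kind typically boils down to trending the exchanger's overall heat-transfer coefficient against its clean baseline. The generic sketch below is our illustration, not the refiner's actual method; all temperatures, duties, areas, and the 70% cleaning trigger are invented numbers.

```python
import math

def lmtd(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    """Log-mean temperature difference for a counter-current exchanger."""
    dt1 = t_hot_in - t_cold_out
    dt2 = t_hot_out - t_cold_in
    return (dt1 - dt2) / math.log(dt1 / dt2)

def u_value(duty_kw, area_m2, lmtd_c):
    """Overall heat-transfer coefficient in kW/(m2*K)."""
    return duty_kw / (area_m2 * lmtd_c)

# Invented service data for one bundle: clean baseline vs. today.
clean_u = u_value(5000.0, 250.0, lmtd(200.0, 120.0, 40.0, 90.0))
today_u = u_value(3200.0, 250.0, lmtd(200.0, 150.0, 40.0, 72.0))
needs_cleaning = today_u < 0.7 * clean_u  # 70% trigger is illustrative
print(needs_cleaning)  # True
```

Trending the ratio of today's U-value to the clean value, bundle by bundle, is what lets the plant pick a cleaning interval from data rather than from the calendar.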

Predicting the Future

Proper vendor selection and smart implementation have yielded successful results for many users, but further gains could be realized
by vendor improvements and enhancements. “There should be better integration of information and analysis between the device level
and the DCS level,” says Weyerhaeuser’s Swanda. “In our case, that specifically refers to integration of valve position from the device
level with information that Honeywell’s Loop Scout package uses to assess and diagnose loops.”

A process engineer at a major West Coast refiner also was concerned with integration issues and suggested an interface between the
Invensys condition monitoring software and Microsoft Excel. This engineer feels that most users are very familiar with Excel and know
how to manipulate data within the spreadsheets as needed. He also says that there is not the same level of familiarity when performing
data manipulation with the Invensys condition monitoring software.

Off-the-shelf packages may be adequate for some applications, but the greatest benefits are usually realized by customization. “Honeywell was very willing to modify their product to suit our needs,” explains Estevez-Reyes of Weyerhaeuser. “This had a very big impact on the acceptance of the product and the ease of implementation. I understand that it has also made the product more attractive to other customers, so it was a win-win for both parties.”

Smart instruments, sophisticated software programs, high-powered PCs, and cheap data storage can make condition monitoring
systems a low-cost and quick-payback investment for many process plants. The top 10 benefits gained by using condition monitoring
software (see Table) are now being realized in a variety of process plant applications. That’s the good news.

The bad news is that these benefits can be difficult to realize. “Our success is 75% or more due to elbow grease, discipline, encouraging and engaging the right people at the right levels, dealing with organizational inefficiencies, and understanding the power and limitations of the technology,” says Estevez-Reyes. “New technologies have made it possible to overcome barriers that would have been insurmountable some years ago, but putting the software and hardware in place will do no good whatsoever if you do not address all of the aforementioned factors.”

Top 10 Reasons to Use Condition Monitoring Software

1. Process degradation can be detected and addressed.

2. Condition monitoring can be used with existing control systems to optimize alarm settings.

3. Maintenance costs can be cut because equipment is only serviced as needed.

4. Condition monitoring can be used with data analysis software to automatically detect problems.

5. Shutdowns can be scheduled based on condition monitoring data instead of arbitrary time intervals.

6. Maintenance departments can prioritize their efforts and address areas of greatest need first.

7. Shutdown periods can be shortened because fewer components require service.

8. Loop-monitoring software can automatically detect worst performing control loops.

9. Product quality can be linked to equipment condition.

10. Component failures can be predicted and addressed in time to avoid unplanned shutdowns.


The Best Defense


In the Field and on the Plant Floor, Redundancy Protects and Secures Critical Processes
DAN HEBERT, PE, SENIOR TECHNICAL EDITOR

Redundancy is a requirement for many process control systems. For each part of the process, users need to determine if redundancy is
needed, and then decide where and how to implement redundancy schemes.

The first item to consider is the necessity of redundant control. “Redundancy should be an economically based engineering decision,” says Kevin Totherow, the president of Sylution Consulting (www.sylution.com). “Decision factors are cost of redundancy, likelihood of failure, cost associated with downtime, recovery time, and cost of maintenance for redundant systems,” adds Totherow.

According to Totherow, many redundancy decisions are not based on cost/benefit analysis. “Companies often make emotional decisions about redundancy. Many of the companies that insist upon redundancy would have much better project ROI, as well as lower ongoing costs, with non-redundant systems and good recovery plans,” observes Totherow.

Others echo Totherow’s comments. “The basic calculation is economic: Is the cost of device failure times its probability of failure greater than or less than the cost of redundancy?” asks Ed Bullerdiek, the control group leader with Marathon Ashland Petroleum in Detroit. “For Safety Instrumented Systems (SIS), the processes by which you determine this are well known (Fault Tree, FMEA, LOPA, Markov Models), and this basic thinking is easily extended to other systems,” concludes Bullerdiek.
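
Bullerdiek's basic calculation can be sketched in a few lines. Everything in the example (failure rates, costs, the 10-year horizon) is an invented placeholder; a real SIS decision would use the formal methods he names, not this toy comparison.

```python
def redundancy_pays(failures_per_year, cost_per_failure,
                    redundancy_capital, annual_upkeep, horizon_years=10):
    """Compare expected failure losses against the cost of redundancy
    over a planning horizon. Returns True when redundancy is the cheaper bet."""
    expected_loss = failures_per_year * cost_per_failure * horizon_years
    redundancy_cost = redundancy_capital + annual_upkeep * horizon_years
    return expected_loss > redundancy_cost

# One failure every five years at $750k per event vs. a $400k redundant
# system costing $20k/year to maintain:
print(redundancy_pays(0.2, 750_000, 400_000, 20_000))  # True
```

Running the same arithmetic with a low failure rate or cheap downtime quickly shows why, as Totherow argues, many processes are better served by a good recovery plan than by redundancy.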

One difficult cost to quantify is risk of injury or death. “Costs of unsafe conditions should always be presumed exorbitantly high,”
according to Matt Bothe, a senior automation engineer with CRB Consulting Engineers (www.crbusa.com). “Therefore, operations
that pose danger to personnel should always apply redundant systems,” continues Bothe.

Cost versus benefit calculations can be complex, but many processes can be analyzed for redundancy without detailed mathematical
analysis. “Reliable generation of electricity is a necessity, but some of our sub-processes can handle downtime,” reports Dale Evely,
PE, an I&C consulting engineer with the Southern Company in Birmingham, Ala. “Ash and coal handling as well as sootblowing and
water treatment don’t need redundancy because of built-in storage capacity,” adds Evely.

“For our primary process, the boiler and steam turbine, we design in dual redundancy of HMIs, controllers, and communication networks. For critical measurements we install redundant field devices and connect those devices to separate I/O cards in separate I/O racks,” concludes Evely.

Some processes are low risk in terms of hazards, and this can be a key factor in redundancy decisions. “Use of redundancy in our consumer goods industry is perhaps lower than in more critical industries,” observes James Reizner, a section head with Procter & Gamble in Cincinnati. “But on our paper machines, where downtime is very expensive and getting back on line can be a major event, we perform financial redundancy calculations,” continues Reizner.

Redundancy is often needed when a process must run uninterrupted for a long period of time. This is often the case in biotech, where batches can take months to produce. Another such scenario is test systems. “Some of our tests run for thousands of hours, ideally uninterrupted, so redundancy is critical,” according to Robert Shaw, PE, an electrical engineer with the QSS Group (www.theqssgroup.co.uk) at the NASA Glenn Research Center in Cleveland.

Once a decision is made on which process needs redundancy, the next step is to determine where to apply redundancy in the control
system architecture. It is rarely feasible to make an entire process redundant, and there are major differences in cost and benefits
depending on where redundancy is applied.

Redundancy Simplified
If redundancy is needed, the next question is where to implement it. For simplicity’s sake, let’s divide the control system into five
areas: HMI/Server, controller, I/O, field devices and communications (see Table).

Readers surveyed ranked each of these five areas according to most benefit and least cost.

According to our readers, communications yields the best cost/benefit ratio.

An almost perfect confirmation of the reader survey comes from one of the industry’s leading vendors. “Where do customers spend their redundancy dollars?” asks Steve Lazok, the technical solutions support manager for Yokogawa Corporation of America (www.us.yokogawa.com). “In order: communications, controller, I/O, and HMI. Redundant field devices come into play only for SIS solutions.”

Do It Yourself?

In the final analysis, the responsibility for redundancy implementation lies with the end user, but selection of the proper control system can make implementation less difficult.

Field device redundancy is specifically designed by the user for each process, but redundancy at the other four levels of the control
system can either be an integral part of the control system or a custom add-on.

When redundancy is required, most users suggest buying a control system designed for redundancy from the ground up. “Buy a system with redundancy built in and it doesn’t cost much in the long run, nor does it take much in the way of resources to support. Buy or inherit a cheap system and try to add redundancy and you will rue the day you were born,” says Bullerdiek.

“Examine the vendor’s redundancy scheme at all levels. The two most important questions to ask are: ‘If it breaks, how do I fix it without taking my process down?’ and ‘Do I have to program anything in to get the redundancy or associated diagnostics to work?’” suggests Bullerdiek.

Without diagnostics, redundancy can disappear unbeknownst to the user. “Support and diagnostics on the HMI or communication
side are needed, even with rigorous initial testing,” observes Kyle Austin, a technical specialist, critical control systems for process
information & control, with UOP LLC in Des Plaines, Ill. “If a secondary communications path or hardware option fails, it can often
be neglected if critical control is not directly affected, and then the benefit of redundancy is lost,” adds Austin.

Others echo Bullerdiek’s opinions concerning single source responsibility. “If someone is concerned about redundancy, they should
look at a DCS or a Triple Modular Redundant (TMR) system because these systems have hardware-based options that make support
much easier. Single vendor solutions eliminate finger pointing and minimize oft overlooked life cycle costs,” says Robert Burgman, a
senior automation engineer with the Pigments Division of Sun Chemical in Muskegon, Mich.

Burgman has implemented redundancy both with a DCS and with a PC-based HMI and PLC controller, and he says the differences are significant. “By far the weakest link in an HMI/PLC system is the PC’s hard drive. Unfortunately, our HMI vendor’s solution to redundancy is simply duplication, which means that both HMIs poll the PLCs, in effect doubling communications traffic to the PLCs,” reports Burgman.

According to Burgman, this scheme requires ongoing management and can result in poor performance because duplicate polling
can eat up the bandwidth and cripple the highway. His firm has had to resort to “warm standby” for some HMI/PLC systems, with
both HMIs running, but with the backup HMI’s polling shut off.

By contrast, HMI redundancy on Sun Chemical’s DCS is seamless. “Our DCS doesn’t have this issue because the HMIs talk to the DCS controllers directly via unsolicited communications over a redundant highway. This has proven to require much less support than our PLC/HMI systems. In addition, redundancy is a snap with our Yokogawa DCS because it was designed that way from the ground up. We don’t even think about it; it just works,” concludes Burgman.

Not only can the HMI be a problem with an HMI/PLC redundant system, so can the PLC. “We have never found a PLC redundancy system that works. It has been our experience that the promise of the technology is beyond its performance,” says Andrew Rowe, the technical manager of process controls & MIS with the United States Gypsum Company in Chicago.

DCS-type control systems are almost always more expensive than HMI/PLC systems, but in the case of redundancy the cost differential may be an illusion, especially when life-cycle operations and maintenance costs are included.

This can be especially true in terms of software and hardware upgrades. “Like many HMI/PLC systems, we also use Windows at the
HMI level,” reports Bob Hausler, the vice president of system marketing for ABB (www.us.abb.com). “A key difference is that we test
all upgrades with the entire control system including the redundancy features prior to releasing these upgrades to our clients.”

Hausler’s point is well taken. If a plant has an HMI/PLC system controlling a critical process, it would not be wise to simply accept the latest Microsoft patch. All such changes to the operating system or to any other areas should be thoroughly tested prior to installation on the control system. With a DCS, this testing is done for the user. With an HMI/PLC system, the user has to do the testing.

It is clear that DCS and TMR vendors have spent a lot of time and money designing and implementing redundant systems (see Sidebar). It may be wise to take advantage of this expertise if redundancy is needed.

I/O Network Redundancy

Whether the control system is a DCS, a triple modular redundant (TMR) system, or an HMI/PLC, redundancy can be implemented at various system levels. As discussed previously, the best cost/benefit ratio comes from redundant communications networks.

The first decision is what networks need redundancy. Networks among HMI/servers and controllers are usually duplicated for most
redundant systems, but controller to I/O networks should be considered individually.

“I/O networks may be redundant depending on whether or not they include active or passive devices, and on their length. Short passive networks will not be redundant; long networks, such as remote runs with active devices such as fiber optics, will often be redundant,” according to Bullerdiek.

Installation details for networks are critical. “Our communications networks are always redundant for reliability and safety, and we
use two very different paths whenever feasible to minimize the risk of losing a path from localized incidents,” says McCormick.

Redundancy and Fault Tolerance In DCS Controllers

In the Foxboro (www.invensys.com) I/A Series system, the term “fault-tolerant” indicates controllers that are not only redundant but are also designed to exercise all components in both controllers in lock step. “This ensures that no incorrect messages are sent to the I/O modules or to other controllers, workstations, or computers,” says Alex Johnson, the director of systems products technology at the Foxboro Automation Systems unit of Invensys.

The fault-tolerant version of the I/A series controller consists of two modules operating in parallel, each with two interfaces to the
control network and to the I/O network buses. The two control processor modules, married together as a fault-tolerant pair, provide
continuous operation of the unit in the event of hardware failure occurring within one module of the pair.

Both modules receive and process information simultaneously. Faults are detected by the modules themselves through the use of
synchronization points and memory comparisons.

“One of the significant features of Foxboro’s fault detection approach is the comparison of communication messages at the module external interfaces. Messages only leave the controller, whether going to an operator console or an I/O module, if both controllers are attempting to transfer exactly the same message. Message mismatches, and other internal checks, generate faults,” observes Johnson.
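
The output-comparison step Johnson describes can be reduced to a toy sketch. This illustrates the concept only; it is not Foxboro's implementation, and a real controller pairs this check with self-diagnostics to decide which module is healthy.

```python
def lockstep_send(msg_a: bytes, msg_b: bytes) -> bytes:
    """Release a message only when both modules produced identical bytes;
    any mismatch suppresses the message and raises a fault instead."""
    if msg_a == msg_b:
        return msg_a  # bit-for-bit match: safe to transmit
    raise RuntimeError("module mismatch: message suppressed, fault raised")

# Both modules agree, so the message goes out:
print(lockstep_send(b"\x01\x42", b"\x01\x42"))
```

The key property is that a corrupted message is never released: disagreement blocks transmission first and triggers diagnostics second.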

Upon fault detection, self-diagnostics are run by both modules to determine which module is defective. The non-defective module
then assumes control without affecting system operations. This fault-tolerant solution has the following advantages over controllers
that are merely redundant:

No bad messages are sent to the field or to applications using controller data because messages are not allowed out of the controller
unless both modules match bit-for-bit on the message being sent.

The secondary controller will not have latent flaws detectable only upon switchover because it is performing exactly the same operations as the primary controller.

The secondary controller is synchronized with the primary one, which ensures up-to-the-moment data in the event of a primary controller failure.

No matter what control system is selected, it is important to examine all failure points, including power supplies, fuses, and other ancillary components. “No control system is truly redundant if there is a single point of failure, and there usually is such a point somewhere in the system,” observes Hausler of ABB. A cooling fan may be a minor component, but its failure could bring down an entire plant if care is not taken when designing redundancy.

Homeland Security Drives Redundancy Requirements

According to Connie Chick, controllers business manager for GE Fanuc Automation (www.gefanuc.com), homeland security initiatives are creating new redundancy requirements. “Redundant control processors will need to be physically separated for applications such as water, utility, chemical, refinery, defense, and others,” says Chick.

“Depending on the process, separation can entail separate control cabinets in one building or even separation into two buildings.
This separation is meant to prevent a wide range of service interruptions, and it presents a challenge in terms of communication
speed,” continues Chick.

Redundant control processors are constantly exchanging data at high rates of speed, typically via a common backplane. When
processors are physically separated these high-speed communication requirements can become problematic, so GE Fanuc has created
a custom solution for communication between physically separated redundant control processors.

“Our control memory Xchange connects physically separated processors via fiber. This technology allows multiple devices to share large amounts of control data over a fiber-optic deterministic network at speeds up to 200 times faster than standard industrial Ethernet LANs,” explains Chick.

How and Why to Monitor Power


Quantity, quality, and reliability of this expensive raw material can make or break your plant’s bottom line
RICH MERRITT, TECHNICAL EDITOR

Electric power is expensive, often unreliable, and sometimes dirty. Using too much power, losing power, or running on unclean power
can adversely affect the operation of your plant. Accordingly, several companies serving our industry are more than happy to sell you
hardware, software, and energy management services that will analyze and optimize your power consumption.

Frank Hein, principal engineer at Abbott Laboratories, North Chicago, Ill., has installed an advanced energy optimization system from Pavilion. “With all elements of the program in place, Abbott expects to save at least $273,000 per year, based on the current production rate,” he says.

However, as Bela Liptak says, before you can control something, you have to measure it. Here’s how and why to make power measurements and do your own analysis and optimization. As we can see from Abbott’s results, the payoff can be enormous.

Problems With Power

Electric power is expensive. “Manufacturing plants consume the most electricity in the United States, totaling more than $3.3 billion
per year,” says Dick Eshelman, vice president of engineering solutions at Rockwell Automation (www.rockwellautomation.com).
“From downtime due to dirty power to avoiding peak demand charges, reducing energy-related expenses starts by understanding
when, where, and how much energy is being consumed, and turning that knowledge into results.”

Demand charges can be substantial. Although they vary from utility to utility, essentially the utility determines how much power
your plant needs, sizes its substations accordingly, and then hits you with a significant surcharge (the peak demand rate) if you use too
much power at any point in time. Exceed the peak demand rate one time in a month, and you get charged for it.

Therefore, the first energy monitoring program most plants install is a load-shedding system that measures and tracks electrical
power consumption on a real-time basis. The system uses the same technique as the utility does to calculate peak demand, and sheds
loads (turns off systems) to keep from exceeding the critical demand level.
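
The rolling-average demand calculation behind such a load-shedding system can be sketched as follows. The window length and limit below are invented illustrations; your utility's tariff defines the real demand interval (15- or 30-minute windows are common, but check your own rules).

```python
from collections import deque

def shed_signals(kw_samples, window_len, limit_kw):
    """For each new sample, average demand over the last window_len samples
    and flag when that average exceeds the limit (time to shed load)."""
    window = deque(maxlen=window_len)
    signals = []
    for kw in kw_samples:
        window.append(kw)
        signals.append(sum(window) / len(window) > limit_kw)
    return signals

# Six one-minute readings against a 3-sample window and a 1,000 kW limit:
print(shed_signals([900, 950, 1000, 1100, 1200, 800], 3, 1000))
# [False, False, False, True, True, True]
```

The point of mirroring the utility's own averaging method is that a brief spike inside the window may be harmless, while a sustained average above the limit is what triggers the demand charge.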

If you do not understand your utility’s tariff structure, it might be a good idea to get a copy of the rules and see how your energy bills
are calculated.

Utilities are beginning to have problems delivering power, say market researchers at Frost & Sullivan (www.frost.com). “Insufficient power infrastructure and the growing load on energy utilities are driving the end user market to uninterruptible power supply (UPS) systems. In the absence of clean and consistent power, UPS systems are needed to decrease downtime and protect sensitive equipment,” says analyst Farah Saeed.

Is your incoming power clean and consistent? If not, it could cause severe problems. Richard Brown, engineer at ABB
(www.us.abb.com), says voltage sags and interruptions can cause havoc in a manufacturing line. He provides this list of potential
problems for people in the semiconductor industry:

1. A 20 msec. voltage sag to 65% will cause electronic controllers to crash.

2. A 200 msec. voltage sag to 50% will cause an entire process to crash.

3. An interruption lasting several minutes can cause problems with air and water filtration systems.

4. Longer interruptions can affect critical temperature and humidity tolerances in a process.
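
Brown's thresholds suggest a simple triage for logged sag events. The sketch below keys off the numbers quoted above; the category labels and cut-offs are our illustration for the semiconductor case, not a power-quality standard.

```python
def sag_triage(retained_pct, duration_ms):
    """Rough severity triage of a voltage sag, using the semiconductor
    thresholds quoted above as illustrative cut-offs."""
    if retained_pct <= 50 and duration_ms >= 200:
        return "process crash likely"
    if retained_pct <= 65 and duration_ms >= 20:
        return "controller crash likely"
    return "probably ride-through"

print(sag_triage(60, 30))  # "controller crash likely"
```

Classifying each recorded sag this way turns a raw event log into a picture of which disturbances actually threaten production.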

“Because of the high economic cost of power quality disturbances, many sensitive facilities are beginning to ask electric utilities to guarantee levels of reliability in their energy contracts,” says Brown. “Although not widespread at this time, such guarantees are already in effect for several car manufacturing facilities in Michigan. If a factory experiences a power quality disturbance that disrupts the production process, it receives a large credit on its energy bill.”

California companies faced electric supply problems every day in 2001, as the energy crisis there reached epidemic proportions. At
Cargill’s salt plant in Newark, Calif., interruptions in the power supply caused production delays, leading some longtime customers to
switch to alternate suppliers. “Our electricity costs doubled here, even with the incredible energy conservation measures we put in,”
says Lori Johnson, public affairs manager at Cargill Salt.

You need to monitor your incoming electric power to see if you have problems, avoid demand charges, decide what kind of protective
and backup equipment to install, and catch your utility with its electric pants down.
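
Avoiding demand charges starts with knowing your peak billed demand. The following sketch assumes the utility bills on the highest 15-minute average load and that the meter provides one-minute kW samples; the interval length and the $/kW rate are illustrative assumptions, not any particular tariff:

```python
# Utilities typically bill demand on the highest average load over a billing
# interval (often 15 minutes). This finds that peak from one-minute kW
# samples using a sliding window; the $12/kW rate is a made-up example.

def peak_demand_kw(kw_samples: list[float], samples_per_interval: int = 15) -> float:
    """Max average of consecutive one-minute kW samples over each window."""
    best = 0.0
    for i in range(len(kw_samples) - samples_per_interval + 1):
        window = kw_samples[i:i + samples_per_interval]
        best = max(best, sum(window) / samples_per_interval)
    return best

load = [400.0] * 60
load[20:35] = [900.0] * 15          # a 15-minute spike
peak = peak_demand_kw(load)
print(peak, peak * 12.0)            # billed demand, charge at hypothetical $12/kW
```

The spike, not the average load, sets the charge; that is why shifting loads to off-peak hours pays off even when total consumption is unchanged.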

Mike Powell, president of eLutions (www.elutions.com), says his company’s EP Web monitoring and control software was installed
in a chocolate manufacturing plant and quickly uncovered a severe problem during training. “The client’s energy manager discovered
a utility feed that had a 1 MW variance when compared to the other two feeds,” Powell explains.

The utility found that the underground feeder was undersized before it entered the facility. “Using the EP Web software with the
integral billing module, we were able to determine that the higher demand caused by the undersized feeder cost the facility millions of
dollars in overcharges. This information was provided to the utility and an adjustment was received.”

Perhaps your utility will help you monitor power. Georgia Power, Atlanta, offers EnergyDirect.com, an online tool that its business
customers can use to analyze and track their current and historical energy use and billing information. The standard package is free,
and allows businesses to track energy costs; correlate energy usage with production data, changes in operations, or weather; forecast
energy costs for the next year; and spot increased energy usage by faulty equipment.

What a Waste

The other side of the energy question is, are you using power efficiently? As John Havener, energy czar at Pavilion Technologies
(www.pavtech.com) points out, a modern plant has many energy-consuming operations. “The central utility system can be extremely
complex, encompassing boilers, chilled water systems, distilled water systems, compressed air systems, compressor trains, cooling
towers, district energy systems, large building HVAC systems, and distributed generation systems,” he explains. “The central utility
system operators must incorporate a number of factors into their energy management decision, including fuel price, energy demand,
energy prices, energy reliability and availability, emissions limits, and corporate profitability.”

That’s a lot of systems to monitor. What uses the most power? “Motors are by far the biggest consumer of electrical power at our
site,” reports Michael LaRocca, senior process control specialist at Solutia, Sauget, Ill. “Electric heaters for a particular unit operation
are second.”

The Dept. of Energy (www.energy.gov) agrees. “Over 13.5 million electric motors of 1 hp or greater convert electricity into work in
U.S. industrial process operations. Industry spends over $33 billion annually for electricity dedicated to electric motor-driven sys-
tems,” says a 1998 DOE report. “Because nearly 70% of all electricity used in industry is consumed by some type of motor-driven sys-
tem, increases in the energy efficiency of existing motor systems will lead to dramatic nationwide energy savings.”

There are dozens of other ways to save energy in a plant. At Cargill Salt, an Energy Team of workers and supervisors was formed to
identify conservation measures. Eric Hoegger, refinery project engineer, says changing lighting systems, installing manual on/off
light switches, and a reduction in air pressure throughout the plant helped cut power consumption. Lighting changes alone saved
$24,000 per year, he says. Other changes included work practices and moving loads to off-peak hours. Overall, Cargill reduced power
consumption by 42% and shaved its peak demand by 52%.

Clearly, if you want to eliminate the energy villains in your plant, you need a plan.

What to Monitor

Several companies will be happy to do an energy program for you. They will analyze your plant, make recommendations, and install the neces-
sary equipment. For example, if you hire Rockwell Automation’s Energy Consulting Services to optimize your plant, here’s what they will do:

* Tariff Analysis: Using information compiled from energy bills, energy tariffs, and electrical supplier contracts, Rockwell’s analysts
will identify alternative methods to reduce energy costs, such as aggregating multiple meters or evaluating new supplier, delivery, and
tariff options. In other words, they analyze how you are paying for electric service now, and see if they can renegotiate with that suppli-
er or find an alternate source. This may or may not be something you can do yourself.

* Power Quality Studies: Rockwell measures and monitors incoming power to determine the causes of voltage excursions, momentary
power losses, phase reversals, and harmonics. They determine the correlation between power quality, premature equipment failures, and
the cause and frequency of plant shutdowns. Now this is definitely something you can do yourself. Mark Liemiller, director of marketing at
Power Distribution Systems at Siemens (www.sea.siemens.com), says new power monitors are available to make most of the necessary
measurements and put the information into usable form. “These can be very basic or quite elaborate devices that monitor and
display critical power data,” says Liemiller.

* Plant Energy Audits: Rockwell determines energy consumption patterns in the plant, and identifies equipment and processes that
should be modified to reduce power consumption. Again, this is within your grasp. Once you make the necessary power measure-
ments, a host of software is readily available to analyze the data and make recommendations.

The approach seems to work. For example, at Chevron’s refinery in Richmond, Calif., Rockwell’s energy analysis showed that two
pumps in the diesel hydrotreater were oversized, sometimes operating at 40% below their best efficiency points. Chevron installed new
medium-voltage drives on a 2,250 hp primary feed pump and 700 hp product pump and saved $330,000 per year.

Monitor Thyself

Engineers have monitored power for decades. “Before the microprocessor revolution, power system engineers recognized three class-
es of instruments,” says Reaz Tajaii, a staff engineer in Power Systems Engineering at Square D (www.squared.com). These include
digital fault recorders, which obtain oscillograms of fault currents and voltages to evaluate the effect of voltage sags and to construct
sequences of events; transient recorders and oscilloscopes, which capture oscillatory and impulsive voltage transients; and power mon-
itors, which report steady-state currents and voltages and provide basic energy and power calculations.

“With microprocessor-based technology, the three classes are slowly merging,” he says. “The power monitor is becoming a univer-
sal measurement and recording instrument. Power monitors now do much of what digital fault recorders and transient recorders do. In
the near future, it is likely that power monitors will take over most of the functionality of these devices.”

Power meters at substations have changed, too. Now they are called intelligent electronic devices, or IEDs. Modern IEDs measure
phase currents, voltages, power factor, frequency, harmonics, and other data, according to Mike Coleman, North American manager
for GE Multilin, a division of GE Industrial Systems (www.geindustrial.com). He says the cost has dropped dramatically, too. “Ten
years ago, a fully loaded system with metering equipment, oscillography, and protective relays for a feeder would have cost $20,000 to
$30,000. Today, a state-of-the-art IED will do all these functions for a fraction of that money,” says Coleman.

Not only that, an IED can communicate all this information over an Ethernet connection. “Data from an IED connected to the cor-
porate network can be accessed to see the energy consumption of a particular feeder, or an engineer can access the IED to check cur-
rents and voltages or analyze an event,” explains Coleman.
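
Pulling such data into your own tools usually means decoding raw Modbus register blocks into engineering units. Here is a sketch of the decoding step; the register layout (three phase currents as big-endian 32-bit floats) is a hypothetical map, not any specific IED's, so check the device manual for actual addresses, word order, and scaling:

```python
# Sketch of decoding raw Modbus holding registers read from an IED into
# engineering units. The six-register map below (Ia, Ib, Ic as 32-bit
# floats) is hypothetical; real devices differ in word order and scaling.
import struct

def regs_to_float(hi: int, lo: int) -> float:
    """Combine two 16-bit registers (big-endian word order) into an IEEE float."""
    return struct.unpack(">f", struct.pack(">HH", hi, lo))[0]

def decode_feeder(registers: list[int]) -> dict:
    """Decode a hypothetical 6-register block into phase currents in amps."""
    return {
        "Ia": regs_to_float(registers[0], registers[1]),
        "Ib": regs_to_float(registers[2], registers[3]),
        "Ic": regs_to_float(registers[4], registers[5]),
    }

# Simulate a read: 120.5 A encoded as a big-endian float split into two words
raw = list(struct.unpack(">HH", struct.pack(">f", 120.5))) * 3
print(decode_feeder(raw))   # all three phases decode to 120.5
```

Once decoded, the values can go into a historian or spreadsheet like any other process variable, which is exactly how the "corporate network" access Coleman describes works in practice.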

Soft Switching Technologies (www.softswitch.com) says its I-Grid power monitoring system can be installed for less than $300 per
monitor. A web-based monitor plugs into a 120 or 240 VAC outlet and a phone line. It monitors the power line and records voltage pro-
files for sags, swells, brownouts, overvoltages, and outages, and sends power quality information through the Internet to the monitor
owner and to a central server for display on the I-Grid web site (www.i-grid.com).

“I-Sense monitors can be installed in front of production equipment or processes within manufacturing facilities, at the service
entrance, or on utility substations,” says William Brumsickle, SoftSwitching director of technologies.

Power equipment manufacturers have been embedding power monitoring functions and communications capabilities into panel-
boards, transformers, and circuit breakers for several years. Such smart power distribution equipment can monitor its own health and
communicate with any device that wants the data.

Rick Grove, staff project engineer at IMC Phosphates, Mulberry, Fla., uses such equipment to monitor total incoming power. “GE
Multilin 750 relays, which are used to trip incoming breakers, are connected by Modbus to Allen-Bradley PLCs, and then to MSSQL via
RSSQL, and then to web servers,” says Grove. This means that Grove is getting the data he needs from circuit breakers, then passing it
through the system all the way to a web server for display and analysis. “Metering potential transformers and current transformers are
mounted within the metal-clad switchgear, and are the inputs to the Multilin 750 relays.”

If you have a DCS in your plant, you might be able to hook directly into such equipment. Honeywell, for example, can plug right into
third-party multifunction power meters, relays, and motor monitors. “We make nodes such as the Communications Link Module that
interface to these meters via Modbus,” says Mark Converti, manager of power generation marketing at Honeywell (www.honeywell.com).
The link feeds directly into Honeywell’s Power Monitoring and Control, Power Emergency Load Shed, and Tie Line Control software
packages. “Power system faults are initially detected at the power meters. Later, the cause of the fault and any harmful effects can be deter-
mined by uploading the waveform data captured by the meters to our Plant History database.”

It might be a good idea to check with your control system vendor to see what type of similar power monitoring packages they offer.
You may be able to tie directly into your existing switchgear and panelboards to get the power quality information you need.

If you can’t get a free ride from existing power monitors or smart switchgear, you may have to install the necessary equipment your-
self. “Most of the time, the measurement is done at the switchgear or distribution center,” advises Jay Park, Power Rich System group
product manager at ABB. “Each feeder supplies power to different sections or operations of a plant.”

Power quality meters and/or IEDs from companies such as Siemens, Square D, Landis+Gyr, General Electric, and other companies
are reasonably easy to install at key distribution points. In almost all cases, they have communications capabilities and software sup-
port. “All markets use the same devices for data-gathering,” says Park, implying that integration is easy.

Portable power analyzers can be used to perform power system surveys, track down problems, or monitor units temporarily. They
also can be installed permanently. Like the IEDs, the portable units come with software support and communications capabilities.

Analyzing the Situation

The goal of monitoring power is to find places where you can reduce power consumption and save money. Sometimes, the answer is
obvious: Replace a motor with a more efficient one. Other times, the situation requires much more analysis.

Fortunately, power analysis software is available from several vendors, including:

* Siemens: WinPM.Net web-enabled software manages intelligent metering and protective devices, analyzes data, and shares energy
usage and power quality data across an entire enterprise.

* ABB: Power Rich monitors and controls power in substations and distribution networks. It fits in applications ranging from $7,000
to $3 million, says Park.

* eLutions: AEM/EP Web is a web-based energy information package that measures, manages, and controls energy usage.

* DOE: MotorMaster+, available free from the Dept. of Energy, helps determine when high-efficiency motors will pay off.

* Honeywell: PMC power monitoring and control software runs on Honeywell control systems. It has three parts: power control;
analysis and predictions of energy usage; and load shedding.

* GE Industrial Systems: Envision, to be announced soon, analyzes the costs of electricity and other plant utilities, and generates
real-time and historical views of utility costs.

* Pavilion: Energy Center Optimization Solution is a neural-network energy optimization package that analyzes a plant’s energy
consumption and devises ways to change plant operations to minimize energy use.

Power monitoring can help find all the “low-hanging fruit,” as Frank Hein of Abbott puts it. To get huge energy savings, it may be nec-
essary to install an energy optimization system. “Abbott recognized that to get to the next level of savings, we would have to stop concen-
trating on improvements in individual pieces of equipment and focus on entire systems,” he explains. “Multi-unit optimization allows us
to focus on the entire utility system and exploit the complexities of our large equipment set to find the optimal operating scenarios.”
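
The multi-unit idea can be illustrated with a toy dispatch problem: choose the set of units that covers demand at the lowest running cost. Real optimizers such as Pavilion's handle nonlinear efficiency curves and many more constraints; the brute-force search and the unit data below are illustrative only:

```python
# Toy multi-unit dispatch: pick the subset of units that meets demand at
# the lowest operating cost. The unit names, capacities, and costs are
# made-up illustrations, and the exhaustive search stands in for a real
# optimizer that would model nonlinear efficiency curves.
from itertools import combinations

units = {  # name: (capacity in MW, cost in $/h when running)
    "boiler_A": (10, 300),
    "boiler_B": (6, 150),
    "boiler_C": (4, 120),
}

def cheapest_mix(demand_mw: float) -> tuple[float, list[str]]:
    """Search unit combinations that cover demand; return (cost, unit names)."""
    best = (float("inf"), [])
    for r in range(1, len(units) + 1):
        for combo in combinations(units, r):
            cap = sum(units[u][0] for u in combo)
            cost = sum(units[u][1] for u in combo)
            if cap >= demand_mw and cost < best[0]:
                best = (cost, sorted(combo))
    return best

print(cheapest_mix(9))   # (270, ['boiler_B', 'boiler_C'])
```

Note that the big boiler alone could cover the 9-MW demand, but two smaller units do it more cheaply; that kind of counterintuitive answer is what system-level optimization finds and equipment-by-equipment tuning misses.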

Nevertheless, you have to start somewhere. Monitoring power is the first step.


How to Perform Alarm Rationalization


This critical part of alarm management is key to improved plant safety,
environmental responsibility, and operations excellence
BILL MOSTIA JR., PE

Modern control systems have brought us many benefits, but along with the benefits have come problems. One of the major benefits has
been the increased information available to the operator, while one of the problems is what to do with all that information.

Computer-based control systems also have increased the level of abstraction of the process. The operator has more and more infor-
mation, but with a smaller and smaller window to look through, resulting in a higher and higher level of abstraction. Increased com-
plexity and sophistication, increased automation, control concentration and separation, and additional layers of control have further
increased the level of abstraction. Some systems are so abstract they approach the complexity of a video game.

To compensate for this abstraction, control systems have provided additional operator interface functions, and system designers
have increased the number of alarms and alerts to help keep the operator informed. Alarms increase the amount of information going
directly to the operator but they often are a source of operator overload and confusion.

In older control systems, hardwired panels were used to provide alarm annunciation. The panels were large but limited in capacity,
and so by their very nature tended to limit the number of alarms. In modern control systems, alarms are generally software-driven and
are essentially “free” for existing process variables. Little incentive to limit their creation has led to a laissez-faire attitude toward
alarms. We can configure a new alarm at the flick of a finger, and there has been a lot of flicking going on.

Also, regulations from OSHA and the EPA as well as voluntary programs such as ISO 9000 and 14000 have led to the addition of
alarms, sometimes with little consideration of the effect on alarm loads at the system level.

Some notable examples of alarms causing problems include the Three Mile Island accident in 1979, where important alarms were
missed; the Texaco refinery explosion at Milford Haven in 1994, where, in the 10 minutes prior to the explosion, two operators had
to respond to 275 alarms, peaking at three per second; and the recent Esso Longford gas plant explosion in Australia, where some
experts concluded that operators routinely ignored alarms leading up to the explosion because, in the past, ignoring them had no
negative impact.

Alarming Growth

Alarm growth is a natural outcome of the increased information load and abstraction of the modern control system. However, if alarms
are not dealt with in a disciplined manner, uncontrolled alarm growth can result, which can lead to out-of-control alarm systems. If
your alarm system has one or more of these characteristics, it may be out of control:

1. Many alarms during abnormal situations.

2. Many alarms on during normal operation.

3. High alarm loading rates (alarms per unit time, alarms per operator, alarms per event, etc.).

4. Incidents or near-incidents where operators missed key data provided (or not) by the alarm system.

5. A large number of high-priority alarms.

6. Alarms that are on for long periods of time.


7. Alarms going off and on regularly or intermittently (chattering or transient).

8. Lost count of the number of alarms.

9. Lost track of alarm setpoints or why they were set there in the first place.

10. Don’t know which alarms are safety, operational, environmental, informational, etc.

11. Operators don’t know what particular alarms mean or that there may be inappropriate alarms.

12. Operators don’t know what to do when a particular alarm occurs.

13. Don’t know when the alarms were tested last.

14. Alarms that are not useful and even confusing or obscuring.

15. A large number of defeated alarms.

16. No procedure or policy on alarm creation; i.e., anyone can create an alarm or change the limits on his or her own authority.

17. Alarm documentation is out of date or nonexistent.

18. No written procedures or policies on alarms.

Elements of Alarm Management

Driven by hazardous events, out-of-control alarm systems, and desires to optimize alarm systems, effective alarm management has
moved to the front of the minds of many users and manufacturers. There can be both problems and treasures buried in the alarm sys-
tem. Alarm management can be viewed as part of a larger scheme of normal operations and critical or abnormal condition manage-
ment.

Alarm management is the management of alarms throughout their lifecycles. Some of the things required of alarm management are:

* An alarm philosophy.

* Alarm ownership.

* Controls on the creation of alarms and their functionality (from alarm activation to operator detection, operator action to per-
form, time in which to perform the action, etc.).

* Pre-alarm management.

* Alarm change management.

* Alarm prioritization.

* Alarm definitions.

* Alarm organization and presentation.

* Alarm filtering and suppression.

* Informational alert management.

* Alarm operating procedures

* Training


* Alarm maintenance and testing.

* Alarm rationalization for existing systems and as part of continuous improvement.

It should be noted that the control system equipment’s capabilities, as well as any third-party add-ons you have or will have, can affect
what you can do with an alarm system and thus can impact alarm management procedures and practices and alarm rationalization.

Rationalization Is Key

Alarm rationalization is the systematic process of optimizing the alarm database for the safe and efficient operation of the facility. This
process normally results in a reduction in the total number of alarms, the prioritization of alarms, the validation of alarm parameters,
and the evaluation of alarm organization, presentation, and functionality.

Rationalization also can, in some cases, identify the need for new alarms or changes in the process, equipment, or instrumentation. It
can be done to fine-tune an existing good alarm system, but it is more commonly done where the alarm system has gotten out of control.

Note that alarm rationalization is not a one-shot process. The forces of chaos are out there looking for any opportunity to take con-
trol of any complex system, and alarm systems are no exception. Over time, people will come and go, the process will change, operat-
ing philosophies will change, marketing will stick its nose in things, the hardware system will change, improve, degrade, etc. All these
are opportunities for changes or lack of changes to the alarm system, and they indicate the need for periodic alarm rationalization.
Training, procedures, procedural controls, and auditing are some of the tools used to maintain an optimum alarm system for effective
and safe operation of a plant.

How to Rationalize Alarms

Alarm rationalization is a structured process that generally involves an approach similar to that of a HAZOP team, with representa-
tives from operations, maintenance, engineering, and safety. It is important to have operator input on this team. It is also important to
have an organized plan to perform the alarm rationalization, with an established procedure and practices.

While alarm rationalization will vary from company to company and plant to plant, the methodology generally consists of eight basic
steps (Figure 1). These steps are presented serially but in fact can overlap or run in parallel in some cases.

1. Develop an Alarm Management Procedure

A consistent, comprehensive alarm management procedure/philosophy is necessary before beginning alarm rationalization.

The procedure typically contains a plant alarm philosophy, alarm type identification methodology (operational, safety, environmen-
tal, etc.), risk identification methodology and method of prioritization of alarms, alarm functionality requirements (presentation,
organization, pre-alarms, operational time requirements, etc.), alarm filtering or suppression methods, identification of undesirable
alarm types and how to handle them, methods of setpoint determination, testing requirements, alarm sequences, acceptable alarm
metrics, documentation requirements, etc.

While alarm rationalization is only a part of the overall alarm management, it is absolutely necessary to have defined consistent
alarm practices to apply to the alarm database.

2. Develop Alarm System Metrics

It is hard to measure your progress if you don’t know where you start and where you end up. To measure the progress of alarm rationali-
zation, you need to develop alarm system metrics for your system. Typical metrics can be total number of alarms, alarms per operator,
alarms per hour, alarms per identified abnormal situation, fraction of unacknowledged alarms, average time for an alarm to return to
normal, average number of active acknowledged alarms, number of chattering alarms, number of standing alarms, number of nui-
sance alarms, number of disabled or shelved alarms, etc.

Operational metrics also can be used, such as total production rate, off-spec production, number of upsets, uptime, workload, etc.

Some safety and environmental metrics that could be used are the number of plant shutdowns, number of incidents/near misses,
releases to the atmosphere, releases to flare, etc.


Operators and operating staff interviews also should be used as metrics. These people are on the front line and have good firsthand
information on how the plant is operating.
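
Several of these metrics fall straight out of the alarm log. A minimal sketch, assuming a log of (timestamp, tag) activation records; the tag names and the 10-occurrence chattering threshold are illustrative assumptions:

```python
# Compute a few of the alarm metrics named above directly from an alarm
# log of (timestamp_seconds, tag) activation records. The tags and the
# 10-occurrence chattering threshold are illustrative, not a standard.
from collections import Counter

def alarm_metrics(log: list[tuple[float, str]], hours: float) -> dict:
    """Compute total alarms, alarms/hour, and heavy re-alarming tags."""
    per_tag = Counter(tag for _, tag in log)
    return {
        "total": len(log),
        "alarms_per_hour": len(log) / hours,
        # tags that re-alarm heavily are chattering candidates
        "chattering_candidates": [t for t, n in per_tag.items() if n >= 10],
    }

log = [(i * 30.0, "FI-101.HI") for i in range(12)] + [(900.0, "TI-205.HI")]
print(alarm_metrics(log, hours=1.0))
```

Even this crude count, run against a week of history, is usually enough to surface the handful of bad actors that dominate the alarm load.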

3. Benchmark the Existing Alarm System

This is a two-part process. First is system benchmarking, which consists of an alarm operational analysis (statistical analysis of the
existing system) and identification of the current values or states of other alarm rationalization benchmarks.

Statistical analysis of the current alarm activity of the existing system should look at alarm frequency, duration, occurrence, bursts
(floods), correlation (or not) with process or equipment activity, correlation with other alarms, acknowledgement time, etc. This type
of analysis can be a static analysis of the alarm and operational logs, a dynamic online analysis via resident software that monitors
alarm activity in real time, or both.

Static analysis typically has the advantages of working on all systems; working with existing prior history to determine metrics;
being flexible in the metrics used; correlating with past or current operational, process, or equipment events; and using a human to
review the data who may (or may not) catch trends or issues that would escape an automatic system.

Disadvantages are the analysis is typically done by hand and can be cumbersome, and some trends may be missed.

Dynamic analysis advantages typically include statistical alarm activity detected online with built-in alarm metrics calculated both in
snapshot and over time, built-in alarm-system expertise that can complement the user’s capabilities, automatic documentation to
meet regulatory requirements such as 21 CFR Part 11, and management of alarm parameters. Dynamic analysis can be used to monitor
the system after rationalization to detect downstream issues, provide alarm and process event analysis, and, in some cases, analyze
manually input alarm data.

Disadvantages are long-term online analysis requires long online time; automatic correlation with operational, process, or equip-
ment events may not be available or may have to be entered manually; and some desired alarm metrics may not be available from the
system.

It should be noted that not all aspects of alarm analysis/rationalization may be provided by one method, so a combination of static
and dynamic methods may be required.

Benchmarking can include identification of the values or states of the non-alarm system attributes such as operational, environmental,
safety, and commercial goals. This benchmarking should also include interviews with operators and operating staff before rationalization.

The second part of benchmarking is to analyze individual alarm operational performance to see what the alarm is doing on a day-to-
day basis and to find individual bad actors or groups of bad actors.

This benchmarking step should not only give alarm system metrics but should identify alarm conditions such as alarm floods, opera-
tor alarm overload, standing alarms, transient alarms, nuisance alarms, chattering alarms, redundant alarms, obscuring alarms, alarm
(symptom) vs. process or equipment activity, ignored alarms, defeated (shelved) alarms, process issues related to alarms, faulty field
devices, etc.
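
Alarm floods, in particular, can be flagged with a sliding window over activation timestamps. In the sketch below, the 10-alarms-in-10-minutes threshold follows a commonly cited EEMUA 191 figure, but treat both parameters as tunable assumptions:

```python
# Flag alarm floods with a sliding window over alarm timestamps. The
# default window and limit follow the commonly cited EEMUA 191 figure
# (more than 10 alarms in 10 minutes); both are assumptions to tune.

def find_floods(times: list[float], window_s: float = 600.0, limit: int = 10) -> list[float]:
    """Return start times of windows in which the alarm count exceeds `limit`."""
    times = sorted(times)
    floods, start = [], 0
    for end in range(len(times)):
        while times[end] - times[start] > window_s:
            start += 1
        if end - start + 1 > limit and (not floods or floods[-1] != times[start]):
            floods.append(times[start])
    return floods

burst = [i * 10.0 for i in range(15)]   # 15 alarms in 140 s -> flood
quiet = [i * 120.0 for i in range(5)]   # 5 alarms in 8 min -> no flood
print(find_floods(burst), find_floods(quiet))
```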

4. Identify and Analyze Individual Alarms

In alarm rationalization, an important step is identifying all the alarms in the system and their parameters (setpoints, directions, dead-
bands, test intervals, priorities, etc.).

Identifying individual alarms is only a beginning step because we also need to know the functionality of the alarm, the alarm’s rela-
tionship to other alarms (correlation), and its relationship to process and equipment conditions. An alarm isn’t simply exceeding a
limit, then providing a visual and audible response. Defining an alarm like this can get you into trouble fast.

Analysis of the alarm functionality includes determining what the purpose of the alarm is, what the consequences and risk associat-
ed with the alarm are, how important the alarm is, what the alarm indicates, how the alarm is organized and presented, what other
alarms will be active at the same time (situations where the alarm and other alarms will activate), alarm correlation, alarm patterns,
what action is expected of the operator, how will the operator know what to do, how much time is there to detect and accomplish the
action (and if it is reasonable), what the expected action to bring the process to the desired state is, process or equipment conditions
that affect the desired performance of the alarm, how the alarm fits in to troubleshooting, etc.

Data gathered on the alarm in the initial benchmarking step will be used in this step to characterize individual alarm performance
and correlation with other alarms, the process, and equipment for this analysis.

5. Prioritize Alarms

Here you determine the importance or significance of the alarm through a ranking scheme. This is normally done by risk analysis to
determine the importance that the operator detect and perform the expected action when the alarm occurs. Prioritization helps
ensure the operator knows the importance of the alarm itself as well as the importance of the alarm in relationship to other alarms.

The prioritization scheme is generally limited to the control system’s capabilities and any third-party alarm management software
on the system. The number of prioritization levels should be kept to a minimum to minimize operator confusion. The number of
alarms prioritized in each category (high, medium, low) generally can be visualized as a triangle with EEMUA Guideline 191 alarm
proportions (see sidebar, "Few Solid Guidelines"). Informational alerts should be kept out of the prioritization scheme if possible.
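
A quick sanity check on the finished prioritization is to compare the database against the EEMUA 191 proportions, commonly cited as roughly 5% high, 15% medium, and 80% low. A sketch, with those targets hard-coded as an assumption:

```python
# Compare a prioritized alarm database against the rough EEMUA 191
# distribution (about 5% high / 15% medium / 80% low). Treat the targets
# as ballpark assumptions, not hard limits.
from collections import Counter

TARGET = {"high": 0.05, "medium": 0.15, "low": 0.80}

def priority_skew(priorities: list[str]) -> dict[str, float]:
    """Return actual-minus-target fraction for each priority level."""
    counts = Counter(priorities)
    total = len(priorities)
    return {p: round(counts[p] / total - TARGET[p], 3) for p in TARGET}

db = ["high"] * 30 + ["medium"] * 30 + ["low"] * 40
print(priority_skew(db))  # too many highs: {'high': 0.25, 'medium': 0.15, 'low': -0.4}
```

A heavily positive "high" skew like this one is a classic symptom of everything being declared critical, which defeats the purpose of prioritization.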

Few Solid Guidelines

There are not a lot of solid guidelines in the area of alarm management/rationalization. Probably the best known is the British Engi-
neering Equipment and Materials Users Assn. (EEMUA) (www.eemua.co.uk), which has a guideline, "Alarm Systems, a Guide to
Design, Management and Procurement, EEMUA Publication No. 191." This guideline has some metrics for good performance of an
alarm system.

The British Health & Safety Executive (HSE) also has a couple of guidelines that may be of some assistance: see the information sheet,
"Better Alarm Handling," at www.hse.gov.uk/pubns/chis6.pdf, and the Technical Measure Document, "Control Systems," at
www.hse.gov.uk/hid/land/comah/level3/5C9933F.HTM.

ISA offers ISA TR91.00.02, "Criticality Classification Guideline," which provides guidance in defining types of systems, as well as
ANSI/ISA S18.1 1992, "Annunciator Sequences and Specifications."

IEC has IEC 61508, "Functional Safety of Electrical/Electronic/Programmable Electronic Safety-Related Systems," and IEC 61511,
"Functional Safety: Safety Instrumented Systems for the Process Industry Sector," which address safety-related and safety instrument-
ed systems that may have some application where credit is taken for alarms during risk assessment.

The Abnormal Situation Management (ASM) consortium of major companies has done considerable work in abnormal situation
management and has a number of good articles at www.asmconsortium.com.

A good presentation on alarm systems, "Useful and Usable Alarm Systems: 43 Recommendations," can be found at
http://tcr.home.cern.ch/tcr/projects/TCR_Sim/docs/alarm_system_CERN_number_3.ppt

6. Rationalize the Alarm Database

This workhorse step is where the alarm management procedure and other methods are applied to the information gathered in the previous steps to optimize the alarm system. Other methods include statistical analysis; alarm filtering and suppression; bad-actor identification; logic, event, and fault trees; alarm organization and presentation; operator dialogs; adaptive alarm techniques; and expert systems.

This step considers alarms not only individually but on a system level as well as correlated to process, equipment, and system events
and abnormal conditions. This step is further complicated by the fact that the process is not static--the process that the operator and
the alarm system see may change based on operating conditions, and case scenarios may need to be developed against which you can
analyze the alarm database.
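As a minimal illustration of the statistical side of this step, the sketch below scans an alarm log for the worst offenders ("bad actors") and for chattering alarms that re-annunciate within a short window. The tag names, thresholds, and log format are invented for the example; a real analysis runs against the plant's own alarm journal.

```python
# Illustrative bad-actor analysis of an alarm log. Each record is
# (timestamp_seconds, tag). Tags and thresholds are made up for the example.
from collections import Counter

def bad_actors(log, top_n=3):
    """Return the tags that alarm most often, worst first."""
    counts = Counter(tag for _, tag in log)
    return counts.most_common(top_n)

def chattering(log, window_s=60.0, repeats=3):
    """Tags that alarm `repeats` or more times within `window_s` seconds."""
    times = {}
    for t, tag in sorted(log):
        times.setdefault(tag, []).append(t)
    out = set()
    for tag, ts in times.items():
        for i in range(len(ts) - repeats + 1):
            if ts[i + repeats - 1] - ts[i] <= window_s:
                out.add(tag)
                break
    return out

log = [(0, "PI-101"), (5, "PI-101"), (20, "PI-101"), (30, "TI-205"),
       (900, "LI-310"), (1800, "PI-101")]
print(bad_actors(log, 1))   # [('PI-101', 4)]
print(chattering(log))      # {'PI-101'}
```

Fixing the handful of tags such a scan surfaces (a stuck transmitter, a deadband set too tight) often removes a large share of the total alarm load before any deeper rationalization work begins.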


7. Implement the Rationalization

This is where the alarm rationalization is implemented on the control system. One might assume this is simply a matter of modifying the control system based on the results of the previous steps, but it is not that simple. Since the rationalization may add or remove alarms; change presentation, organization, or setpoints; add or modify procedures; modify or add training; etc.; it is necessary to have an implementation plan that involves the operators, the operating staff, and other appropriate personnel.

Failure to involve the operating staff can, at the least, upset people or cost you some of the expected gains from alarm rationalization. At worst, it can result in a hazardous event because the operators do not understand the new system--undermining one of the very things alarm rationalization is supposed to improve.

Remember that the system on which you are implementing the alarm rationalization can affect the outcome. Such things as prioriti-
zation scheme, alarm filtering, screen organization, alarm presentation, documentation capabilities, alarm procedure capabilities,
and other alarm capabilities of the control system will impact some of what you can do in your alarm rationalization.

Also, third-party software products for some control systems can enhance your alarm capabilities, but poor application can give you
a worse system than you started with.

8. Benchmark the New Alarm System

Once the alarm rationalization is implemented, benchmark the final product to determine the success of the rationalization effort. (It
also makes management happy to see the results of a successful project--they like numbers.) Do this by determining the alarm system
benchmark metrics identified in step 2 using the same methods as used in step 3.

Don’t forget to interview the operators and operating staff. They will know about improvements that the mathematical benchmarking alone will not reveal.
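One common way to compute such metrics is to bucket alarm occurrence times into 10-minute intervals and report the average and peak rates, before and after rationalization. The sketch below assumes that approach; the sample timestamps are invented.

```python
# Illustrative benchmark: average and peak alarm rates per 10-minute
# interval, computed from alarm timestamps in seconds. Sample data is
# invented; run the same calculation before and after rationalization.

def rates_per_10min(timestamps, span_s):
    """Bucket timestamps into 10-minute intervals; return (avg, peak)."""
    bucket = 600.0
    n_buckets = max(1, int(span_s // bucket) + (1 if span_s % bucket else 0))
    counts = [0] * n_buckets
    for t in timestamps:
        counts[min(int(t // bucket), n_buckets - 1)] += 1
    return sum(counts) / n_buckets, max(counts)

before = [10, 40, 70, 200, 300, 500, 620, 640, 900, 1100]  # 10 alarms in 20 min
avg, peak = rates_per_10min(before, 1200)
print(f"avg {avg:.1f}/10min, peak {peak}/10min")  # avg 5.0/10min, peak 6/10min
```

Presenting the same two numbers from the pre-rationalization benchmark alongside the post-rationalization run gives management the before/after comparison they like to see.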

Life After Rationalization

A good audit process is necessary to ensure your alarm system stays manageable and in control. Online dynamic alarm monitoring
systems can assist in this. If you are not following a comprehensive alarm management procedure, your alarm system may go out of
control again in the future.

There are great opportunities for improvement in our alarm systems. Because alarm systems are complex, rationalization requires careful planning and organization, and it is generally a team effort.

The market for alarm management and rationalization products is still underdeveloped, but it is growing as control system manufacturers improve their offerings and third-party products mature. Growth is being driven by more and more people recognizing the issues and problems with their alarm systems--as well as the opportunities--and seeking to optimize them.

William L. (Bill) Mostia Jr., PE, League City, Texas, has more than 25 years' experience applying safety, instrumentation, and control systems in process facilities. He may be reached at wmostia@msn.com.

References:

1. Johannes Koene & Hiranmayee Vedam, "Alarm Management and Rationalization," Third Annual Conference on Loss Preven-
tion, 2000.

2. A. Nochur, H. Vedam, & J. Koene, "Alarm Performance Metrics," IFAC 2001, www.asmconsortium.com.

3. Edward Marszal, "The Longford Gas Plant Explosion: Could Alarm Management Have Prevented This Accident?" Exida, 2003,
www.exida.com.

4. W.H. Smith, C.R. Howard, & A.G. Foord, "Alarms Management--Priority, Floods, Tears, or Gain?" 4-sight Consulting, 2003,
http://4-sightconsulting.co.uk.

5. Yoshitaka Yuki & Kimikazu Takahashi, "Event Analysis Based on Causal Relation of Events, Alarms, and Operator Actions,"
ISA, 1999.

6. E.H. Bristol, "Improved Process Control Alarm Operations," ISA, 1999.

7. Yoshitaka Yuki & Jim Parks, "Alarm and Event Analysis for Detecting Productivity Bottlenecks," ISA, 1999.

8. "Use Critical Condition Management to Improve Your Bottom Line," ARC Strategies, ARC Advisory Group, April 2002.

9. Donald Campbell Brown & Manus O’Donnell, "Too Much of a Good Thing? Alarm Management Experience in BP Oil, Part I:
Generic Problems With DCS Alarm Systems," www.asmconsortium.com.

10. Dick Perry, "Alarm Systems and Their Role in Abnormal Situation Management, Part II of IV," Instrument and Controls,
SAIMC, July 2000, www.instrumentation.co.za.

11. C.T. Mattiasson, "The Alarm System From the Operator Perspective," ASM Consortium, www.asmconsortium.com.

12. D. Shook, "Alarm Management White Paper," Matrikon.

Suppliers Exist

There are a number of suppliers, and the market for alarm management and rationalization products and related services is growing. The key is to find the products, services, and experience that most closely match your in-house capabilities and philosophy.

Some of the companies that supply alarm management and rationalization products and/or services (in alphabetical order) are Con-
trol Arts (www.controlartsinc.com), Exida (www.exida.com), Honeywell (www.assetmax.com), Matrikon (www.matrikon.com),
Process Automation Services (www.pas.com), Process Systems Consultants (www.prosysinc.com), and TiPS (www.tipsweb.com).

There also are some companies that provide expert systems for improving the operator decision process and detecting developing
problems before the alarm stage, thus reducing the alarm load. Examples are Gensym (www.gensym.com) and Nexus Engineering
(http://www.nexusengineering.com).

Many DCS and SCADA systems and vendors provide alarm management features or products. It generally is more cost-effective to
use whatever features your system provides rather than a third-party add-on. However, some of these are somewhat primitive and don’t
provide the necessary functionality by themselves, though they are generally improving.

There are also some add-on alarm products on the market that enhance a control system's basic alarm capabilities by providing online alarm management (alarm controls, alarm parameter management, alarm documentation, alarm system auditing, change control, etc.), alarm filtering, cause-and-effect analysis, alarm patterns, and dynamic reconfiguration of the alarm system for varying operating conditions.


SPEAK! New Tricks to Get Information Out of Legacy Systems

How to retrofit your system for modern communications
RICH MERRITT, TECHNICAL EDITOR

Dealing with legacy equipment is occupying many engineers these days as they struggle to bring ancient control systems into the 21st
century and meet the ever-increasing hunger of information technology (IT) software.

Everybody wants that data: accountants, bean counters, upper management, plant engineers, process engineers, operations engi-
neers, maintenance people, and even the governor of New York. Some of the software packages you must feed include process histori-
ans, asset management, ERP, MES, LIMS, CMMS, SCM, and SPC, to name a few.

The Catch-22 is that many of these old systems, despite their inadequacies, are still performing incredibly well the jobs they were
designed to do. It's hard to discard or even alter an old system that's in the midst of setting a reliability record. In this article, we'll look
at a few ways that may help you extract data from your legacy systems.

Still Controlling After All Those Years

Foxboro Spec 200 process control systems (Figure 1) were installed at the PCS Nitrogen facility in Augusta, Ga., in 1977. They per-
formed admirably over the years, according to Walter Anderson, instrument and electrical maintenance engineer. “In October 2001,
we set a world record for running an ammonia plant: For five years, we made ammonia every day,” he reports, proudly. “So when we set
out to modernize the plant, the operators were understandably nervous.”

In 1999, in the middle of that record run, Anderson was given the task of modernizing the system. “We had two black and white
monitors with unbelievably terrible stick graphics, yet that system did supervisory control, trending, and data logging,” says Ander-
son. Alas, time was taking its toll. “Every other week we had a problem of some kind. We had to improve reliability, provide redundan-
cy, and speed up the response.”

The biggest problem was the proprietary Foxboro system. “Foxboro’s analog input modules [AIMs] produced a 0-10 V signal that
was converted to a proprietary digital signal,” says Anderson. “We couldn’t economically add additional flow, level, or pressure data
points to the Foxboro system.”

Sound familiar? Scores of engineers with legacy systems face similar situations. Like PCS Nitrogen, their process control systems are
still working after 15, 20, even 50 years. Many engineers ask, “Why replace a perfectly good control system?” They would rather just
extract data from a legacy system and use it to analyze the process, perform more advanced control functions, feed the voracious IT
software machines, and preserve the original investment.

Lee Ready, president of Spruce Grove, Alberta-based Ready Engineering (www.readyengineering.com), helps his clients extend the
life and functionality of legacy Bailey (now ABB) DCSs. “We replace legacy operator consoles with modern HMIs and plug into the
same DCS communications modules,” reports Ready. “The DCS hardware, software, and configuration is unchanged.”

Likewise, Cytec Industries, a specialty chemicals manufacturer in Stamford, Conn., is sticking with its Moore Products (now
Siemens) APACS+ legacy systems. “Cytec’s goal was to increase the facility’s output utilizing mostly existing equipment by maximizing
process efficiencies and reducing waste,” says John Ward, manager of automation. “Adherence to the S88.01 standard for batch recipe
management would be a critical part of the venture.”

Clearly, it can be done. We’ve heard from engineers who are working with nuclear plant control systems dating back into the 1960s,
and from one steel mill that is extracting data from equipment installed before World War II.


That doesn’t mean it’s easy.

Down the Upgrade Path

For our purposes here, let’s define a legacy system as any process control system that does not have a modern open architecture. This
can refer to a DCS you bought 27 years ago, or one that you bought yesterday. If you can’t simply plug into the system’s database or
SCADA/HMI software and get what you want, then it’s a legacy system.

To get data out of a legacy system, you probably have to upgrade it in some way. This ranges from simply installing an I/O board or
software module that extracts data from the old system, to upgrading the control system to the latest version.

In most cases, you do not want to “bulldoze” your legacy system (i.e., replace it with all new hardware). Your first step, therefore, is
to see if you can easily upgrade to a more modern version of the original system.

Obviously, any legacy system equipped with elements of open systems, such as Ethernet or fieldbus communications, or any system
with reasonably modern PC-based HMIs, poses much less of a problem. For example, APACS+ and Bailey systems, noted above, appear
to be easy to deal with because they were built with early versions of open architectures.

“The legacy systems that are easiest to upgrade are ones that were forward thinking, with standards like OPC, ActiveX, or common
protocols like Modbus or serial,” says Dave Quebbemann, industrial automation product manager at Omron Electronics
(www.omron.com).
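For legacy devices that do speak one of those common protocols, the framing is often simple enough to build by hand. As an illustration (not tied to any product named here), the sketch below assembles a Modbus RTU read-holding-registers request, including the CRC-16 that terminates every RTU frame; the slave address and register range are arbitrary example values.

```python
# Building a Modbus RTU "read holding registers" (function 0x03) request
# by hand, as might be sent over a serial link to a legacy device.
# Slave address 1 and registers 0..1 are arbitrary example values.
import struct

def crc16_modbus(data: bytes) -> int:
    """CRC-16/Modbus: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    pdu = struct.pack(">BBHH", slave, 0x03, start, count)
    crc = crc16_modbus(pdu)
    return pdu + struct.pack("<H", crc)   # CRC is appended low byte first

frame = read_holding_registers(slave=1, start=0, count=2)
print(frame.hex())   # 010300000002c40b
```

In practice you would hand the frame to the serial port and parse the reply the same way; the exercise mainly shows why a documented, open protocol is so much easier to live with than a proprietary one.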

The worst systems to deal with are those that put up proprietary barriers in both hardware and software. “Many legacy systems are,
in effect, standalone ‘islands of automation’ that might not have any capability of connecting to the outside world,” says Don Holley,
industrial automation manager at National Instruments (www.ni.com). “Even if the legacy system does have external communica-
tions, it is most likely based on proprietary hardware interfaces and communications protocols that are not well documented.”

“The problems caused by proprietary architectures are wide ranging,” adds Greg Nelson, product manager at Schneider Electric
(www.squared.com). “It ranges from something as simple as the types of cabling to the protocol or software involved. Often, the sys-
tem was programmed by former employees or by software companies that are no longer in business.”

In all fairness, many DCSs were built long before the days of PCs and open architectures. In the 1970s, it was impossible to buy suit-
able hardware off the shelf, so companies like Honeywell, Foxboro, Fisher Controls, Bailey and others had to invent it themselves.
Once they developed their own proprietary architectures, it was very difficult for the vendors to change gears and move over to open
systems. That’s why we have 27 years' worth of proprietary DCSs to deal with. Alas, support for these old systems can be spotty.

“Getting support out of manufacturers for old hardware and control systems tends to be difficult,” says Jacky Lang, senior technology developer at Citect (www.citect.com). “Manufacturers tend not to want to deal with any legacy system older than a few years.”

“The industry does not have a good record of updates,” says Pat Kennedy, president of OSIsoft (www.osisoft.com). “It is the ven-
dor’s responsibility to construct this upgrade, because it is not simply a matter of changing an operating system. There are lots of con-
figurations, reports, displays, and history that have to be moved over as well.”

Kennedy practices what he preaches about vendor responsibility. “We have systems installed 20 years ago running our current ver-
sion,” he says. “If a vendor does not maintain a system, it dies in four to five years.”

“Developing drivers for the old hardware is difficult because although the manufacturers don’t want to support them, they still don’t
want to release the technical details,” adds Lang. “Or they cannot find the technical details anymore.”

Some detractors say that the vendors don’t want you to upgrade, and that’s why they offer little help. They say vendors want you to
replace your entire legacy control system with their expensive new hardware.

But others don’t agree. Although Kennedy and Lang point out that the level of support is not what it could be, we see plenty of evi-
dence that control system vendors not only continue to support legacy systems, they offer upgrade paths to keep their old systems com-
patible with newer equipment.

The first place to look for support is from the original control system vendor. As Pat Kennedy says, it’s their responsibility, after all.


Honeywell, for example, claims that it still supports the very first TDC 2000 from 1975. David Novak, control system engineer at
BASF in Monaca, Pa., is a Honeywell TDC 3000 user, and has been constantly upgrading his systems to keep up with modern technolo-
gy. “Most of our changes are expansions, not total upgrades,” he explains. “With very few exceptions, the control system has all the
functionality of a new system.”

If your old Honeywell DCS has been kept up to snuff, connecting it to a modern IT system appears to be a piece of cake. “A modern
system can easily accommodate an integration strategy, keeping the existing application logic and user interface of the legacy system,”
says Garry Lee, information management & analysis product manager for Honeywell Industry Solutions (www.acs.honeywell.com).

Honeywell’s recommended solution is to use its Experion PKS data historian, which can deal with Honeywell’s networks, plus other
industry standards such as Modbus, 4-20 mA, and RS-232/422 serial communications.

Foxboro/Invensys (www.foxboro.com) recently announced a series of IA software modules that work on legacy systems up to 15
years old. Foxboro also encourages its users to migrate their legacy systems over to fieldbus. “In situations where the existing system is
already doing a good job controlling the plant, users should seriously consider upgrading to Foundation fieldbus (FF),” says David
Shepard, vice president at Invensys. “It allows migration to be performed on a maintenance budget, and preserves the existing invest-
ment in hardware, software, training, and intellectual property.”

Shepard says automation vendors are split on the fieldbus issue. “While some vendors are telling customers that they have to replace
their systems to take advantage of fieldbus, other vendors provide a migration path to FF from existing systems.” Each approach--bull-
dozing and migration--has its advantages and disadvantages.

Upgrading to fieldbus can be a bit expensive, especially if all you want is to get a process data value out of a field instrument. Wal-
ter Driedger, senior process control engineer at Calgary, Alberta-based Colt Engineering (www.colteng.com), says the cost premi-
um is about $500 to $2,000 per point. “A standard 4-20 mA transmitter costs about $800. The same with FF is about $1,300,”
explains Driedger. “To get a valve with FF you have to get an FF positioner instead of a simple I/P. That adds about $1,000 to the
cost. A simple switching valve has two limit switches and a solenoid. To connect these to an FF module, such as TopWorx, adds
about $2,000 to the cost.”
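Driedger's figures can be tallied directly to check the quoted per-point premium (2004 dollars, taken straight from the quote above):

```python
# Per-point cost premium of moving to Foundation fieldbus, using the
# 2004 figures quoted above (Driedger, Colt Engineering).
points = {
    "transmitter":     1300 - 800,  # FF transmitter vs. standard 4-20 mA
    "control valve":   1000,        # FF positioner instead of a simple I/P
    "switching valve": 2000,        # FF module for limit switches + solenoid
}
for name, premium in points.items():
    print(f"{name}: +${premium} per point")
print(f"range: ${min(points.values())} to ${max(points.values())}")
# range: $500 to $2000
```

The arithmetic confirms the $500 to $2,000 per-point range: modest for a transmitter, steepest where discrete devices must be wrapped in an FF module.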

Emerson Process Management recently announced a migration path that not only upgrades old Fisher-Rosemount systems, it
upgrades everybody else’s systems, too. The path migrates legacy systems to Emerson’s DeltaV platform, and leaves the original I/O
intact. Emerson claims that this upgrade is often less expensive and faster to implement than upgrading to the latest version of a legacy
system.

Emerson says it supports old Fisher Provox and RS3, Bailey Infi90 and Net 90, Honeywell TDC 2000/3000, GSI D/3, GE Genius
I/O, Siemens Teleperm, Moore APACS, Taylor Mod 300, Yokogawa, and Foxboro Spec 200, Spectrum, and I/A control systems.

The Siemens/Moore Products APACS+ is fairly easy to upgrade. At Cytec, to add S88 capability, Ward says they expanded their exist-
ing system with the help of Siemens and systems integrator Avid Solutions (www.avidsolutionsinc.com) and then installed Siemens’
ProcessSuite Batch Manage. “We were able to transition the plant to the batch manager system with minimal production outages using
a series of short partial shutdowns followed by a one-week cutover,” reports Ward. Once that was accomplished, they installed
OSIsoft’s PI historian. “This permits operators and engineers to gain access to current and historical data.” The historian data is also
accessed remotely from network and dial-up connections.

Over on the PLC side of the control business, similar support exists. “We have migration paths for products that were developed 20
years ago,” says Nelson of Schneider. This is important, because changing out a PLC processor is much easier than replacing an entire
process control system, so control engineers upgrade PLC systems all the time. “Going from an old PLC to a new one could cause you
to lose all your application programming effort if the supplier hasn’t provided a migration path.”

They Can Do Anything

If DCS vendors can find out enough about other vendors’ control systems to offer a migration path, it seems that other companies
should be able to do the same. As it turns out, many companies offer products and services to upgrade legacy systems.

One way to extract data from a legacy system is to install a process historian package to obtain all the data, and then just take whatev-
er you need from the historian. A process historian typically consists of a separate computer and software package that acquires real-
time data from sensors and the control system, and stores it into an historical database. The historian vendor supplies the interface to
the legacy system and to IT software.

“A typical data model for users these days is to have a data historian connected to each legacy device,” says Michael Paulonis, techni-
cal associate at Eastman Chemical, Kingsport, Tenn. “User-written applications will use the data historian’s application program
interfaces [APIs]. This removes the need for the user to be proficient in all operating systems and legacy APIs.”
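The pattern Paulonis describes can be sketched as a thin facade: applications code against one historian interface instead of each legacy system's native API. The HistorianClient class and its methods below are hypothetical, not any vendor's actual API; an in-memory dict stands in for the historian archive.

```python
# Sketch of the pattern described above: user applications talk to one
# historian interface rather than each legacy API. HistorianClient is
# hypothetical; the dict stands in for the historian's archive.

class HistorianClient:
    """Hypothetical facade over whatever interface the historian exposes."""

    def __init__(self, archive):
        self._archive = archive   # tag -> list of (timestamp, value)

    def read(self, tag, start, end):
        """Return (timestamp, value) samples for `tag` in [start, end]."""
        return [(t, v) for t, v in self._archive.get(tag, []) if start <= t <= end]

    def snapshot(self, tag):
        """Most recent value, or None if the tag has no data."""
        samples = self._archive.get(tag)
        return samples[-1][1] if samples else None

# The application never touches the legacy DCS API directly:
hist = HistorianClient({"FI-100": [(0, 41.8), (60, 42.3), (120, 42.9)]})
print(hist.read("FI-100", 30, 130))   # [(60, 42.3), (120, 42.9)]
print(hist.snapshot("FI-100"))        # 42.9
```

The payoff is exactly what Paulonis notes: when the legacy system is eventually replaced, only the historian interface changes, not every user-written application.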

Process historian vendors have to be proficient in control system APIs, or they wouldn’t sell any software. OSIsoft, for example, has
been selling process historian software for 20 years. Over that time, the company has developed interfaces to just about every control
system on the market. “We have more than 350 interfaces,” says Kennedy.

Arcom Control Systems, Stillwell, Kan., and IBM UK, Hursley, England, have collaborated on an MQSeries “telemetry integration
solution” (as IBM calls it), which connects to legacy control and SCADA systems. Arcom supplies its Director Series hardware and soft-
ware interfaces to various legacy systems, and IBM UK provides its WebSphere enterprise software, which has links to SAP, Oracle,
and similar IT packages.

Each Director module has protocol drivers that access data from field devices. Drivers available include HART, Modbus, TCP/IP,
terminal server, UDP, PPP, Telnet, and other systems primarily used in oil & gas, electric utilities, water, and telecomm industries.

InStep Software, Chicago, says it has installed its eDNA process historian on just about everything. The company seems to relish
tackling tough and obscure systems. Anthony Maurer, a partner at InStep, says the company can reach legacy systems with file-based transfers, serial interfaces, parallel interfaces, TCP/IP socket reads, DMA, and printer device sniffing techniques.

InStep has been interfacing to control systems for 15 years, starting with nukes. “VMS-based systems are the easiest,” says Maurer,
“and dedicated nuclear-grade plant process computers are the hardest.”

Although process historians do make your task much easier, we’ve heard that this convenience and process expertise comes at a very
steep price, up to $1,000 per data point.

You Can Do Better?

Some plants have very little money available for bulldozing, complete system upgrades, process historians, and similar solutions. What
they need is to extract the necessary data without spending a lot of money and without compromising their current control system.

Ready Engineering, for example, knows Bailey systems fairly well, and Lee Ready cautions users about going too far. “We see many
end users replace their Bailey DCS with ‘modern’ equipment that has less functionality and with HMIs that are inferior,” notes Ready.
“An engineering study would have shown them that maintaining their Bailey DCS and installing a new, third-party HMI would have
resulted in a superior system at a fraction of the price.”

Dave Siever, APC project manager at Air Liquide America, Houston, agrees. “Bailey DCS systems are reasonably easy to upgrade to
Wonderware HMIs, using DDE servers to get the data,” he says. “Using DDE servers with Bailey Net90 and Infi90 allow integration
with Industrial SQL server (iSQL), which is a good way to extract key data. We’ve done the same with Foxboro systems, but the I/A sys-
tem is weak with DDE/OPC. Our best solution with Foxboro is to use OSIsoft’s PI historian, although it is costly.”

Siever calls in experts when needed, but usually does the work in-house. “We keep IT folks away from process control systems,”
notes Siever. “Vendor help is typically crappy for this, since they usually prefer to keep their legacy systems a closed architecture and
want us to use their proprietary software and hardware. Fortunately, we have good in-house software capability and have developed
our own DDE servers to interface with legacy systems. Vendors like Matrikon and Standard Automation have helped as well.”

When he’s not using historians, Paulonis at Eastman Chemical puts his control systems on networks. “It is certainly important to
get a legacy system on the network,” he advises. “This pretty much implies Ethernet. Most vendors have an application server of some
sort that provides a network interface.”

If not, how do you develop one? Few end users have the programming resources to develop their own APIs and DDE interfaces. “In
most cases, ‘you’ do not,” says Paulonis. “An IT supplier provides the interface. If there is a reasonably large installed base of systems,
it is usually cost-effective for IT software suppliers to write an interface.”

“Typically, the most significant cost for anyone wanting to convert from legacy systems is the programming development involved,”
cautions Nelson of Schneider Electric. “It’s the programming, not the hardware, that requires the biggest investment. All the hassles,
costs, and potential downtime associated with reprogramming is what keeps people from moving forward.”

If you don’t have programming talent in house, you may want to bring in a systems integrator who knows your particular system and
knows how to write C code and APIs.

Of course, if you have clever people around, who needs outside experts? George Lister, computer technician at U.S. Gypsum, Sweet-
water, Texas, has a sweet and simple solution. “Most legacy systems have a serial port. The trick is to figure out what data you want to
export, format it for a serial stream, and then output the data,” he says. Sounds easy. But how do you output it? “Port the legacy data to
a printer port as if it were a ‘print’ command. Format the print command in a form that will be accepted as a serial stream.”

Lister connected his Foxboro Fox III SCADA system (circa 1975) to a Unix system that supported multiple serial ports. “We config-
ured the serial port on the Unix box to accept the serial streamed data and imported it into a database record. We used Foxbase for Unix
software. The database software writes the serial streamed data into a record delimited by spaces. Once we got the data into the data-
base, we could do anything we wanted with it.”
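Lister's print-port trick amounts to formatting values as a delimited text line on the legacy side and parsing that stream on the receiving box. The sketch below shows both halves in memory; the tag names and field layout are invented, and a real installation would read from a serial port rather than a string.

```python
# Sketch of the print-port trick described above: the legacy system
# "prints" values as a space-delimited line; the receiving machine parses
# each line into a database record. Tags and layout are invented; a real
# setup reads a serial port instead of this in-memory string.

def format_print_line(tag, value, units):
    """What the legacy side would send out its printer/serial port."""
    return f"{tag} {value:.2f} {units}\r\n"

def parse_print_line(line):
    """Split one space-delimited line into a record for the database."""
    tag, value, units = line.split()
    return {"tag": tag, "value": float(value), "units": units}

stream = (format_print_line("FT101", 123.456, "kg/h")
          + format_print_line("TI202", 87.1, "degC"))
records = [parse_print_line(ln) for ln in stream.splitlines() if ln.strip()]
print(records[0])   # {'tag': 'FT101', 'value': 123.46, 'units': 'kg/h'}
```

As Lister says, once the data lands in a database record, you can do anything you want with it; the only cleverness is making the "print" format unambiguous to parse.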

Walt Anderson at PCS Nitrogen wound up replacing his legacy Foxboro system with another legacy system. In 2001, PCS Nitro-
gen closed down two nitrogen plants in Iowa and Nebraska, both of which had three-year-old Moore APACS+ systems. So Anderson
shipped them down to Georgia to replace the aging Foxboro Spec 200 controllers. “In 2001, we upgraded 1970s technology to
1990s technology,” quips Anderson. He also installed a Wonderware HMI/SCADA system to replace the old Foxboro Spectrum
supervisory computer.

“We used the Internet to find specialized data conversion instrumentation,” he reports. The system had been modified in the 1980s
with a Transmation temperature monitor, Fischer & Porter MicroDCI single-loop controllers, and a Tensa Unix box with optimization
software.

The operators were not entirely happy at the prospect of losing their familiar Spec 200 analog controller faceplates, so Anderson
installed thin clients on the Wonderware system. “We formatted thin client displays so they looked exactly like a Spec 200 faceplate,”
says Anderson, “and installed thin clients in place of Spec 200s. With a touchscreen, the operators could adjust the loops the same way
they did on the old hardware.”

The Cavalry Arrives

Define a need, and somebody will meet it. Hardware and software vendors and systems integrators are constantly solving your problems.

For example, new software makes it possible for you to extract all the data you need from your existing HART-based field instrumentation. Bud Adler, director of business development at Moore Industries (www.miinet.com), says there are six million HART-enabled instruments in process plants around the world, but only a handful of plants are taking advantage of the digital data contained in the 4-20 mA HART signal.

“The HART Foundation offers an OPC server that greatly facilitates connecting a large number of HART devices to a system.”

“CitectSCADA comes with over 170 drivers bundled with the base product at no extra charge,” reports Richard Bailey, product mar-
keting manager, Citect. “More and more customers are extending the life of their legacy systems with add-on tools. Software modules
connect seamlessly to the legacy system and allow bidirectional data transfer, including migrating legacy data into a format and loca-
tion that is useful.”

The beauty of an HMI/SCADA system or process historian is that once it’s installed, you have access to all the data in the legacy con-
trol system, so you don’t have to work with additional I/O hardware or software drivers. But if you need to work with the actual I/O,
there is help available from hardware vendors such as Sixnet, Opto 22, and Lantronix.

“An Opto system can be added to the existing control architecture without affecting any of the core processes,” says David Crump of
Opto 22 (www.opto22.com). “The specific machines, equipment, and system components that are involved almost without exception
have attachment capabilities to our hardware. And that attachment can be direct, via serial communication, or via analog, digital, or
serial modules.”

This worked at the Callaway Golf Co. ball manufacturing plant in Carlsbad, Calif. Callaway recently installed a modern control sys-
tem in a brand new plant, but neglected to provide any data acquisition capability. Although it was a state-of-the-art control system, it
wasn’t “open,” so getting data posed a problem. “The Snap Ultimate I/O system monitors thermocouples, pressure and conductivity
sensors, and other equipment used in the production of golf balls,” says Crump. “We didn’t modify any of the actual production
processes, but came in under the control layer, at the instrumentation level, strictly for the purposes of capturing data.”

Such add-on I/O hardware is available from a host of companies. Unless you like contacting every vendor in the world, your best bet
might be to prowl the Internet.

At some point, you have to ask yourself if it’s worth the time, trouble, and expense of working with a legacy system. You might be
postponing the inevitable, because all systems die eventually. However, if funding and other problems dictate that you work with what
you have, remember that data in any legacy system can be accessed. It just takes time and money.


State of the HART


The Compelling Case For the Most Used Digital Communications Protocol in The World
RON HELSON

The impact of HART Communication on the process automation industry is immeasurable. No other field communication technology
comes close in size, scope of installation or overall effectiveness. HART is the industry's most cost-effective, easy-to-use and low-risk
communication solution and a key enabler for asset management and process improvement.

HART is best known for ease in digital process instrument calibration, but the modern HART device has many more capabilities that
are increasingly useful to the end user. Most process automation suppliers now offer control system interfaces, remote I/O systems,
and PC-based software applications that leverage the intelligence of HART-smart field devices to deliver continuous, real-time device
diagnostics, multi-variable process information and much more.

Real-time HART integration into DCS architectures enables users to get the full benefit from intelligent devices, making HART Communication an important part of plant applications for control, safety and asset productivity. Continuous, intelligent communication between the field device and control system allows problems with the device, its connection to the process, or inaccuracies in the 4-20mA control signal to be detected automatically, within seconds, all of which enables proactive action to avoid process disruptions and unplanned shutdowns.

HART Communication Basics

HART Communication is a backward-compatible enhancement for 4-20mA instrumentation that enables remote, two-way, digital
communication with smart microprocessor-based field devices. HART devices support two simultaneous communication channels on the same wire: the 4-20mA “current loop” analog communication channel and the HART digital communication channel.

The 4-20mA analog communication channel is significant because it ensures compatibility with legacy systems and fast transport of
control variable information to/from the process connection and controller.

Continuous, simultaneous and complementary real-time use of both communication channels provides a high level of control secu-
rity and loop integrity far beyond what is achievable by using either channel alone.
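The analog channel's behavior can be sketched in a few lines of code. This is a generic illustration of 4-20mA range scaling, not code from the HART specification; the function names and the transmitter range values are invented for the example:

```python
def pv_to_current(pv, lrv, urv):
    """Map a process variable onto the 4-20 mA analog channel."""
    return 4.0 + 16.0 * (pv - lrv) / (urv - lrv)

def current_to_pv(ma, lrv, urv):
    """Recover the process variable from the measured loop current."""
    return lrv + (ma - 4.0) * (urv - lrv) / 16.0

# Example: a transmitter ranged 0-100 psi (assumed values)
ma = pv_to_current(50.0, 0.0, 100.0)   # mid-scale -> 12.0 mA
pv = current_to_pv(ma, 0.0, 100.0)     # round-trips to 50.0 psi
print(ma, pv)
```

The digital HART channel rides on top of this same pair of wires, which is why the analog path keeps working unchanged for legacy systems.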

The Smart Part of HART

All HART-enabled field devices, regardless of manufacturer, contain 35-40 data items of rich information for improving plant operations and managing assets. The inherent intelligence of these devices allows them to perform internal diagnostic checks and communicate information regarding their status continuously.

Standard HART Commands make it easy for systems to access the real-time data in HART devices with valuable device status infor-
mation being part of every response packet from the device.

Integration with control systems allows both communication channels to be used for multi-variable process data and real-time detection of any problems impacting the device or the integrity of the 4-20mA current loop.

Detailed Device Diagnostics

In addition to standard indicators for device status and process variable quality, the HART Protocol provides an efficient mechanism
for control systems to access detailed device diagnostics. Up to 136 device-specific diagnostic parameters can be accessed within a single HART command.

Control Loop Validation

Reliable, continuous communication is critical to making good control decisions. Quality, integrity, and accuracy of the 4-20mA current loop signal are also essential for good control. HART Communication enables control systems to continuously monitor and validate
the integrity of the 4-20mA current loop. Real-time integration with control, safety and asset management systems delivers tremen-
dous benefits as previously undetectable device or loop integrity problems are detected within seconds of occurrence.
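One simple form of this loop validation can be sketched as follows: compare the analog current actually measured at the input card against the current implied by the PV the device reports digitally over HART, and flag a mismatch. The function names and the tolerance are assumptions for illustration, not part of the protocol:

```python
def expected_current(pv, lrv, urv):
    """Loop current the transmitter should be driving for this PV."""
    return 4.0 + 16.0 * (pv - lrv) / (urv - lrv)

def validate_loop(measured_ma, hart_pv, lrv, urv, tol_ma=0.1):
    """Compare measured analog current with the HART-reported PV.

    A persistent mismatch suggests a wiring or ground fault, a scaling
    error, or a failing output stage -- problems that are invisible if
    only the analog channel is monitored.
    """
    mismatch = abs(measured_ma - expected_current(hart_pv, lrv, urv))
    return mismatch <= tol_ma

# Healthy loop: digital PV and analog current agree
print(validate_loop(12.0, 50.0, 0.0, 100.0))   # True
# Degraded loop: card reads 11.2 mA but the device reports a PV of 50.0
print(validate_loop(11.2, 50.0, 0.0, 100.0))   # False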

Today, most process automation system suppliers offer a variety of HART interface solutions to support integration with their con-
trol systems. Many have intelligent remote I/O subsystems and system interfaces for direct connection to HART devices. Most allow
the HART data to be used in real-time for operator display, alarm and control functions. Third-party I/O systems and interface prod-
ucts are available to support integration with legacy control systems that might not be easily upgraded for HART communication.
Gateway interface solutions are also available for linking HART devices to systems based on network protocols such as Ethernet, Mod-
bus, and Profibus.

Real-time HART integration with plant control, safety and asset management systems unlocks the value of connected devices and
extends the capability of systems to detect any problems with the device, its connection to the process or interference with accurate
communication between the device and system. The justification is two-fold: (1) cost reduction due to improved operations and
increased efficiencies, and (2) cost avoidance as early warning to impending problems enables the high cost of process disruptions and
unplanned shutdowns to be averted.

HART is the global standard for smart process instrumentation communication. With its huge installed base, ease of use, global acceptance, and the support of major process automation suppliers, HART Communication will continue to lead the world market well beyond the next decade, creating an ongoing demand for development of new HART-enabled products and advanced applications.

Ron Helson, Executive Director, HART Communication Foundation, can be reached at ronh@hartcomm.org or 512/794-0369.

The term “Safety Instrumented Function” or SIF is becoming common in the world of safety instrumented systems (SISs). It is one of
the increasing number of S-words--SIS, SIL, SRS, SLC, etc.--that are coming into our safety system terminology.

The definition of a SIF as provided in IEC standard 61511, “Functional safety: Safety Instrumented Systems for the process industry
sector,” leaves a bit to be desired as a practical definition, and the application of the term leaves many people confused.

IEC standard 61511 defines a safety instrumented function as a “safety function with a specified safety integrity level which is neces-
sary to achieve functional safety. A safety instrumented function can be either a safety instrumented protection function or a safety
instrumented control function.”

A safety function is further defined in 61511 as a “function to be implemented by a SIS, other technology safety-related system, or
external risk reduction facilities, which is intended to achieve or maintain a safe state for the process, with respect to a specific hazardous event.” The 61511 standard, however, uses the terms SIS and SIF somewhat interchangeably in places.

From this definition we can also see that there are two types of safety instrumented functions. The first is a safety instrumented pro-
tection function, which is a safety instrumented function operating in the demand mode. The second is a safety instrumented control
function, which is a safety instrumented function operating in the continuous mode.

Let us look at some other definitions of SIF that may make things a bit clearer. In their book, Safety Integrity Level Selection: Systematic Methods Including Layer of Protection Analysis, Ed Marszal, PE, and Eric Scharpf describe the SIF in terms of a specific hazard and the automatic instrumented action that brings the process to a safe state, as in the following example:

* Flame-out in an incinerator that could cause fuel gas accumulation and explosion causes a main fuel gas trip: The specific hazard is
a flame-out. The automatic instrument protection action is a main fuel gas trip, which cuts off the fuel and prevents fuel gas accumula-
tion, bringing the system to a safe state.

What a SIF Is Not

There are functions that may seem like a SIF or part of a SIF, but are not. A SIF is normally associated with life-and-limb protection. If
you have identified an instrumented protection function and the consequence of the hazard could include death or injury, the function is
a potential SIF (pending SIL analysis--there may be adequate layers of protection so that identification of the protective function as a
SIF is not required).

However, when a SIF operates, there may be related actions that occur at the same time that place portions of the process in
desirable operating states to minimize startup time, loss of inventory, process equipment problems, etc. Operating companies
sometimes fall into the trap of considering these related actions as part of the SIF. Folding these operational actions into the SIF complicates it and can increase the difficulty of achieving the target SIL. This can lead to increased and unnecessary cost,
burden, and complexity.

Equipment or asset protection functions also are not SIFs. Every plant has protective functions that protect the plant’s equipment
and assets. This is primarily a commercial or money issue. If there are no safety aspects to these protective functions, they are not SIFs.

But since there are few to no standards in this area, some people do assign an asset integrity level (AIL) to these protection functions
and treat these systems like safety instrumented systems. For example, if high-high level in a knockout drum to a compressor shuts it
down to protect it from mechanical damage due to liquids, and there is no anticipated safety issue (such as rupture of the compressor
case), then this is not a SIF but rather an equipment protection function. Considering asset protection functions as SIFs generally
leads to a large number of SIFs, each of which has to conform to the relevant safety standard. This creates a large burden on the operat-
ing company to meet safety standards and regulations for protective functions that are not required to meet the safety standards and
regulations.

Environmental protection is a bit more difficult to categorize, as it is not directly life-and-limb protection. Many people currently
have a separate class of protection function and assign an environmental integrity level, sometimes called an EIL. While the principles
of ANSI/ISA 84.01 are many times applied to environmental protection systems, there is not a specific requirement in 84.01 to do so,
nor any specific regulatory requirement to apply 84.01.

This does not, however, necessarily let you off the hook. EPA regulations in CFR 40 part 68, “Risk Management Programs for
Chemical Accident Release Prevention,” have virtually the same language as OSHA 1910.119, “Process Safety Management,” only dif-
ferent end goals. As a result, CFR 40 Part 68 requires recognized and generally accepted good engineering practices to be used to
achieve the goal of protection of the environment. As such, the principles and practices of 84.01 may represent a recognized and gen-
erally accepted good engineering practice that could be used for environmental protection systems.

Also, IEC 61511, Section 1.2, states that “this standard in particular, applies when functional safety is achieved using one or more safety instrumented functions for the protection of personnel, protection of the general public or protection of the environment.”

Another example of what is not a SIF is an operational protection function. This type of function is designed to keep the plant within predetermined operational boundaries for commercial or operational reasons, but not for safety.

One of the keys to successful SIL selection is to correctly identify the safety instrumented functions for a facility. Failure to identify true SIFs leads to less safety; conversely, identifying things as SIFs that are not leads to unnecessary cost, burden, and complexity.

How SIF Fits With SIS and SIL

ANSI/ISA 84.01 does not always make a clear distinction between a SIF (a safety function) and a SIS (a safety instrumented system). IEC 61511 makes a somewhat clearer distinction but still intermixes the terms. A SIS is made up of one or more SIFs.

By definition, each SIF must have a SIL based on how much risk reduction the SIF must provide to help reduce the risk of a particular
hazard to an acceptable level when considered with the rest of the protective layers that reduce the risk of that particular hazard. The SIL is
selected based on the risk posed by the hazard the SIF is protecting against. This risk is composed of a consequence (what bad things can happen) and a pre-safeguard frequency (how often the hazard is expected to occur if no protections--SIS or non-SIS--are provided).
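The selection logic can be sketched numerically. In this hypothetical calculation, the tolerable frequency target and the unmitigated event frequency are invented numbers; the SIL bands follow the standard IEC 61508/61511 PFDavg ranges for demand-mode functions:

```python
def required_sil(unmitigated_freq, tolerable_freq):
    """Return (required risk reduction factor, SIL) for a demand-mode SIF.

    SIL bands per IEC 61508/61511 PFDavg ranges:
    SIL 1: 1e-2 <= PFDavg < 1e-1, SIL 2: 1e-3 <= PFDavg < 1e-2,
    SIL 3: 1e-4 <= PFDavg < 1e-3, SIL 4: 1e-5 <= PFDavg < 1e-4.
    """
    rrf = unmitigated_freq / tolerable_freq   # risk reduction needed
    pfd = 1.0 / rrf                           # allowable avg probability of failure on demand
    if pfd >= 1e-1:
        sil = 0   # less than one SIL's worth of risk reduction required
    elif pfd >= 1e-2:
        sil = 1
    elif pfd >= 1e-3:
        sil = 2
    elif pfd >= 1e-4:
        sil = 3
    else:
        sil = 4
    return rrf, sil

# Hypothetical: hazard expected 0.05/yr unmitigated, tolerable 1e-4/yr
rrf, sil = required_sil(0.05, 1e-4)
print(rrf, sil)   # roughly 500x risk reduction -> SIL 2
```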


However, while you have a single hazard (and generally a single consequence) associated with a SIF, you can have multiple initiating
causes, each with its own frequency of occurrence. For example, overpressure of a vessel due to loss of cooling (with a consequence of
vessel rupture and fire/explosion) could be caused by loss of cooling water supply, loss of cooling water pump(s), temperature control
loop failure, plugging of tubes, etc. Each of these initiating causes can have a different frequency of occurrence, and thus different
risks (consequence x frequency) for the same SIF.

When determining the target SIL of a SIF with multiple initiating cause scenarios, the highest SIL of all the scenarios is normally
used. In cases where there are a large number of causes or multiple scenarios with the same or similar SIL (risk), a look at the overall
risk may be warranted and may result in a higher SIL for the SIF. Fault tree analysis or other quantitative methods are sometimes used
for this purpose.
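The aggregation effect described above can be sketched with invented numbers: three initiating causes that each individually call for SIL 2, but whose summed demand frequency pushes the overall target to SIL 3. The cause names and frequencies are hypothetical; the band logic follows the IEC 61508/61511 demand-mode PFDavg ranges:

```python
def band(pfd):
    """Map an allowable PFDavg onto a SIL per IEC 61508/61511 demand-mode bands."""
    if pfd >= 1e-1:
        return 0
    if pfd >= 1e-2:
        return 1
    if pfd >= 1e-3:
        return 2
    if pfd >= 1e-4:
        return 3
    return 4

causes = {  # hypothetical initiating causes, events/yr (invented numbers)
    "loss of cooling water supply": 0.05,
    "cooling water pump failure":   0.04,
    "temperature loop failure":     0.03,
}
tolerable = 1e-4  # assumed tolerable frequency for the vessel-rupture hazard

# Normal practice: take the highest SIL demanded by any single scenario
worst_single = max(band(tolerable / f) for f in causes.values())
# Overall-risk check: sum the demand frequencies of all causes
aggregate = band(tolerable / sum(causes.values()))
print(worst_single, aggregate)   # the aggregated risk demands a higher SIL
```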

William L. (Bill) Mostia Jr., PE, League City, Texas, has more than 25 years experience applying safety, instrumentation, and control
systems in process facilities. He may be reached at wmostia@msn.com.

Definition of the SIF in the SRS

Standards IEC 61511 and ANSI/ISA 84.01 have specific requirements for defining the safety instrumented function (SIF) for the Safety
Requirement Specification (SRS), including:

1. A physical and functional description of the SIF.

2. A definition of the safe or mitigated state of the process for the SIF.

3. A definition of any interaction of the SIF’s safe state with other concurrently occurring safe states or events that may create a sep-
arate hazard (i.e., overload of emergency storage, multiple relief to flare system, subsequent downstream or upstream tripping, etc.).

4. Initiating causes (sources of demand) and frequency of causes (demand rate) of the hazard related to the SIF.

5. Identification of the proof testing methods and intervals for off-line and online testing for the system and for individual compo-
nents if they are not tested as a system.

6. Response time (speed) requirements for the SIF to bring the process to a safe state. This includes detection time, decision time,
final element action time, transmissions times, and time to bring the system to a safe or mitigated state.

7. The safety integrity level (SIL) and mode of operation (demand/continuous) for the SIF.

8. Identification of the SIF process measurements, their normal measurement ranges, normal operating ranges, and trip points.

9. A description of the SIF process output actions (final element actions) and criteria for successful operation, e.g., tight shutoff, speed.

10. The functional relationship between process inputs and outputs, including logic, mathematical functions, conditions, and any
required permissives.

11. Identification of any common-cause failure modes that affect the SIF.

12. Requirements for manual shutdown.

13. Requirements for resetting the SIF after a shutdown.

14. Trip philosophy: energize-to-trip (ETT) or de-energize-to-trip (DTT).

15. Identification of SIF failure modes and desired response of the SIF to them (e.g., alarms, automatic shut-down).

16. Any specific requirements related to the procedures for starting up and restarting the SIF as well as for maintaining the SIF.

17. Definition of all the interfaces between the SIF and any other systems (including the basic process control system and operators).

18. A description of the modes of operation (normal and abnormal) of the plant that affect the SIF, and its response or operational mode for each (startup, reduced rates, high rates, shutdown, different product grades, known upsets, etc.).

19. Application software safety requirements pertinent to the SIF.

20. Requirements for maintenance, testing, or operational overrides/inhibits/bypasses, including how they will be initiated, how
they will be monitored while in place, and how they are cleared.

21. Identification of any action necessary to achieve or maintain a safe state in the event of fault(s) being detected in the SIF. Any
such action shall be determined taking account of all relevant human factors, procedures, training, etc.

22. The mean-time-to-repair (or restoration) that is feasible for the SIF, taking into account the in-house maintenance capabilities,
procedures and practices, spare part availability, etc. If the required maintenance is out of house then capability, travel time, location,
spares holding location, service contracts, environmental constraints, etc., must be considered.

23. Maximum allowable spurious trip rate. This should also consider whether there are any safety issues to spurious trips such as the
potential hazards involved in restarting the SIF.

24. For SIFs that have multiple final elements affecting different process functions (different equipment, valves that isolate or vent
different process streams, etc.), identify any possible dangerous combinations of output states (where not all the final elements operate
properly) that need to be avoided.

25. Identify the extremes of all environment and abuse conditions likely to be encountered by the SIF. This may require considera-
tion of temperature, humidity, contaminants, grounding, electromagnetic interference/radio frequency interference (EMI/RFI),
shock/vibration, electrostatic discharge, electrical area classification, flooding, lightning, human factors, and other related factors.

26. Definition of the requirements for the SIF necessary to survive a major accident event, i.e., time required for a valve to remain
operational during a fire.
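Item 6 in the list above amounts to a simple budget check: the component delays must sum to comfortably less than the time available to bring the process to a safe state (often called the process safety time). A sketch with all delay values invented, including the 50% margin, which is a common rule of thumb rather than a requirement from the standards:

```python
# Hypothetical SIF response-time budget (all values invented, in seconds)
budget = {
    "detection (sensor + transmitter)":    2.0,
    "transmission to logic solver":        0.5,
    "logic solver decision":               0.3,
    "transmission to final element":       0.5,
    "final element action (valve stroke)": 4.0,
}
process_safety_time = 15.0   # assumed time available to reach the safe state

total = sum(budget.values())
# Respond within half the process safety time to leave margin (assumed rule)
ok = total <= 0.5 * process_safety_time
print(total, ok)
```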


TiO2 Facility Finds Hidden Plant


Millennium Inorganic Chemicals increased capacity, improved quality,
and reduced maintenance costs with loop performance analysis
PAUL BERWANGER

I was brought over to Millennium Inorganic Chemicals from Equistar in 1998 to manage new capital projects. Market demand for tita-
nium dioxide was expected to outpace installed capacity by 2005 due to the closure of outdated and noncompetitive lines, and Millen-
nium wanted to know what could be done to meet this demand while maximizing shareholder value.

The traditional--and assumed--answer was to build new plants and additions. You know the routine: Sales & Marketing tells manage-
ment “X%” more of this-and-that product is needed. Management then tells finance to tell manufacturing to spend big money to build
or expand facilities to meet those demands.

But as many know, things often start to go off the track at that point. Eyes grow big in manufacturing: “Oh, wow, we're so lucky. We
get to build the next great facility. We've got an unlimited budget. This is tremendous. We're going to keep our jobs going for the next
20 years.”

New plants take money (a lot of money), people, and time to build, and even more time to pay back. Therefore, they don't necessari-
ly maximize profit. To maximize profit, a company must maximize the value added for each incremental investment dollar spent. In
other words, a sound business case must be made. That became my first task.

The idea was to follow the usual “build” formula where finance and manufacturing negotiate with sales to determine how much
product can be moved at a range of prices. We then developed price/output/profit curves. Such curves are rarely linear, and profits
don't always rise with volume.

Once we agreed on the point of maximum projected profit, my job was to give sales that level of production as cost-effectively as I
could, using our existing capacity. If we then found that we were still capacity-constrained, our next step was line expansion, and final-
ly, a new plant.

Assessing the Options

Titanium dioxide (TiO2) is one of Millennium's top profit-makers. It's the ideal white pigment for paint-makers. Our first project was
to increase Millennium's current production of titanium dioxide (TiO2) from the Ashtabula, Ohio, plant. Then the company would
introduce a new, higher-tier TiO2 trade-named Tiona 596.

To meet the immediate needs of 4-5 metric tons/day to replace the closure of the inefficient lines, we would need a $10 million line
expansion. Calculations estimated the cost of a new greenfield TiO2 facility at about $2,000 per installed ton/year of capacity. If we
decided to plan for capacity expansion of future sales projections, we would need to build a new plant. Since an optimally efficient,
world-class TiO2 plant sizes out at about 150,000 tons/year, the cost would be $300 million or so--a big, risk-filled chunk of change for
any company. Covering depreciation alone would absorb $15 million per year in gross profit over 20 years.

Added to the $300 million would be the cost of servicing the debt, plus the cost of other needed infrastructure. Infrastructure work
at the location chosen would add $150 per ton/year, or another $22.5 million to the investment. It obviously makes sense to build a
greenfield facility at times, but we felt there had to be a better way than to just start pouring concrete.
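The capital math above can be reproduced directly from the figures in the text:

```python
# Figures from the article: $2,000 per installed ton/year of capacity,
# 150,000 tons/year for a world-class TiO2 plant, $150/ton-year infrastructure
cost_per_ton_year = 2_000
plant_capacity    = 150_000

plant_cost     = cost_per_ton_year * plant_capacity   # $300 million
depreciation   = plant_cost / 20                      # $15 million/yr over 20 years
infrastructure = 150 * plant_capacity                 # another $22.5 million

print(plant_cost, depreciation, infrastructure)
# 300000000 15000000.0 22500000
```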

The obvious question was whether Millennium's current TiO2 manufacturing assets were fully optimized. The Ashtabula plant, the
company's largest, looked highly efficient. Year after year, our plant took honors for being the most efficient plant in our system.
Could we wring out additional tons? Even if additional production could be squeezed out, how much would it be? How would the cost
of releasing each additional hidden-plant ton compare to the cost of each new ton from a line extension or new plant?

So we set out to find if a larger, hidden plant lay within the Ashtabula TiO2 facility. Two intertwined avenues were explored: process
reliability and its evil twin, process variability. I define process reliability broadly here as the percent of time plant assets are available
for their intended purpose at full design capacity. Downtime or poor performance due to plant problems or constraints is what we address here; shutdowns or slowdowns ordered by management are excepted.

Start With Loop Performance

Process variability is a demon that often lurks unseen in reliability's shadow, cutting efficiency and output and damaging the
plant. Variability problems can lead to failures of rotating equipment, mechanical erosion, and losses in product quality, through-
put, and yield.

We approached our investigations systematically. First, we needed people with lots of experience in evaluating processes and
their control; we didn't have the staff to properly do it ourselves. I had earlier been impressed by feedback loop auditing work per-
formed by Emerson Process Management's EnTech team at an Equistar plant. Loop audits seemed a good place to start developing
hard data.

Rather than starting the consultant with a process in the TiO2 chain where we suspected problems, we gave him a process where we
thought we were running nearly perfectly. You might call it a challenge: “If you can find problems here, you can find problems any-
where.” The process was the oxidizer, a main operating unit where titanium tetrachloride is reacted with toluene, oxygen, and nitro-
gen to create TiO2.

Presented to the operator at that time was a typical trend chart detailing the reactor's TiO2 output in metric tons per hour. Some
cyclic variability was notable, with flows ranging around 6.7 tph. The design rate was 7.0 tph. Product quality was 90% first-pass
prime. We were happy as clams.

The consultant looked at the chart and immediately discounted the data presented to the operator, something that I and engineers at
most companies would probably not think to do. He compared it with field data using an EnTech auditing tool connected to the final
control element I/O.

Surprise. Instead of 6.7 tph with a variability of +/- 2%, they found 6.7 tph with a variability of +/- 20%. How can that be, we asked?
The consultant overlaid his chart on the chart presented to the operator. Both showed the same flows and period, but the cycles were
out of phase by 180 degrees, which didn't make sense. He had seen this behavior before, however, and proposed the reason: Most prob-
ably, a process problem arose years ago. Because the output remained high, rather than solving the problem someone decided to filter
the output signal. What the operator saw was a time-weighted average that understated the true design output, which should have
been more than 9 tph.
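The masking effect the consultant suspected is easy to reproduce: a first-order (exponential) filter on a cyclic signal reports only a fraction of the true swing. This synthetic sketch uses the 38-second cycle from the story, but the filter constant and flow values are invented, not plant data:

```python
import math

period = 38.0    # seconds, matching the cycle described in the article
alpha  = 0.02    # aggressive first-order filter constant (invented)

raw, filtered = [], []
y = 6.7          # start the filter at the mean flow, tph
for t in range(2000):
    x = 6.7 + 1.34 * math.sin(2 * math.pi * t / period)   # ~+/-20% swing
    y = y + alpha * (x - y)                               # exponential filter
    if t >= 1000:        # discard the start-up transient
        raw.append(x)
        filtered.append(y)

swing_raw = (max(raw) - min(raw)) / 2
swing_flt = (max(filtered) - min(filtered)) / 2
print(round(swing_raw, 2), round(swing_flt, 2))
# The filtered trend shows only a small fraction of the true variability.
```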

Variability Thief Apprehended

This discovery really piqued our interest. We wanted to know what prompted the filtering in the first place, so we asked the consultant
to perform an audit of every loop in the reactor system. He shortly found the culprit to be a scrub salt additive controlled by an on/off
valve, a place we had never looked since it looked so good at the operator station.

The hot TiO2 is quenched within the pipes of a flue-pond after leaving the reactor. Rapid cooling is necessary to control particle size,
and particle size equates to quality. To prevent the oxide from agglomerating in the piping, scrub salt is added just before quenching.

In the chart we showed the operator, reactor output cycled on an exact 38-second period. The full audit indicated that the chart for
the scrub salt valve had the same 38-second period in a cycle of 20 seconds open and 18 seconds closed. To determine if the valve was
the cause, it was switched to an arbitrary 48-second cycle of 25 seconds open and 23 seconds closed. Voila! Reactor output cycling
tracked exactly.
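The period-matching diagnosis can be sketched with a simple autocorrelation over candidate lags. The synthetic valve signal below is built from the 20 seconds open / 18 seconds closed cycle described in the text; the lag search range is an assumption:

```python
def autocorr(x, lag):
    """Normalized autocorrelation of x at a given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag))
    return cov / var

# Synthetic on/off scrub-salt valve signal: 20 s open, 18 s closed (38 s cycle)
signal = [1.0 if (t % 38) < 20 else 0.0 for t in range(760)]

# Search candidate periods and pick the lag with the strongest correlation
best_lag = max(range(10, 61), key=lambda lag: autocorr(signal, lag))
print(best_lag)   # 38
```

Running the same analysis on the reactor output trend would reveal the shared 38-second period that pointed to the valve as the culprit.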

We asked Emerson to fix the problem and properly tune the reactor with a Lambda tuning algorithm. The result: TiO2 output
advanced to 9.7 tph, a 3.0 tph boost. Smoothing the output permitted the process to be run much closer to its theoretical design limits.
Salt valve correction alone effectively increased plant size by 45%, a “hidden plant” we didn’t know we had.
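Lambda tuning, for context, picks controller settings from a first-order process model so the closed loop responds with a chosen time constant (lambda). A sketch of the common PI rule for a first-order-plus-dead-time model; the model values are invented, and this is an illustration of the method rather than the actual reactor tuning:

```python
def lambda_pi(Kp, tau, theta, lam):
    """PI settings from lambda tuning for a first-order-plus-dead-time model.

    Kp: process gain, tau: process time constant (s),
    theta: dead time (s), lam: desired closed-loop time constant (s).
    """
    Kc = tau / (Kp * (lam + theta))   # controller gain
    Ti = tau                          # integral time = process time constant
    return Kc, Ti

# Hypothetical model: gain 2.0, tau 30 s, dead time 5 s, lambda = 3 * tau
print(lambda_pi(2.0, 30.0, 5.0, 90.0))   # Kc ~ 0.158, Ti = 30 s
```

Choosing a larger lambda trades response speed for smoothness, which is exactly what a variability-reduction effort wants.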

We are also saving salt. The existing on/off scrub salt addition amounted to an average flow of 53% of maximum pump capacity. Salt
addition has now been made continuous through a regulating valve and has been reduced to 35% of maximum pump capacity, for a
34% reduction in salt consumption. The reduction reflects the benefit of more even mixing of TiO2 and salt.


The former, highly cyclic product output also resulted in an average of five two-hour flue-pond downtime episodes per month, totaling $75,000 in maintenance charges. These failures, which were the highest-cost repetitive maintenance items in the plant, have
essentially been eliminated.

More Infected Loops Cured

We learned even more in a subsequent loop audit of the entire plant. Filtered operator data was found to exist throughout the plant. It
took a full year to evaluate and remove unwanted filtering from infected loops and retune all loops in the plant. Although the addition-
al improvements couldn’t match the results of the initial audit, they did add to the size of the hidden plant savings.

Production capacity was not the only benefit; product quality improved as well. TiO2 first-pass prime has been raised from 90% to
99.96%. (The figure would have been 100% for the past 15 months except for two substandard pallets.) Because all production is effec-
tively prime today, grade mix can be optimized without transition losses.

Millennium recently converted the reactor to produce the Tiona 596 product, one extremely important to the company’s future. Making the new product required a 20% reduction in throughput through the oxidizer. Prior to the loop auditing work, we had anticipated an investment of about $12 million to maintain capacity at the 6.7 tph level. Today, because the audit revealed a hidden plant, we’re producing Tiona 596 in amounts substantially above that volume with no additional capacity investment.

A summary of the Ashtabula TiO2 plant costs and benefits to date is shown in Table I. The success of loop audits in Ashtabula has led
Millennium to take the process worldwide, spending about a half-million dollars with the consultant so far. Few of those audits have
paid back as handsomely as Ashtabula, but every one has paid back well and work is continuing.

Paul Berwanger, global project engineering director, Millennium Inorganic Chemicals, Ashtabula, Ohio, may be reached at
Paul.Berwanger@MillenniumChem.com.

What’s Next?

The TiO2 process is complex and requires a lot of operator input. Eventually we would like push-button operation when starting cold or
transitioning from one product to another. The operator would just key in what product and how much, and hit Go.

The application of process automation is obviously the key. We sought a single platform that can integrate well with PLCs, laboratory information management systems (LIMS), and other analytical instruments, and also provide advanced control techniques, predictive maintenance, exhaustive data collection and processing, seamless communications with the MES and ERP levels, etc.

We investigated various DCS and PLC turnkey automation platforms to replace the Ashtabula plant’s Emerson RS3 DCS. Central to
this effort was a lifecycle, total-cost-of-ownership bid analysis.

Bids often are close in price and difficult to compare. One bid may be heavy on I&E, another on the mechanical aspects. The bids therefore had to be conditioned to make sure we were comparing apples to apples. The age of the candidate platforms was also important: buying soon-to-be-outdated technology would be a disaster.
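As a rough illustration of what bid conditioning means in practice, the sketch below folds each bid’s one-off and recurring costs into a single lifecycle number. The vendors, cost categories, and figures are invented for the example; the article does not publish Millennium’s actual model.

```python
# Hypothetical bid-conditioning sketch: normalize dissimilar bids to one
# lifecycle total-cost-of-ownership figure so they can be compared directly.
LIFECYCLE_YEARS = 15  # assumed system life; set to match the real lifecycle

bids = [
    {"vendor": "A", "capital": 2.1e6, "annual_support": 120e3, "training": 40e3},
    {"vendor": "B", "capital": 1.8e6, "annual_support": 180e3, "training": 0.0},
]

def total_cost_of_ownership(bid):
    """Fold capital, recurring support, and one-off training into one number."""
    return (bid["capital"]
            + bid["annual_support"] * LIFECYCLE_YEARS
            + bid["training"])

for bid in sorted(bids, key=total_cost_of_ownership):
    print(bid["vendor"], f"${total_cost_of_ownership(bid):,.0f}")
```

Note how the nominally cheaper bid B loses once recurring support is counted over the full lifecycle.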

We also looked at the bidders’ grasp of schedule and commitment, the people issues, and résumés. We asked for references from former projects of a similar nature, suggestions on improving project performance beyond our ideas, and information about project and subcontractor management and staffing approaches and experience. We looked at the migration path and cutover from the existing system, the ability to perform not only in the U.S. but in Timbuktu for future projects, local support worldwide after commissioning, etc.

Each bidder had to allow our evaluation team to attend, gratis, a weeklong training course on their automation software. We know
that buying and using software often costs more than hardware over the long run. Some bidders balked, saying they charge $3,000 a
head for training. We suggested they build it into their prices.

I’ve yet to see a software presentation that didn’t look terrific, but we had to get an idea how much time would be spent learning, programming, updating, and modifying the software and integrating it with other equipment and systems. Our team conducted after-hours research during each course, trying to duplicate existing Millennium DCS graphics and configurations. Ease of use here varied widely among bidders.


The 12 people on the evaluation team unanimously chose Emerson’s PlantWeb digital plant architecture. Primary reasons for the choice were ease of use and robust connectivity to field equipment, to other systems, and to Enterprise Resource Planning (ERP) and IT networks. PlantWeb hardware and software were not the least expensive, but bid conditioning demonstrated that they provided the greatest added value, lowest total cost of ownership, and most promising future viability.

The PlantWeb selection paves the way for us to look at connecting the process to our SAP ERP system in the hope of optimizing the
supply chain and gaining even more efficiencies. The goal of supply chain management is to relay information to the right people in a
manner and speed that facilitates the decision-making process for buying raw materials and making and delivering products.

We quickly discovered we needed a detailed plan to implement an ERP-to-process connection, which required that we flow-chart a base communications layer comprising two pieces: the PlantWeb architecture and an existing, OSI PI-based, custom LIMS.

The PlantWeb-LIMS base layer connects to a manufacturing execution system (MES) layer. This layer consists of the DeltaV system’s PC-based historian, engineering, and application workstations. The application station stores such items as advanced control, batch records, process recipes, planning and scheduling tables, etc. It is also the station that connects to the ERP layer.

Information must flow bidirectionally and seamlessly among the three levels. We are working on developing periodic forecasts,
material resource plans, process orders, customer inputs, and material consumption, confirmation, and finished goods reports. As
yet, our supply chain does not extend to vendors.

In the future, customer orders will automatically feed into the system and could affect production in minutes. If the Ashtabula TiO2
plant continues to run at 100% first-pass prime without grade transition losses, it’s possible to optimize the grade mix by dialing in the
quality.

If a customer changes its requirements, we will learn of it instantly to better meet its just-in-time delivery requirements. This is slated as a service to those customers who make up the majority of our business. Supply chain optimization will allow us to be so close to those customers that they’ll have no incentive to look to any other vendor.

Today, we are continuing to reveal hidden plants throughout Millennium facilities worldwide. Some 75% of our savings still come
from working in the basement: efforts like correcting a scrub salt valve. The rest (25%) results from adding and replacing automation
and instrumentation.

We’re not seeing supply chain savings yet, but we’re developing the capability. Eventually, we expect half of the savings to come from the supply chain side and the other half from the automation side, with the two locked intimately together.


Steam Cracker Performance Optimized by Process MRA
Magnetic resonance analysis gives BASF AG a consistent,
low-maintenance method to accommodate fluctuating feedstocks
JOHN EDWARDS

Two Spyro steam crackers from Technip/KTI are the heart of the BASF AG Ludwigshafen integrated chemicals site in Germany. BASF
operates the crackers mainly for captive use, as logistically they are at the end of the pipeline.

The crackers produce in excess of 610,000 metric tons of ethylene and other products, mainly for internal consumption. The company aims for 365-day-per-year operation with an extended cracker shutdown every five years. Feed characterization in the form of gas chromatography has been used for proper unit operation for a number of years.

With a measurement time of more than one hour, the existing gas chromatographs (GCs) were slow to respond and had high maintenance overhead. More frequent analysis and lower-maintenance technology were required to allow constraints to be filled, reduce coil outlet temperature and severity variability, optimize the yields of the most valuable products, and, in time, allow greater feed variability and flexibility.

A feasibility study investigated a number of options. A process magnetic resonance analyzer (MRA) system was chosen due to its
measurement linearity, fast-track project execution, and high availability.

BASF purchases naphtha of variable feed specification, depending on strategic or tactical requirements. More than 140 naphtha types are regularly used. With tank stratification and regular feed tank changes, good monitoring of feed quality is essential to minimize lost production during feed transitions and to maintain stable operation in the case of a slug of high-variability naphtha potentially violating a constraint.

Heavy naphthas could violate the coil outlet temperature limit, causing excess coking or tube wall damage. A light naphtha could exceed the downstream compressor loading limit.

Working in partnership with BASF, Invensys Process Systems GmbH supplied a feedstock analyzer system that has been successfully extended to both crackers, including naphtha recycle streams. The measurements cover some 29 components plus four calculations from C4 to C11, including paraffins, isoparaffins, naphthenes, and aromatics.

The Process MRA System

The MRA analysis system uses high-resolution FT-NMR proton spectra in conjunction with partial least-squares modeling techniques
to obtain highly linear and robust predictive models. Modeling requirements are limited, with single predictive models applied across
the entire variability range of each property.
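For readers unfamiliar with the chemometrics, the sketch below shows the essential shape of such a model: PLS1 fitted with the classic NIPALS algorithm, regressing a property of interest onto spectra through a few latent components rather than all spectral points. The data are synthetic stand-ins; the commercial modeling software is naturally far more elaborate.

```python
# Minimal PLS1 (NIPALS) sketch of the chemometric step: map spectra (rows of X)
# to a measured property y through a small number of latent components.
import numpy as np

def pls1_fit(X, y, n_components):
    """Return regression coefficients B mapping centered X to centered y."""
    Xk, yk = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                      # weight: X-y covariance direction
        w /= np.linalg.norm(w)
        t = Xk @ w                         # scores
        p = Xk.T @ t / (t @ t)             # X loadings
        qc = yk @ t / (t @ t)              # y loading
        Xk = Xk - np.outer(t, p)           # deflate X and y, then repeat
        yk = yk - qc * t
        W.append(w); P.append(p); q.append(qc)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(q))

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))             # 60 synthetic calibration "spectra"
y = X @ rng.normal(size=200) * 0.01 + rng.normal(size=60) * 0.05

B = pls1_fit(X, y, n_components=8)
y_hat = (X - X.mean(0)) @ B + y.mean()
print("calibration RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```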

The technique, developed in the 1950s, reveals the hydrocarbon structure of a fluid without temperature or chemical preconditioning. When a hydrogen proton is introduced into a homogeneous magnetic field, the magnetic moments of the protons align with the field; these magnetic moments can be described as vectors.

If the sample is then given a short-duration radio-frequency pulse at the proton resonant frequency of 58 MHz, the vectors will rotate. A 90° rotation yields optimum signal strength. Once the radio-frequency pulse has caused a 90° rotation, the RF pulse is turned off and the vectors relax back to the original state in a manner characteristic of the proton location within the molecular structure.

When a Fourier transform is performed on the decay, the structural information is revealed in the resulting spectrum as what is commonly known as chemical shift. These shifts have textbook locations, and with partial least-squares chemometric software, the chemical compositions are correlated.
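A toy version of that transform, with arbitrary numbers standing in for a real free-induction decay: a damped oscillation at a resonance offset Fourier-transforms into a peak at the corresponding frequency.

```python
# Toy FID-to-spectrum demo: Fourier-transform a damped oscillation and
# locate the resulting spectral peak. All values are arbitrary for the demo.
import numpy as np

fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)    # 1 s of acquisition
fid = np.exp(-t / 0.1) * np.cos(2 * np.pi * 50.0 * t)   # decay at a 50 Hz offset

spectrum = np.abs(np.fft.rfft(fid))
freqs = np.fft.rfftfreq(fid.size, 1 / fs)
print(f"Peak at {freqs[spectrum.argmax()]:.0f} Hz")     # → Peak at 50 Hz
```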

The installation included integration of feed characterization from the Process MRA analytical and optimization software, plus ROMeo optimization software, to improve the operational efficiency and reliability of this site-critical operation. Availability has been high (greater than 98%), with prediction models updated remotely (typically once per quarter) as part of an inclusive three-year support agreement.

Project Execution

Since September 2000, the process MRA system has been functioning online, providing feed-forward stream characterization to the
steam cracker reactor model.

Implementation of the process MRA took place over a period of about 22 weeks. During the first two weeks, BASF was provided with an MRA that measured a starter set of some 100 naphtha samples spanning the expected operational range. Fewer than 50 were incorporated into the final model.

In the next 16 weeks, an online system was installed and online model development commenced. Another 75 samples were gathered
prior to the decision to run the validation phase.

During the last four weeks, a validation phase was conducted and the unit was accepted and transitioned to operations.

The system was installed and commissioned without any disruption to production and within the operational constraints of the local
maintenance, engineering and laboratory staff.

Since the sampling requirements are straightforward, the system was integrated into the established shelter for other analytical equipment associated with the crackers. No water removal is required, and filtering was set at 100 microns only to prevent valve seat damage (the sample passes through a relatively wide-bore 6-mm tube). For multistream sampling, an inline heater is preferred to clamp stream-to-stream temperatures within ±5 °C.

Performance on the Line

An important measurement of the naphtha feed is the quality index (QI), often expressed simply as:

QI = 2 × nP + iP + N + 2 − A

where nP is the normal paraffin content, iP the isoparaffin content, N the naphthene content, and A the aromatic content.
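Written out directly, with made-up sample fractions in percent, the index is:

```python
# Direct transcription of the quality index given in the text.
# The PIONA fractions passed in below are invented sample values.
def quality_index(nP, iP, N, A):
    """QI = 2*nP + iP + N + 2 - A, per the formula above."""
    return 2 * nP + iP + N + 2 - A

print(quality_index(nP=30.0, iP=35.0, N=20.0, A=10.0))  # → 107.0
```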

Running rigorous test cases has shown that this simple model does not fully express the value available for capture. Process MRA
makes available a higher-fidelity measurement (Table I), superior in terms of accuracy and precision to that previously available.

Additional daily full chemical analysis was conducted during the project phase. However, since the system has gone closed-loop, the
on-stream time has been very high (greater than 99%), with minimal maintenance required from the cracker department.

The system required a single model update after installation to achieve results to BASF’s specifications. Invensys has responsibility
for support of the system for three years from validation, and to date has replaced a couple of selection valves, made a software release
update, and adjusted the models on average once per quarter.

BASF will not disclose the bottom-line improvements except to say the payback is within the return expected of a capital project,
along with soft benefits such as a direct reduction in maintenance and in the laboratory support for full GC analysis work.

John Edwards, manager, analytical and process NMR services for Process NMR Associates (www.process-nmr.com) may be reached
at john@process-nmr.com.
