
139a

PROCESS CONTROL IN ETHYLENE PLANTS:


A HISTORICAL PERSPECTIVE

Satish Baliga
Technology Manager
ABB Inc.

Eddy Fontenot
Principal Consultant
Aspentech Services

Peter Le
Technical Consultant
Invensys Operations Management

Sanjay Sharma
Senior Principal Consultant
Honeywell Inc.


Prepared for Presentation at the 2013 Spring National Meeting
San Antonio, Texas, May 2, 2013


AIChE and EPC shall not be responsible for statements or opinions contained in papers or
printed in its publications.

Process Control in Ethylene Plants: A Historical Perspective
Satish Baliga
Technology Manager
ABB Inc.
Eddy Fontenot
Principal Consultant
Aspentech Services
Peter Le
Technical Consultant
Invensys Operations Management
Sanjay Sharma
Senior Principal Consultant
Honeywell Inc.

Abstract: Process control has been the cornerstone of automation technology in ethylene
plants for the past several decades. Stiff global competition combined with intermittent periods
of uncertain market growth has forced the industry to re-evaluate business models and to
continuously search for innovative ways to achieve operational excellence and optimize
production efficiency. Over the years, the development and convergence of many different
niche technologies have significantly contributed to enhancing process control's role as the
primary enabler for improving ethylene plant performance. To commemorate the twenty-fifth
anniversary of the Ethylene Producers Conference, this paper traces the evolution of process
control over the past three decades and examines the various technological developments as
well as human factors that have collectively shaped what process control is today. The paper
reviews the evolution and commoditization of the Distributed Control System (DCS) and its
eventual transformation into a graphical, highly interactive integration platform that provides
process control functionality as well as real-time data connectivity between the plant floor and
the enterprise. In addition, the paper describes the revised role of the ethylene plant operator
and the process control engineer, the emergence of analyzer technologies and smart sensors
that have significantly improved the ability to monitor and control difficult processes, and the
proliferation of advanced process control and real-time optimization applications in ethylene
plants.


Introduction

The cyclical nature of the global olefins market is typical of the petrochemical business
and has presented a constant challenge for ethylene plant operators over the past three
decades. As noted in a CMAI study [1], volatility in energy and feedstock markets, rapid shifts
in global economic growth, and changing consumer consumption patterns, have all had a
significant impact on market cycles and created intermittent periods of imbalance in the supply
and demand of ethylene. This has forced the olefins industry to constantly re-evaluate their
business models, and search for innovative ways to achieve operational excellence, optimize
production efficiency, and manage the effects of market cyclicality as best as they can. With
constant pressure to reduce manufacturing costs, improve product quality, and meet stringent
health and safety requirements, ethylene producers are relying on the use of best available
technologies as well as best practices to remain profitable in a highly competitive global
market.

To commemorate the twenty-fifth anniversary of the Ethylene Producers Conference,
this paper traces the evolution of process control over the past three decades and examines
how the convergence of various technological developments as well as human factors have
collectively influenced the way process control is implemented today in ethylene plants. The
evolution and commoditization of the Distributed Control System (DCS), the emergence of
analyzer technologies, and the proliferation of advanced process control and real-time
optimization applications are reviewed in some detail.


Evolution of DCS Technology

It would be exceedingly difficult, if not impossible, to operate a present-day ethylene
plant without good closed-loop process control. With so many process variables, interactions
and nonlinearities involved in a typical operation, plant owner-operators place a high value on
reliable control system performance.

The evolution and commoditization of Distributed Control System (DCS) technology
during recent decades has led to today's graphical, highly interactive integration platform that
provides process control functionality as well as real-time data connectivity between the plant
floor and the enterprise.

Over the years, the DCS has moved from a proprietary, system-centric architecture to
one that is more focused on supporting collaborative business processes and enabling
operational excellence in process manufacturing. There is also a growing trend amongst
vendors to incorporate commercial off-the-shelf (COTS) technology into their DCS platform,
especially technologies related to desktop computing, and to fully leverage the speed, power,
and flexibility that these fast evolving systems provide.

Path of Development

Process control has been the cornerstone of automation technology in the ethylene
industry for the past several decades. Stiff global competition combined with intermittent
periods of uncertain market growth has forced the industry to re-evaluate business models
and to continuously search for innovative ways to achieve operational excellence and optimize
production efficiency.

As is the case with the overall hydrocarbon industry, control technology for ethylene
plants has gone through waves of evolution: from pneumatic and panel board controls to DCS
and then Advanced Process Control (APC), Real-Time Optimizer (RTO) and Manufacturing
Execution Systems (MES).

Minicomputers were first used in the control of plant operations at the beginning of
the 1960s. At that time, analog panel boards with instruments, gauges, lights, and push buttons
served as the main control room interfaces for operators to interact with the process.
Historical data were kept on paper charts, which had to be manually replaced and stored
(Figure 1).



Figure 1: Analog panel board control system.

In the mid-1970s, automation suppliers developed the DCS, a system in which the
controller elements were not centralized but distributed, with each component subsystem
controlled by one or more controllers. Microprocessor-based, multi-loop controllers
were connected via a proprietary data highway to supervisory computers, floppy disk drives,
CRT-based operator displays with push-button workstations, and line printers, now often
located in a central (rather than local) control room (See Figure 2).



Figure 2: CRT-based DCS operator station.

As PC technology became more prevalent in the industry, panel board replacements
with fixed hierarchies and faceplate displays gave way to process graphics and flexible
hierarchies and navigation methodologies. Users were no longer stuck with having to do things
a set way. This introduced greater flexibility in developing operator interfaces, but also
challenged users to make decisions on hierarchy, navigation, pictorial representation of data,
and many other choices such as colors, symbols, overlays, and pop-ups.

In the 1980s, plants began looking at the DCS as more than just basic process control.
The drive toward openness from closed, proprietary technology increased the adoption of
commercial off-the-shelf components and IT standards. Most DCS platforms transitioned from
a UNIX operating system to the Microsoft Windows environment. This, in turn, led to a
natural migration from hardware-based to software-based systems.

During the same period, leading automation suppliers introduced the first smart
pressure transmitters. When combined with the respective supplier's proprietary digital field
communications scheme, these smart pressure, temperature and flow transmitters improved
performance over analog transmitters by transmitting the process variable(s), and often
secondary measurements, in a precise digital format.

In the 1990s, the adoption of Microsoft Windows at the desktop and server layers led to the
development of technologies such as OLE for Process Control (OPC), which became a de-facto
industry connectivity standard. Internet technology also made its mark in the automation
world, with DCS Human-Machine Interfaces (HMIs) supporting Internet connectivity.

Finally, the 1990s also saw the rise of open, non-proprietary fieldbus control
strategies, with several automation industry consortiums competing to define the International
Electrotechnical Commission (IEC) standard for digital communication with field
instrumentation instead of 4-20 milliamp analog communications.

The opening of the DCS network, or the data communication highway, is another
area where the adoption of COTS has made a significant impact on the process control
industry over the past decade. Prior to this development, DCS data highways were
based on proprietary, vendor-specific architecture. Due to limited standardization, custom data
links had to be frequently engineered to enable communication with devices and networks
outside the data highway. With the development of standard, high speed Ethernet architecture
and the adoption of the same by DCS vendors, control systems are now more open and can
be easily interconnected with third-party devices as well as corporate information networks.

Modern DCS Solutions

While DCS technology as a whole has reached a degree of maturity in the new
millennium, process control solutions continue to advance in exciting and beneficial ways,
including greater field integration capabilities, increased functional distribution, and better
integration with other plant and enterprise-level systems [2].

The modern DCS has evolved into an Integrated Control and Safety System (ICSS)
unifying the Basic Process Control System (BPCS), Safety Instrumented System (SIS) and Fire
& Gas systems. Under a single HMI, operator effectiveness is improved through common
alarming, trending, and a consistent overall operator experience. The ICSS integrates a wide range of plant
functions related to process control, safety, Supervisory Control and Data Acquisition (SCADA),
process video and even electrical controls (See Figure 3).




Figure 3: Integrated Control and Safety System (ICSS)

The current breed of DCS is designed to manage process knowledge through a
combination of advanced technologies, industrial domain expertise, and Six Sigma
methodologies. These open, scalable systems are embedded with best-in-class applications for
advanced control, asset management, operator effectiveness, control monitoring and more.

Instead of requiring plants to support an outdated HMI or abandon it entirely, DCS
solutions now provide the means to leverage existing automation investments and intellectual
property, and at the same time, to migrate plant control rooms and engineering stations to
newer, more robust technology.

Control systems employing open network protocols also provide process plants with
new levels of connectivity. Users have the freedom to select the best control and
instrumentation solutions for a given task. They can mix and match devices from a variety of
manufacturers, and transparently integrate them in a field network strategy that suits their
needs.

MESH control networks (see Figure 4) offer system scalability by interconnecting
Ethernet switches so that there are multiple connection paths between any two devices
on the network. MESH control networks are offered in standard and security-enhanced
configurations. Redundant data paths between devices eliminate single points of data
connection failure and allow flexibility in designing an optimal network topology.
Furthermore, network performance is greatly enhanced through higher speed and data
throughput, greater reliability (the ability to recover from multiple fault points), and
higher capacity (more devices on the network).



Figure 4: MESH Control Networks for high performance and fault tolerance.


Latest Technology Advancements

The latest trend in control system technology is the enterprise-wide DCS, which
integrates people and processes for better performance. These systems are designed to merge
traditionally disparate functions and systems across the manufacturing enterprise, as well as
capture the knowledge of plant personnel and their workflows to deliver sustainable
efficiencies. This union streamlines information flow to the right place at the right time to the
right people (See Figure 5).


Figure 5: Enterprise-wide DCS integrating people and processes for better performance.

Recent milestones achieved in process automation can be summarized as follows:

Increased computing power in the DCS
Today's increasingly powerful computers make it possible to integrate robust control
algorithms within the DCS. This helps the system to become safer and more secure, and
allows developers to host advanced functionality such as Multivariable Predictive Control
(MPC) in controllers.

Integration of open IT technologies
Process automation has been revolutionized by the integration of open IT technologies
(i.e., personal computers and networks) into the control system environment, replacing
what in the past were proprietary solutions. Society's embrace of PCs has made operators
more comfortable running processes from their desktops. And although the initial role of
the DCS was to replace panel boards, users now expect the system to act and respond like
a high-end web browser.

Adoption of software-based architectures
With the movement from hardware-based to software-based control systems, new
functionality is easily and cost-effectively added to the DCS by means of software
applications.

Utilization of intellectual property assets
The modern DCS is also useful for capturing the procedural expertise of soon-to-be retired
engineers and operators. These systems can formalize measurements, collect data based
on them, and help provide consistent performance recommendations. Plus, suppliers are
now able to embed operator effectiveness solutions in controllers to automate functions
such as plant startup, grade transitions, unit shutdowns, etc.

Integration of process and safety systems
Advanced digital technology has made it feasible to combine process control and Safety
Instrumented Functions (SIFs) within a common automation infrastructure, all while
maintaining the segregation necessary to ensure regulatory compliance. With this
approach, plant personnel can view the status of the safety system and its applications,
and combine this information with process control functions. The evolution of the HMI
allows critical information to be shared between the safety system and controllers, and
between the safety system and third-party subsystems via a digital bus interface.

Development of innovative human factor designs
Organizations such as the Abnormal Situation Management (ASM) Consortium have
developed innovative human factor designs, which are now being applied to standard
operator displays to reduce fatigue, improve responsiveness and enhance product quality.
The goal of the ASM work is to establish a high level of situational awareness, align with
work processes, minimize operator workload and errors, enable effective abnormal
situation responses and enhance task performance.

Design of flexible I/O solutions
A new generation of flexible I/O simplifies engineering and configuration during the design
stage of a project, providing significant savings in installation costs. These solutions allow
I/O modules to be mounted close to the process unit, reducing or eliminating the need for
marshalling panels and homerun cables, and reducing field auxiliary rooms. Users can
achieve safety at remote locations with high architectural flexibility. Flexible I/O minimizes
overall capital expenditures as well as maintenance costs, and allows flexibility for last-
minute design changes.

Unification of process and business applications
Companies wanting to unite the process and business worlds can employ the latest
automation technology to establish a single, global database for all manufacturing
information. This includes applications synchronizing the database for use in the business
environment. For example, users can directly integrate a plant-wide historian with the DCS
without duplicate engineering of points. The historian then provides analysis functions and
higher-level application support to diverse roles in the plant or business.

Deployment of industrial wireless networks
With secure wireless connections now available in industrial facilities, a mobilized workforce
is becoming an essential asset for enhanced operations. Integrated with the plant DCS,
wireless mesh networks extend the process control network into the field, where the
workforce can access critical data using mobile computing devices such as handhelds and
tablets.

Implementation of virtualization solutions
Once limited to IT data centers, virtualization has begun to impact the way plant control
systems are designed, implemented and maintained. For those building new plants,
virtualization can extend design freeze dates, delay hardware purchases and provide the
option of a virtual Factory Acceptance Test (FAT) to validate configurations remotely. For
existing operations, it reduces system lifecycle management costs via non-disruptive
hardware upgrades, insulation of the operating system from computer hardware dependencies,
simplified maintenance and provisioning of control system computer nodes, more efficient
upgrades, greatly reduced physical node count and increased system availability.

Extending Asset Investments

Control system evolution has now arrived at an integrated, open, interoperable
infrastructure to which traditional DCS companies add value through their knowledge, services
and expertise. The benefits of process knowledge/integration and long-term support have
become key supplier differentiators. Leading control system suppliers have responded to
current business demands by offering a variety of options to keep plants updated on the latest
technology while safeguarding existing investments. The goal is to enable automation
upgrades without replacement of the entire control system platform. This extends the life
expectancy of installed assets and positions the facility for future growth.

Some suppliers offer multi-year lifecycle management agreements guaranteeing asset
support for hardware and software products until they are modernized or retired. The
programs bundle site support services into a single, cost-effective solution that ensures users
achieve their asset management goals without having to renegotiate multiple service contracts
every year.

A lifecycle management agreement can allow plant owners to start down the path to
modernization today, and get there incrementally as their needs and schedule dictate. It may
also offer flexibility in how to manage plant assets and predictability in how technology
investment choices are financed.

Evolution of Process Analyzer Technology

It is well established from basic process control theory that the variability inherent in
an industrial process can be minimized by increasing the speed of feedforward and feedback
control loops. A necessary prerequisite for enabling such control strategies is the availability
and use of fast, accurate, and robust process analyzer data. Looking back over the past three
decades, it is therefore evident and not at all surprising that significant strides have been
made in refining, transforming, and commoditizing what were essentially laboratory-based
systems into online, and in many cases, to real-time process analyzers. During this time there
have been quite a few noteworthy advances with regards to successful commercialization of
new analytical technologies, but to a large extent development in this area has been as much
evolutionary in nature as it has been revolutionary. This is perhaps a good indication of the
high level of technological maturity attained by the analytical instrumentation industry even
during the early part of this era.

Enabling technologies that have catalyzed the evolution, progression, and wide-spread
use of process analyzers include microprocessor-based technology, microelectronics, fiber
optics, and internet technologies. The rapid development and proliferation of these enabling
technologies and their projected impact on process analytics has been very nicely summarized
in a report [3] written by the ARC Advisory Group almost a decade ago.

High-speed microprocessors in particular have spurred the development of
spectroscopy-based systems and enabled their applicability to real-time applications. For
instance, due to the almost exponential growth in computer processing power coupled with
the development of new optics technology, it is now possible to accurately scan and analyze
spectrally rich data from near-infrared (NIR) analyzer systems rapidly enough for real-time
monitoring and process control. It is therefore not surprising to see the deployment of such
instruments in process areas that previously relied mostly on gas chromatographs for
compositional analysis.

In general, regardless of the underlying technology being used, process analyzers today
are significantly more compact than their predecessors from the early 1980s. This is largely
due to the rapid advances in microelectronics and surface-mount technology that have
significantly simplified electronic board configuration and its customization for niche, dedicated
applications. Coupled with advances in fiber-optic technology and the use of solid-state
detectors, equipment vendors today are able to deliver reliable, compact, fully digital analyzers
with minimal moving parts. Furthermore, by embedding web-server technology directly into
the analyzer equipment, an end-user is able to remotely monitor not just the process value
being measured but also the health of the equipment itself.

There has also been a very strong drive amongst analyzer vendors to move away from
the use of proprietary communication protocols and switch to universal, open standards that
simplify the integration of their equipment with systems from other vendors including the DCS.
In a typical plant analyzer network today, it is not uncommon to see various
communication technologies and protocols, such as Ethernet, TCP/IP, Modbus (RTU, RS-485,
and TCP variants), PROFIBUS, and HART, working seamlessly and in parallel.

With the wide variety of analyzer technologies available today, the choice of analyzer
used for process control depends on a host of factors such as the inherent speed of response
of the process, the accuracy and sensitivity requirements, whether the measured stream is
single or multicomponent, ease-of-use, and finally, the operating and life cycle costs of the
equipment. In ethylene plants, process gas chromatographs (PGC) have long been considered
to be the analytical workhorses for on-line compositional measurements. Ever since their
introduction to the process industry in the 1950s, gas chromatographs have rapidly evolved
from large, slow, complex laboratory-based systems to compact, precise, accurate, and easy-
to-use modular systems with significantly reduced analysis times.

While the fundamental principles by which process streams are analyzed in gas
chromatographs have not changed, some of the enabling technologies that perform these
measurements have advanced considerably. In particular, the use of cryogenic focusing
systems as well as rapid temperature cycling of chromatograph columns, has reduced analysis
time from tens of minutes to just a couple of minutes, thereby making such chromatographs
suitable for control applications in a majority of process units found in an olefins plant. For
instance, PGCs are most widely used across the entire fractionation train where process
dynamics are relatively slow and therefore analysis times on the order of a few minutes are
generally acceptable for effective feedback and feed-forward control. Also, PGCs are well
suited for product quality monitoring and control due to their excellent sensitivity even at low
concentration levels of impurities in multicomponent streams.

There are two specific areas within an ethylene plant where there is a growing trend
related to the use of spectroscopy-based systems as well as non-optical, magnetic resonance
analyzers (MRA) in lieu of traditional gas chromatograph-based process analyzer systems. The
use of such technologies in these specific process areas is described below:

Furnace Section Area

Since their initial introduction to the process industry in the late 1980s,
FTIR/NIR spectrometers, mass spectrometers (MS), and MRAs have gradually gained a foothold
in the on-line, detailed compositional analysis of furnace feeds and pyrolysis effluents in
ethylene plants. A recent paper [4] presented at the AICHE 2010 Spring National Meeting
highlights a case study of an NIR application that has been in operation at an ethylene plant
since the early 1990s and that has achieved significant benefits through its use in the plant's
process control and optimization system. An earlier paper [5] presented at the same
conference, provides an in-depth comparison between various on-line process analytical
techniques used for compositional analysis of both feed and furnace effluent. The key findings
from these two studies, as well as those from several other published reports [6, 7], point
towards some interesting emerging trends in ethylene plants.

Furnace Feed Analysis

In ethylene plants that process complex liquid feed-stocks, such as full-range naphthas and
heavy raffinates, FTIR/NIR spectrometers and MRAs can offer several advantages over
PGCs with regards to their ability to provide very quick and detailed compositional analysis
of the feed. Unlike most chromatographs, which require an hour or more to provide a detailed
breakdown of the feed in terms of its paraffin (iso and normal), naphthene, and aromatic
(PNA) content, these analyzers can report 50 to 75 PNA components as well as
predictions of molecular weight, density and the ASTM distillation curve, all within a couple
of minutes or less. Stratification of individual components in complex liquid feeds, caused
primarily by poor mixing in feed tanks, is a fairly common problem that can lead to a
steady drift in the quality of the feed entering the furnace. In such situations, the benefits
of having a quick analysis are quite obvious. Figure 6 shows some examples of naphtha
spectra.

With the availability of industrial strength, high speed computers, many ethylene plants are
now embedding rigorous, first principles-based kinetic models within their model-based
severity control and online optimization strategies to fully leverage the granularity and
sensitivity offered through these very fast and robust compositional analyses. Benefits
captured include more accurate representation of the furnace kinetics allowing for higher
confidence in the ability to make timely compensatory feed-forward adjustments to coil
outlet temperatures to maintain cracking severity, and to optimize the furnace operations
based on improved predictions of pyrolysis tube fouling, selectivity, product distribution,
yields, and tube metal temperatures.
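
A deliberately simplified sketch of this kind of feed-forward severity compensation is shown below. All of the numbers, names, and the linear gain are invented for illustration; in practice the sensitivity would come from a rigorous kinetic furnace model rather than a fixed linear relationship.

```python
# Hypothetical feed-forward trim of the coil outlet temperature (COT) setpoint
# from a fast NIR/MRA feed analysis. All values here are illustrative only;
# a real implementation derives the sensitivity from a kinetic furnace model.

BASE_N_PLUS_A = 45.0   # assumed base-case naphthene + aromatic content, wt%
BASE_COT_SP = 845.0    # assumed base-case COT setpoint, deg C
K_FF = 0.4             # assumed COT sensitivity, deg C per wt% change in N+A

def cot_feedforward(n_plus_a_wt_pct: float) -> float:
    """Return a trimmed COT setpoint for the measured feed composition.

    A heavier, more aromatic feed cracks less readily at a given COT, so the
    setpoint is raised as naphthene + aromatic content rises (and lowered as
    it falls) to hold cracking severity roughly constant.
    """
    return BASE_COT_SP + K_FF * (n_plus_a_wt_pct - BASE_N_PLUS_A)

# Feed drifts 5 wt% heavier in aromatics -> raise COT by 2 deg C.
print(cot_feedforward(50.0))  # 847.0
```

The value of the fast analyzer in this scheme is that the compensation can be applied within minutes of a feed quality drift, rather than an hour or more later.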



Figure 6: Example of NIR [6] and MRA Spectrum for Naphtha

In ethylene plants that process gas feeds such as ethane/propane (E/P) mixtures, the
benefits captured through the use of FTIR/NIR spectrometers and MRAs may not be as
significant as with complex liquid feeds, especially if the E/P split remains fairly constant.

Pyrolysis Effluent Analysis

The benefits of using effluent analysis feedback in a control strategy to minimize "over-
cracking" or under-cracking of the feed stream are well proven. For the past several
decades, PGCs have been widely used in ethylene plants to provide this feedback to the
process control systems. However, even though enabling technologies developed over the
past three decades have reduced the analysis times of these instruments quite significantly,
most process chromatographs used for effluent analysis at best work on a 5-8 minute
cycle. In many installations these chromatographs are set up as multi-stream analyzers and
may be shared between 3-4 furnaces. As a result, the control system may typically receive
effluent analysis feedback only once every 15-30 minutes.

To compensate for slow effluent analyzers, most ethylene plants utilize some form of
model-based conversion or severity control software to predict the effluent composition
and to provide higher frequency effluent analysis feedback to the control system. Given the
well-documented success of such applications, the adoption of technologies based on
FTIR/NIR, MS, and MRA to replace PGCs for effluent analysis has been relatively slow.
Nevertheless, as noted in the Johnson et al. paper [5], an MS is capable of measuring the
composition of a single effluent stream in 10-15 seconds, which is more than an order of
magnitude faster than the analysis time of a PGC. Because faster and more precise
analysis feedback allows the control system to make more timely adjustments to the operating
parameters of the furnace, a process MS-controlled ethylene furnace has been shown to
achieve a 50% reduction in process variability compared to GC-controlled furnaces. Similar
benefits have been reported with the use of FTIR/NIR based analyzer systems [6].
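
The combination of a fast model prediction with slow analyzer feedback described above is, at its core, a bias-updated inferential. A minimal sketch of that pattern, assuming a purely illustrative linear severity model and filter factor (not any vendor's algorithm), might look like:

```python
# Minimal bias-updated inferential: a fast model estimate of cracking severity
# (here a propylene/ethylene ratio) is corrected each time a slow chromatograph
# result arrives. The linear "model" and tuning values are assumptions only.

class SeverityInferential:
    def __init__(self, gain=-0.004, base_cot=845.0, base_pe=0.55, filt=0.5):
        self.gain = gain          # assumed d(P/E)/d(COT), 1/deg C
        self.base_cot = base_cot  # assumed base coil outlet temperature, deg C
        self.base_pe = base_pe    # assumed base-case P/E ratio
        self.filt = filt          # first-order filter on the analyzer bias
        self.bias = 0.0

    def predict(self, cot):
        """Fast (every-scan) severity estimate from current COT plus bias."""
        return self.base_pe + self.gain * (cot - self.base_cot) + self.bias

    def analyzer_update(self, cot_at_sample, pe_measured):
        """Slow (every 15-30 min) bias correction when the PGC result arrives."""
        model = self.base_pe + self.gain * (cot_at_sample - self.base_cot)
        self.bias += self.filt * ((pe_measured - model) - self.bias)

inf = SeverityInferential()
inf.analyzer_update(cot_at_sample=850.0, pe_measured=0.54)  # model read low by 0.01
print(round(inf.predict(850.0), 4))  # 0.535
```

Between chromatograph cycles the control system acts on the fast prediction; each analyzer result only nudges the bias, so a single bad sample cannot upset the loop.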

Hydrogenation Reactors

Historically, PGCs have been widely used to monitor the trace amounts of acetylene
exiting the final hydrogenation reactor bed. It is well known that acetylene in excess of two
ppm in the ethylene product can poison the catalysts used in the production of polyethylene.
Though specialized PGCs are known to reach parts-per-billion (ppb) sensitivity in acetylene
monitoring, their inability to conduct this analysis rapidly enough has been a significant
technology gap in the olefins industry for many years. At best, the analysis time required by
the GCs to detect acetylene has been on the order of several minutes.

In the late 1980s, Anthony O'Keefe, founder of Los Gatos Research, first
demonstrated [8] the high sensitivity of a first-generation cavity ringdown spectrometer
(CRDS) for studying gaseous samples that absorb light at specific wavelengths, and in turn for
determining mole fractions down to parts-per-trillion levels. Another similar technology based
on absorption spectroscopy is the off-axis integrated cavity output spectroscopy (OA-ICOS)
that has been shown to provide [9] real-time analysis (30-second response time) of
acetylene from 1000 ppm all the way down to 0.050 ppm, with a precision of 0.025 ppm.

Continued research in photonic and spectroscopic technologies over the past ten years
has resulted in the development and commercialization of several niche analyzers [10] for
acetylene measurement based on CRDS, ICOS, as well as Tunable Diode Laser Absorption
Spectroscopy (TDLAS).

The use of multiwave process photometers based on IR and UV sources for
continuous, real-time measurement of acetylene at the hydrogenation reactor inlet or mid-bed
is another emerging trend in ethylene plants. The net result is that it is now possible to
achieve dramatic improvements in the performance of hydrogenation reactors through the use
of such spectroscopy-based analyzers in feedback and feed-forward control strategies. The
benefits include tight hydrogen-to-acetylene ratio control, the ability to redistribute conversion
more evenly across the reactor beds in multi-bed systems (resulting in longer run length
of the lead bed), improved activity and selectivity of the beds, and significantly reduced
occurrence of acetylene contamination in the ethylene product.
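As a minimal sketch of how a fast analyzer enables such a strategy, the feed-forward hydrogen demand can be computed from the measured inlet acetylene, with a feedback trim on outlet acetylene. The function names, the 2:1 molar ratio, and the tuning constants below are illustrative assumptions, not values from any specific plant or vendor package.

```python
def hydrogen_flow_target(inlet_c2h2_ppm, gas_molar_flow, h2_to_c2h2_ratio=2.0):
    """Feed-forward hydrogen demand for an acetylene converter.

    inlet_c2h2_ppm : acetylene at the reactor inlet (ppm mol) from the analyzer
    gas_molar_flow : total cracked-gas molar flow (kmol/h)
    Returns the hydrogen molar flow (kmol/h) at the target H2/C2H2 ratio.
    """
    c2h2_flow = inlet_c2h2_ppm * 1e-6 * gas_molar_flow
    return h2_to_c2h2_ratio * c2h2_flow

def trimmed_target(ff_target, outlet_c2h2_ppm, outlet_sp_ppm=0.5, kc=0.05):
    # Feedback trim: nudge the hydrogen target up when outlet acetylene
    # runs above its setpoint, down when below (proportional-only trim).
    return ff_target * (1.0 + kc * (outlet_c2h2_ppm - outlet_sp_ppm))
```

In practice the trim would be a full PID and the ratio would depend on converter design, but the split between an analyzer-driven feed-forward term and a feedback correction is the essence of the strategy.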


Evolution of Advanced Process Control Technology

Advanced process control (APC) refers to a broad range of techniques and technologies
implemented within industrial process control systems. APC applications came into existence
to provide tangible benefits for early digital control installations. Common types of advanced
process control one finds in an olefins plant include:

Advanced Regulatory Control (ARC) generally refers to feedforward, override or adaptive
gain control techniques. ARC is also a catch-all term used to characterize any customized
or non-simple technique that does not fall into one of the other categories. ARCs are
normally implemented using function blocks or custom programming capabilities at the
DCS level. In some cases, such as furnace constraint control applications, ARCs reside on
the supervisory control computer.
Multivariable model predictive control (MPC) is a control technology, usually deployed on a
supervisory control computer, that identifies linear dynamic relationships between
important independent and dependent process variables and uses matrix-based
control and optimization algorithms to control multiple variables simultaneously. MPC has
been a prominent part of APC ever since supervisory computers first brought the necessary
computational capabilities to control systems in the 1980s.
Inferential control utilizes a stream property estimated from readily available process
measurements, such as temperature and pressure, in place of an actual online analyzer.
Sequential control refers to discontinuous, time- and event-based automation sequences that
occur within continuous processes, such as furnace decokes or dryer swaps. These may be
implemented as a collection of timer and logic function blocks, a custom algorithm, or a
formalized Sequential Function Chart (SFC) methodology.
Compressor control typically includes compressor anti-surge and performance control.
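As a minimal sketch of the inferential control idea listed above, a product impurity can be regressed against a tray temperature and column pressure from historical lab samples, then evaluated in real time between analyzer or lab updates. All numbers below are invented for illustration.

```python
import numpy as np

# Historical lab samples vs. tray temperature (degC) and column pressure
# (barg); the values are illustrative, not from a real C2 splitter.
T = np.array([52.0, 53.5, 55.0, 56.5, 58.0])
P = np.array([17.8, 18.1, 18.0, 18.3, 18.2])
y = np.array([0.8, 1.4, 2.1, 2.9, 3.6])   # ethane in ethylene product, mol%

# Least-squares fit of y = a*T + b*P + c (the regressed inferential model).
X = np.column_stack([T, P, np.ones_like(T)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def inferred_ethane(temp, pres):
    # Real-time impurity estimate used by the controller between samples.
    return coef[0] * temp + coef[1] * pres + coef[2]
```

Commercial packages use richer forms (pressure-compensated temperatures, bias updating against each new lab result), but the core is the same: a regressed correlation standing in for an analyzer.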

Computers

With the advent of a digital interface to process data, computer control of the process
became possible. Early real-time process control computers with data interfaces included:

DACS from IBM
PDP-8 and PDP-11 from DEC
MODCOMP from Modular Computer Systems

The early process control hardware had limitations in memory, CPU speed and
communications throughput. These imposed restrictions on the size and scope of early
advanced process control applications.

As the DCS started gaining general acceptance in ethylene units during the 1980s, the
supervisory control layers containing the APC functionality began to migrate to workstations
running UNIX and VMS operating systems, connected to the DCS through custom interfaces.
As the PC computing power increased and the Windows operating system became stable, the
supervisory control layer gradually moved to the PC servers. Today, APC systems are being
migrated to virtual machines.

Software

Some of the earliest APC applications on a supervisory control computer include furnace
control applications that predicted furnace yields and constraints to control the pyrolysis
severity/conversion and to balance pass temperatures while honoring constraints [11]. These
applications were customized for each system.

During the 1980s, MPC software became commercially available. Qin and Badgwell
[12] present a detailed review and comparison of MPC technologies from various vendors. The
software initially ran on the supervisory control computer, typically a workstation running UNIX
or VMS operating systems. Model identification software was among the first to move to the
PC platform. Declining PC costs, increased robustness and reliability of the Windows
operating system (OS), coupled with orders of magnitude increase in computing power then
led to the migration of supervisory control computer to PC servers.

With standardization on the Windows OS platform, most APC software systems are now
packaged with a graphical user interface, even for their engineering and configuration tools,
making them much easier to use. With feedback from the user community, many advanced
features have been incorporated into the software, including automatic data slicing and co-
linearity detection, online controller diagnostics, and model checkups.
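A toy version of the automatic data slicing mentioned above might flag out-of-range values and flat-lined stretches (a frozen transmitter reading the same value for many scans) before the data reaches model identification. This is only a sketch of the concept, not any vendor's algorithm.

```python
import numpy as np

def slice_bad_data(pv, lo, hi, flat_window=5):
    """Return a boolean mask of samples usable for model identification.

    Drops out-of-range values and flat-lined stretches, i.e. windows of
    flat_window consecutive scans with zero movement in the measurement.
    """
    pv = np.asarray(pv, dtype=float)
    good = (pv >= lo) & (pv <= hi)
    for i in range(len(pv) - flat_window + 1):
        if np.ptp(pv[i:i + flat_window]) == 0.0:   # stuck measurement
            good[i:i + flat_window] = False
    return good
```

Production tools add many more checks (rate-of-change limits, MV saturation, operator-marked slices), but the output is the same: a mask of "good" data segments fed to the identification engine.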

As MPC applications became larger in scope and complexity, techniques were developed
to segregate the applications and provide the operator the look and feel of smaller
applications, while still capturing all the interactions between the different sections. Because
of the significantly different dynamics and response times for the different sections of an
olefins unit, algorithms were specifically developed to maximize feed by coordinating moves
for multiple MPC applications while honoring plant-wide constraints.

MPC technology used in olefin plants continues to evolve and now includes non-linear
steady-state models that dynamically update the steady-state gains used by the linear models.
Furthermore, state space model identification and closed loop model identification techniques
continue to improve the model quality while reducing step testing time.

Interfaces

Early operator interfaces to MPC applications consisted of customized native graphics on
the DCS layer. As the DCS platform migrated to PC servers running under the Windows OS
environment, MPC software vendors began providing web-based displays accessible directly
from the DCS console. Historians for MPC applications are also readily available to monitor
controller performance.

Early MPC implementations included deployment of a custom communication interface
between the supervisory control computer and the DCS or control system. The Object Linking
and Embedding for Process Control (OPC) Foundation was founded in 1994 to create and
maintain standards for open connectivity of industrial automation devices including the DCS.
This standardization has led to plug-and-play installation of the interface enabling easy
connection between the DCS, MPC application servers, programmable logic controllers (PLCs)
and other industrial automation systems.

In 2006, the OPC Foundation released OPC-UA (Unified Architecture) as the next-
generation OPC standard focused on interoperability: a more uniform and reliable
framework for connecting diverse types and brands of devices and accessing real-time and
historical data. The primary goal of the OPC-UA standard was to provide a path forward from
the original OPC communications model (namely COM/DCOM - a proprietary Microsoft
technology) to a cross-platform service-oriented architecture (SOA) for process control that
provides a secure connection between systems and devices on the plant floor to other systems
across the enterprise.

MPC Applications

Limited by communications throughput and computing power, most early MPC
applications were small in scope, for instance, a composition control for one or two towers. As
hardware advances relaxed these limitations, the scope of MPC applications expanded rapidly
to capture interactions between the several cold-side towers and the refrigeration systems.
Early furnace MPC applications in olefins units were usually deployed for severity control and
designed to accept set points from the Real Time Optimization (RTO) application. As
downstream throughput constraints became common, the furnace and recovery section
controllers were combined using an application that coordinated moves to maximize furnace
feed against downstream constraints. Furnace applications evolved to include pass balancing
and steam-to-hydrocarbon ratio control. With newer furnaces operating with multiple feeds in
co-cracking and split-cracking modes, and the controllers becoming more complex, furnace
MPC applications began to utilize real-time gain adjustments rather than swapping models to
account for changes.
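The real-time gain adjustment just described can be pictured as evaluating the local slope of a nonlinear steady-state yield curve at the current operating point and writing that slope into the linear controller model, instead of swapping in a different model. The quadratic yield curve below is purely illustrative; a rigorous kinetic model would sit in its place.

```python
def ethylene_yield(cot):
    # Illustrative nonlinear steady-state yield curve (wt% vs. coil outlet
    # temperature, degC); stands in for a rigorous furnace yield model.
    return -0.0008 * (cot - 870.0) ** 2 + 0.06 * (cot - 820.0) + 27.0

def local_gain(model, x, dx=0.5):
    # Central-difference steady-state gain at the current operating point,
    # used to rescale the linear dynamic model between controller executions.
    return (model(x + dx) - model(x - dx)) / (2.0 * dx)
```

Each execution, the controller would call something like `local_gain(ethylene_yield, current_cot)` and update the corresponding MV-to-CV gain, so the linear MPC tracks the curvature of the underlying nonlinear relationship.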

Evolution of Real Time Optimization Technology

Due to increasing competition in a dynamic and global market, olefins plant operations
need to be optimized dynamically to adapt to the changing market conditions and to reduce
the operating cost. Closed-Loop Real Time Optimization (CLRTO) refers to evaluating and
changing operating conditions of a process continually to maximize the economic benefit/value
of the process.

Many CLRTO approaches have been applied in the ethylene industry and new
approaches continue to be developed. One approach for CLRTO uses steady-state models to
provide the optimum operating point for the entire ethylene plant. Plant measurements collected
by the real time system are first checked for steady state operation. If the plant operation is at
steady state, data validation is then performed on the measured data, and the process model
is updated to the current operating point. Rigorous optimization is then carried out using the
updated model along with constraints for the advanced control system, economic data and
product requirements, to find the new set-points for the operating variables. The new set-
points are sent to the advanced control system for implementation on the plant.
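The sequence above (steady-state detection, data validation, model update, optimization, target download) can be compressed into a skeleton. The steadiness test and the callback names below are simplified assumptions; commercial RTO systems use far more sophisticated statistical steady-state detection and gross-error detection in the validation step.

```python
import numpy as np

def is_steady(signal, rel_tol=0.005):
    """Crude steady-state test: a window of measurements is declared steady
    when its spread is small relative to its mean."""
    x = np.asarray(signal, dtype=float)
    return np.std(x) <= rel_tol * abs(np.mean(x))

def rto_cycle(window, optimize, download):
    # Skeleton of one CLRTO pass: skip the cycle if the plant is moving,
    # otherwise validate/update/optimize and send targets to the APC layer.
    if not is_steady(window):
        return None              # wait for the next execution
    targets = optimize(window)   # data validation + model update + optimization
    download(targets)            # write new set-points to the control system
    return targets
```

Here `optimize` and `download` stand in for the model-based optimization engine and the APC/DCS interface, respectively.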

Another approach to carry out CLRTO was introduced during the past two decades and
involves the use of dynamic models for optimization. In this approach, real-time dynamic
models are used to calculate the optimum operating point for the entire plant without having
to rely on steady state detection. Non-linear rigorous models of the process can be integrated
to provide steady state gains for the dynamic models [13].

RTO applications add most value when the process includes non-linear trade-offs such
as furnace yields, distillation and refrigeration. In addition, significant progress has been made
in the integration between the CLRTO applications and the underlying MPC applications. RTO
applications enable the modularization of the MPC applications while providing a global
optimum for the entire ethylene plant.

Computers

The computer technology used for process simulation has evolved over the years; the
cost of ownership and maintenance has decreased sharply while the computing power has
increased dramatically. PC servers of today are much faster with significantly more memory
and disk storage than the workstations of the prior generation. The following computer
systems provide a good example of how the hosting platform for real-time optimization
applications has progressed over the years:

Offsite mainframe servers at a service bureau or technical center
Individual workstations running PRIME, VMS, or UNIX operating systems
PC servers running Windows
PC servers running virtualization software


Software

Process simulation software has evolved from in-house applications on mainframe
servers to commercially available programs on personal computers. While plant designers and
process engineers still routinely use sequential modular algorithms, the solution algorithms for
online process optimization models have evolved to faster techniques that solve the entire flow
sheet simultaneously. Examples of such techniques include:

Sequential Modular with Generalized Reduced Gradient (GRG) algorithm to optimize
solution while converging tear streams
Simultaneous solution of entire flow sheet using an inside-out algorithm where the material
balances are solved in the inside loop and the energy balances are solved in the outer loop
Open equation algorithm with sparse matrix solvers
Dynamic simulation

With standardized software running on the Windows OS, process simulators of today
have superior graphical user interfaces, making them much easier to use. They have enhanced
troubleshooting features that include the ability to solve sections of the model offline and
update the online model. Inputs that represent a snapshot of current plant operation and/or
results from the simulation of a certain section of the model can be captured to perform offline
case studies and optimizations. Today's online applications can be integrated with
spreadsheets, web portals, and SharePoint.

As the optimization software has evolved, the tools and reports used to interpret the
optimization results have improved. A list of scaled optimization opportunities can be
generated by multiplying the shadow prices for active constraints by typical moves for those
constraints. The impact of individual setpoint changes can be generated by multiplying the
gradients by the optimization moves, often revealing a seemingly incorrect move for one
setpoint that allows additional benefit by moving another, more profitable setpoint.
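The scaled-opportunity report described above amounts to a simple multiply-and-rank over the optimizer's outputs. The constraint names, shadow prices, and typical moves below are invented for illustration.

```python
# Shadow prices from the optimizer ($/h per unit of constraint relief) and
# typical achievable moves for each active constraint (illustrative numbers).
shadow_prices = {"C2 splitter dP": 120.0, "Cracked gas compressor amps": 45.0}
typical_moves = {"C2 splitter dP": 0.2,   "Cracked gas compressor amps": 3.0}

# Scaled opportunity list: $/h value of relieving each constraint by a
# typical move, ranked so engineering effort goes to the biggest prize first.
opportunities = sorted(
    ((name, shadow_prices[name] * typical_moves[name]) for name in shadow_prices),
    key=lambda kv: kv[1],
    reverse=True,
)
```

Such a ranking is what turns raw Lagrange multipliers into a debottlenecking to-do list that plant management can act on.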

One key aspect for modeling olefins units is the ability to reconfigure the model to
reflect changes in plant operation. These changes include furnaces taken offline, decoking,
different furnace feeds, downstream equipment out of service, and different control system
strategies being enabled or disabled. Adapting the model requires that sections of the model
be turned off and variable specifications and/or values changed. The methods for configuring
the model to match current plant operation have evolved with the process simulation software
and include the following options:

Different models used for different operating scenarios
Custom FORTRAN subroutines compiled and linked to the application
Application scripting and Visual Basic scripting

The online execution of the RTO has also changed significantly over the years, and the
following three stages represent the evolutionary path taken:
Manually collecting data from process measurements, entering it into the simulation,
solving and optimizing the process, and manually downloading the targets to the control
system
Custom interfaces and DCS displays to read measurements and constraints and download
targets
Standard interfaces to DCS, plant historian and operator displays, all inter-connected using
the OPC standard.

Models

As computer speed has increased and faster solution algorithms have evolved, the rigor,
scope, complexity, and granularity of olefin plant models have been significantly extended.
Pyrolysis yield models, one of the most important aspects of optimizing olefins units, have
evolved from simple representations to integrated first-principle models as shown below:

Yield shift vectors similar to those used for planning applications
Simplified correlations based on fits of rigorous first principle yield models
Rigorous first principle yield models integrated through the radiant coils

First-principle models that are widely used today are applicable over a large range of
operating conditions and feedstocks, and require less model maintenance effort since
updates are not required to accommodate a new feedstock or a wild feed. Recent furnace
models include even greater granularity with two sides or even quadrants being modeled
separately. This segregation is often necessary as different zones could be processing
different feedstocks.

As with the furnace area, the need for modeling short cuts has diminished significantly
in the product recovery and fractionation train. For instance, short-cut distillation models,
especially in the quench area, have been replaced with rigorous tray-to-tray distillation
models. Regressed k-value options are no longer used to reduce the calculation time for the
recovery section towers.

Downstream units, such as C4 recovery, pyrolysis gasoline hydrogenation units (GHU)
and aromatics recovery units are now routinely included in the RTO model to better account
for constraints and their impacts on the objective function. The effluent component slate has
been expanded to improve the robustness of the quench models and the accuracy of the C4
and gasoline hydrogenation unit models.

With historical archiving of the online calculations, the RTO model provides equipment
performance monitoring that includes calculation of:

Heat transfer coefficients
Heat exchanger pressure drop factors
Compressor performance
Turbine performance
Tray pressure drop parameters
Distillation column efficiencies
Furnace run-length and coking
Measurement offsets
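For instance, the heat transfer coefficient item in the list above can be back-calculated each execution from measured duty, area, and terminal temperature differences; a minimal sketch under the usual countercurrent log-mean temperature difference assumption:

```python
import math

def lmtd(dt1, dt2):
    # Log-mean temperature difference for a countercurrent exchanger,
    # falling back to the common value when the two ends are equal.
    return (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1

def overall_u(duty_kw, area_m2, dt1, dt2):
    # Overall heat transfer coefficient U = Q / (A * LMTD), in kW/m2.K.
    # Trended over time, a falling U flags exchanger fouling well before
    # the exchanger becomes an active plant constraint.
    return duty_kw / (area_m2 * lmtd(dt1, dt2))
```

The value of hosting this inside the RTO model, rather than in a spreadsheet, is that the duty and temperatures come from reconciled, validated data rather than raw measurements.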

RTO models have become increasingly tightly coupled with planning and
scheduling models. The planning models often provide feed, product, and utility prices, as
well as logistic constraints to the RTO system. In turn, the RTO models are often used to
update gains in the planning models.

A paper [14] presented at a recent conference discusses an application where only
sections of the plant detected to be steady were updated between optimization cases. Another
paper [15] discusses using models with rigorous kinetics and material balances for the
recovery section updated with the steady state targets and constraint models from the APC
layer.


Evolving Roles of Ethylene Plant Personnel

As control system technology has evolved significantly over the past 25 years, the roles
and responsibilities of plant personnel in ethylene plants have also undergone considerable
changes (see Figure 7). In many plants, some of these roles may be combined and assigned
to a single person or to a single team possessing the required skill sets.





Figure 7: The evolution of control systems has impacted the roles of plant personnel.

Role of Operator
Ongoing developments in the DCS world have enabled operators to carry out higher
value added functions. In the earlier days of pneumatics, board-mounted controllers and strip
chart recorders, operators spent time scanning the unit operation as they walked past the wall
of controllers and recorders. They also kept busy documenting the shift performance, such as
key operational metrics and shift logs for the next shift on paper. Most procedural tasks were
carried out manually.

The DCS has fundamentally altered the way operators go about their daily tasks. For
example, procedural tasks like steps in the furnace decoking procedure can now be performed
by advanced, automated applications in the control system. This automation provides the
operator with more time to focus on other duties. Similarly, operators no longer need to
document shift performance since operational performance metrics, alarm logs, process
change logs, etc., are now automatically logged by the DCS.

In addition, the DCS has transformed the way process information is presented to
control room personnel. While sitting at an ergonomically designed console station that takes
into account human factor considerations, the operator is presented with a dynamic and highly
interactive graphical view of the plant operations. The alarming functionality is usually context
sensitive and work area guidelines specify alarm management and rationalization strategies
that reduce occurrence of nuisance alarms.

Today's flow of information is also faster, and operators need to be more technically
savvy. Unit operating guidelines are sent to operators electronically. They are able to log
instrumentation issues and work orders in enterprise systems so that necessary resources can
be assigned and appropriate action taken in a more timely fashion. Mobile devices are also
becoming more commonplace for monitoring equipment diagnostics, troubleshooting,
propagating work orders, and tracking/auditing them through to completion. Operating manuals, P&IDs, alarm
limits and safe operating limits/operating procedures are also available to operators on their
desktop.

Simulators based on rigorous dynamic models of the ethylene plant unit operations,
coupled with accurate DCS emulations, are now widely used to train operators on all key
facets of plant operation, including startup/shutdown, decoking, and, most importantly, how
to react to abnormal or emergency situations. Some plant sites are even mandating that their
operators receive official certification through the use of such training simulators.

For a new plant, use of an operator training simulator can contribute to a shorter initial
start-up, enhanced operator performance, and trip and incident avoidance. It also
allows the testing of operational procedures and the tweaking of display and control strategies
before initial start-up, when changes are always easier and less risky to make.

With the wealth of process information that is now quickly and readily available at their
fingertips, operators are able to multi-task very effectively. It is not surprising therefore that
control rooms are now getting leaner with regards to the number of operators required to
manage the operation of the plant under normal conditions.

This information empowerment has significantly elevated the role of operators, who
now play a more pivotal part in decision making related to plant operations. The proliferation
of information to lower levels is causing organizational structures to flatten, pushing many
responsibilities down to plant floor personnel.

Role of Process Engineer

In the past, process engineers normally spent a great deal of time gathering process
data and information from operator log sheets so they could carry out process analysis and
seek ways to improve the operation. With most process information historized by the DCS,
process engineers can now obtain unit operational data with a few keystrokes. And most of
this information is available to them in their office, in real time. As a result, process engineers
can devote more effort to engineering analysis and developing strategies to optimize unit
performance.

Role of Process Design Engineer

The role of the process design engineer has been heavily influenced by the availability
of faster computers and the resulting ability to increase rigor and complexity in process
design and simulation systems. Access to vast amounts of historized
process data as well as engineering data allows the process design engineer to update and
fine tune the rigorous first principle-based engineering tools used for process design. They
also work with advanced engineering groups and operators to better understand the true
constraints that limit plant production rates. Such close interaction between cross-functional
teams was uncommon in previous decades.

Role of Instrumentation Engineer

On a typical day a few decades ago, maintenance engineers could spend as much as
40% of their time determining root causes of problems related to equipment,
instrumentation, and analyzers. However, with the advances in fieldbus technology as well as
with the ability to embed web-server technology directly into the equipment and devices, an
instrumentation engineer is able to remotely monitor not just the process value being
measured but also the current health of the equipment.

Armed with asset monitoring tools that are based on sophisticated algorithms, an
instrument engineer can now easily conduct root cause analysis of equipment failure and even
suggest remedial actions while submitting work orders for the same. To a large extent, the
role of the instrument engineer has expanded to include detailed equipment health analysis.
But in many ways, due to advances in technology, it has also become simpler.

Role of Process Control Engineer

In ethylene plants today, there are usually two classes of process control application
engineers: those who develop and deploy the applications and those who maintain the
applications on a day-to-day basis. Over the past three decades, the field of APC engineering
has grown from infancy to maturity, particularly in the ethylene industry.

As the MPC implementation has matured from a black art to a plug-and-play science,
the basic steps of implementing an APC application are now formalized into a standard
methodology shown below:

From the control objectives and constraints, define the relevant manipulated (MV),
controlled (CV) and disturbance variables (DV)
Test the plant systematically by varying the MVs and DVs, storing the data for the CV
responses
Derive a dynamic model either from first principles or from the plant test data using a
model identification package
Configure the MPC controller and enter the initial tuning parameters
Test the controller offline using closed-loop simulation to verify controller performance
Download the configured controller to the supervisory control computer and test the model
prediction in open loop mode
Close the loop and update the tuning as required
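The plant-test and identification steps of the methodology above can be compressed into a toy example: excite a (here simulated) first-order process with a PRBS-style input, then recover the discrete dynamic model by least squares. The plant parameters, switching probability, and noise level are invented for illustration; commercial identification packages handle multivariable, higher-order, and noisier cases.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# --- Plant test: PRBS-style excitation of the MV (random switches).
u = np.empty(n)
u[0] = 1.0
for k in range(1, n):
    u[k] = -u[k - 1] if rng.random() < 0.1 else u[k - 1]

# Simulated CV response of a first-order process (gain 2, pole 0.9) plus
# noise; in a real project these would be historized plant measurements.
y = np.zeros(n)
for k in range(n - 1):
    y[k + 1] = 0.9 * y[k] + 0.2 * u[k] + 0.01 * rng.standard_normal()

# --- Identification: fit y[k+1] = a*y[k] + b*u[k] by least squares.
X = np.column_stack([y[:-1], u[:-1]])
(a, b), *_ = np.linalg.lstsq(X, y[1:], rcond=None)

steady_state_gain = b / (1.0 - a)   # should recover roughly 2.0
```

The same regression, extended to many inputs and longer impulse-response horizons, is essentially what a step-test identification package does with real plant data.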

The plant test procedures have evolved from tests in which operators made a series of
manual steps to unattended tests using automated procedures. Automated testing not only
improves step-testing efficiency but typically also yields a data set that is rich in information
content. Significantly less effort is needed nowadays to get communications working between
systems and devices.

New tools in the model identification packages perform initial data slicing and model co-
linearity checks while tools in the MPC historian make it easier to validate the controller
performance. The key to success is ultimately still process engineering and the ability to
effectively design the application to achieve the specified control objectives.

The long-term success of the control depends on monitoring KPIs and the active
constraints to ensure the application remains consistent with operating objectives. The MPC
historian makes this task much easier.

Role of RTO Engineer

As RTO software has become more integrated and reliable, the role of the RTO
engineer has evolved from being just a systems expert who keeps the application online to
one requiring a good understanding of the process and the plant. Key responsibilities now
include:

Reviewing the RTO results and making sure the solution is consistent with operating
strategies
Analyzing solutions and challenging operating limits when opportunities are indicated
Running offline case studies based on current operating conditions to validate opportunities
such as evaluating other economics or operating scenarios, modifying constraints,
modifying equipment or evaluating process revamps

Role of Information Technology (IT) Engineer

The introduction of the DCS expanded the role of the control engineer to include
responsibility for DCS reliability. Typical tasks included periodic back-up of the
proprietary DCS databases and files and infrequent system software updates. As DCS
platforms transitioned from proprietary closed systems to open, Microsoft Windows-based
systems, the role of the control engineer expanded significantly. The open architecture added
to concerns about the reliability and security of the control system platform.

Further, as the control system platforms began to be integrated with the plant-wide
network and the internet, the complexity of maintaining the control system platforms and
networks increased even more. Consequently, in many facilities, a new role of IT Engineer or
a Network engineer was added. These engineers focus on the reliability and security of the
control systems and networks while also conducting routine tasks such as creating periodic
system back-ups, software and hardware updates, security (virus protection), and user-access
control.

Role of Management

With the advent of the DCS, open architecture, and the integration of MES systems
with the DCS, the quality, quantity, and speed of the data available to the
ethylene plant management team have increased exponentially over the past twenty-five years.
This opens up many new collaboration opportunities for plant management and allows them to
make informed business decisions based on feedback received from the various systems that
they can now directly access and interact with. The challenge however lies in the ability to
quickly transform these large volumes of data into quantifiable metrics, KPIs, and other
meaningful information that can assist in the decision making process.

Over the years, APC and RTO applications have evolved into robust and reliable
systems that consistently achieve tangible economic benefits. They are therefore being widely
accepted by ethylene plant management as high-value systems. Consequently, the lifecycle
costs of developing and maintaining APC and RTO applications are now being included early
on in the capital appropriation process for new plants as well as for existing plants that are
scheduled for a turnaround. Alarm management, equipment health monitoring, and integration
of the plant floor with SAP and ERP systems are other areas that plant management are
actively focusing on in recent times.
Conclusions

Looking back at the past three decades, it is quite easy to observe and conclude that
industrial process automation has gone through some remarkable changes that have
collectively influenced how we control and optimize the operation of ethylene plants today.

The three main growth inflection points have been the DCS, sensors/analyzers, and
application software. Development in these areas has been as much evolutionary in nature as
it has been revolutionary. To a large extent, this evolution has been catalyzed by the rapid
development and deployment of enabling technologies such as high-speed computers and
microprocessors, microelectronics, fiber optics, and internet technologies. There has also
been a very distinct and growing trend amongst vendors to incorporate commercial off-the-
shelf technologies, to utilize standard communication protocols and architectures, and to
open their previously proprietary, vendor-specific systems to third-party systems, devices,
and corporate information networks.

The Distributed Control System has evolved from a proprietary, system-centric
architecture to a graphical, highly interactive integration platform that not only delivers core
process control functionality but also supports collaborative business processes by providing
real-time data connectivity between the plant floor and the enterprise. Ethernet is fast
becoming the de facto automation network. Tight integration between the DCS and the plant-
wide historian now allows access to vast amounts of information, analysis functions, and
business intelligence, and enables higher-level application support to diverse roles within the
plant or the corporation.

With regard to the use of process analyzers, for the most part gas chromatographs remain
the analytical workhorses for on-line compositional measurements in ethylene plants. Though
the fundamental technology behind gas chromatographs has not changed, these analyzers
have now become more compact, modular, and significantly faster with respect to analysis
time. However, due to the availability of high-speed microprocessors that can process
spectrally rich data in real time, spectroscopy-based systems (such as NIR, MRA, FT-IR) are gradually
becoming more common-place in ethylene plants, especially around the furnace area.
Advanced research in photonics and laser-based absorption spectroscopic technologies has
continued to spur the development and commercialization of niche analyzer systems and their
use in real-time monitoring and process control applications.

In the past two decades, APC and RTO have both received wide acceptance in ethylene
plants as high-value applications that achieve significant economic benefits. Over the years,
both these systems have been commoditized to a large extent and have evolved from a black
art to a plug-and-play science with standard, well-defined methodologies for their
implementation. These applications are designed nowadays to fully leverage the speed of
computers, advances in numerical solution techniques, availability of real-time process
analyzers, and connectivity to planning and scheduling systems.

With the myriad desktop technologies available and the ability to gather and visualize
data from different sources, the role of ethylene plant personnel has changed dramatically
over the past three decades. Operating plants have become leaner as a consequence of
increased cross-functional training of personnel, who are now expected to multi-task on an
almost daily basis. The ability of a process expert to remotely monitor the plant from any
off-site location, coupled with the availability of sophisticated asset-monitoring tools, now
allows the plant to conduct root-cause analysis of equipment failures and activate work
orders for remedial action within a short period of time. This era has seen a marked increase
in collaboration between plant personnel, corporate engineering teams, and management as
they collectively continue to identify new ways for the plant to remain profitable in a very
competitive global market.

So what's next? It is a challenge to extrapolate from the past twenty-five years of
automation history and predict the next inflection points that will trigger growth in this
industry. Much of today's growth is tied to the ability of corporations to expand globally
and offer their products and services to the markets that demand them. But such growth is
only incremental and cannot be sustained indefinitely. Long-term growth can only come through
the development of disruptive technologies (such as wireless and nanotech sensors) and the
ability to deploy them in the market at the lowest cost and in the shortest time.

Some industry analysts believe that the Internet of Things will generate the next great
leap in industrial productivity by providing access to even greater amounts of real-time
data. The underlying assumption, however, is that technologies will be available to host this
data, analyze it, and transform it into meaningful, useful information for the end user.
Cloud computing could very well be shaping up to be this enabling technology.


Acknowledgements

The authors would like to extend their thanks to the Ethylene Producers' Process Control
Subcommittee for their guidance and feedback during the development of this paper.


References

1. Mark Eramo, John Stekla; "Ethylene Market Trends"; CMAI, Energy Global, Palladian
Publications (June 2010)
2. Gregg, J.; "Control System Selection"; Chemical Engineering, pp. 62-66 (August 2002)
3. John Moore, ARC Advisory Group; "Trends In Process Analyzers"; Control Engineering
(October 1999)
4. Didier Lambert, C. St Martin, M. Sanchez, B. Ribero, Réjane Dastillung, and Marc Loublie;
"Steam Cracker Optimization through on-line TOPNIR analysis and rigorous kinetic
models"; 22nd Ethylene Producers Conference, San Antonio, TX; Session 50, Paper 50b
(2010)
5. Ron L. Johnson, Jerry Clemons, Kelley Bell, Jim Kelly, and Mark Podvorec; "Compositional
On-Line Analysis for Furnace Feed and Effluent"; 12th Ethylene Producers Conference,
Atlanta, GA; Volume 9, Session 90, Paper 90c (2000)
6. Hideko Tanaka, Toshiki Ohara, DukWon Ryu, Chris Hopkins; "Rapid Analysis of Gas &
Liquid Phase Using NR800 Near-infrared Analyzer"; Yokogawa Technical Report, English
Edition, Vol. 53, No. 2 (2010)
7. G.C. Pandey, A. Kumar, R.K. Garg; "Industrial applications of near infra-red spectroscopy:
Quality evaluation of naphtha feed for olefin production"; Indian Journal of Chemistry,
40A(3), pp. 327-330 (2001)
8. A. O'Keefe, D.A.G. Deacon; "Cavity ring-down optical spectrometer for absorption
measurements using pulsed laser sources"; Review of Scientific Instruments 59, 2544
(1988)
9. Linh D. Le, J. D. Tate, Mary Beth Seasholtz, Manish Gupta, Thomas Owano, Doug Baer,
Trevor Knittel, Alan Cowie, and Jie Zhu; "Development of a Rapid On-Line Acetylene Sensor
for Industrial Hydrogenation Reactor Optimization Using Off-Axis Integrated Cavity Output
Spectroscopy"; Applied Spectroscopy, Vol. 62, Issue 1, pp. 59-65 (2008)
10. Paul Nesdore; "Laser-Based Analyzers: Shining New Stars"; Gases & Instrumentation, pp.
30-33 (March/April 2011)
11. Sourander, M.L.; Cugini, J.C.; Kolari, M.; Poje, J.B.; and White, D.C.; "Control and
optimization of olefin-cracking heaters"; Hydrocarbon Processing, Volume 63 (1984)
12. S. Joe Qin and Thomas A. Badgwell; "A survey of industrial model predictive control
technology"; Control Engineering Practice 11, pp. 733-764 (2003)
13. Ravi Nath, Zak Alzein, Rogier Pouwer, and Mario Lesieur; "On-Line Dynamic Optimization of
an Ethylene Plant Using Profit Optimizer"; 12th Ethylene Producers Conference, Atlanta,
GA; Volume 9, Session 90, Paper 90b (2000)
14. Kiran Sheth and Don Bartusiak; "An Improved Approach for Online Optimization of Large-
Scale Process Facilities"; 24th Ethylene Producers Conference, Houston, TX; Session 50,
Paper 50b (2012)
15. Sam Valleru, Thomas Kelly, James Hackney; "LyondellBasell Industries multi-unit RTO
implementation using combination of rigorous and simplified methodology"; 24th Ethylene
Producers Conference, Houston, TX; Session 50, Paper 50a (2012)
