
2000 CPPA TECHNICAL SESSION

USING OPC TO INTEGRATE CONTROL SYSTEMS FROM COMPETING VENDORS

Eric J. Byres, P. Eng.
Artemis Industrial Networking
30 Gostick Place, North Vancouver, BC V7M 3G3
Email: eric.byres@artemisnetworks.com

Tan-Trung Nguyen, P. Eng.
Laurentide Controls
18000 Rte Transcanadienne, Kirkland, QC H9J 4A1
Email: Tan-Trung.Nguyen@frco.com

ABSTRACT

As it evolves, the typical industrial plant will find it has a myriad of different control systems and communications networks. Interconnecting these dissimilar systems can be difficult and expensive, yet vital to efficient production. Alcan Smelters was faced with such a problem when it wanted to upgrade its Hydrate-2 plant in Jonquière, Quebec: the new design had resulted in six process control systems and seven different process communications networks. This paper looks at the techniques selected to ease this integration, based on the OLE for Process Control (OPC) standard. Also discussed are the results of staging and loading tests to prove the robustness of this single-standard OPC strategy.

Keywords: OPC, Ethernet, TCP/IP, PLC, DCS, Local Area Networks, Network Integration

1. TOO MANY NETWORKS

In an ideal world, process control integration would be simple: buy all the computer, instrumentation and electrical equipment from a single vendor and connect it all together using a single network standard. But real life is never that simple; rarely are the PLC, DCS, drives, motor controls, field instrumentation and data historian all purchased from the same vendor. And even when they are, the vendor will usually have a number of different vintages or models of equipment that will not interconnect cleanly.

Alcan Smelters and Chemicals Ltd. was faced with just this problem as it started to upgrade its Hydrate-2 plant in Jonquière, Quebec. The plant had separate vendors for PLC, DCS and motor control, and these used four different types of control networks (Allen-Bradley Remote I/O (RIO), Allen-Bradley Data Highway Plus (DH+), Fisher-Provox Data Highway (DH) and Ethernet). With the planned addition of three new networks (DeviceNet, ControlNet and Profibus) to connect overload relays and variable speed drive controllers, the plant would be faced with seven different process network systems. Alcan wanted to rationalise this collection of networks and allow centralised management of all process data.

2. A STANDARD NETWORK INFRASTRUCTURE

2.1 Network Cabling

The first step in consolidating the numerous networks was to select a common cabling strategy. For this, Alcan chose fibre optic cable for all communications cabling outside the control room or rack room. The fibre was a tight-buffered 62.5/125 µm multimode fibre with steel armoured protection and a 48-fibre count. While the noise immunity and high data-carrying capacity of fibre optic cable was a factor, the primary reason was that fibre cabling was the only system that could provide a single medium suitable for the very wide range of communications needs in the Alcan facility [1].

2.2 Network Connectivity

Ethernet and TCP/IP were chosen to be the site standard for all non-critical process information traffic. This simplified a number of connections, because these two protocols have become the dominant communications technologies and are common to nearly every manufacturer of process control equipment. The question of Ethernet's non-deterministic nature was resolved in two ways:

1. The bulk of all process data was shown not to be time critical, so scheduling was not an issue. This includes the data from the variable speed drive controllers and overload relays, plus most of the PLC and DCS data.
2. Current Ethernet technologies, such as the use of switches, have addressed earlier issues about Ethernet's determinism [2][3].

As for the critical process data (such as the PLC to DCS link or the PLC to remote I/O connections), the issues are more clouded. Vendor connectivity issues tend to dominate, rather than pure technology issues. As well, some equipment, such as the overload relays, did not have Ethernet interfaces. Thus, some non-site-standard solutions were still required, such as using DeviceNet for the overload relays. Wherever possible, these protocols were converted to Ethernet using a gateway.


2.3 Application Layer Standards

Fibre optics, Ethernet and TCP/IP are not sufficient in themselves to create a functional network. They require additional application layer protocols to actually carry out tasks over the network, such as sending or receiving a specific data file. The combination of OLE for Process Control (OPC) and Microsoft's DCOM (Distributed Component Object Model) was selected for this function.

3. WHAT IS OPC?

OPC is a standard set of interfaces, properties and methods for use in process-control and manufacturing-automation applications. It is based on Microsoft's OLE (now ActiveX), COM (Component Object Model) and DCOM (Distributed Component Object Model) technologies. These technologies define how individual software components can interact and share data [4].

Many people have heard of OLE (Object Linking and Embedding) and have used its capabilities whenever a spreadsheet is added to a word processing document. OLE allows the spreadsheet application to dynamically update the information in the word processing document. Typically the user isn't required to do even the slightest configuration beyond the click of a mouse. The OLE specification completely defines how the spreadsheet (in this case the OLE server) will format and send data to the word processing document (in this case the OLE client).

OPC adds a number of features to the OLE specification to address the requirements of process control. For example, an OLE server does not check to see if the data is actually received by a client; it just sends and forgets. Since this is clearly not sufficient for process control applications, OPC adds the ability to acknowledge server/client transactions.

The real value of OPC is that it provides a common interface for communicating with diverse process-control devices, regardless of the controlling software or devices in the process. Before OPC was developed, application developers had to write specific communications drivers for each control system they wished to connect with. For example, one Human Machine Interface (HMI) vendor developed over 100 drivers for different DCS and PLC systems.
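To make the client/server relationship concrete, the short sketch below reads a single value through an OPC Data Access server. It uses the open-source OpenOPC library for Python, a much later tool that is not part of the work described in this paper, and it assumes a Windows machine with the OPC DA runtime installed; the server ProgID ('Vendor.OPC.Server') and the item ID ('PLANT.TANK1.LEVEL') are placeholders, not real product names.

```python
import OpenOPC

opc = OpenOPC.client()              # attach to the local OPC DA (COM) infrastructure
print(opc.servers())                # enumerate the OPC servers registered on this machine

opc.connect('Vendor.OPC.Server')    # placeholder ProgID, for illustration only
value, quality, timestamp = opc.read('PLANT.TANK1.LEVEL')   # hypothetical item ID
print(value, quality, timestamp)    # every OPC read carries a quality flag and a timestamp
opc.close()
```

The quality flag and timestamp returned with each read are part of what distinguishes OPC from the plain "send and forget" OLE model described above.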

[Figure 1 diagram: "Before OPC", the HMI application needs a separate driver (Driver 1, 2, 3) for the DCS, PLC Brand A and PLC Brand B; "After OPC", a single OPC driver in the HMI application talks through the OPC platform to OPC Server 1, 2 and 3 for the same devices.]
Figure 1: Prior to the development of OPC, application vendors needed to develop separate communications drivers for each type of controller they wished to connect to. With OPC, they need only develop a single OPC driver.

With OPC, application vendors no longer need to develop separate drivers for each new network or new processor. Instead, they create a single optimized OPC driver for their product. This driver communicates with the OPC servers designed and sold by the manufacturers of the other networks and controllers. It is important to understand that OPC does not eliminate the need for drivers. It is up to each manufacturer to develop OPC drivers for their specific product, since they are best suited to build the driver that will take full advantage of their particular controller. However, once an OPC driver exists for a piece of equipment or an application, it is a simple matter to integrate its data with other OPC-compliant devices.

4. TEST SYSTEM ARCHITECTURE

A realistic plant architecture had to be assembled to test the different OPC clients and OPC servers that Alcan was expecting to use in the finished facility. Three levels of networking were built:

- The information network, using Ethernet and TCP/IP.

- The controller networks, with the Fisher-Rosemount DeltaV Control Network, the Fisher-Rosemount Provox Data Highway and Allen-Bradley ControlNet.
- The field network, using DeviceNet.

Attached to these networks were a Data Historian, an Expert System, two DCS systems and three PLCs, as shown in Figure 2: The OPC Test Architecture and detailed in Table 1: Equipment and Software used for OPC Testing.

Table 1: Equipment and Software used for OPC Testing

Information Systems

Central OPC Server:
- Single Pentium II 400 MHz
- 128 MB ECC RAM
- 2 Ethernet 10BASE-T adapters
- 4 GB UDMA hard disk
- Windows NT 4.0 with Service Pack 3
- DeltaV OPC Server
- OPC Mirror OPC client
- Provox Application Server
- RsLinx v2.00.97 OPC Server
- PI OPC client driver
- G2 OPC Bridge

Engineering Station:
- DeltaV ProPlus station (also acted as the operator interface)
- WRQ ReflectionX
- RsLogic5 and RsLogic5000
- DeviceNet Manager software

PI Data Historian Server:
- PI Server
- PI ProcessBook

G2 Expert System Server:
- G2 application software

Control Systems

DeltaV DCS:
- ProPlus engineering/operator station
- M5 controller

Provox DCS:
- ENVOX configuration software running on a VAX/ALPHA
- OWP operator station using VAX ELN and an X-terminal
- SRx/IFC controller

PLC-5/20C:
- Processor
- Dual ControlNet ports

ControlLogix Gateway with:
- Processor
- Ethernet module
- DeviceNet module

PLC-5/40B with:
- 1785-ENET piggyback adapter

Field Devices

- DeviceNet Flex I/O
- Photoelectric sensor
- SMP-3 Motor Control Monitor with 1203-GK5 DeviceNet interface

It is important to note that all the OPC drivers were loaded onto a single OPC server platform. While it is tempting to have separate computers for each vendor, this defeats some of the key ease-of-use and performance features that OPC can offer.

5. THE OPC TEST PROCEDURES

5.1 Functionality and Configuration Tests

The first phase in the testing procedure was to prove the ability of OPC to pass data between the various devices, networks and platforms. To generate constantly changing data, each controller was programmed to iterate ten values from 0 to 1000 at its fastest scan time. A program in the Provox SRx controller ramped 10 analog output tags, changing every 0.1 seconds. The M5 DeltaV controller was programmed using 10 counter blocks that iterated every 0.1 seconds with an automatic reset at 1000. The PLC-5/40B and the PLC-5/20C each used 10 timers with a 0.01 second time base and a preset of 1000; the timer done bit was used to reset each timer.

All points were then mapped over to their corresponding OPC servers. OPC servers are only designed to communicate with OPC clients, not with other OPC servers. As a result, a Fisher-Rosemount application called OPC Mirror was used to map data from one OPC server to another. This allows data to be sent between all the controllers, and not just from controller to application software. Figure 3: OPC Data Flow shows the various data paths in the test configuration.
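The check performed in this phase amounts to polling the ramping tags on each server and confirming that their values keep moving. A hedged sketch of that idea is shown below, again using the OpenOPC Python library rather than the tools used in the actual test; the ProgIDs and tag addresses are invented placeholders, not the ones used at Alcan.

```python
import time
import OpenOPC

# Placeholder server ProgIDs and item addresses, for illustration only.
SERVERS = {
    'DeltaV.OPCServer':  ['RAMP%02d/CV' % i for i in range(1, 11)],
    'RSLinx OPC Server': ['[PLC5_40B]T4:%d.ACC' % i for i in range(10)],
}

for progid, tags in SERVERS.items():
    opc = OpenOPC.client()
    opc.connect(progid)
    # Snapshot the ten tags, wait one scan interval, snapshot again.
    first = {name: value for name, value, quality, ts in opc.read(tags)}
    time.sleep(1.0)
    second = {name: value for name, value, quality, ts in opc.read(tags)}
    stale = [t for t in tags if first[t] == second[t]]
    print(progid, 'OK' if not stale else 'stale tags: %s' % stale)
    opc.close()
```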



Figure 2: The OPC Test Architecture. To duplicate a typical industrial facility, the OPC testing used a three-layer hierarchy of communications to integrate the various field devices, controllers, networks and applications. All OPC software ran on the central OPC server platform.

Before configuring the OPC Mirror, we had to define data recipients in the two DCS systems. Data mapping between DeltaV and Provox was intuitive, since OPC Mirror could browse the structure of the Provox and DeltaV OPC servers. For the PLC systems, configuration was a little more complex, as the RsLinx OPC server did not support the OPC browse method. This meant the addresses of the data had to be entered manually. The RsLinx OPC server also did not support symbols defined in the PLC-5, but did support tags from the ControlLogix processor. (The current release of RsLinx, version 2.10.118.0, allows you to specify the PLC-5 symbol table file created by RsLogic5. With this feature you can browse the symbol table using the OPC browse method.)

We first tried the PI-OPC driver loaded on the remote PI server computer. This remote DCOM method worked, but the application-to-application initialization took 30 seconds. When we moved the PI-OPC driver to the OPC server computer, the initialization time was reduced to less than a second.

Neither the PI-OPC driver nor the G2-Matrikon OPC bridge could handle the special characters (such as spaces and square brackets) that RsLinx used to name its OPC server and items. These characters are allowed in the OPC specification, since the naming convention is based on UNICODE [5]. It is our understanding that the vendors have now corrected this problem.

Once the minor configuration problems were resolved, the OPC system operated without incident, transferring 10 data values from each processor to every other processor and application at a 1 second rate and a ½ second rate.
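The difference between a browsable and a non-browsable server can be illustrated with a short sketch (OpenOPC again, purely as a present-day illustration; the ProgIDs, branch names and item addresses are assumptions, not taken from the Alcan configuration).

```python
import OpenOPC

# Server that supports the OPC browse interface: walk the namespace.
dv = OpenOPC.client()
dv.connect('DeltaV.OPCServer')               # assumed ProgID
print(dv.list())                             # top-level branches of the address space
print(dv.list('MODULES.RAMP01', flat=True))  # every item under one hypothetical branch
dv.close()

# Server without browse support: item addresses must be typed in full.
ab = OpenOPC.client()
ab.connect('RSLinx OPC Server')              # assumed ProgID (note the embedded space)
print(ab.read('[PLC5_40B]T4:0.ACC'))         # returns (value, quality, timestamp)
ab.close()
```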

[Figure 3 diagram: the PI Server (via the PI OPC Driver) and the G2 Application (via the G2 OPC Bridge) reach the OPC server computer over the network; on that computer, OPC Mirror ties together the local OPC servers (CHIP NT, DeltaV Application and AB RsLinx), which in turn connect to the Provox SRx/IFC, the DeltaV M5 controller, the PLC-5/20 controllers on ControlNet, the PLC-5/40 on Ethernet and the DeviceNet devices.]
Figure 3: Data flow between the various OPC clients and servers. Notice that the OPC Mirror was critical to permit traffic between OPC servers, since these cannot normally communicate with each other.

5.2 Performance Tests

Once the functionality of the OPC servers and clients was demonstrated, we needed to see the impact on performance as the amount of information was increased. To do this, the number of timers in the PLC-5/40B with the Ethernet piggyback card was increased to 5000, each with a 0.01 second time base. We monitored the CPU and memory loading of the OPC server from the Windows NT Task Manager as we configured the PI-OPC driver to monitor all 5000 timers at a 1 second scan rate and the OPC server to scan the PLC-5 at a 0.5 second rate. We noticed a small increase in memory usage, from 103 MB to 105.2 MB of RAM. The CPU loading jumped from 5% to 52% utilization. Since all 5000 values were constantly changing, we estimated that the PI historian was storing 300,000 values per minute. After an hour at this rate, we had to disconnect the network connection to the PI server, as it was receiving data faster than it could store to its hard drive. (Note: the PI server was undersized in this test system; this overloading of the hard drive is unlikely to occur in a properly designed production system.) This illustrated that OPC throughput would not be the limiting factor in a typical system.
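The 300,000 values per minute figure follows directly from the test setup: 5000 constantly changing values read once per second is 5000 x 60 = 300,000 values per minute. The sketch below shows how such a throughput check could be scripted today with the OpenOPC Python library (not used in the original test); the ProgID and timer addresses are placeholders.

```python
import time
import OpenOPC

# Hypothetical timer addresses; the arithmetic, not the address syntax, is the point.
TAGS = ['[PLC5_40B]T4:%d.ACC' % i for i in range(5000)]

opc = OpenOPC.client()
opc.connect('RSLinx OPC Server')          # assumed ProgID
start, received = time.time(), 0
for _ in range(60):                       # one minute of 1 s client scans
    # First call creates a subscription group with a 0.5 s server update rate;
    # later calls read from the same group.
    received += len(opc.read(TAGS, group='perf', update=500))
    time.sleep(1.0)
rate = received * 60.0 / (time.time() - start)
opc.remove('perf')                        # tear down the subscription group
opc.close()
print('%.0f values/minute' % rate)
```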


5.3 Reliability Tests

As the first reliability test, the OPC server computer was rebooted to determine how it would recover. The DeltaV and Provox OPC servers run as Windows NT services and started normally. The RsLinx OPC server is only started when an OPC client tries to initialize with it. When OPC Mirror is started, it loads itself as a service, and it had no problems. The two OPC clients both worked well, but had to be loaded as NT applications to prevent synchronization problems on startup.

As the second reliability test, we severed the communication links to each device, one at a time, and watched to see whether every data consumer marked the data as bad, whether the data loss from one OPC server system impacted another, and whether the data streams recovered once the link was restored. As each link was disconnected, the PI server identified the data as bad, and as each link was re-established, the status returned to normal. The only problem was that different error indications were reported depending on which system had failed.

Figure 4: During the OPC performance test the CPU loading went up to 52% and the memory usage rose to 105 MB, at a transfer rate of 300,000 values/minute. (OPC server computer: single Pentium II 400 MHz, 128 MB RAM, 4 GB hard disk, 2 Ethernet adapters, Windows NT 4.0 Workstation.)

6. CONCLUSIONS

These tests clearly showed the advantages of the OPC architecture and indicated that even a relatively small OPC server could handle data rates far in excess of those found in most process control environments. They also showed that multiple vendors' OPC software could coexist on a single computer and that, in fact, this was the optimum configuration for OPC.

The effects of a communications link failure were correctly handled by all OPC applications, and the system automatically recovered when the link was restored. Only the failure of the entire OPC server platform would cause unrecoverable link loss, and even then this occurred in only a few of the OPC products. This suggests that redundant OPC servers be used if the OPC data is mission critical for the process.

Finally, OPC configuration was significantly simpler than configuring separate application drivers because it:

- Does not require intermediate data mapping that has to be maintained.
- Provides the information in its native format and syntax.
- Provides a browser to facilitate configuration.

From these tests, Alcan has determined that OPC is ready for the production environment and will be the primary process data transfer mechanism when the new plant starts up in the spring of 2000.

SPECIAL THANKS TO:

André Lalancette, Jocelyn Moore and Bertin Schmidt from the ALCAN engineering team, who requested the OPC testing. Vincent Landry and Martin Jetté from COGEXEL for their support for OSI-PI products. Gerry St-Amand from Rockwell Automation for his support with Allen-Bradley products. Alden Hagerty, Gord Gillespie, Gerry Murenbeeld and Laura Moroz of the Artemis Networking team for their editing assistance.

REFERENCES

[1] BYRES, E. J., What Superintendents and Engineers Need to Know about Fibre Optics, Pulp and Paper Canada 99(8): T270-273.
[2] BOGGS, D. R., et al., Measured Capacity of an Ethernet: Myths and Reality, Digital Equipment Corporation Western Research Laboratory, Research Report 88/4, September 1988.
[3] BYRES, E. J., Ethernet to Link Automation Hierarchy, InTech Magazine, June 1999, p. 45.
[4], [5] OPC FOUNDATION, OLE for Process Control - Data Access Standard, Version 1.0A, September 11, 1997.

