Initial Steps
Many different skills are required to transform a process from an idea into a moneymaker, after the process has been shown to work on a small scale. The first things that need to be established are the extent of the demand for the product created by the new process, the probable duration of that demand, and the possible selling price. These establish an upper limit for the total cost of designing, building, operating, and decommissioning the new process. Market timing heavily influences the construction schedule. If the dollars and timing are acceptable to top management, then representatives from Marketing, Engineering, and Operations meet to determine feasibility and risk.

Each representative should be experienced in all three fields as well as in dealing with top management. A representative should be able to see the connections between many aspects of a problem that other specialists cannot see because of their narrow training. Ideally, each has grown up in the organization and learned from senior people, because they need to be capable of accurate guesswork and to know who has the information to support it. The key factors are analytical and intuitive intelligence, broad experience, and communication skills. Experience must be tempered by the fact that rapid changes in technology can make feasible something that failed only a few months ago.

An organization that outsources any of these functions loses the training, experience, and connectedness that accumulate with time spent working toward the organization's goals. If only the senior people are kept to write requirements for outsourcing organizations, then there will be no one to replace them. Even if top management expects to grow by buying new companies, when the purchased companies have been stripped and used up, a time will come when there is nothing available to purchase. Such management, frequently encountered in takeovers, replaces the goal of making good products for satisfied customers with the very narrow goal of making money now.

If the process appears to be feasible, then the representatives prepare a preliminary project plan with a rough budget and schedule for presentation to top management, since they control the flow from the bucket of money that is the company.
After discussions intended to reveal and resolve any unknown negative management factors, the plan is presented at a meeting to ratify the informal decision to proceed. If some negative factor cannot be resolved, then the project dies or goes to the back burner for further simmering, and the representatives go back to being knowledge and experience resources for their branches of the organization chart.

Several things happen if the preliminary project plan is approved. Operations names a Plant or Operations Manager, who chooses the necessary people from Operations resources to review current project work, answer questions from other groups, and draft changes to the project as necessary. Project Management creates a new project and names a Project Manager who designates the people from Project Management resources that will work on the initial stages of the project, such as scheduling and site selection. Engineering names a Process Engineer for the new project and selects people from the engineering specialties to work on the initial design.
Engineering, in consultation with Operations and Projects, prepares a working process flow diagram (PFD) in enough detail to allow Projects to prepare an estimated budget. Adding a process to an existing plant introduces complications that are not found in a new plant. An operating budget is negotiated with Top Management if everything looks manageable. The project completion date is also set because it affects the budget. If approved, available money and the deadlines begin to make things happen. The Process Engineer meets with counterparts from Operations and Projects to prepare the final, detailed PFD and a detailed budget that provides money for the planning resources and gives a firmer estimate of the costs of major equipment and process operation. Instrument, piping, and construction costs are still just estimated percentages of the approved budget because no detail work can be done until the final PFD is available.

Given the final PFD, project deadlines, and budget limits, the functional groups can now start the detailed designs for their functions. The number of groups becomes larger with the addition of Quality Assurance, Maintenance, Information, and other groups from Operations like Safety, Operator Training, Storage, Utilities, and Transportation. The representatives may stay in the design process or go back to what they were doing as the design is subdivided and becomes more internal to the functional groups. Both the level of experience and communication skills become more important as more people become involved. The mix of people will change over the life of the project.

Adding a person who has insufficient experience and no one to turn to for help is like adding cold water to a boiler. Momentum is lost and will have to be built back up again. Adding an outside organization after the project has started is like putting out the fire and opening the safety valves, but it may be necessary if consultants don't help.
A successful project depends on a well-understood fixed goal, clear definitions of requirements, and cross-functional as well as internal communications. Someone from top management must keep track of the project and show some enthusiasm for it, while refreshing the view of the goal and not allowing feature creep to dissipate the resources before the project is complete. Representatives must periodically review the work in progress to assure that it stays on track toward the stated goal.

If your company has only one employee, all of the above still applies. You just have to write notes to yourself from the different perspectives described. Meetings are simplified, but the review function still needs to be scheduled, so that you don't go too far without looking around.
who do not even share the goal. All aspects of the outside work must be checked for misunderstandings because people will nod and say, "Yes, I understand," when in fact they don't know what they failed to understand.
Analysis
The process design, as expressed by the PFD and recipe, must be analyzed down to the last little detail. The amount of detail necessary to describe, review, locate, purchase, install, test, operate, and maintain controlled equipment can be overwhelming. Analysis is the art of breaking down big problems into smaller ones. The first breakdown is done by drawing some nonintersecting boundaries around major sets of items on the PFD. The boundaries must minimize the interaction of the bounded details with the details of other processes, if they are to be useful. A change in the class of process strongly suggests a location for a boundary. Consider the specialty baking plant in Chapter 1. The seven labeled boxes and two conveyors in Figure 1-3 are a way to begin the analysis, even though Figure 1-3 is not a PFD. The Solids box conceals details about the kind of solids, their storage and measurement, when to feed what and how much. The Dough Mixer has special agitators (beaters, really) to mix the dough and the assembly must be cleaned and sanitized periodically. The Weigh Feeder may have a nozzle that moves with the baking pan in order to fill the pan properly. The Oven has to be designed to retain heat even though it runs with its doors open. The combustion system has to be properly sized and have the correct distribution of burners. And so on, from process to systems to components.
pipes shown connecting to other pages, where possible. When the P&ID settles down, the instrument engineers can develop enough data to specify the sensors and valves that attach to pipes or vessels, and give the specs to Purchasing. Then the model is used to choose locations for junction boxes, conduits, and trenches or cable trays, and these are added to the model to prevent interference. It is expensive to modify the location of a vessel penetration or to rework a section of prefabricated bent and welded pipe after it has been brought to the construction site. Cable trays and conduits can be relocated on site, but the amount of wire ordered is based on lengths taken from the model.

The P&ID is the basis for assigning tag numbers to field instruments. Someone takes a pencil and numbers each instrument from top left to bottom right with two digits for the P&ID page number followed by two digits counted from the top left. The last number used on a page may be noted in the notes section of the page. Your plant may require more digits, but this works fine for 500 to 1000 indicators and loops. ISA has a standard for naming tags with prefixes that give the purpose of the instrument, such as FT for Flow Transmitter. Ask for ISA-5. If the first instrument on page 1 of a relatively simple process is a flow transmitter, then its tag would be FT-101. The last tag of a P&ID with more than nine pages might be TT-2734.

Computer systems need to allocate memory space for tag names. That space has grown from 5-8 characters at the dawn of the DCS to 32 circa 1995. Now the tag can contain the function and the site GPS coordinates for any instrument, but the 32 characters are usually used to add more hierarchical location information. Each instrument is ordered with its unique tag name, preferably engraved on a stainless steel tag that is affixed to the body of the instrument.
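The page-and-count scheme can be sketched in a few lines of code. The prefix table below is abbreviated and the function name is illustrative, not something defined by ISA-5:

```python
# Sketch of the tag-numbering scheme described above: an ISA-5 style
# function prefix (FT = Flow Transmitter, etc.), then the P&ID page
# number, then a two-digit count from the top left of the page.
# The prefix table is abbreviated and illustrative.

PREFIXES = {
    "flow_transmitter": "FT",
    "temperature_transmitter": "TT",
    "pressure_transmitter": "PT",
    "flow_controller": "FC",
}

def tag_name(kind: str, page: int, count: int) -> str:
    """Build a tag like FT-101 or TT-2734 from page and count."""
    if not 1 <= count <= 99:
        raise ValueError("count must fit in two digits")
    return f"{PREFIXES[kind]}-{page}{count:02d}"

print(tag_name("flow_transmitter", 1, 1))          # -> FT-101
print(tag_name("temperature_transmitter", 27, 34)) # -> TT-2734
```

Note that both examples from the text fall out of the same rule: the first flow transmitter on page 1 becomes FT-101, and the 34th instrument on page 27 becomes TT-2734.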
This allows each instrument to be tracked through the vendor's facility to the construction site's receiving warehouse and then to its correct location in the process. The guy dragging the big pipe wrench on the ground just has to look for a tag number, without concern for the instrument span or materials of construction, and match it with a tag name at a location on a pipe or vessel drawing. This works well for construction as long as nobody transposes digits in all of the order processing.

The maintenance department will not have stainless tags for the spare instruments. Spares are tracked by the serial number of the instrument, which is independent of location. The tag name defines a unique plant location and the type of instrument required. A physical digital instrument is uniquely defined by its 32-character Device ID. The stainless steel tags are going away as digital instruments replace analog.
Loop Sheets
When the P&ID drawing set has been approved, work can begin on the final level of physical instrumentation details. A drawing is made for each indicator or control loop. Each drawing is called a loop sheet and is identified by an instrument tag number, usually the tag of the transmitter. A loop sheet shows the locations of instruments, wiring, junction boxes, cable trays, blockhouse/control room entry points, and control system connections. Some loop sheets even show calibration values and other useful information, such as the things that will be affected if this instrument is disconnected. An example of a loop sheet is shown in Figure 2-3. It is for the TPA process example below.
Figure 2-3 Example of a Loop Sheet

The set of loop sheets provides the information necessary to order peripheral items such as junction boxes and spools of cable. The instruments are detailed in Specification Sheets (see ISA-20-1981), which are referenced on the loop sheets. Loop sheets also provide the information required to maintain the sensors and actuators during the life of the process.

An operator uses the controller tag to report an instrument problem. A maintenance person uses that tag to locate a loop sheet for that indicator or control loop. Depending on the problem, the wiring may be examined from the control system to the field instrument, looking for junction boxes without covers and cables that have been used as ladder rungs. The instrument may be given a functional test and either repaired or replaced. Sometimes an operator will report a problem because a pressure gage or thermometer does not give the same reading as the transmitter, which costs ten times as much. A good loop sheet will reference or show instruments that give a second opinion, which allows Maintenance to ask the operator about this possibility and to check the less expensive instrument first.

If the replacement instrument uses digital communication, then it will need to have the proper tag entered before it is installed, along with the correct configuration for the new location. The data that stays with the instrument, such as calibration coefficients, is not changed when the tag is changed. The replaced instrument may be repaired and returned to the storehouse, but the stainless tag on it (if any) is now obsolete.
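A loop sheet is essentially a structured record keyed by the transmitter tag. A minimal sketch of such a record follows; the field names are illustrative, and the spec-sheet and second-opinion tags are invented for the example rather than taken from the figure:

```python
from dataclasses import dataclass, field

# The loop sheet as a structured record keyed by the transmitter tag.
# Field names are illustrative, not from any standard; the contents
# follow the items listed in the text.

@dataclass
class LoopSheet:
    tag: str              # transmitter tag that identifies the sheet
    instruments: list     # instrument tags in the loop
    junction_boxes: list  # wiring path from the field to the control room
    spec_sheet: str       # reference to the ISA-20 specification sheet
    calibration: dict     # calibration values for maintenance
    # cheaper local gages that should agree with the loop (second opinions)
    second_opinions: list = field(default_factory=list)

sheet = LoopSheet(
    tag="FT-101",
    instruments=["FT-101", "FC-101", "FV-101"],
    junction_boxes=["JB-12", "JB-3"],
    spec_sheet="SS-FT-101",
    calibration={"range": "0-100 GPM"},
    second_opinions=["PI-102"],
)
print(sheet.tag)
```

A maintenance system indexed this way supports exactly the workflow described: look up the sheet by the reported controller tag, then check the second-opinion instruments first.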
The engineers produced a process flow diagram, which was used to calculate material and energy balances for each stage of the process for one reactor, and to provide data for material flow rates for four reactors. The PFD is shown in Figure 2-1. The material balance for one reactor is shown in Table 2-1:

Table 2-1 Material Balance for One Reactor

Material          Flow       Density  Time  Amount    Metric
Steam for Heat    30 KPPH    -        0:20  10 KLb    4.5 Ton
Steam for Sparge  40 KPPH    -        0:30  20 KLb    9.1 Ton
Water Charge      1000 GPM   0.96     0:10  80 KLb    36.0 Ton
DMT Charge        800 GPM    1.2      0:20  160 KLb   73.0 Ton
Vent to Flare     12 KPPH    -        0:10  2 KLb     0.9 Ton
Dump to Slurry    2200 GPM   1.5      0:10  268 KLb   121.7 Ton
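The Amount column of Table 2-1 can be checked from the Flow, Density, and Time columns. The sketch below assumes the Density column is specific gravity relative to water and uses the usual 8.34 lb/gal for water; KPPH is thousands of pounds per hour:

```python
# Check of the per-batch amounts in Table 2-1. KPPH flows convert
# directly by time; GPM flows convert to mass using 8.34 lb/gal for
# water times the specific gravity of the stream.

def klb_from_kpph(kpph: float, minutes: float) -> float:
    """Mass in thousands of pounds from a KPPH flow over some minutes."""
    return kpph * minutes / 60.0

def klb_from_gpm(gpm: float, minutes: float, spec_gravity: float) -> float:
    """Mass in thousands of pounds from a GPM flow over some minutes."""
    return gpm * minutes * 8.34 * spec_gravity / 1000.0

print(round(klb_from_kpph(30, 20)))         # steam for heat -> ~10 KLb
print(round(klb_from_gpm(1000, 10, 0.96)))  # water charge   -> ~80 KLb
print(round(klb_from_gpm(800, 20, 1.2)))    # DMT charge     -> ~160 KLb
```

The rounded results agree with the Amount column, which is a useful sanity check when transcribing a balance like this.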
The Gantt chart for four reactors over a two-hour cycle appears below:

Table 2-2 Gantt Chart for Four Reactors/2-Hour Cycle

Time  :10   :20   :30   :40   :50   1:00  1:10  1:20  1:30  1:40  1:50  2:00
R1    React React React Cool  Cool  Dump  Steam Steam Water DMT   DMT   DMT
R2    DMT   DMT   DMT   React React React Cool  Cool  Dump  Steam Steam Water
R3    Steam Steam Water DMT   DMT   DMT   React React React Cool  Cool  Dump
R4    Cool  Cool  Dump  Steam Steam Water DMT   DMT   DMT   React React React

Each column is one 10-minute interval. Each reactor runs the same two-hour cycle offset by 30 minutes from the previous one, so a reactor dumps every 30 minutes.
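A staggered schedule of this kind can be generated by rotating one reactor's cycle of 10-minute slots. The slot labels and durations below are a plausible fit to the stage list later in this chapter, not an exact reproduction of the original chart:

```python
# One way to generate a staggered four-reactor schedule: each reactor
# runs the same two-hour cycle of 10-minute slots, offset by 30 minutes
# (three slots) from the previous reactor.

CYCLE = (["React"] * 3 + ["Cool"] * 2 + ["Dump"] +
         ["Steam"] * 2 + ["Water"] + ["DMT"] * 3)  # 12 x 10 min = 2 h

def reactor_row(n: int) -> list:
    """Rotate the cycle three slots (30 minutes) per reactor (0-based)."""
    shift = 3 * n
    return CYCLE[-shift:] + CYCLE[:-shift] if shift else list(CYCLE)

for n in range(4):
    print(f"R{n + 1}:", " ".join(reactor_row(n)))
```

Because the offsets are evenly spaced, one reactor reaches its Dump slot every 30 minutes, which is what makes the hourly balance in Table 2-3 equal to two batches per hour.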
Combining the material balance for one reactor with the Gantt chart produces the overall material balance for one hour of operation:

Table 2-3 Overall Material Balance

Material       English  Metric
Accum Steam    60 KLb   27.2 Ton
Accum Water    160 KLb  72.0 Ton
Recycle Water  70 KLb   32.0 Ton
BFW to Acc     150 KLb  68.0 Ton
DMT            320 KLb  146.0 Ton
Flare Stack    4 KLb    1.8 Ton
Slurry Tank    536 KLb  243.4 Ton
Liquor         85 KLb   39.0 Ton
Methanol       15 KLb   6.8 Ton
Dryer Vapor    21 KLb   9.4 Ton
Dry TPA        430 KLb  195.0 Ton
Plant design engineers and draftsmen chose locations for the vessels and added the details to support them, providing decks for human access and piping to connect them. A physical model was built to verify that everything would fit together and that maintenance would have access to maintainable equipment. Instrumentation designers used the process flow diagram and physical model to locate and tag process control instruments. An instrument specification sheet was prepared for each tag on the drawing.

A P&ID drawing was produced that showed all of the control functions (boxes or blocks) required to support the process sensors and actuators in order to provide the required process control. See Figure 2-2 for a sketch of an example. A loop sheet was prepared for every sensor tag that showed the location of wiring and junction boxes, along with calibration information. See Figure 2-3 for an example. If an operator reported trouble with FC1227 (the 27th tag on P&ID page 12), then Maintenance pulled the loop sheet for FT1227 in order to check it out. Any sequence or interlock logic was referenced to a separate diagram because this was regarded as a separate skill from designing or maintaining control loops.
Batch Design
Meanwhile, logic design engineers used the sequence required to process a batch of TPA to produce a list of process stages (called steps at that time):

1. Verify that the reactor is ready from level, pressure and temperature sensors.
2. Heat the massive vessel for 20 minutes with flow-controlled high-pressure steam.
3. Charge a measured amount of hot water in 10 minutes; verify level, pressure, and temperature.
4. Start the agitator at high speed.
5. Add a measured amount of DMT to the reactor in 30 minutes.
6. Agitate for 30 minutes; begin venting non-condensable poisonous gasses to the flare stack.
7. Reduce agitator speed and start cooling along a temperature trajectory for 30 minutes.
8. Drain the reactor.

A team of process, safety, and instrument engineers took this simple set of process stages and added things to do when something went wrong. They started with those things that might break a reactor, like hitting a hot reactor with cold water and vice versa. Overfilling a reactor would not be fatal because safety valves would relieve pressure, but there was a risk that someone would be in the area when a safety valve let go. Adding molten DMT with no agitation could result in a solid plug of DMT in the bottom of the reactor, making the dump valves useless. Cooling too fast would strain the reactor and the overhead condensers. Rate of cooling was controlled by the level of condensate in a condenser. No cooling occurred if the condenser tubes were flooded, so the rate at which condensate was removed controlled cooling.

Safety interlocks were designed by a group that was perhaps more paranoid than the optimistic control engineers. This was a high-energy process capable of releasing poisonous gas as well as energy, so their job was not trivial. The interlocks varied during the processing of a batch of TPA. For example, the high level shutdown was lower for the water charge than it was for the DMT charge. The agitator had to be at high speed during DMT charging and low speed during cooling. State-sensitive interlocks were built with simple integrated circuits, which were to be independent of the Digital Equipment PDP-11 that controlled the batch sequence.

This activity provided the data for the design of a computer program, along with a list of computer I/O points and their connections to physical process control equipment. Additional logic design was required for the process interlocks and for the fixed sequence for selecting the vent condenser steam header. A process interlock differs from a safety interlock in that it is good if it works (the computer hasn't gotten lost), but not essential to protect people and equipment. Each step was expanded in detail with reference to control equipment tags. The tags were generic in that the analysis was only done for one reactor procedure. The computer would handle substitution of the correct tags for the reactor to be run, a procedure called aliasing tags. Exception logic was designed to detect and handle hold, stop, and abort conditions. Then there was the problem of how to restart the computer while the process was running. On top of that, the existing DMT plant was all analog, untouched by computer control.

Each stage had sub-stages (each of which had process actions):

1. Check that conditions are correct to perform this process stage.
2. Perform any setup actions, like clearing a totalizer or setting modes, and start processing.
3. Monitor process conditions and time, generate trajectory values as required, stop when done.
4. Generate and log the report items for the stage.
5. Check that conditions are correct to leave this stage.

One of the setup actions was to change targets in a program that ran continuously to handle the process interlocks. In particular, the pattern of valve positions that must be open or closed or could be ignored changed during the stage. The response to a failure was always the same: close all inlet valves and stop making changes. In other words, hold.
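The five sub-stages can be sketched as a skeleton that every stage fills in. The class and method names are illustrative; a real implementation would talk to the control system rather than these stand-in methods:

```python
# Skeleton of the five sub-stages applied to each stage of the batch.
# On any failure the response is always the same: hold.

class Hold(Exception):
    """Raised on failure: close all inlet valves, stop making changes."""

class Stage:
    def __init__(self, name):
        self.name = name

    def entry_ok(self, process) -> bool:
        # 1. Check that conditions are correct to perform this stage.
        return True

    def setup(self, process):
        # 2. Setup actions (clear totalizers, set modes) and start.
        pass

    def run(self, process):
        # 3. Monitor conditions and time, generate trajectories, stop when done.
        pass

    def report(self, process, log):
        # 4. Generate and log the report items for the stage.
        log.append(f"{self.name}: complete")

    def exit_ok(self, process) -> bool:
        # 5. Check that conditions are correct to leave this stage.
        return True

def run_batch(stages, process, log):
    for stage in stages:
        if not stage.entry_ok(process):
            raise Hold(f"cannot enter {stage.name}")
        stage.setup(process)
        stage.run(process)
        stage.report(process, log)
        if not stage.exit_ok(process):
            raise Hold(f"cannot leave {stage.name}")

log = []
run_batch([Stage("Heat"), Stage("Water charge"), Stage("DMT charge")],
          process=None, log=log)
print(log)
```

The entry and exit checks are what keep a lost computer from, say, charging DMT into an unagitated reactor; the Hold exception models the close-everything response described above.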
Construction
When the process design was complete and approved, orders were placed for major equipment and ads were placed for operating personnel. Instrument spec sheets and logic design followed, and orders were placed for sensors, actuators and controllers, including the computer system. Operating personnel began training as plant construction started, the better to see how it all went together. Computer programs were coded and tested, then coded and tested again as changes rippled through the project.

Instrument installation and checkout is always on the critical path of a project. First comes the mechanical completion of the vessels, supports, and piping along with equipment like pumps and valves, which are part of piping. Once that milestone has been reached, then instruments can be mounted and checked out. If mechanical completion is late, top management is fully alert and turns its spotlight on the instrument department, which makes it difficult to conceal the normal screw-ups. The insulating crew is also turned loose after mechanical completion, making things more interesting.

When everything was physically ready to operate, water was run through the system to flush wrenches and lunchboxes out of the piping, along with drill chips and drawings. Any differential pressure instrument that was installed backwards was revealed. The TPA process required trial runs with hot water to see that nothing broke or leaked under pressure and that the cooling system worked as expected. The computer was allowed to control the second and subsequent hot water runs to flush out bugs in the programs. Finally, the process was ready to turn over to Operations.

After the first run with DMT, fine crystals of TPA plugged all of the instrument impulse lines. The capacitance level probes in the reactors did not survive agitation of the thicker product. All pressure instruments in the water loop that used impulse lines had to be replaced with flush diaphragm instruments.
The delay was not welcome, except to the computer programmers who were able to add enhancements whose need had become obvious when the process was run. Fortunately, the reactor vessels had spare penetrations at the top and bottom. Two of these on each vessel were used to install differential pressure transmitters with flush diaphragms on six-inch extensions in order to measure level. There was concern that the agitated erosive slurry would wear through the lower diaphragms, and so tests had to be run to determine the rate of erosion. You can imagine the interactions between top management and the instrument department as the project overran the estimated cost and schedule. After the startup difficulties were over, the process ran well for about ten years. Then it was closed down when a competitor developed a different process that made TPA directly with a one percent higher yield.
Later in this book we will go deep into the details of batch process control. The point of this example is to show that there is much more to realizing a batch process than detailing the recipe and procedure.
Modular Design
The section "Analysis" earlier in this chapter discussed the usefulness of boundaries. A module is enclosed by a boundary and contains the equipment or functions necessary to do a specific job. A module is most useful when it is used multiple times, but it is also useful for unique cases. A module only has to be designed and thoroughly tested once, which is a fine thing for FDA process validation. The operation of a module only has to be taught once, no matter how many times it is used in the process.

A physical module is a container for a set of parts, devices, and equipment, with a boundary that is crossed by input and output connections to those of other modules. An example is an automotive alternator. The input is a pulley that supplies mechanical power from a belt. The output is 13.7 volts DC, above a certain input speed. In between is a marvelous assembly of axles, bearings, wire coils, slip rings, magnetic pole pieces, rectifiers, and a voltage regulator circuit. To use the alternator, we only need to know where to put the belt and where to get the DC power. It helps to know such catalog information as maximum ratings, but we don't need to know what goes on inside the alternator module. That knowledge is only required to maintain the module. Another example of a physical module is a television set. Connect it to power and an antenna and a vast wasteland appears, full of people pushing products or promoting themselves.

An abstract module is defined by a boundary on a drawing of the contents. It has a defined interface in which lines representing information cross the boundary. Nothing enters or leaves the module without going through an interface. The module boundary is adjusted so that a minimum number of connections determine the behavior of the module. Each line in the interface has a specific behavior for a specific set of conditions at other lines.
Think of the pins on an integrated circuit, the connections to a home hot water heater, or the public data for a software module. A module may be copied and used in another location, using the same interfaces with the same functions. If the interfaces or behavior have to be modified in the new location, then the copy is not a copy but a different module. For process control, a module is a closed box (not necessarily physical) that contains a fixed set of things that behave in predictable ways. It has interfaces at the sides of the box, like electrical connectors or pipes. The interfaces carry power, commands, and data into and out of the box. Every interface has defined behavior, so there is no need to guess what is inside the box. A box can be disconnected and replaced with another box of the same kind, perhaps one that has internal changes that improve cost or performance but do not change behavior. The box/module can be said to encapsulate a control function.
Encapsulation is a lesson that can be learned from the evolution of computer programming. Back when 640 KB was enough for anybody, computer instructions allowed any running code to write any word in memory. Programmers needed programming rules to avoid stepping on each other. Even a single coder could forget where something was and write over it with another subroutine. Increasing computer capability required strong rules to prevent undesired interaction, which led to object-oriented programming (OOP).

Basically, an OOP object contains methods and data that are only known to the object. This means that a subroutine and its data may be encapsulated so that no one who follows the rules can alter an object. Anyone needing the services of an object must send a request to an exposed receiver. An analogy is a person with a telephone and a set of ledger books. The telephone number is public but the location is not. The only way to read or write a number in the books is to call the person and ask for it to be done. The person organizes the books in any suitable way; the caller does not need to know how it is done. If you substitute methods and data for person and books then you have described an OOP object, which is very much like a module.
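The telephone-and-ledger analogy translates directly into code. In this sketch the ledger dictionary is private to the object (by Python convention, the leading underscore), and callers can only reach it through the public read and write methods, which play the part of the telephone number:

```python
# The person with a telephone and ledger books, as an object. Callers
# never see how the books are organized; they can only ask for a number
# to be read or written. Names are illustrative.

class LedgerKeeper:
    def __init__(self):
        self._books = {}  # organized however the keeper likes

    def read(self, account):
        """Ask the keeper to read a number from the books."""
        return self._books.get(account, 0)

    def write(self, account, value):
        """Ask the keeper to write a number into the books."""
        self._books[account] = value

keeper = LedgerKeeper()
keeper.write("boiler feed", 42)
print(keeper.read("boiler feed"))  # -> 42
```

The keeper could switch from a dictionary to a database tomorrow and no caller would know or care, which is exactly the property that makes a module replaceable.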
The module will try to obey commands, but something below the control interface may prevent that. A command to set 90% pressure cannot be obeyed if the source is only at 85% pressure. The module may include malfunction alarms that are reported through the command interface. A module built in this manner shields the user of the command interface from knowledge of the details of how the module does its job or what goes on at or below the control interface. Of course, the sensor, controller, and valve are each a module. The pressure control module contains three instrument modules. Modules can contain modules in the same way that a Russian matrioshka doll contains nested dolls.
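A minimal sketch of the pressure example above: a module whose command interface accepts a setpoint and reports a malfunction instead of silently failing when the supply cannot meet it. The class name and numbers are illustrative, and the internal sensor, controller, and valve modules are reduced to a single attribute:

```python
# A pressure-control module with a command interface. The user of the
# interface never sees the sensor, controller, or valve inside; the
# module reports an alarm when a command cannot be obeyed.

class PressureControlModule:
    def __init__(self, supply_pressure):
        self.supply_pressure = supply_pressure  # what the source can deliver
        self.setpoint = 0.0

    def command(self, setpoint):
        """Command interface: returns (accepted, alarm-or-None)."""
        if setpoint > self.supply_pressure:
            return False, f"supply limited to {self.supply_pressure}%"
        self.setpoint = setpoint
        return True, None

pcm = PressureControlModule(supply_pressure=85.0)
print(pcm.command(90.0))  # -> (False, 'supply limited to 85.0%')
print(pcm.command(80.0))  # -> (True, None)
```

Nesting falls out naturally: a real module of this kind would itself be built from sensor, controller, and valve modules, each with its own interface, like the matrioshka dolls.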
Function Blocks
Block diagrams are a way of designing with modules. A rectangle (block) representing the module shows the possible interface connections for inputs and outputs. Other blocks representing other modules with needed functions are added to the drawing, and the proper interfaces are made between blocks. A block diagram is similar to a PFD, but it shows the flow of control information instead of fluids. Block diagrams are also useful for designing the contents of process control modules using function blocks. A simple example is shown in Figure 2-4.
Figure 2-4 Example of a Block Diagram

The term function block needs to be qualified. The word function as an adjective of block can mean a program designed to accomplish a specific function, or it can mean the data associated with an instance of a common function. Here, the noun block is a metaphor for a module, be it hardware or software, that contains data or a program. The adjective function is actually a process control function, elementary or complex, that has been defined as a block. It does not refer to a process function, such as charging or reacting.

Function blocks containing data are used to represent common control functions, such as PID, Ratio, Selector, and Device Control (for bistable devices like motors and block valves). The basic function that the block performs is mostly described by the name of the function. A function may have options whose purpose is not always clear without carefully reading the manual. Each kind (class) of data function block has a set of defined data that is put into each use (instantiation) of the block.

Several kinds of data are stored in the block. Identification data includes a unique tag name, similar to the tags used to define process locations. Some of the data is configured by the user (control engineer) to adapt the function for a particular use, such as the tag, input interfaces, option selections, and tuning constants. This data is not modified by the program that uses the data in the block, so it is called static data. Some data may be changed by executing the block, but it must be remembered so that the block can be properly initialized after the processor restarts. It is called non-volatile data. The rest of the data is calculated by the function, but is stored in the block because it must persist from one execution to the next. It is called dynamic data.

The program that manipulates the data in a data function block is built into the computer's operating system. The user cannot change it in any way. Naturally, this doesn't cover everything that the user needs to do. Function blocks that contain executable code allow the user to enter the instructions in one of several popular languages (see IEC 61131-3). The data to be processed appears at named inputs that correspond to variable names in the program. The results to be passed on to other blocks appear at named outputs. The block has no persistent data other than its unique identification and possibly the sources of its inputs. An unconnected input may be used to store a value that is needed for the next invocation of the block. If the user is allowed to name a program function block as a class, then instances of it may be used without writing the code again for each use.
This means that a change to the class program affects all instances of the class. Of course, all of the blocks must be in the same processor. The alternative is to copy the block to another location, but this will mean that changes to the code are local. Either kind of function block may be used in a block diagram. They are all functions with connectable inputs and outputs. A good system does not involve the user in the connection process. Standards are necessary to make it possible to design good systems. Anyone who has configured a DCS or perhaps a PLC has encountered function blocks. Anyone who has configured two different systems knows that more work is needed on the standards.
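The three kinds of block data can be made concrete with a sketch of a PID-like data function block. The field names and the deliberately simplified algorithm are illustrative, not taken from any particular system:

```python
from dataclasses import dataclass, field

# Sketch of the three kinds of data held by a data function block.
# Real systems (see IEC 61131-3) define their own parameter lists;
# this PI algorithm is simplified for illustration.

@dataclass
class PIDBlock:
    # Static data: configured by the engineer, never written by the block.
    tag: str
    gain: float
    reset: float
    # Non-volatile data: changed in operation, restored after a restart.
    setpoint: float = 0.0
    mode: str = "manual"
    # Dynamic data: recalculated each execution but carried between runs.
    integral: float = field(default=0.0)
    output: float = field(default=0.0)

    def execute(self, pv: float, dt: float) -> float:
        error = self.setpoint - pv
        self.integral += error * dt / self.reset
        self.output = self.gain * (error + self.integral)
        return self.output

blk = PIDBlock(tag="FC-101", gain=2.0, reset=10.0, setpoint=50.0)
print(blk.execute(pv=40.0, dt=1.0))  # -> 22.0
```

The split matters at restart: tag, gain, and reset come from the configuration, setpoint and mode must be restored from non-volatile storage, and integral and output simply persist from one execution to the next.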
Summary
This chapter discussed some principles of process design that were used by the central engineering department of a medium-sized company thirty years ago and are still valid today. Well, except for the automation of all of the draftsmen. The principles were illustrated by an example of the construction of a batch process. A section on modular design introduced concepts required to understand the 88 series that are useful in designing all kinds of processes. Function blocks were also introduced as a specific kind of module.