
Part 1: Our Simulation Platform

Posted by: Scott Munro

On many of the pages on our website, you will find references to a simulation platform that we use
to underpin our consulting solutions.
We haven't been very specific about the actual simulation platform we use, as we believe it is the
solutions that matter most to our clients. Nevertheless, there is still a degree of merit in outlining
our approach to process simulation, as it can help our clients (current and prospective) understand
how we are able to address their challenges. Hopefully, the result is increased confidence in our
ability to deliver those solutions.
This post is the first in a series that will describe the components of our simulation platform. Each
post will build on the last, hopefully painting a picture of a unique and highly flexible toolset that
can address a wide range of problems in minerals processing.
Our guiding philosophy in simulation is to make our models as predictive as possible. The goal is
to be able to take models out of the operational ranges they were calibrated in and still be
confident in the fidelity of the results.
This objective has driven us to add new, sophisticated capabilities and methods to existing
commercially-available software platforms.
The series of posts will also supplement the existing pages on our web site and provide a useful
context for a range of other simulation topics we have planned in the future.
So without further ado, we will launch into our first discussion and address the proverbial elephant
in the room: we use the SysCAD simulation platform provided by Kenwalt.

SysCAD
SysCAD is a software application that simulates complex process flowsheets. You can visit
www.syscad.net for a longer overview of its capabilities.

In short, it allows a user to build a complete model of a metallurgical processing plant. Such a
model encapsulates all of the knowledge of the processing system, making it invaluable for plant
design, operational optimisation, business improvement and a multitude of other analytical tasks.
The applications of SysCAD (and other similar packages) are wide-ranging. Here are some of SysCAD's key features we use in our solutions.

Steady-state and dynamic simulation


The main advantage from our perspective is the availability of both steady-state and dynamic
simulation modes. This gives us incredible flexibility in designing models to address particular
problems.
We tend to use steady-state models where longer-term performance is important, and dynamic
models for understanding the impacts of short-term variability. It's a relatively simple workflow to
upgrade a steady-state model to dynamic, as both share the same underlying flowsheet and unit
operation model base.
We will discuss the benefits of steady-state and dynamic simulations in greater detail in a
subsequent post, once each of the building blocks of our complete platform is in place.

SMDK
As you might have gathered from browsing our website, there is a pretty heavy emphasis on
comminution and classification in our solutions. Off the shelf, SysCAD has an extremely limited
suite of unit operation models for mineral processing. This presents something of a problem,
as SysCAD has plenty of other advantages that we would still like to make use of.
To get around this problem, we use a highly customised version of the software. Through the
SysCAD Model Developers Kit (SMDK), we have created a suite of our own mineral processing
unit operations.
These models are public domain versions of current state-of-the-art models available in packages
such as JKSimMet and JKSimFloat (amongst others) or described in mineral processing literature.
For the technically inclined, our SMDK models are implemented as fast-executing native C++
dynamic link libraries that appear seamlessly as drop-in unit operations in the SysCAD application.
The same model code is integrated into Microsoft Excel, which is extremely convenient when it
comes to fitting parameters to survey data, validating unit models, analysing and visualising unit
operation performance etc.
In fact, the modular nature of our code base means it can be integrated directly into ANY
simulation package that allows the use of dynamic link libraries, COM object models etc.
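To make the architecture concrete, here is a minimal sketch of the plug-in pattern involved. The class and function names are hypothetical (the real SMDK interface is richer and different); the point is that a unit operation compiled into a DLL with exported factory functions can be loaded by any host that can call into native libraries.

```cpp
// Hypothetical plug-in unit operation pattern; illustrative only,
// not the actual SMDK interface.
#include <vector>

struct Stream {
    std::vector<double> massBySize;  // t/h retained in each size fraction
};

class UnitOperation {
public:
    virtual ~UnitOperation() = default;
    // Transform feed into product; called by the host on each solve pass.
    virtual void Evaluate(const Stream& feed, Stream& product) = 0;
};

class DemoCrusher : public UnitOperation {
public:
    void Evaluate(const Stream& feed, Stream& product) override {
        product = feed;  // placeholder: apply breakage/classification here
    }
};

// Exported factory functions the host (simulator, Excel add-in etc.)
// resolves when it loads the library.
extern "C" UnitOperation* CreateUnitOperation() { return new DemoCrusher; }
extern "C" void DestroyUnitOperation(UnitOperation* u) { delete u; }
```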
Our current list of mineral processing models includes:

Blast fragmentation (Kuz-Ram and similar)

Jaw, cone and gyratory crushers, VSI/impact crushers, mineral sizers (Whiten and our
Kinematic Crusher Model)

SAG mill (Leung, Morrell et al, including full dynamic mode)

High Pressure Grinding Rolls (HPGR) (Daniel, Torres)

Ball mills, drum scrubbers and tower mills (various JKMRC)

Rod mills (JKMRC and Herbst-Fuerstenau approaches)

Vibrating screens (Load-based VSMA etc., efficiency curves)

Hydrocyclone (various models)

Dense Medium Separators (cyclones, drums etc.)

Hindered settler (aka teeter bed separator, upward current classifier)

Flotation cell (P9 kinetic model, more on this in later posts)


All of the models above function in both steady-state and dynamic simulation modes. In addition,
the SAG Mill and Hindered Settler models have been implemented as true dynamic models; that is, their internal contents are continuously in an unsteady state, producing residence time effects (this has implications for short time-domain dynamic simulations).
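As a minimal illustration of what "true dynamic" means here (a toy sketch, not our actual SAG mill code), consider a perfectly mixed hold-up whose discharge rate depends on its contents; integrating the mass balance over time produces first-order residence time behaviour automatically.

```cpp
#include <cstdio>

int main() {
    double holdup = 100.0;   // t of material currently inside the unit
    const double k = 0.02;   // 1/s, discharge rate constant (illustrative)
    const double feed = 1.5; // t/s, feed rate
    const double dt = 1.0;   // s, simulation time step

    for (int step = 0; step < 300; ++step) {
        double discharge = k * holdup;      // t/s leaving the unit
        holdup += (feed - discharge) * dt;  // explicit Euler mass balance
    }
    // At steady state discharge -> feed, so holdup -> feed / k = 75 t;
    // until then the unit's contents (and hence its products) lag the feed.
    std::printf("holdup after 300 s: %.1f t\n", holdup);
}
```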
Because we maintain our own model base source code, we are constantly adding new functionality,
testing and validating code etc., making our platform a live and continuously evolving toolset. In a sense, we are not limited to ensuring long-term model compatibility for a wide range of users; we
have the flexibility to improve our platform if and when it is appropriate.
On several occasions, we have built new models from literature sources into SMDK during the
course of projects, if particular issues arise that cannot be solved by our current model base. It's about using the right tools for the job, and incorporating new models is pretty straightforward
for us, given our experience and expertise.
The SMDK also allows us to expand the data contained in simulated plant streams to many
dimensions. An example is our diamond liberation framework, where ore types are subdivided into
multiple density fractions; each fraction has an ore particle size distribution; and each ore particle
size fraction has a population of discrete diamond stone sizes locked within it. Other properties
(vector or scalar) can be similarly added, such as liberation textures, magnetic susceptibilities etc.
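A sketch of how such an expanded stream might be laid out in code is shown below. The type and field names are hypothetical, but the nesting mirrors the diamond liberation example: ore type, then density fraction, then size fraction, then the locked stone population.

```cpp
#include <string>
#include <vector>

// Hypothetical expanded-stream layout; names are illustrative only.
struct StoneClass {
    double stoneSizeMm;     // nominal diamond stone size
    double stonesPerTonne;  // population locked in the host size fraction
};

struct SizeFraction {
    double topSizeMm;
    double massFlow;                 // t/h in this size fraction
    std::vector<StoneClass> locked;  // diamonds locked within it
};

struct DensityFraction {
    double sg;                      // specific gravity of the fraction
    std::vector<SizeFraction> psd;  // particle size distribution
};

struct OreType {
    std::string name;  // e.g. a lithology label
    std::vector<DensityFraction> fractions;
};

struct PlantStream {
    std::vector<OreType> solids;  // other vector/scalar properties
                                  // (liberation textures, magnetics
                                  // etc.) attach in the same way
};
```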
The SMDK is the key tool upon which all of our enhanced capability is built. It gives us the freedom and flexibility to incorporate process knowledge and parameter sensitivity in the right places, for the right applications, giving us the insight we need to develop practical solutions to mineral
processing problems.
In our next post, we discuss one key SysCAD enhancement in more detail: multi-component comminution and classification.

Updated: Read on in Part 2: Multi-component comminution modelling.

Part 2: Multi-component comminution modelling


Posted by: Scott Munro

Welcome to Part 2 of the Our Simulation Platform blog series.


In Part 1, we discussed how we have developed a suite of comminution, classification and
concentration unit operations that complement SysCAD's existing extractive metallurgical
capability.
In this post, we discuss a key enhancement to the comminution and classification models: multi-component modelling.

Multi-component modelling
Mineral processing flowsheet simulation packages like JKSimMet treat ore as a single solid
component of a stream, possessing one set of comminution indices and a particle size distribution.
For an equipment scale-up or mass balancing approach, as intended by JKSimMet, this is often
sufficient.
However, operating plant streams are almost always more complex mixtures of many different ore
lithologies, each having different processing properties as received from the mine and orebody.
Active ore blending to control plant grades and throughput can further complicate matters.
In multi-component modelling, plant streams consist of multiple ore types, in varying quantities.
Each ore type retains its own set of comminution and physical properties as well as its own particle
size distribution.
Comminution and classification operations act on the ore types differently, yielding an overall
stream product that is the sum of its components (and often the only thing that can be practically
measured in a real plant).
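In code, recovering that measurable overall stream is just a sum over the components. The sketch below (illustrative, not our production code) assumes each component carries its mass flow retained in each size class:

```cpp
#include <cstddef>
#include <vector>

// Sum per-component size distributions into the overall stream PSD,
// i.e. the quantity a plant survey would actually measure.
// Assumes a non-empty component list with equal-length size vectors.
std::vector<double> OverallBySize(
        const std::vector<std::vector<double>>& components) {
    std::vector<double> total(components.front().size(), 0.0);
    for (const auto& comp : components)
        for (std::size_t i = 0; i < comp.size(); ++i)
            total[i] += comp[i];  // t/h retained in size class i
    return total;
}
```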

Why multi-component modelling?


Understanding how ore mixtures are processed in a plant is becoming increasingly important for a
range of tasks, such as:

Quantifying the benefits of pre-concentration and ore sorting, especially energy efficiencies
and metal recoveries;

Mapping out plant performance over Life-Of-Mine as ore sources, compositions and grades change, which is essential for the planning of plant production, operations and capital expenditure;

Reconciling plant performance with geometallurgical data sets;

Best-utilising hard ore components as media in grinding operations;

Debottlenecking plants where hard components recirculate and consume processing capacity for little return;

More accurate and flexible metallurgical models for diagnosis and optimisation.

To address such activities, we have modified our suite of unit operation models to handle streams
with multiple solid components.

Modifying the models


The SysCAD software inherently has the capability to simulate mixed streams, reflecting its more typical application to pyrometallurgy, hydrometallurgy and alumina refining processes, where various chemical species and phases are the primary stream components.
So in order to simulate multi-component mineral processing circuits, we need only modify the
existing unit operations to cope with the mixed streams that SysCAD presents them with.
Research in multi-component comminution has been ongoing for several decades, and the area has attracted greater focus recently as our clients begin to consider many of the opportunities outlined in the previous section.
Progress in the field can be hampered by difficulties in measuring the individual products of multi-component comminution. Typically, a particular property of one component must be exploited in order to measure its degree of comminution (such as magnetic susceptibility, acidic dissolution, XRF response etc.). Naturally occurring orebodies are not always so generous in their provision of
such properties!
Whilst this can limit some of the capabilities offered, our approach has been to take the path most
reasonably rationalised by fundamental understanding, empirical experience and/or published
science.
Here is a cross-section of the approaches we have taken:

Crushers
For jaw, gyratory, cone, impact and roller crushers, an assumption of single particle breakage is
largely valid. That is, individual particles are exposed to crushing actions without interaction from
other, adjacent particles.

In this case, multi-component modelling is fairly trivial, with ores being crushed independently of
one another.
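For example, the well-known Whiten crusher model can simply be applied once per component, with each component's product depending only on its own feed, classification and breakage behaviour:

$$\mathbf{p}^{(c)} = \left(\mathbf{I} - \mathbf{C}^{(c)}\right)\left(\mathbf{I} - \mathbf{B}^{(c)}\mathbf{C}^{(c)}\right)^{-1}\mathbf{f}^{(c)}$$

where, for component \(c\), \(\mathbf{f}^{(c)}\) is the feed size distribution, \(\mathbf{C}^{(c)}\) the classification matrix (the probability of selection for breakage, by size) and \(\mathbf{B}^{(c)}\) the breakage matrix. The overall crusher product is then the sum of the component products.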

Tumbling Mills
The situation becomes more complicated in the case of machines like AG/SAG mills, where internal
multi-component loads combine to impart breakage energies to the constituent particles.
The JKMRC have been active in this area, and our own (previously theoretical) modifications to the Leung/Morrell SAG mill model are strongly supported by their recent research results (Bueno et al., 2013).
Ball and rod mill models can similarly be modified to scale breakage rates/selection functions for
ore types in mixtures.
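As an illustration of the kind of modification involved (our exact formulation may differ), the classic perfect-mixing mill balance can be written per component \(c\) and size class \(i\), with the breakage rates scaled by ore type:

$$f_i^{(c)} + \sum_{j \le i} a_{ij}^{(c)}\, r_j^{(c)} s_j^{(c)} - r_i^{(c)} s_i^{(c)} - d_i\, s_i^{(c)} = 0$$

where \(f\) is the feed rate, \(s\) the mill contents, \(a\) the appearance (breakage distribution) function, \(r\) the component-specific breakage rates and \(d\) the size-dependent discharge rate. Each component's product is \(p_i^{(c)} = d_i s_i^{(c)}\), and the mill product is the sum over components.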

High Pressure Grinding Rolls (HPGR)


HPGR comminution presents an interesting case. Breakage in these machines arises directly from
the interaction of ore particles in a compressed bed.
Comminution is therefore affected by the composition of hard and soft materials in the feed
mixture. Hard materials are known to act as a grinding agent, transferring the bulk of compressive
forces (and hence comminution energy) to the softer components.
Research has shown the simulation of such mixtures in HPGRs is quite possible (Abouzeid and
Fuerstenau, 2009). However, specialised test work is required to parameterise the split of energy
between the components during comminution. The test work also requires the exploitation of a
property that allows the comminuted products to be separated for analysis (acidic dissolution in
Abouzeid and Fuerstenau's case).
The multi-component simulation of operating HPGRs is therefore quite limited in a practical sense.
Current research in fundamental breakage processes, such as force chain modelling, Discrete
Element Modelling etc., might provide a more useful solution in the future.

Classification
Classification is typically a size-based operation, although density can also play a role (e.g.
hydrocyclones).
Multi-component classification models which consider the individual size or density properties of
ore in mixtures can more accurately reflect the separation phenomena at work.
The energy implications of this are particularly relevant where hard, coarse or dense components
are recirculated for additional (and often unnecessary) comminution.
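As one concrete example of the approach (a standard form, offered as an illustration rather than our exact implementation), a Whiten-style corrected efficiency curve can be evaluated per component, each with its own corrected cut size:

$$E^{(c)}(d) = \frac{\exp\!\left(\alpha\, d / d_{50c}^{(c)}\right) - 1}{\exp\!\left(\alpha\, d / d_{50c}^{(c)}\right) + \exp(\alpha) - 2}$$

where \(E^{(c)}(d)\) is the fraction of component \(c\) at size \(d\) reporting to the coarse product and \(\alpha\) controls the separation sharpness. In a hydrocyclone, \(d_{50c}^{(c)}\) can carry a density correction so that denser components cut finer, which is exactly the mechanism behind dense components recirculating unnecessarily.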

Extending the multi-component concept


Integrating a multi-component framework into our comminution and classification model suite has
opened up further opportunities for improving our simulation capabilities.
Utilising the same approach (and software modifications), we have also incorporated:

Ore mineralogies (as distinct from lithologies; required to simulate downstream flotation
and extractive metallurgy);

Valuable metal grades, distributed by size, ore or mineral;

Particle size-density distributions (essential for simulating gravity concentration);

Flotation rates/classes;

Liberation phenomena; and more.

Each of the examples above has already been applied in our project work, providing genuine
insight and helping us develop practical solutions.

Conclusions
Hopefully the above post has given you a view of the benefits of multi-component comminution
modelling, our approach, and perhaps some of the limitations.
The framework we have described is ready to use, and we have successfully applied it to a range of
projects over many years.
The multi-component framework is the means by which our comminution and classification models communicate directly with downstream concentration and extractive
metallurgy processes, all within the same computational platform. This forms the basis for our
consulting solutions such as Ore To Product.
In our next post, we will discuss how plant process control and operating philosophies not only
have a role in process simulation, but are actually essential for capturing the true behaviour of
complex minerals processing systems.

References

Bueno M P, Kojovic T, Powell M S and Shi F, 2013. Multi-component AG/SAG mill model, Minerals Engineering, 43-44, 12-21. http://dx.doi.org/10.1016/j.mineng.2012.06.011

Abouzeid A-Z M and Fuerstenau D W, 2009. Grinding of mineral mixtures in high-pressure grinding rolls, Int. J. Miner. Process., 93, 59-65. http://dx.doi.org/10.1016/j.minpro.2009.05.008

Part 3: Process control and operating philosophy


Posted by: Scott Munro

This is part three of the Our Simulation Platform blog series. In this month's post, we discuss how
we integrate process control and plant operating philosophies into process simulations.
You can catch up on our previous blog topics here:
Part 1: Our Simulation Platform
Part 2: Multi-component comminution modelling
So, onto the topic of process control and simulation. The role of process control and operating
philosophies (aka human control) in mineral processing plant simulation is often overlooked.

Why is process control important?


Process control is an integral part of how a mineral processing circuit operates and how well it
performs against the required benchmarks, be they throughput, recovery, revenue or otherwise.
Good process control systems are able to maximise the utilisation of fixed plant equipment, whilst
simultaneously smoothing out surges, variations and disturbances which otherwise lead to lost
production capacity (which can never be recovered, by the way).
In order to do this, the process control system will modify the configuration of particular unit
operations and hence how they respond to change in the system. For example, decreasing the
edge recycle component of an HPGR will increase the net throughput of the machine but at the
cost of increased product size. This will have downstream effects, which will in turn induce further
process control changes at other units.
So the simple adjustment of one operating variable can have cascading (and often counterintuitive) impacts across a whole circuit.
It is clear, then, that the process control system driving a circuit is of equal significance to plant
performance as the physical unit operations themselves.

It also stands to reason that process control will be just as important to any simulation of a
processing plant for all the same reasons as the real plant.

SysCAD and process control


As mentioned in our previous posts, we use a customised version of the SysCAD process simulation
package.
SysCAD has its own built-in process control functionality in the form of Proportional-Integral-Derivative (PID) controllers and Programmable Modules (PGM).
The PID controllers allow set-point control and goal seeking based on parameter values that are measured as a simulation progresses.
The PGM modules allow any logical behaviour to be directly programmed into the simulation in a
manner analogous to Visual Basic or similar coding. Indeed, we previously incorporated a
structured text program from a PLC directly into SysCAD PGM with only a few minor modifications
to variable naming and referencing, demonstrating the simplicity and flexibility of the solution.
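For readers unfamiliar with PID control, the sketch below shows the underlying logic of a discrete (positional) PID update. It is a generic illustration of what such a controller computes, not the internals of SysCAD's PID block.

```cpp
// Generic discrete positional PID update; illustrative only.
struct PID {
    double kp, ki, kd;       // proportional, integral and derivative gains
    double integral = 0.0;   // accumulated error
    double prevError = 0.0;  // error at the previous step

    double Update(double setpoint, double measurement, double dt) {
        double error = setpoint - measurement;
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;  // output
    }
};
```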

Steady-state process simulations


At first glance, process control might seem unnecessary in a steady-state simulation that
effectively excludes the time domain. In the case of a mass balancing exercise, this is likely true.
However, process control is a critical part of turning an ordinary mass balance into a fully-fledged, predictive process simulation.
As discussed in our previous posts, our guiding philosophy in simulation is to use unit operation
models that are based on first-principles understanding of the phenomena and mechanisms at
work, and hence are as predictive as possible. This strengthens the ability of such models to be
taken outside the operating range at which they were calibrated (or have always been run).
It is the process control systems that do just this: responding to disturbances or changes by
adjusting the configuration of unit operations. And so using predictive process models is only
effective when accurate process control system responses are also included.
This relationship is quite symbiotic. For example, a predictive crusher unit operation model (like our
KCM) will accurately report power draw for a feed ore of given fragmentation, hardness and rate.
The process control system may respond to excessive power draw by increasing the crusher's
Closed Side Setting (CSS) until power draw normalises. Of course, the degree to which the control
system increases the CSS will derive from how the predicted power draw changes with CSS. The
degree of change in CSS will also drive the behaviour of downstream unit operations, like screens,
mills etc., and can ultimately affect the plant's final product.
So, a combination of both unit operation response and control response is required to correctly simulate the processing system.
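As a toy example of that interplay, the sketch below goal-seeks the CSS that returns power draw to a target, the way a steady-state control response might. The power relationship here is a made-up placeholder, not our Kinematic Crusher Model; in a real simulation the predictive unit model supplies it.

```cpp
#include <cstdio>

// Placeholder crusher power response: draw falls as the CSS opens up.
double PowerDraw(double cssMm, double feedRateTph, double hardness) {
    return hardness * feedRateTph / cssMm;  // kW (toy relationship)
}

int main() {
    const double target = 400.0;  // kW, power setpoint
    double css = 10.0;            // mm, starting Closed Side Setting

    // Simple proportional goal-seek, standing in for the control response.
    for (int i = 0; i < 100; ++i) {
        double power = PowerDraw(css, 800.0, 6.0);
        if (power - target < 1.0 && target - power < 1.0) break;
        css += 0.01 * (power - target);  // open the CSS while power is high
    }
    // The converged CSS (12 mm here) then drives downstream product size.
    std::printf("converged CSS: %.2f mm\n", css);
}
```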

These sorts of decisions occur all the time in real plants, and so should be factored into any
steady-state simulation that is subsequently used to predict the performance of a plant under
different conditions (i.e. scenario analyses).

Dynamic Process Simulation


Dynamic process simulations are the natural home of process control and operating philosophies.
Indeed, a dynamic simulation will not function without some level of supervisory logic to operate
the model.
The time domain introduces delays between measuring system changes and process control
responses. The duration of these delays and the interaction of responses with other aspects of the
process typically make for highly non-linear and difficult-to-comprehend systems. Dynamic plant
simulations which include process control can help clarify the behaviour of such systems.
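To make the delay idea concrete, here is a small illustrative sketch (not SysCAD code) of a transport delay, or dead time: the controller only sees each measurement a fixed number of simulation steps after it occurs, which is one of the main reasons tightly tuned loops can oscillate.

```cpp
#include <cstddef>
#include <deque>

// Illustrative transport delay (dead time) of a fixed number of steps.
class DeadTime {
    std::deque<double> buffer;
public:
    explicit DeadTime(std::size_t steps, double initial = 0.0)
        : buffer(steps, initial) {}

    double Step(double input) {
        buffer.push_back(input);          // newest measurement in
        double delayed = buffer.front();  // what the controller sees now
        buffer.pop_front();
        return delayed;
    }
};
```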

Dynamic simulations with small time steps (e.g. seconds) that simulate short periods (e.g.
hours) are ideal for the design, testing and improvement of process control philosophies
and systems. We conduct this type of simulation extensively for our clients, during both
design and operation phases (see our Process Control and Dynamic Simulation page).

Longer-term simulations also benefit, particularly where the transfer and storage of
material through a system is automated. Such a simulation can greatly simplify the task of estimating the change in monthly production arising from a proposed process
control improvement, for example.

The great thing about process control is that improvements represent a soft change to a system.
That is, the improvement is typically made by changing the input parameters of a PID loop
or performing some control software modification. Contrast this with other capital-intensive improvements, such as installing new crushers or mills, or adding or upgrading conveyor systems, with the same intention of increasing throughput.
In our experience, there are typically a lot of low-hanging-fruit opportunities around process control. Consider the extract below, which highlights some of the benefits our dynamic process control simulations have identified for our clients. These improvements represent significant value for an operation.

Our approach
Our approach is to incorporate as much as possible of the process control strategy and plant
operating philosophy into our simulations, even the steady-state ones.
This ensures the simulated plant behaves as closely to the real plant as the available information allows.
This makes for more accurate simulation and therefore higher-quality, higher-confidence analyses
and solutions.

What kind of controls are we talking about?


We have implemented a range of process controls in our simulations (both steady-state and
dynamic), such as:

Simple and cascading PID loops

Smith predictors

Dynamic constraint controllers

Model predictive controllers

Expert systems; and

Other heuristic decision-making algorithms.


And then there is the human element. How do people, from management through to the process operators at the system interface, see the plant as being operated? What impact are the decisions of individuals at all levels having on plant performance?
Process operators are a particularly critical part of the control system. The decision process they
use, whether it be an intuitive, experience-based or structured approach, can also be captured, assessed,
and modified using a dynamic simulation. Indeed, this is the basis of our Operator Training
framework, and is itself a heavily overlooked opportunity in most plants.

Conclusions
Improvements to process control are often where operating plants can best make sustainable gains. Good process control reduces process variability, consistently and sustainably pushing the plant towards its physical limits. The power of a dynamic simulation is that control strategies
can be tested, adjusted, and ranked relative to each other based on realistic measures of their
performance.
As is often noted, if you can't measure it, you can't manage it. With a dynamic process simulation you can measure the performance of the control system. And best of all, any analyses, evaluations, investigations and prototyping are all risk-free to actual production, meaning the best solution can be determined well before anything in the plant is actually modified.
That concludes our process control and operating philosophy topic. You can read more on our
Process Control And Dynamic Simulation and Operator Training pages.
In our next post, we will discuss our Constraint Analysis (a.k.a. debottlenecking) methodology and
how process simulations can help identify ways to increase plant throughput.
