
Positive feedback

Positive feedback is a feedback loop system in which the system responds to a perturbation in the same direction as the perturbation (this is sometimes referred to as cumulative causation). In contrast, a system that responds to a perturbation in the opposite direction is called a negative feedback system. "Positive" means the response is in the same direction as the perturbation; "negative" means it is in the opposite direction. These concepts were first recognized as broadly applicable by Norbert Wiener in his 1948 work on Cybernetics.[1]

A system with positive feedback to any change in its current state is said to be in an unstable equilibrium, whereas one with negative feedback is said to be in a stable equilibrium. The end result of positive feedback is often amplifying and "explosive": a small perturbation results in big changes. The feedback drives the system even further from its original setpoint, amplifying the original perturbation, and the process eventually becomes explosive because the amplification often grows exponentially (with first-order positive feedback) or even hyperbolically (with second-order positive feedback). Indeed, chemical and nuclear-fission explosives offer an excellent physical demonstration of positive feedback. Bombarding fissile material with neutrons causes it to emit even more neutrons, which in turn act on the material. The greater the mass of fissile material, the larger the amplification and hence the feedback. If the amplification is great enough, the process accelerates until the fissile material is spent or dispersed by the resulting explosion.

Both positive and negative feedback form closed systems. They are called "closed systems" because the system is closed by a feedback loop, i.e., the response of the system depends on the feedback signal to complete its function; without such a loop it would be an open system. In contrast, a feed-forward system is an "open system", since it has no feedback loop and does not rely on a feedback signal to perform its function. Examples of positive and negative feedback, and of open and closed systems, can be found in ecological, biological, and social systems, and in engineering control systems such as servo control systems.

Explanation

The effect of a positive feedback loop is not necessarily "positive" in the sense of being desirable; the name refers to the nature of the change, not the desirability of the outcome. A negative feedback loop tends to slow a process down, while a positive feedback loop tends to speed it up. When a variable changes in a system, the system responds; in the case of positive feedback, the response of the system is to change that variable even more in the same direction. A simple example from chemistry is autocatalysis, where a reaction is increasingly facilitated by the presence of its own product. For another example, imagine an ecosystem with only one species and an unlimited amount of food. The population grows at a rate proportional to the current population, which leads to an accelerating increase, i.e., positive feedback. This has a de-stabilizing effect, so left unchecked it does not result in homeostasis. In some cases (if not controlled by negative feedback), a positive feedback loop can run out of control and can result in the collapse of the system. This is called a vicious circle, or in Latin circulus vitiosus. People also refer to a virtuous circle, which is the same mechanism but with a benign, autocatalytic effect.

Consider a linear amplifier with linear feedback. As long as the loop gain, i.e. the forward gain multiplied by the feedback gain, is less than 1, the result is a stable (convergent) output. This is of course always true for negative feedback, but it also holds for sufficiently small positive feedback. In electronic amplifiers the forward gain is normally quite high, so the amplifier becomes unstable for quite small amounts of positive feedback.

In the real world, positive feedback loops are always controlled eventually by negative feedback of some sort: a microphone breaks, a beaker cracks, or a nuclear accident ends in meltdown. The outcome need not be so dramatic, however; a variety of negative feedback controls can modulate the effect. Embedded in a system of feedback loops, a positive feedback does not necessarily imply a runaway process; combined with other processes, it may simply have an amplifying effect. An example of this is the role of water vapour in amplifying global warming: higher global temperatures lead to increased water vapour in the atmosphere, which pushes temperatures up further, and so on, but the overall effect is that of a convergent series, amplifying the original temperature rise by a relatively constant factor. The limiting control is that water vapour does not depend solely upon temperature; water cycles in and out of the atmosphere for a variety of reasons.

One common example of positive feedback is the network effect: the larger a network becomes, the more people are encouraged to join it, so the network grows more and more quickly over time. This is the basis for many social phenomena, including the infamous Ponzi scheme. In this case, though, the population size is the limiting factor.

In Electronics

Feedback is the process of sampling a part of the output signal and applying it back to the input. This technique is used to change the parameters of an amplifier such as voltage gain, input and output impedance, stability, and bandwidth. Feedback is said to be positive if an increase in the output signal results in a feedback signal which, on being mixed with the input signal, causes a further increase in the magnitude of the output signal; hence it is also called regenerative feedback. Positive feedback is in phase with the input signal, so the final gain of the amplifier (Af) increases. The final gain is Af = (output voltage)/(input voltage) = A/(1 - Aβ), where A is the gain of the amplifier without feedback and β is the feedback factor.
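To make the gain formula concrete, here is a minimal Python sketch of Af = A/(1 - Aβ); the values of A and β are arbitrary illustrations, not values from the text above. As the loop gain Aβ approaches 1, the denominator vanishes and Af diverges, which is the onset of the instability noted under Disadvantages below.

```python
# Minimal numeric illustration of the regenerative-feedback gain formula
# Af = A / (1 - A*beta). The specific values of A and beta are arbitrary
# demonstration choices, not taken from the text above.

def closed_loop_gain(A: float, beta: float) -> float:
    """Closed-loop gain of an amplifier with positive feedback."""
    loop_gain = A * beta
    if loop_gain >= 1.0:
        raise ValueError("loop gain A*beta >= 1: amplifier is unstable")
    return A / (1.0 - loop_gain)

A = 100.0  # open-loop (forward) gain, arbitrary example value
for beta in (0.0, 0.005, 0.009, 0.0099):
    print(f"beta = {beta:.4f}  ->  Af = {closed_loop_gain(A, beta):.1f}")
# As A*beta approaches 1 the gain grows without bound: the onset of oscillation.
```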

Advantages

The voltage gain of the amplifier increases.

Disadvantages

Gain tends to be unstable. Distortion is higher. Bandwidth decreases. Stability is difficult or impossible to guarantee.

Applications

Positive feedback is used extensively in oscillators, in regenerative radio receivers, and in Q multipliers. The Schmitt trigger circuit uses positive feedback to generate hysteresis and thus provide noise immunity on a digital input. Audio feedback is a common example of positive feedback: it is the familiar squeal that results when sound from loudspeakers enters a poorly placed microphone and gets amplified, so the sound gets louder and louder.

In World System Development

The hyperbolic growth of the world population observed until the 1970s has recently been attributed to a non-linear second-order positive feedback between demographic growth and technological development, which can be spelled out as follows: technological growth increases the carrying capacity of land for people, which permits demographic growth; more people means more potential inventors, which accelerates technological growth; this accelerates the growth of the carrying capacity, hence faster population growth, hence an accelerating growth in the number of potential inventors, hence faster technological growth, hence faster growth of the Earth's carrying capacity for people, and so on (see, e.g., Introduction to Social Macrodynamics by Andrey Korotayev et al.).

Population and Agriculture

In Feed or Feedback, A. Duncan Brown argues that agriculture and human population are in a positive feedback mode,[2] meaning that each drives the other with increasing intensity. He ventures that this positive feedback system will end in catastrophe, as modern agriculture is using up all of the easily available phosphate and turning to monocultures, which are more susceptible to collapse.

Echo Chamber Effect

Metaphorically, cumulative causation may emerge on the Internet as an echo chamber effect, which refers to any situation in which information or ideas are amplified by transmission inside an enclosed space. Another emerging term for this echoing and homogenizing effect within online social communities is "cultural tribalism". The Internet may be seen as a complex system (emergent, dynamic, evolutionary), and as such it will at times exhibit the effects of positive feedback loops (the echo chamber effect) when a lack of perturbation across the network prevents the system from reaching equilibrium. Complex systems characterized by negative feedback loops show more stability and balance during emergent and dynamic behaviour. For example, observers of journalism in the mass media describe an echo chamber effect in media discourse: one purveyor of information makes a claim, which many like-minded people then repeat, overhear, and repeat again (often in an exaggerated or otherwise distorted form) until most people assume that some extreme variation of the story is true. When this condition arises in online communities, participants may find their own opinions constantly echoed back to them, reinforcing a certain sense of truth that resonates with individual belief systems. This can pose significant challenges to critical discourse within an online medium. The echo chamber effect may also lead to a failure to recognize large demographic changes in language and culture on the Internet if individuals only create, experience, and navigate those online spaces that reinforce their "preferred" world view.

In Biology

One example of a biological positive feedback loop is the onset of contractions in childbirth. When a contraction occurs, the hormone oxytocin is released into the body, which stimulates further contractions; contractions therefore increase in amplitude and frequency. Another example of a biological positive feedback loop is the process of blood clotting. The loop is initiated when injured tissue releases signal chemicals that activate platelets in the blood. An activated platelet releases chemicals that activate more platelets, causing a rapid cascade and the formation of a blood clot. In most cases, once the purpose of the feedback loop is served, counter-signals are released that suppress or break the loop.

Citations

[1] Norbert Wiener (1948), Cybernetics, or Control and Communication in the Animal and the Machine. Paris: Hermann et Cie; Cambridge, MA: MIT Press.
[2] Brown, A. Duncan (2003), Feed or Feedback. International Books.

References

Norbert Wiener (1948), Cybernetics, or Control and Communication in the Animal and the Machine. Paris: Hermann et Cie; Cambridge, MA: MIT Press.
Katie Salen and Eric Zimmerman (2004), Rules of Play. MIT Press. ISBN 0-262-24045-9. Chapter 18: Games as Cybernetic Systems.

See also

Donella Meadows' twelve leverage points to intervene in a system
Reflexivity (social theory)
Strategic complementarity
Stability criterion
Virtuous circle and vicious circle
Matthew effect
Bus bunching
W. Brian Arthur, economist noted for his work studying positive feedback

Feedback Loops
Feedback loops can enhance or buffer changes that occur in a system. Positive feedback loops enhance or amplify changes; this tends to move a system away from its equilibrium state and make it more unstable. Negative feedback loops tend to dampen or buffer changes; this tends to hold a system near some equilibrium state, making it more stable. Several examples are given here to help clarify feedback loops and to introduce loop diagrams. Understanding negative and positive connections is helpful for understanding loop structure.

A positive connection is one in which a change (increase or decrease) in some variable results in the same type of change (increase or decrease) in a second variable. The positive connection in the figure below for a cooling coffee cup implies that the hotter the coffee is, the faster it cools. The variables Tc and Tr are the coffee and room temperatures respectively.

A negative connection is one in which a change (increase or decrease) in some variable results in the opposite change (decrease or increase) in a second variable. The negative connection in the figure below for a cooling coffee cup implies that a positive cooling rate makes the coffee temperature drop.

When these two connections are combined we get a negative feedback loop, shown below, in which the coffee temperature approaches the stable equilibrium of the room temperature. Going around the loop, the positive connection times the negative connection gives a negative loop feedback effect. This same trick of multiplying the signs of the connections around a loop to find out whether it is a positive or negative feedback loop works for more complicated loop structures with many more connections.
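Here is a minimal Python sketch of this sign-multiplication rule; the helper function and the example sign lists are illustrative, not part of the original text.

```python
# Multiply the signs of all connections around a loop: a negative product
# means a negative (stabilizing) loop, a positive product a positive one.

from math import prod

def loop_polarity(connection_signs):
    """Return 'negative' or 'positive' for a list of +1/-1 connection signs."""
    return "negative" if prod(connection_signs) < 0 else "positive"

# Cooling coffee cup: one positive connection (hotter coffee -> faster
# cooling) and one negative connection (faster cooling -> lower temperature).
print(loop_polarity([+1, -1]))           # negative feedback loop
print(loop_polarity([+1, +1]))           # positive feedback loop (e.g., births)
print(loop_polarity([-1, +1, -1, -1]))   # three minuses -> negative loop
```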

An example of positive feedback is world population with a fixed percentage birth rate. Positive feedback results in unlimited growth (until checked) and is sometimes referred to as a vicious cycle. In the figure below connecting population to births, large populations cause large numbers of births, and large numbers of births result in a larger population. This idea can be modeled nicely with the differential equation dP/dt = rP, where P is population and r is the percentage birth rate. The solution is P(t) = P0·exp(rt), i.e., exponential growth. Not all positive feedbacks give exponential growth, but all, left unchecked, result in unlimited (or unstable) growth. The graph below shows the world population predicted for a fixed 2% growth rate from 1950 to 2050, together with an estimate of future world population close to the mid-range United Nations Environment Programme (UNEP) best guess, to stress that exponential growth is not realistic for world population, although it works fairly well for the period between 1950 and 1990. Logistic (S-shaped) growth would be a better choice for modeling world population over the 100-year interval shown.
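The contrast between the two growth models can be sketched numerically. The following Python snippet uses a simple Euler step with an assumed 1950 population of 2.5 billion and an assumed carrying capacity K; these are illustrative round numbers, not the UNEP figures mentioned above.

```python
# Contrast exponential growth dP/dt = r*P with logistic growth
# dP/dt = r*P*(1 - P/K), integrated with a naive Euler step.

P_exp = P_log = 2.5e9          # approximate 1950 world population
r, K, dt = 0.02, 1.1e10, 1.0   # 2% growth rate; K is an assumed carrying capacity

for year in range(1950, 2051):
    if year % 25 == 0:
        print(f"{year}: exponential {P_exp/1e9:5.1f}B   logistic {P_log/1e9:5.1f}B")
    P_exp += r * P_exp * dt
    P_log += r * P_log * (1 - P_log / K) * dt
# Exponential growth runs away; the logistic curve levels off near K.
```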

Systems Thinking
System Dynamics, Systems Thinking, and Daisy World (example)
Systems can be modeled by understanding the connections between:

External parameters (like the solar output) and the system (in this case, Earth's climate system).
A system's stocks, quantities which accumulate value (Earth's surface temperature and atmospheric temperature).
Flows between stocks or from outside the system (the solar heating rate, or infrared heating/cooling).
Connections establishing the relationships between stocks and flows.
And possibly decisions or thresholds that can cause a system to branch into new states or develop new behavior.

The figure below shows an example of a system diagram for a simplified earth-atmosphere climate system. Notice the sun cannot be in any feedback loop since it is external to the system, i.e. the earth cannot significantly influence what the sun does. Can you explain why each arrow is a + or - connection? How about why the red + and - signs for each loop are as shown?

Daisy World Example.


Daisy World is an imaginary planet that can help illustrate several important aspects of systems, and possibly of the actual Earth's climate system. These include:

Feedback structure
Stability
Homeostasis (a self-regulating system through negative feedbacks)
Abrupt transition from a stable to an unstable state
Graphical analysis
Introduction of concepts related to Earth's climate system, such as albedo and radiative equilibrium

In the figure above Daisy World is shown with white daisies and bare ground (green). The white daisies reflect more sunlight than the bare ground, so the larger the daisy population, the more sunlight is reflected back into space. The albedo (whiteness) of the planet increases with increasing daisy coverage, and as the albedo increases the surface temperature tends to decrease (less absorbed solar energy). The graph shows how the surface temperature of Daisy World changes with daisy coverage, and the flow diagram shows the connection between daisy coverage (albedo) and surface temperature.

The assumed growth of daisies as a function of temperature is shown above. There is an optimum temperature for daisies: if it gets too cold or too hot they don't grow as well. Can you answer the two questions here? The figure below combines the last two figures. The intersections of the temperature vs. daisy coverage straight line with the daisy coverage vs. temperature growth curve are graphical solutions to the two equations and hence are equilibrium states of Daisy World. The loop diagram on the left (corresponding to P1) is a negative feedback loop, so P1 is a stable equilibrium; the loop on the right (corresponding to P2) is a positive feedback loop, so P2 is an unstable equilibrium point. In fact any point to the right of the central hump would be unstable. When the system starts at P2 and the temperature is increased a little (from, say, a glitch in solar output), this causes the daisy coverage to _________________ and then the amount of absorbed solar energy at the surface to _________________ which causes the temperature to __________________. Repeat this question for the system starting at P1.
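For readers who want to experiment, here is a toy numerical version of this analysis. The linear temperature-coverage relation, the parabolic growth curve, and the relaxation dynamics are all invented stand-ins for the curves in the figures, not the actual Daisy World model equations.

```python
# Toy Daisy World: temperature falls linearly with daisy coverage, and the
# equilibrium coverage supported by growth peaks at an optimum temperature.
# All constants are invented for illustration.

def temperature(coverage):
    """More daisies -> higher albedo -> cooler surface (deg C)."""
    return 40.0 - 30.0 * coverage

def growth_coverage(T):
    """Coverage supported by growth; parabolic response, optimum at 22.5 C."""
    return max(0.0, 1.0 - ((T - 22.5) / 17.5) ** 2)

def run(coverage, steps=60):
    """Relax coverage toward the value the current temperature supports."""
    for _ in range(steps):
        target = growth_coverage(temperature(coverage))
        coverage += 0.3 * (target - coverage)
    return round(coverage, 3)

print(run(0.05))             # a tiny seed population runs away from the
                             # barren (unstable) state toward the stable one
print(run(0.70), run(0.90))  # perturbations on either side return to the
                             # same stable equilibrium coverage
```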

For more information/ideas go to A White and Black Daisy World Model with Assignment {The model here works best on a PC type computer}

How to Use Models


In thinking about how to incorporate modeling activities into introductory geoscience courses, there are two important classes of considerations: technical and pedagogical.

Technical Considerations include:


Acquiring the models or ideas in a usable form.
Identifying and using the proper equipment for physical demonstration models.
In the case of mathematical models, computer simulations of analogous systems, visualization models, or statistical models, learning how to operate and manipulate the modeling environment or software.

These technical considerations are clarified in our discussions of each specific type of model.

Pedagogical Considerations
There are several things to keep in mind when using or creating modeling activities for instruction.

Keep the activity as interactive as possible. When you find that you're spending most of your time lecturing to the students about what to do or how things work, try to think of ways to get them working through ideas in groups, labs, interactive lectures, etc.
Including students in the development process, and/or providing opportunities for them to experiment with the model or modify it, can increase students' understanding of the model and its relationship to the physical world.
Creating opportunities for students to analyze and comment on the model's behavior increases their understanding of the relationships between different inputs and rates.
Creating opportunities for students to validate the model, i.e. compare model predictions to observations, increases their understanding of its limits.
Stress that models are not reality and that a model's purpose is to help bridge the gap between observations and the real world. An important reason to use a model is that you can perform experiments with models without harming the system of interest.
Make sure that students think about the underlying assumptions of a model and its domain of applicability, and try to ask questions that check their understanding. For example, simple exponential growth assumes that the percentage growth rate remains fixed, and in real-world systems this only applies for so long before the system becomes overstressed. Having students identify the underlying assumptions of a model and their domain of applicability can help them gain an appreciation of what a model can and cannot do.
Models can be used to introduce specific content. A model can introduce students to important terms as well as provide an environment to explore relevant processes.
Models can be used to explore "what-if" scenarios. "What if atmospheric CO2 doubles?" is a common example for a climate model.

Models can be used to explore the sensitivity of a system to variations in its different components.

Modeling Methodology for Physics Teachers (1998) provides general guidelines for using models in physics instruction that are quite applicable to geoscience instruction, particularly Box 2 of Hestenes' document. Box 7 of the same document outlines important aspects of the model construction process.

Other Resources

This related link provides guidelines for using mathematical functions in an introductory geoscience context; it is especially helpful in designing questions and activities for students. It is important that the learning environment and activities created around a model provide an interactive engagement experience. Try to avoid passive modeling activities where students simply watch a model simulation or animation without having to think or do anything.

Feedback
In a system where a transformation occurs, there are inputs and outputs. The inputs are the result of the environment's influence on the system, and the outputs are the influence of the system on the environment. Input and output are separated by a duration of time, as in before and after, or past and present.

In every feedback loop, as the name suggests, information about the result of a transformation or an action is sent back to the input of the system in the form of input data. If these new data facilitate and accelerate the transformation in the same direction as the preceding results, they are positive feedback; their effects are cumulative. If the new data produce a result in the opposite direction to previous results, they are negative feedback; their effects stabilize the system. In the first case there is exponential growth or decline; in the second there is maintenance of the equilibrium.

Positive feedback leads to divergent behavior: indefinite expansion or explosion (a running away toward infinity) or total blocking of activities (a running away toward zero). Each plus involves another plus; there is a snowball effect. The examples are numerous: chain reaction, population explosion, industrial expansion, capital invested at compound interest, inflation, proliferation of cancer cells. However, when minus leads to another minus, events come to a standstill. Typical examples are bankruptcy and economic depression.

In either case a positive feedback loop left to itself can lead only to the destruction of the system, through explosion or through the blocking of all its functions. The wild behavior of positive loops - a veritable death wish - must be controlled by negative loops. This control is essential for a system to maintain itself in the course of time. Negative feedback leads to adaptive, or goal-seeking, behavior: sustaining the same level, temperature, concentration, speed, direction. In some cases the goal is self-determined and is preserved in the face of evolution: the system has produced its own purpose (to maintain, for example, the composition of the air or the oceans in the ecosystem, or the concentration of glucose in the blood). In other cases man has determined the goals of the machines (automata and servomechanisms). In a negative loop every variation toward a plus triggers a correction toward the minus, and vice versa. There is tight control; the system oscillates around an ideal equilibrium that it never attains. A thermostat or a water tank equipped with a float are simple examples of regulation by negative feedback.


Homeostasis

Homeostasis: resistance to change

A person threatened by the environment (or informed of an approaching pleasure or danger) prepares for action. The body mobilizes reserves of energy and produces certain hormones, such as adrenalin, which prepare it for conflict or flight. This mobilization can be seen in familiar physiological reactions. In the presence of emotion, danger, or physical effort the heart beats faster and respiration quickens. The face turns red or pales and the body perspires. The individual may experience shortness of breath, cold sweats, shivering, trembling legs. These physiological manifestations reflect the efforts of the body to maintain its internal equilibrium. Action can be voluntary--to drink when one is thirsty, to eat when hungry, to put on clothing when cold, to open a window when one is too warm--or involuntary--shivering, sweating.

The internal equilibrium of the body, the ultimate gauge of its proper functioning, involves maintaining a constant concentration in the blood of certain molecules and ions that are essential to life, and maintaining other physical parameters, such as temperature, at specified levels. This is accomplished in spite of modifications of the environment. This extraordinary property of the body has intrigued many physiologists. In 1865 Claude Bernard noted, in his Introduction to Experimental Medicine, that the "constancy of the internal milieu was the essential condition to a free life." But it was necessary to find a concept that would make it possible to link together the mechanisms that effected the regulation of the body. The credit for this concept goes to the American physiologist Walter Cannon. In 1932, impressed by "the wisdom of the body," capable of guaranteeing with such efficiency the control of the physiological equilibrium, Cannon coined the word homeostasis from two Greek words meaning "to remain the same." Since then the concept of homeostasis has had a central position in the field of cybernetics.

Homeostasis is one of the most remarkable and most typical properties of highly complex open systems. A homeostatic system (an industrial firm, a large organization, a cell) is an open system that maintains its structure and functions by means of a multiplicity of dynamic equilibria rigorously controlled by interdependent regulation mechanisms. Such a system reacts to every change in the environment, or to every random disturbance, through a series of modifications of equal size and opposite direction to those that created the disturbance. The goal of these modifications is to maintain the internal balances. Ecological, biological, and social systems are homeostatic. They oppose change with every means at their disposal. If the system does not succeed in re-establishing its equilibria, it enters into another mode of behavior, one with constraints often more severe than the previous ones. This mode can lead to the destruction of the system if the disturbances persist.

Complex systems must have homeostasis to maintain stability and to survive. At the same time it bestows on these systems very special properties. Homeostatic systems are ultrastable: everything in their internal, structural, and functional organization contributes to the maintenance of the same organization. Their behavior is unpredictable -- "counterintuitive" according to Jay Forrester -- or contravariant: when one expects a determined reaction as the result of a precise action, a completely unexpected and often contrary action occurs instead. These are the gambles of interdependence and homeostasis; statesmen, business leaders, and sociologists know the effects only too well. For a complex system, to endure is not enough; it must adapt itself to modifications of the environment and it must evolve. Otherwise outside forces will soon disorganize and destroy it. The paradoxical situation that confronts all those responsible for the maintenance and evolution of a complex system, whether the system be a state, a large organization, or an industry, can be expressed in a simple question: how can a stable organization, whose goal is to maintain itself and endure, be able to change and evolve?

Control

Control is the operating mode of a control system, which includes two subsystems: the controlling system (a controller) C, and the controlled system S. They interact, but there is a difference between the action of C on S and the action of S on C. The controller C may change the state of the controlled system S in any way, including the destruction of S. The action of S on C is the formation of a perception of the system S in the controller C. This understanding of control is presented in Fig. 1.

In Fig. 2 we define the concept of perception. We see in the controller an agent, which is responsible for its actions, and a representation of the controlled system, which is an object whose states we identify with perceptions. The relation between representation and agent is described as a flow of information: the actions of the agent depend on this flow. The action of S on C is thus limited, in its effect, to changing S's representation in C, not the rest of the system. Hence the asymmetry of the control relation: C controls S, but S does not control C. The action of S on C is "filtered" through the representation: its effect on C cannot be greater than allowed by the changing state of the representation. Of course, two systems can be in a state of mutual control, but this is a different, more complex relation, which we still describe as a combination of two asymmetric control relations.

In many cases the controlled system can be seen in greater detail, as in Fig. 3. We describe the controlled system using some variables, and distinguish the variables directly affected by the controller from the variables observed by the controller in perception. The causal dependence of the observed variables on the affected variables is determined by the intrinsic dynamics of the system. We must also not forget the effect of uncontrollable disturbances on the observed variables. In Fig. 3 we have also made an addition to the controller: it now includes one more object which influences the agent, a goal. The agent compares the current representation with the goal and takes actions which tend to minimize the difference between them. This is known as purposeful behavior. It does not necessarily result from the existence of an objectified goal; the goal may be built into the system -- dissolved in it, so to say. But a typical control system includes the goal as an identifiable subsystem.

Even though the relation of control is asymmetric, it includes a closed loop. Viewed from the controller, the loop starts with its action and is followed by a perception, which is an action in the opposite direction: from the controlled to the controller. This aspect of the control relation is known as feedback.

The concept of control is the cornerstone of cybernetics. The basic control scheme defined in this node is the unit from which complicated cybernetic systems are created by nature and man. For this reason our definition is deliberately wide: we want our building unit to be as universal as possible. In particular, we see as special cases of control some systems which most present authors would probably not call control. The different components (e.g. perception, action) of the control loop we have enumerated can in the limit be absent. This leads to different "degenerate" or "limit" cases, which we would not usually see as control systems, but which still share many properties with the more elaborate control scheme. Specific instances of this control scheme can further differ in the presence or absence of different attributes or properties characterizing control systems: separability, contingency, evolvability, asymmetry, and so on. This leads to a very broad view of control in which many very important types of system can be classified, as shown by many examples of control systems. The abstract scheme can also be mapped onto other schemes for control by authors such as Ashby, Powers, and Meystel, and onto an older definition in terms of statements and commands.
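The scheme just described can be sketched in a few lines of code. The linear dynamics, gain, and disturbance range below are assumptions chosen only to make the loop visible, not part of the original scheme.

```python
# A goal-seeking control loop: the controller perceives an observed variable,
# compares the perception with its goal, and acts on the affected variable to
# shrink the difference, despite random disturbances.

import random

goal = 10.0       # desired value of the observed variable
affected = 0.0    # variable the controller acts on directly

for step in range(30):
    disturbance = random.uniform(-0.5, 0.5)
    observed = 0.8 * affected + disturbance   # intrinsic dynamics + disturbance
    perception = observed                     # the representation of S inside C
    action = 0.5 * (goal - perception)        # act to reduce the difference
    affected += action

print(f"observed = {observed:.2f} (goal {goal})")
# Despite the disturbances, the observed variable settles near the goal.
```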

Buffering, feedback, feedforward: mechanisms of control


While the perturbations resisted in a control relation can originate either inside the system (e.g. functioning errors or quantum fluctuations) or outside it (e.g. attack by a predator or changes in the weather), functionally we can treat them as if they all come from the same external source. To achieve its goal in spite of such perturbations, the system must have a way to block their effect on its essential variables. There are three fundamental methods of achieving such regulation: buffering, feedback, and feedforward (see Fig. 1).

Buffering: absorbing perturbations


Buffering is the passive absorption or damping of perturbations. For example, the wall of a thermostatically controlled room is a buffer: the thicker or better insulated it is, the less effect fluctuations in outside temperature have on the inside temperature. Other examples are the shock absorbers in a car, and a reservoir, which provides a regular water supply in spite of variations in rainfall. The mechanism of buffering is similar to that of a stable equilibrium: it dissipates perturbations without active intervention. The disadvantage is that it can only dampen the effects of uncoordinated fluctuations; it cannot systematically drive the system to a non-equilibrium state, or even keep it there. For example, however well insulated it is, a wall alone cannot maintain the room at a temperature higher than the average outside temperature.

Fig. 1: basic mechanisms of regulation, from left to right: buffering, feedforward and feedback. In each case, the effect of disturbances D on the essential variables E is reduced, either by a passive buffer B, or by an active regulator R.
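Buffering can be sketched as a first-order lag between the outside and inside temperatures; the time constants below are made-up values, with a "thicker" wall simply meaning a larger time constant.

```python
# Passive buffering as a first-order lag: the inside temperature relaxes
# toward the outside temperature with time constant tau (the "wall").

def inside_temperature(outside_series, tau, T_start=20.0, dt=1.0):
    """Relax inside temperature toward the outside with time constant tau."""
    T = T_start
    for T_out in outside_series:
        T += dt * (T_out - T) / tau
    return T

cold_snap = [-5.0] * 10                          # ten hours at -5 C outside
print(inside_temperature(cold_snap, tau=5.0))    # thin wall: room cools a lot
print(inside_temperature(cold_snap, tau=50.0))   # thick wall: room barely cools
# The buffer only slows the drift toward the outside temperature; with no
# heater it can never hold the room above the long-run outside average.
```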

Feedforward: anticipating perturbations


Feedback and feedforward both require action on the part of the system to suppress or compensate for the effect of a fluctuation; for example, a thermostat will counteract a drop in temperature by switching on the heating. Feedforward control suppresses the disturbance before it has had the chance to affect the system's essential variables. This requires the capacity to anticipate the effect of perturbations on the system's goal; otherwise the system would not know which external fluctuations to consider as perturbations, or how to effectively compensate for their influence before it affects the system. This in turn requires that the control system be able to gather early information about these fluctuations. For example, feedforward control might be applied to the thermostatically controlled room by installing a temperature sensor outside the room, which would warn the thermostat about a drop in the outside temperature, so that it could start heating before the drop affects the inside temperature. In many cases, such advance warning is difficult to implement, or simply unreliable. For example, the thermostat might start heating the room in anticipation of outside cooling, without being aware that someone in the room has switched on the oven, producing more than enough heat to offset the drop in outside temperature. No sensor or anticipation can ever provide complete information about the future effects of an infinite variety of possible perturbations, and therefore feedforward control is bound to make mistakes. With a good control system, the resulting errors may be few, but the problem is that they accumulate in the long run, eventually destroying the system.

Feedback: correcting perturbations after the fact


The only way to avoid this accumulation is to use feedback, that is, to compensate for an error or deviation from the goal after it has happened. Feedback control is therefore also called error-controlled regulation, since the error is used to determine the control action, as with the thermostat, which samples the temperature inside the room and switches on the heating whenever the reading falls below the goal temperature by more than a set margin. The disadvantage of feedback control is that it must first allow a deviation or error to appear before it can take action, since otherwise it would not know which action to take. Feedback control is therefore by definition imperfect, whereas feedforward could in principle, though not in practice, be made error-free. The reason feedback control can still be very effective is continuity: deviations from the goal usually do not appear all at once; they tend to increase slowly, giving the controller the chance to intervene at an early stage when the deviation is still small. For example, a sensitive thermostat may start heating as soon as the temperature has dropped one tenth of a degree below the goal temperature, and switch off the heating as soon as the temperature has again reached the goal, thus keeping the temperature within a very limited range. This very precise adaptation explains why thermostats in general do not need outside sensors and can work purely in feedback mode. Feedforward is still necessary in those cases where perturbations are either discontinuous or develop so quickly that any feedback reaction would come too late. For example, if you see someone pointing a gun in your direction, you had better move out of the line of fire immediately, instead of waiting until you feel the bullet making contact with your skin.

Reference: Heylighen F. & Joslyn C. (2001): "Cybernetics and Second-Order Cybernetics", in: R.A. Meyers (ed.), Encyclopedia of Physical Science & Technology, Vol. 4 (3rd ed.), Academic Press, New York, pp. 155-170.
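Here is a minimal sketch of error-controlled regulation, assuming an invented heater power, heat-loss rate, and switching margin; none of these values come from the text.

```python
# A thermostat working purely in feedback mode: it samples the room
# temperature each minute and heats whenever the measured error exceeds
# a small margin. All rates are illustrative assumptions.

goal, T, T_outside = 20.0, 20.0, 5.0

for minute in range(120):
    heating = T < goal - 0.1           # error-controlled decision rule
    if heating:
        T += 0.5                       # heater input per minute (assumed)
    T -= 0.02 * (T - T_outside)        # heat loss to the outside (assumed rate)

print(f"after 2 hours: T = {T:.2f} C (goal {goal} C)")
# The temperature hovers in a narrow band around the goal: the error must
# first appear before the controller can correct it.
```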

Special Cases of Control


Our scheme defining the phenomenon of control contains a number of distinct components and properties. These components can in the limit be absent (have the value zero). The resulting schemes can be seen as special cases ("limit" or "degenerate" cases) of the more general scheme; the presence of the other components means that they still inherit most of the properties of control from the more general scheme.

The action part may be absent or ignored, so that the control relation is reduced to the formation in the controller of a perception of the controlled system. An example of such a relation can be seen in metalanguage and metatheory. If the perception part is absent or grossly inadequate, we face a situation of blind control. In a primitive control system there may be no representation of the controlled system S as distinct from S itself; information flows directly from S to the controller C. The classical harmonic oscillator is an example of such a control system. The latter situation is a specific case of the more general phenomenon where the asymmetry between controller and controlled is absent, that is to say, where the action of controller C on controlled S is not fundamentally different from the action of S on C. Other examples of such symmetric interaction include communication between two control systems (interaction without fixed goals), conflict (interaction between two control systems with distinct goals), and cooperation or bonds (interaction between two control systems with the same goals).

Metalanguage
In the scheme of control an important role is played by the representation of the controlled system in the controlling device. When the controlled system consists of texts in some language L, the language used to make their representations in the controlling system -- call it L' -- is referred to as a metalanguage. The creation of a metalanguage is a metasystem transition (MST) in which we ignore the controlling subsystem by abstraction. However, control is still there when the languages are used. Typically it is exercised by a human being, both on the level of L and of L'. The L' person uses the rules expressed in L' to guide and correct the manipulation of texts in L by the L person.

An example: algebraic notation is a metalanguage L' with respect to the language L of arithmetic. L includes a notation for numbers and the four arithmetic operations, which can be executed by the L person. The representation of numbers in L' is the same as in L, but L' also includes letters as many-to-one representations of unspecified numbers. The algebraic person L' can do more than the arithmetic person L: he manipulates algebraic formulas and, in particular, knows how to solve equations. High school algebra is a metatheory with respect to arithmetic. But it is not the top of the hierarchy: modern algebra is a metatheory with respect to high school algebra, as well as with respect to many other theories. For example, consider the following three-level hierarchy. On the ground level (level 0) we have a finite set of symbols and strings of symbols (comparable to the countable objects of arithmetic). On the next level (control level 1) there is a device that performs various operations on strings: concatenation, permutation, etc. (compare these to arithmetic operations). Then on level 2 we put group theory (a part of modern algebra) in its application to the permutations on level 1.

Apparently David Hilbert was the first to use the prefix meta (from the Greek for "over") in the sense we use it in metalanguage, metatheory, and now metasystem. He introduced the term metamathematics to denote a mathematical theory of mathematical proof. In terms of our control scheme, Hilbert's MST has a non-trivial representation: a mapping of proofs in the form of usual mathematical texts (in a natural language with formulas) onto a set of texts in a formal logical language, which makes it possible to treat proofs as precisely defined mathematical objects. This done, the rest is as usual: the controlled system is a mathematician who proves theorems; the controlling person is a metamathematician who translates texts into the formal logical language and controls the work of the mathematician by checking the validity of his proofs and possibly generating proofs mechanically with a computer. The emergence of the metamathematician is an MST.

We create a metalanguage and a metatheory in order to describe and examine some class of languages and theories. A metatheory M includes a representation of the examined theory T, and it can be seen as the perception in a control system where T is controlled by M. The result of examining theories like T would typically be the classification, modification, and creation anew of theories from the class to which T belongs. This is the action part of the control relation between T and M. This part, though, does not enter into the meaning of the terms "metalanguage" and "metatheory": it may be absent. Hence we deal here with the case of a control system devoid of action.

The Harmonic Oscillator as a Control System


The classical harmonic oscillator is an example of a primitive control system in which there is no representation of the controlled system S as distinct from S itself; information flows directly from S to the controller C. A hydrogen molecule can be treated as a harmonic oscillator. Here the controlled system S consists of two protons at a distance x from each other. The agent embedded in S is the relative momentum of the protons in the center-of-mass coordinate system; it causes the protons to move with respect to each other. The controller C is the electron shell, which holds the protons together at a certain equilibrium distance x0 by exerting on them a force that acts against the Coulomb repulsion. (This force is quantum-mechanical in origin, but this does not prevent us from treating the oscillations classically.) The distance x is a representation of the two-proton controlled system for the controller, the shell, but there is no part of the controller that stores x: it is a feature of the proton pair, i.e. the controlled system. Also, there is no separate subsystem that stores the goal of the controller, but it functions as if the goal were to keep the distance x close to x0.

We shall now see that the control processes in a classical oscillator result from the interplay of the three fundamental factors of classical mechanics: coordinate, momentum, and force. The three arrows in our control scheme -- action, perception, and information -- are described by the three familiar equations of the classical oscillator. Nothing happens as long as the protons are at the equilibrium distance x0 and at rest with respect to each other. Suppose some uncontrollable disturbance pushes one of the protons, passing to the controlled system some momentum p. The coordinate x starts changing according to the equation

dx/dt = p/m   (perception)

In terms of the control scheme this is perception, because the agent p of the controlled system determines the representation. The change in the representation informs the agent F of the controller, the shell, which is assumed to be proportional to the change in x and opposite in direction:

F = -k(x - x0)   (information)

This force acts on the proton pair, causing the momentum to change according to the equation

dp/dt = F   (action)

This begins to slow the motion of the protons and finally reverses it. In this manner unceasing oscillations follow, which keep the distance x close to x0 (assuming that the initial push was within certain limits).
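The three equations can be integrated numerically to watch the loop in action. The snippet below uses a naive semi-implicit Euler step with arbitrary unit constants; only the equations themselves come from the text.

```python
# The oscillator's control loop, integrated step by step:
# perception (dx/dt = p/m), information (F = -k(x - x0)), action (dp/dt = F).

m, k, x0 = 1.0, 1.0, 1.0     # arbitrary units
x, p = x0, 0.5               # a disturbance has given the system momentum p
dt = 0.01

for step in range(700):
    F = -k * (x - x0)        # information: the shell's restoring force
    p += F * dt              # action: the force changes the momentum
    x += (p / m) * dt        # perception: the momentum changes the representation x
    if step % 100 == 0:
        print(f"t = {step * dt:4.2f}   x = {x:+.3f}")
# x oscillates around x0: the "controller" keeps the distance near its goal.
```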

Feedback
Although stocks and flows are both necessary and sufficient for generating dynamic behavior, they are not the only building blocks of dynamical systems. More precisely, the stocks and flows in real-world systems are part of feedback loops, and the feedback loops are often joined together by nonlinear couplings that can cause counterintuitive behavior. From a system dynamics point of view, a system can be classified as either "open" or "closed." Open systems have outputs that respond to, but have no influence upon, their inputs. Closed systems, on the other hand, have outputs that both respond to, and influence, their inputs. Closed systems are thus aware of their own performance and influenced by their past behavior, while open systems are not. Of the two types of systems, the most prevalent and important, by far, are closed systems.

As shown in Figure 1, the feedback path for a closed system includes, in sequence, a stock, information about the stock, and a decision rule that controls the change in the flow. Figure 1 is a direct extension of the simple stock and flow configuration shown previously, except that an information link is added to close the feedback loop. The information link "transmits" information about the state (or "level") of the stock variable back to the flow variable. This information is used to decide how to alter the flow setting.

Figure 1: Simple System Dynamics Stock-Flow-Feedback Loop Structure.

It is important to note that the information about a system's state that is sent out by a stock is often delayed and/or distorted before it reaches the flow (which closes the loop and affects the stock). Figure 2, for example, shows a more sophisticated stock-flow-feedback loop structure in which information about the stock is delayed in a second stock, representing the decision maker's perception of the stock (i.e., Perceived_Stock_Level), before being passed on. The decision maker's perception is then modified by a bias to form his or her opinion of the stock (i.e., Opinion_Of_Stock_Level). Finally, the decision maker's opinion is compared to his or her desired level of the stock, which, in turn, influences the flow and alters the stock. Given the fundamental role of feedback in the control of closed systems, an important rule in system dynamics modeling can be stated: every feedback loop in a system dynamics model must contain at least one stock.

Figure 2: More Sophisticated Stock-Flow-Feedback Loop Structure
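The Figure 2 structure can be sketched as a small simulation. The smoothing time, bias, and adjustment gain below are illustrative parameters, not values from the text; the point is the qualitative effect of the perception delay.

```python
# Stock-flow-feedback loop with a delayed, biased perception of the stock:
# the flow is set by comparing the decision maker's opinion with the goal.

stock = perceived = 0.0
desired, bias = 100.0, 1.0      # decision maker's goal and perception bias
dt, smoothing_time = 1.0, 5.0   # perception delay (information lag)

peak = 0.0
for t in range(80):
    perceived += dt * (stock - perceived) / smoothing_time  # delayed perception
    opinion = perceived * bias                              # biased opinion of the stock
    flow = 0.2 * (desired - opinion)                        # decision rule sets the flow
    stock += flow * dt                                      # the flow accumulates
    peak = max(peak, stock)

print(f"final stock = {stock:.1f}, peak = {peak:.1f} (goal {desired})")
# The perception delay makes the stock overshoot the goal before settling --
# a classic source of oscillation in feedback systems.
```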

Positive and Negative Loops


Closed systems are controlled by two types of feedback loops: positive loops and negative loops. Positive loops portray self-reinforcing processes wherein an action creates a result that generates more of the action, and hence more of the result. Anything that can be described as a vicious or virtuous circle can be classified as a positive feedback process. Generally speaking, positive feedback processes destabilize systems and cause them to "run away" from their current position. Thus, they are responsible for the growth or decline of systems, although they can occasionally work to stabilize them. Negative feedback loops, on the other hand, describe goal-seeking processes that generate actions aimed at moving a system toward, or keeping a system at, a desired state. Generally speaking, negative feedback processes stabilize systems, although they can occasionally destabilize them by causing them to oscillate.

Causal Loop Diagramming


In the field of system dynamics modeling, positive and negative feedback processes are often described via a simple technique known as causal loop diagramming. Causal loop diagrams are maps of cause and effect relationships between individual system variables that, when linked, form closed loops. Figure 3, for example, presents a generic causal loop diagram. In the figure, the arrows that link each variable indicate places where a cause and effect relationship exists, while the plus or minus sign at the head of each arrow indicates the direction of causality between the variables when all the other variables (conceptually) remain constant. More specifically, the variable at the tail of each arrow causes a change in the variable at the head of the arrow, ceteris paribus, in the same direction (in the case of a plus sign) or in the opposite direction (in the case of a minus sign).

Figure 3: Generic causal loop diagram

The overall polarity of a feedback loop in a causal loop diagram -- that is, whether the loop itself is positive or negative -- is indicated by a symbol in its center. A large plus sign indicates a positive loop; a large minus sign indicates a negative loop. In Figure 3 the loop is positive and defines a self-reinforcing process. This can be seen by tracing the effect of an imaginary external shock as it propagates around the loop. For example, if a shock were to suddenly raise Variable A in Figure 3, Variable B would fall (i.e., move in the opposite direction as Variable A), Variable C would fall (i.e., move in the same direction as Variable B), Variable D would rise (i.e., move in the opposite direction as Variable C), and Variable A would rise even further (i.e., move in the same direction as Variable D).

By contrast, Figure 4 presents a generic causal loop diagram of a negative feedback loop structure. If an external shock were to make Variable A fall, Variable B would rise (i.e., move in the opposite direction as Variable A), Variable C would fall (i.e., move in the opposite direction as Variable B), Variable D would rise (i.e., move in the opposite direction as Variable C), and Variable A would rise (i.e., move in the same direction as Variable D). The rise in Variable A after the shock propagates around the loop acts to stabilize the system -- i.e., move it back towards its state prior to the shock. The shock is thus counteracted by the system's response.

Figure 4: Generic causal loop diagram of a negative feedback loop structure

Occasionally, causal loop diagrams are drawn in a manner slightly different from those shown in Figure 3 and Figure 4. More specifically, some system dynamicists prefer to place the letter "S" (for Same direction) instead of a plus sign at the head of an arrow that defines a positive relationship between two variables. The letter "O" (for Opposite direction) is used instead of a minus sign at the head of an arrow to define a negative relationship between two variables. To define the overall polarity of a loop, system dynamicists often use the letter "R" (for "Reinforcing") or an icon of a snowball rolling down a hill to indicate a positive loop. To indicate a negative loop, the letter "B" (for "Balancing"), the letter "C" (for "Counteracting"), or an icon of a teeter-totter is used. Figure 5 illustrates these different causal loop diagramming conventions.

Figure 5: Alternative Causal Loop Diagramming Conventions
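The shock-tracing procedure used above for Figures 3 and 4 can be written out as a short routine. The sign lists are read off the two generic loops; the function itself is an illustrative helper, not standard system dynamics software.

```python
# Push a shock into Variable A and propagate its direction of change
# through each signed link around the loop.

def trace_shock(names, signs, shock="rise"):
    """Propagate a shock in names[0] around a loop of +1/-1 link signs."""
    direction = +1 if shock == "rise" else -1
    for name, sign in zip(names, signs + [None]):
        print(f"  {name} {'rises' if direction > 0 else 'falls'}")
        if sign is not None:
            direction *= sign
    # If the variable ends up moving the same way it started, the loop is
    # positive (reinforcing); if the opposite way, negative (balancing).

print("Figure 3 (positive loop):")
trace_shock(["A", "B", "C", "D", "A again"], [-1, +1, -1, +1])
print("Figure 4 (negative loop):")
trace_shock(["A", "B", "C", "D", "A again"], [-1, -1, -1, +1], shock="fall")
```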

In order to make the notion of feedback a little more salient, Figure 6 to Figure 17 present a collection of positive and negative loops. As these loops are shown in isolation (i.e., disconnected from the other parts of the systems to which they belong), their individual behaviors are not necessarily the same as the overall behaviors of the systems from which they are taken.

Positive Feedback Examples

Population Growth/Decline: Figure 6 shows the feedback mechanism responsible for the growth of an elephant herd via births. In this simple example we consider two system variables: Elephant Births and Elephant Population. For a given elephant herd, if the birth rate of the herd were to increase, the Elephant Population would increase. In the same way, if, over time, the Elephant Population of the herd were to increase, the birth rate of the herd would increase. Thus the Elephant Birth rate drives the Elephant Population, which drives the Elephant Birth rate: positive feedback.

National Debt: Figure 7 is a positive loop that shows the growth in the national debt due to the compounding of interest payments. First, we note that an increase in the amount of interest paid per year on the national debt (itself a cost within the federal budget) will cause the overall national debt to increase. In the same way, an increase in the level of national debt will increase the amount of interest paid each year.

Figure 6: Positive Loop Responsible for the Growth in an Elephant Herd via Births

Figure 7: Positive Loop Showing Growth in the National Debt Due to Compounding Interest Payments

Arms Race: Figure 8 shows a generic arms race between Country A and Country B. In its simplest form, an "arms race" can be described as a self-sustaining competition for military superiority, driven by the perception that one's adversary has equal or greater military strength. If Country A moves to increase its military capability, Country B interprets this as a threat and responds in kind with its own increase in military capability. Country B's action, in turn, causes Country A to feel more threatened, so Country A moves to further increase its military capability.

Figure 8: Arms Race is a Positive Feedback Process.

Bank Panic: A common scene during the Great Depression in the 1930s was that of a panic-stricken crowd standing outside their local bank waiting to withdraw what remained of their savings. Figure 9 shows the feedback mechanism responsible for the spiraling decline of the banking system during this period. From the diagram, we see that the frequency of bank failures increases public concern and the fear of losing one's money. In this case, the two system variables "move" in the same (S) or positive (+) direction. The relationship between the fear of not being able to withdraw money and the rate at which bank withdrawals are made is also positive. The relationship between withdrawals and bank health is negative (-) or opposite (O): if the rate of bank withdrawals increases, the health of the bank decreases as capital reserves are drawn down. The relationship between the banking industry's health and the rate of bank failures is also negative: if the health of the banking industry increases, the number of bank failures per year will decrease. This vicious cycle was clearly seen during the 1930s. An overall economic downturn caused the rate of bank failures to increase. As more banks failed, the public's fear of not being able to withdraw their own money increased. This, in turn, prompted many to withdraw their savings from banks, which further reduced the banking industry's capital reserves. This caused even more banks to fail.

Figure 9: Bank panic is a positive feedback process.

Figure 10 depicts three interacting positive feedback loops that are thought to be responsible for the growth in students taking drugs in high school.

Figure 10: Feedback structure responsible for growth in high school drug use

Negative Feedback Examples

Population Growth/Decline: In Figure 6 we saw how an elephant population and its corresponding birth rate form a positive feedback loop. Now we consider the other half of the equation, that is, the feedback structure between Elephant Population and Elephant Death rate. Figure 11 shows the negative feedback process responsible for the decline of an elephant herd via deaths. If the Elephant Death rate increases, the Elephant Population will decrease; a negative sign indicates this counteracting behavior. The causal influence of Elephant Population on Elephant Death rate is just the opposite: an increase in the number of elephants in the herd means that a proportionally larger number of elephants will die each year, i.e., an increase in the herd's death rate. A plus sign indicates this complementary behavior. These two relationships combine to form a negative feedback loop.

Figure 11: Elephant population negative feedback loop.
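A minimal sketch of this loop acting by itself makes its behavior concrete: with deaths proportional to the population, the herd decays geometrically toward zero, a point taken up again under "Implicit and Explicit Goals" below. The 4% death fraction is an assumed, illustrative value.

```python
# A minimal sketch of the death loop in Figure 11 acting alone.

population = 100.0
death_fraction = 0.04  # deaths per elephant per year (assumed)
for year in range(1, 11):
    deaths = death_fraction * population  # larger herd -> more deaths
    population -= deaths                  # more deaths -> smaller herd
    print(f"year {year:2d}: population = {population:.1f}")
# Left unchecked, the loop drives the herd toward its implicit goal: zero.
```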

Figure 12 and Figure 13 are two simple and familiar examples of negative feedback processes. Figure 12 shows the negative feedback process responsible for the dissipation of Itching due to Scratching. Figure 13 considers the negative feedback involved in Eating to reduce Hunger. An increase in one's Hunger causes a person to eat more food. An increase in the rate of food consumption, in turn, reduces Hunger.

Figure 12: Scratching an itch and negative feedback

Figure 13: Dissipation of hunger

Law Enforcement: Figure 14 depicts a negative feedback process that maintains a balance between the number of drug dealers and the number of police officers in a neighborhood. An increase in the number of drug dealers in a neighborhood will prompt local officials to increase the number of police officers as a countermeasure. As the number of police officers increases, more arrests are made and the number of drug dealers is reduced.

Figure 14: Neighborhood drug intervention negative feedback

Car Pools: Figure 15 shows a negative feedback process that maintains a balance between car pools and gasoline consumption. An increase in gasoline consumption raises the price of gasoline (as supply is drawn down). A higher gasoline price pushes many individual motorists to join car pools, which reduces the total number of vehicles on the road. This, in turn, reduces gasoline consumption.

Figure 15: Gasoline consumption negative feedback
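Both Figure 14 and Figure 15 describe balancing structures. A rough sketch of the Figure 14 loop is below; every coefficient in it is an assumed, illustrative value, chosen only to show the two stocks settling into balance rather than growing without limit.

```python
# A rough sketch of the balancing structure in Figure 14: police
# staffing tracks the number of dealers, and arrests remove dealers.

dealers, police = 50.0, 10.0
for year in range(1, 11):
    police += 0.5 * (dealers - police)  # staffing adjusts toward dealer count
    dealers += 8.0 - 0.2 * police       # new dealers arrive; arrests remove some
    dealers = max(dealers, 0.0)
    print(f"year {year:2d}: dealers = {dealers:5.1f}, police = {police:5.1f}")
# The two stocks settle toward a balance instead of growing without limit.
```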

Implicit and Explicit Goals


The negative feedback loops presented in Figure 11 through Figure 15 are, in a sense, misleading, because the goals they seek are implicit rather than explicit. For example, the implicit goal of the loop in Figure 11 is zero elephants. That is, if the loop were to act in isolation for a substantial period of time, eventually all of the elephants would die and the population would be zero. The same sort of logic applies to Figure 12 and Figure 13, in which the loops implicitly seek goals of zero itching and zero hunger, respectively. The logic gets even murkier in the case of Figure 14 and Figure 15. In Figure 14, there is an implicit goal of an "acceptable" or "tolerable" level of drug dealers in the neighborhood, which may or may not be zero. In Figure 15, there is an implicit goal of an acceptable or tolerable gasoline price, which is certainly a lower price rather than a higher price, but is also (realistically) not zero.

Figure 16: Generic negative feedback structure with explicit goal

An alternative and often more desirable way to represent negative feedback processes via causal loop diagrams is to explicitly identify the goal of each loop. Figure 16, for example, shows a causal loop diagram of a generic negative feedback structure with an explicit goal. The logic of this loop says that any time a discrepancy develops between the state of the system and the desired state of the system (i.e., the goal), corrective action is called forth that moves the system back into line with its desired state. A more concrete example of a negative feedback structure with an explicit goal is shown in Figure 17. In the figure, a distinction is drawn between the actual number of elephants in a herd and the desired number of elephants in the herd (presumably determined from knowledge of the carrying capacity of the environment supporting the elephants). If the actual number of elephants begins to exceed the desired number, corrective action -- i.e., hunting -- is called forth. This action reduces the size of the herd and brings it into line with the desired number of elephants.

Figure 17: Example of negative feedback structure with an explicit goal
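The goal-seeking logic of Figures 16 and 17 can be sketched in a few lines. The herd sizes and the adjustment fraction below are assumed, illustrative values; the point is that the loop converges on the explicit goal rather than on zero.

```python
# A minimal sketch of the explicit-goal loop in Figure 17: corrective
# action (hunting) is proportional to the discrepancy between the
# actual herd and the desired herd.

desired_herd = 100.0
herd = 150.0
adjustment_fraction = 0.25  # share of the discrepancy removed per year (assumed)

for year in range(1, 11):
    discrepancy = herd - desired_herd
    hunting = adjustment_fraction * max(discrepancy, 0.0)  # corrective action
    herd -= hunting
    print(f"year {year:2d}: herd = {herd:.1f}")
# The herd converges on the desired number (100), not on zero.
```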

Examples of Interacting "Nests" of Positive and Negative Loops

In system dynamics modeling, causal loop diagrams are often used to display "nests" of interacting positive and negative feedback loops. This is usually done when a system dynamicist wants to present the basic ideas embodied in a model in a manner that is easily understood, without discussing the model in detail. As Figure 18 and Figure 19 show, when causal loop diagrams are used in this fashion, things can get rather complicated. Figure 18 is a causal loop diagram of a system dynamics model created to examine issues related to profitability in the paper and pulp industry. This figure has several features worth mentioning. The first is that the authors have numbered each of the positive and negative loops so that they can be referred to easily in a verbal or written discussion. The second is that the authors have taken great care to choose variable names that have a clear sense of direction and real-life counterparts in the actual system. The last and most important feature is that, although the figure provides a sweeping overview of the feedback structure underlying profitability problems in the paper and pulp industry, it cannot be used to determine the dynamic behavior of the model (or of the actual system). In other words, it is impossible to accurately think through, or mentally simulate, the dynamics of the paper and pulp system from Figure 18 alone.

Figure 18: Causal Loop Diagram of a Model Examining Profitability in the Paper and Pulp Industry

Figure 19 is a causal loop diagram of a system dynamics model created to examine forces that may be responsible for the growth or decline of life insurance companies in the United Kingdom. As with Figure 18, several of this figure's features are worth mentioning. The first is that the model's negative feedback loops are identified by "C's," which stand for "Counteracting" loops. The second is that double slashes are used to indicate places where there is a significant delay between causes (i.e., variables at the tails of arrows) and effects (i.e., variables at the heads of arrows); this is a common causal loop diagramming convention in system dynamics. The third is that thicker lines are used to identify the feedback loops and links on which the author wishes the audience to focus, also a common system dynamics diagramming convention. Last, as with Figure 18, it is clear that a decision maker would find it impossible to think through the dynamic behavior inherent in the model from inspection of Figure 19 alone.

Figure 19: Causal Loop Diagram of a Model Examining the Growth or Decline of a Life Insurance Company .

Archetypes
An area of the field of system dynamics or, more precisely, of the much broader field of "systems thinking," that has recently received a great deal of attention is archetypes. Archetypes are generic feedback loop structures, presented via causal loop diagrams, that seem to describe many situations that frequently arise in public and private sector organizations. Archetypes are thought to be useful when a decision maker notices that one of them is at work in his or her organization. Presumably, the decision maker can then attack the root causes of the problem from a holistic and systemic perspective. Currently, nine archetypes have been identified and cataloged by systems thinkers:
o Balancing Process with Delay
o Limits to Growth
o Shifting the Burden
o Eroding Goals
o Escalation
o Success to the Successful
o Tragedy of the Commons
o Fixes that Fail
o Growth and Underinvestment

Recent efforts, however, have suggested that the number can be reduced to four:
o Growth Intended-Stagnation/Decline Achieved
o Control Intended-Unwanted Growth Achieved
o Control Intended-Compromise Achieved
o Growth Intended at Expense to Others

No matter what the true number of archetypes is or will be, however, the central question remains unanswered: how successful are archetypes in helping decision makers solve problems in their organizations?

Problems with Causal Loop Diagrams

Causal loop diagrams are an important tool in the field of system dynamics modeling. Almost all system dynamicists use them, and many system dynamics software packages support their creation and display. Although some system dynamicists use causal loop diagrams for "brainstorming" and model creation, they are particularly helpful when used to present important ideas from a model that has already been created. The real problem with causal loop diagrams and archetypes, then, occurs when a decision maker tries to use them, in lieu of simulation, to determine the dynamics of a system. Causal loop diagrams are inherently weak because they do not distinguish between information flows and conserved (non-information) flows. As a result, they can blur direct causal relationships between flows and stocks. Further, it is impossible, in principle, to determine the behavior of a system solely from the polarity of its feedback loops, because it is a system's stocks and flows, not feedback polarity alone, that create its dynamic behavior. Finally, since causal loop diagrams do not reveal a system's parameters, net rates, "hidden loops," or nonlinear relationships, their usefulness as a tool for predicting and understanding dynamic behavior is further weakened. The conclusion is that simulation is essential if a decision maker is to gain a complete understanding of the dynamics of a system.
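The point about loop polarity can be made concrete with a small sketch. Both runs below contain exactly one negative feedback loop, so a causal loop diagram would draw them identically, yet adding a perception delay turns smooth goal-seeking into overshoot and oscillation. All parameters are assumed, illustrative values.

```python
# Two systems with identical loop polarity but different dynamics.

def goal_seek(delay_steps, steps=12, dt=0.25):
    """One negative loop: correction proportional to (goal - state).
    With a perception delay, the same loop overshoots and oscillates."""
    goal, state, gain = 100.0, 0.0, 1.0
    perceived = [state] * (delay_steps + 1)  # delayed view of the state
    trace = []
    for _ in range(steps):
        correction = gain * (goal - perceived[0])  # acts on old information
        state += correction * dt
        perceived = perceived[1:] + [state]
        trace.append(round(state, 1))
    return trace

print(goal_seek(delay_steps=0))  # smooth approach to the goal of 100
print(goal_seek(delay_steps=4))  # overshoots past 100, then oscillates
```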
