
Sakarya University
The Institute of Science and Technology
Department of Industrial Engineering

Engineering, Manufacturing Systems, Agents and Agent Technologies

For the Lecture "System and Agent Systems Engineering"

Prof. Dr. Harun TAŞKIN
Mehmet Bilgehan ERDEM
D095006005
1. Index
2. Engineering
   2.1. Analyzing the problem
   2.2. Designing a solution
   2.3. Bringing it to life
3. Types of Manufacturing
   3.1. Job-Shop and Batch Production
   3.2. Mass Production
   3.3. Continuous Production
4. Intelligent Agents
   4.1. Agent Definition
   4.2. Rational Agent
   4.3. Structure of Agents
   4.4. Classes of Intelligent Agents
      4.4.1. Simple Reflex Agents
      4.4.2. Model-based reflex agents
      4.4.3. Goal-based agents
      4.4.4. Utility-based agents
      4.4.5. Learning agents
   4.5. Features of Intelligent Agents
   4.6. Agents and Environment Types
      4.6.1. Observable vs. partially observable
      4.6.2. Deterministic vs. stochastic
      4.6.3. Episodic vs. sequential
      4.6.4. Static vs. dynamic
      4.6.5. Discrete vs. continuous
      4.6.6. Single-agent vs. multiple agent
      4.6.7. Overview of environments
   4.7. Software Development Paradigms (Object-oriented Approach)
      4.7.1. The Agent-Oriented Approach
      4.7.2. O-O versus A-O
      4.7.3. How is an Agent different from other software?
5. Conclusions
   5.1. Discussion
6. References

2. Engineering
The accomplishments of engineering can be seen in nearly every aspect of our daily lives, from
transportation to communications, and from entertainment to health care. Although each of these applications is
unique, the process of engineering is largely independent of the application. This process begins by carefully
analyzing a problem, continues by intelligently designing a solution for that problem, and ends by efficiently
transforming that design into physical reality.

2.1. Analyzing the problem


Defining the problem is the first and most critical step of the problem analysis. To best approach a solution,
the problem must be well-understood and the guidelines or design considerations for the project must be clear.
For example, in the creation of a new automobile, the engineers must know if they should design for fuel
economy or for brute power. Many questions like this arise in every engineering project, and they must all be
answered at the very beginning if the engineers are to work efficiently toward a solution. When these issues are
resolved, the problem must be thoroughly researched. This involves searching technical journals and closely
examining solutions of similar engineering problems. The purpose of this step is twofold. First, it allows the
engineer to make use of the tremendous body of work done by other engineers. Second, it assures the engineer
that the problem has not already been solved. Either way, the review allows him or her to approach the
problem intelligently, and perhaps avoid a substantial waste of time or legal conflicts in the future.

2.2. Designing a solution


Once the problem is well-understood, the process of designing a solution begins. This process typically
starts with brainstorming, a technique by which members of the engineering team suggest a number of possible
general approaches for the problem. In the case of an automobile, perhaps conventional gas, solar, and electric
power would be suggested to propel the vehicle. Generally, one of these is then selected as the primary candidate
for further development. Occasionally, however, if time permits and several ideas stand out, the team may elect
to pursue multiple solutions to the problem. More refined designs of these solutions/systems then “compete” and
the best of those is chosen. Once a general design or technology is selected, the work is sub-divided and various
team members assume specific responsibilities. In the automobile, for example, the mechanical engineers in the
group would tackle such problems as the design of the transmission and suspension systems. They may also
handle air flow and climate-control concerns to ensure that the vehicle is both aerodynamic and comfortable to
ride in. Electrical engineers, on the other hand, would concern themselves with the ignition system and the
various displays and electronic gauges. They would also be responsible for the design of the communication
system which links all of the car’s sub-systems together. In any case, each of these engineers must design one
aspect which operates in harmony with every other aspect of the system.

2.3. Bringing it to life


Once the design is complete, a prototype or preliminary working model is generally built. The primary
function of the prototype is to demonstrate and test the operation of the device. For this reason, its cosmetics are
typically of little concern, as they will likely change by the time the device reaches the market. The prototype
stage is where the device undergoes extensive testing to reveal any bugs or problems with the design. Especially
with complex systems, it is often difficult to predict (on paper) where problems with the design may occur. If
one aspect of the system happens to fail too quickly or does not function at all, it is closely analyzed and that
sub-system is redesigned and retested (both on its own and within the complete system). This process is repeated
until the entire system satisfies the design requirements. Once the prototype is in complete working order and the
engineers are satisfied with its operation, the device goes into the production stage. Here, details such as
appearance, ease of use, availability of materials, and safety are given attention and generally result in additional
final design changes.

3. Types of Manufacturing
Although there are many ways to categorize manufacturing, three general categories stand out (which
probably have emerged from production planning and control lines of thought):
1. Job-shop production. A job shop produces in small lots or batches.

2. Mass production. Mass production involves machines or assembly lines that manufacture discrete units repetitively.

3. Continuous production. The process industries produce in a continuous flow.

Primary differences among the three types center on output volume and variety and process flexibility.
Table 1 matches these characteristics with the types of manufacturing and gives examples of each type. The
following discussion begins by elaborating on Table 1. Next are comments on hybrid and uncertain types of
manufacturing. Finally, five secondary characteristics of the three manufacturing types are presented.

Table 1. Types of Manufacturing — Characteristics and Examples

             Job-Shop Production       Mass Production            Continuous Production

Volume       Very low                  High                       Highest

Variety      Highest                   Low                        Lowest

Flexibility  Highest                   Low                        Lowest

Examples     Tool and die making,      Auto assembly, bottling,   Paper milling, refining,
             casting (foundry),        apparel manufacturing      extrusion
             baking (bakery)

3.1. Job-Shop and Batch Production


As Table 1 shows, job-shop manufacturing is very low in volume but is highest in output variety and
process flexibility. In this mode, the processes — a set of resources including labor and equipment — are reset
intermittently to make a variety of products. (Product variety requires flexibility to frequently reset the process.)
In tool and die making, the first example, the volume is generally one unit — for example, a single die set or
mold. Since every job is different, output variety is at a maximum, and operators continually reset the equipment
for the next job.
Casting in a foundry has the same characteristics, except that the volume is sometimes more than one. That
is, a given job order may be to cast one, five, ten, or more pieces. The multi-piece jobs are sometimes called lots
or batches.
A bakery makes a variety of products, each requiring a new series of steps to set up the process — for
example, mixing and baking a batch of sourdough bread, followed by a batch of cinnamon rolls.

3.2. Mass Production


Second in Table 1 is mass production. Output volume, in discrete units, is high. Product variety is low,
entailing low flexibility to reset the process.
Mass production of automobiles is an example. A typical automobile plant will assemble two or three
hundred thousand cars a year. In some plants just one model is made per assembly line; variety is low (except for
option packages). In other plants, assembly lines produce mixed models. Still, this is considered mass production
since assembly continues without interruption for model changes.
In bottling, volumes are much higher, sometimes in the millions per year. Changing from one bottled
product to another requires a line stoppage, but between changeovers production volumes are high (e.g.,
thousands). Flexibility, such as changing from small to large bottles, is low; more commonly, large and small
bottles are filled on different lines.

Similarly, mass production of apparel can employ production lines, with stoppages for pattern changes.
More conventionally, the industry has used a very different version of mass production: Cutters, sewers, and
others in separate departments each work independently, and material handlers move components from
department to department to completion. Thus, existence of an assembly line or production line is not a
necessary characteristic of mass production.

3.3. Continuous Production


Products that flow — liquids, gases, powders, grains, slurries — are made by continuous production, the third type in
Table 1. In continuous process plants, product volumes are very high (relative, for example, to a job-shop
method of making the same product). Because of designed-in process limitations (pumps, pipes, valves, etc.),
product variety and process flexibility are very low.
In a paper mill, a meshed belt begins pulp on its journey through a high-speed multistage paper-making
machine. The last stage puts the paper on reels holding thousands of linear meters. Since a major product
changeover can take hours, plants often limit themselves to incremental product changes. Special-purpose
equipment design also poses limitations. For example, a tissue machine cannot produce newsprint, and a
newsprint machine cannot produce stationery. Thus, in paper making, flexibility and product variety for a given
machine are very low.
Whereas a paper mill produces a solid product, a refinery keeps the substance in a liquid (or sometimes
gaseous) state. Continuous refining of fats, for example, involves centrifuging to remove undesirable properties
to yield industrial or food oils. As in paper making, specialized equipment design and lengthy product
changeovers (including cleaning of pipes, tanks, and vessels) limit process flexibility; product volumes between
changeovers are very high, sometimes filling multiple massive tanks in a tank farm. Extrusion, the third example
of continuous processing in Table 1, yields such products as polyvinyl chloride (PVC) pipe, polyethylene film,
and reels of wire. High process speeds produce high product volumes, such as multiple racks of pipe, rolls of
film, or reels of wire per day. Stoppages for changing extrusion heads and many other adjustments limit process
flexibility and lead to long production runs between changeovers. Equipment limitations (e.g., physical
dimensions of equipment components) keep product variety low.

4. Intelligent Agents
4.1. Agent Definition
Although the word agent is used in a multitude of contexts and is part of our everyday vocabulary, there is
no single all-encompassing meaning for it. Perhaps the most widely accepted definition of the term is that "an
agent acts on behalf of someone else, after having been authorized". This definition can be applied to software
agents, which are instantiated and act instead of a user or a software program that controls them. Thus, one of the
most characteristic agent features is its agency. In the rest of this paper, the term agent is synonymous with
software agent. The difficulty in defining an agent arises from the fact that the various aspects of agency are
weighted differently with respect to the application domain at hand. Although, for example, agent learning is
considered of pivotal importance in certain applications, for others it may be considered not only trivial but even
undesirable. Consequently, a number of definitions could be provided with respect to agent objectives.
Wooldridge & Jennings have succeeded in combining general agent features into the following generic,
nevertheless abstract, definition:
An agent is a computer system that is situated in some environment, and that is capable of autonomous
action in this environment, in order to meet its design objectives [Wooldridge and Jennings, 1995].
It should be noted that Wooldridge & Jennings defined the notion of an "agent" and not an "intelligent
agent". When intelligence is introduced, things get more complicated, since a nontrivial part of the research
community believes that true intelligence is not feasible [Nwana, 1997]. Nevertheless, "intelligence" here refers to
computational intelligence and should not be confused with human intelligence [Knapik and Johnson, 1998].

A fundamental agent feature is the degree of autonomy. Moreover, agents should have the ability to
communicate with other entities (either similar or dissimilar agents) and should be able to exchange information,
in order to reach their goal. Thus, an agent is defined by its interactivity, which can be expressed either as
proactivity (the absence of passiveness) or reactivity (the absence of deliberation) in its behavior. Finally, agents
can be defined through other key features, such as their learning ability, their cooperativeness, and their mobility.
One can easily argue that a generic agent definition, entailing the above characteristics, cannot satisfy
researchers in the fields of Artificial Intelligence and Software Engineering, since the same features could be
ascribed to a wide range of entities (e.g., humans, machines, computational systems, etc.). Within the context of this
paper, agents are used as an abstraction tool, or a metaphor, for the design and construction of systems. Thus, no
distinction between agents, software agents and intelligent agents is made. A fully functional definition is
employed, integrating all the characteristics into the notion of an agent:
An agent is an autonomous software entity that - functioning continuously - carries out a set of goal-oriented
tasks on behalf of another entity, either human or software system. This software entity is able to perceive its
environment through sensors and act upon it through effectors, and in doing so, employs some knowledge or
representation of the user's preferences [Wooldridge, 1999].

4.2. Rational Agent

For each possible percept sequence, a rational agent should select an action that is expected to maximize its
performance measure, given the evidence provided by the percept sequence, and whatever built-in knowledge
the agent has.
• Rationality is distinct from omniscience (all-knowing with infinite knowledge)
• Agents can perform actions in order to modify future percepts so as to obtain useful information
(information gathering, exploration)
• An agent is autonomous if its behavior is determined by its own experience (with ability to learn and
adapt)

4.3. Structure of Agents

A simple agent program can be defined mathematically as an agent function f, which maps every possible
percept sequence P* to a possible action A the agent can perform, or to a coefficient, feedback element, function or
constant that affects eventual actions:

f : P* → A

The agent program, instead, maps every possible percept to an action.

Figure 1. Agents interact with environments through sensors and effectors.
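The mapping from percept sequences to actions described above can be sketched as a table-driven agent. The lookup table and the two-square "vacuum world" percepts below are illustrative assumptions, not taken from the text:

```python
class TableDrivenAgent:
    """Maps the full percept sequence seen so far to an action via a lookup table."""

    def __init__(self, table, default_action="noop"):
        self.table = table            # dict: tuple of percepts -> action
        self.percepts = []            # percept sequence observed so far
        self.default_action = default_action

    def act(self, percept):
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), self.default_action)

# Illustrative table for a two-square vacuum world (locations A and B):
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}
agent = TableDrivenAgent(table)
print(agent.act(("A", "clean")))   # right
print(agent.act(("B", "dirty")))   # suck
```

The table grows exponentially with the length of the percept sequence, which is why the agent designs in the next section compute actions instead of looking them up.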

We could view the world of agents as being categorized as presented below:

Computational agents ⊃ Intelligent agents ⊃ {Cognitive agents, Reactive agents}

Among computational agents we may identify also a broad category of agents, which are in fact
nowadays the most popular ones, namely those that are generally called software agents (or weak agents, as in
Wooldridge and Jennings, 1995, to differentiate them from the cognitive ones, corresponding to the strong
notion of agent): information agents and personal agents. An information agent is an agent that has access to one
or several sources of information, is able to collect, filter and select relevant information on a subject and present
this information to the user. Personal agents or interface agents are agents that act as a kind of personal assistant
to the user, facilitating for him tedious tasks of email message filtering and classification, user interaction with
the operating system, management of daily activity scheduling, etc.

[Figure 2 (diagram): an intelligent agent specified at three levels - a theoretical model (internal structures: knowledge and belief about the domain, itself, other agents, and its goals), a description (performance: reasoning and action strategies, interfaces, communication), and an implementation (functioning: decision making, plan synthesis, perception).]

Figure 2. Levels of specification and design of intelligent agents

4.4. Classes of Intelligent Agents


Russell & Norvig (2003) group agents into five classes based on their degree of perceived intelligence and
capability:
1. simple reflex agents
2. model-based reflex agents
3. goal-based agents
4. utility-based agents
5. learning agents
4.4.1. Simple Reflex Agents
Simple reflex agents act only on the basis of the current percept. The agent function is based on the
condition-action rule: if condition then action.
This agent function only succeeds when the environment is fully observable. Some reflex agents can also
contain information on their current state which allows them to disregard conditions whose actuators are already
triggered.

Figure 3. Schematic diagram of a simple reflex agent
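The condition-action rule can be made concrete in a short Python sketch; the two-square vacuum world (locations A and B) is an assumed illustration, not part of the text:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules acting on the current percept only.
    The percept is (location, status); the vacuum world is illustrative."""
    location, status = percept
    if status == "dirty":      # rule: dirty square -> suck
        return "suck"
    if location == "A":        # rule: clean at A -> move right
        return "right"
    return "left"              # rule: clean at B -> move left

print(simple_reflex_vacuum_agent(("A", "dirty")))  # suck
```

Note that the agent keeps no history: the same percept always produces the same action, which is exactly why it requires a fully observable environment.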

4.4.2. Model-based reflex agents


Model-based agents can handle partially observable environments. The agent's current state is stored internally,
maintaining some kind of structure which describes the part of the world which cannot be seen. This behavior
requires information on how the world behaves and works; this additional information completes the "world
view" model.
A model-based reflex agent keeps track of the current state of the world using an internal model. It then
chooses an action in the same way as the simple reflex agent.

Figure 4. Schematic diagram of a Model-based reflex agent
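A minimal sketch of a model-based reflex agent, keeping an internal model of the square it cannot currently see (the vacuum world and attribute names are illustrative assumptions):

```python
class ModelBasedReflexAgent:
    """Keeps an internal model of unseen parts of the world (here: the
    believed status of each square) and chooses actions reflexively from it."""

    def __init__(self):
        # Internal model: believed status of each square (None = unknown)
        self.model = {"A": None, "B": None}

    def act(self, percept):
        location, status = percept
        self.model[location] = status          # update the model from the percept
        if status == "dirty":
            return "suck"
        # The other square's state comes from the model, not from the percept
        other = "B" if location == "A" else "A"
        if self.model[other] in (None, "dirty"):
            return "right" if location == "A" else "left"
        return "noop"                          # both squares believed clean

agent = ModelBasedReflexAgent()
print(agent.act(("A", "clean")))   # right, since B's status is still unknown
```

Once the agent has seen both squares clean, its model lets it stop moving, something the stateless reflex agent above cannot do.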


4.4.3. Goal-based agents
Goal-based agents are model-based agents which store information regarding situations that are desirable.
This gives the agent a way to choose among multiple possibilities, selecting the one which reaches a goal state.

Figure 5. Schematic diagram of a Goal-based agent

4.4.4. Utility-based agents


Goal-based agents only distinguish between goal states and non-goal states. It is possible to define a
measure of how desirable a particular state is. This measure can be obtained through the use of a utility function
which maps a state to a measure of the utility of the state.

Figure 6. Schematic diagram of a Utility-based agent
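A utility-based choice can be sketched as maximizing a utility function over predicted successor states; the one-dimensional world and target position are illustrative assumptions:

```python
def utility_based_choice(state, actions, result, utility):
    """Choose the action leading to the highest-utility successor state.
    `result` and `utility` are illustrative stand-ins for the agent's model."""
    return max(actions, key=lambda a: utility(result(state, a)))

# Illustrative model: states are positions on a line,
# and utility is the negative distance to a target position.
def result(state, action):
    return state + {"left": -1, "stay": 0, "right": +1}[action]

def utility(state, target=3):
    return -abs(target - state)

print(utility_based_choice(0, ["left", "stay", "right"], result, utility))  # right
```

Unlike the goal-based agent, which only tests "goal reached or not", the utility function grades every state, so the agent can trade off partially satisfied goals.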

4.4.5. Learning agents


Learning has the advantage that it allows an agent to operate initially in unknown environments and to
become more competent than its initial knowledge alone might allow.
Figure 7. Schematic diagram of a Learning agent
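A learning agent can be sketched as one that starts with uniform action-value estimates and improves them from rewards. This is a deliberately simplified average-reward learner; the two-action environment is an assumption made for illustration:

```python
import random

class LearningAgent:
    """Starts with no knowledge of action values and improves from feedback."""

    def __init__(self, actions, explore=0.1):
        self.values = {a: 0.0 for a in actions}   # estimated value of each action
        self.counts = {a: 0 for a in actions}
        self.explore = explore

    def act(self):
        if random.random() < self.explore:        # occasionally explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def learn(self, action, reward):
        # Incremental average: nudge the estimate toward the observed reward
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

random.seed(0)
agent = LearningAgent(["left", "right"])
for _ in range(500):
    a = agent.act()
    agent.learn(a, 1.0 if a == "right" else 0.0)  # "right" is secretly better
print(max(agent.values, key=agent.values.get))
```

The exploration step is what lets the agent become more competent than its initial (empty) knowledge: without it, a greedy agent could lock onto the first action it tried.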

4.5. Features of Intelligent Agents

• Reactive: responds to changes in the environment


• Autonomous: control over its own actions
• Goal-oriented: does not simply act in response to the environment
• Temporally continuous: is a continuously running process
• Communicative: communicates with other agents, perhaps including people
• Learning: changes its behaviour based on its previous experience
• Mobile: able to transport itself from one machine to another
• Flexible: actions are not scripted
• Character: believable personality and emotional state

Agent Type                Percepts                 Actions                   Goals                   Environment

Medical diagnostic        Symptoms, test           Questions, test           Healthy patients,       Patient, hospital,
system                    results, patient's       requests, treatments,     minimize costs          staff
                          answers                  referrals

Satellite image           Pixels of varying        Display a                 Correct image           Images from
analysis system           intensity and color      categorization of         categorization          orbiting satellite
                                                   the scene

Part-picking robot        Pixels of varying        Pick up parts and         Place parts into        Conveyor belt
                          intensity and color      sort them into bins       correct bins            with parts, bins

Refinery controller       Temperature, pressure    Open and close valves,    Maximize purity,        Refinery, staff
                          and chemical readings    adjust temperature        yield, safety

Interactive English       Typed words              Display exercises,        Maximize student's      Set of students,
tutor                                              suggestions,              exam results            exam papers
                                                   corrections

Table 2. Examples of Intelligent Agents

4.6. Agents and Environment Types

Environments in which agents operate can be defined in different ways. It is helpful to view the following
definitions as referring to the way the environment appears from the point of view of the agent itself.

4.6.1. Observable vs. partially observable


In order for an agent to be considered an agent, some part of the environment - relevant to the action being
considered - must be observable. In some cases (particularly in software) all of the environment will be
observable by the agent. This, while useful to the agent, will generally only be true for relatively simple
environments.

4.6.2. Deterministic vs. stochastic


An environment that is fully deterministic is one in which the subsequent state of the environment is wholly
dependent on the preceding state and the actions of the agent. If an element of interference or uncertainty occurs
then the environment is stochastic. Note that a deterministic yet partially observable environment will appear to
be stochastic to the agent.
An environment state wholly determined by the preceding state and the actions of multiple agents is called
strategic.

4.6.3. Episodic vs. sequential


This refers to the task environment of the agent. A task environment is episodic if each task that the agent
must perform does not rely upon past performance and will not affect future performance. Otherwise it is
sequential.

4.6.4. Static vs. dynamic


A static environment, as the name suggests, is one that does not change from one state to the next while the
agent is considering its course of action. In other words, the only changes to the environment are those caused by
the agent itself. A dynamic environment can change, and if an agent does not respond in a timely manner, this
counts as a choice to do nothing.

4.6.5. Discrete vs. continuous


This distinction refers to whether the environment is composed of a finite or an infinite number of
possible states. A discrete environment has a finite number of possible states; however, if this number is
extremely high, the environment becomes virtually continuous from the agent's perspective.

4.6.6. Single-agent vs. multiple agent


An environment is only considered a multiple-agent environment if the agent under consideration must act
cooperatively or competitively with another agent to carry out some task or achieve a goal. Otherwise, another
agent is simply viewed as a stochastically behaving part of the environment.

4.6.7. Overview of environments


rising order of complexity →
• Observable • Partially observable
• Deterministic • Stochastic
• Episodic • Sequential
• Static • Dynamic
• Discrete • Continuous
• Single-agent • Multiple agent

                Solitaire   Backgammon   Internet Shopping   Taxi

Observable?     Full        Full         Partial             Partial
Deterministic?  Yes         No           Yes                 No
Episodic?       No          No           No                  No
Static?         Yes         Yes          Semi                No
Discrete?       Yes         Yes          Yes                 No
Single-agent?   Yes         No           No                  No

Table 3. Examples of environments and their characteristics.

4.7. Software Development Paradigms (Object-oriented Approach)

The basic idea in an object-oriented approach is to view a software system as a collection of interacting
entities called "objects":
• Objects defined by an identity, a state (member variables), and a behavior (invoked methods)
• The interactions among objects are described in terms of ‘messages’,
• ‘Instance’ objects sharing common characteristics are usually grouped into classes. A number of
different relationships can hold among them. Fundamental ones are inheritance / classification and
decomposition / aggregation that relate classes to each other, and ‘instance-of’ which relates a class
to its instances.
E.g. SmallTalk, C++, Java.

4.7.1. The Agent-Oriented Approach

A software system is viewed as a collection of interacting entities called agents. Like objects, agents have an
identity, a state and a behaviour, but these are described in more sophisticated terms:
– State: knowledge, beliefs, goals, responsibilities, etc.
– Behaviour: roles that can be played, actions that can be performed, reactions to events,
responsibilities, etc.
The behaviour of an agent is defined in terms of how it decides what to do (and not in terms of what it
should do).

4.7.2. O-O versus A-O


i. Agents can be considered as active objects.
ii. O-O technology such as distributed objects (CORBA, RMI), applets, mobile object systems,
coordination mechanisms and languages can be used to implement agent-systems.
iii. A Multi-Agent System (MAS) can be composed of agents and objects which are controlled and
accessed by agents.
iv. The concept of agent provides a higher-level abstraction than the concept of object.

Figure 8. Roots of Agent Technology
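Point (i), agents as active objects, can be sketched in Python: a passive object executes its methods on the caller's thread, while this agent runs its own control loop and decides how to react to messages. The class and message names are illustrative assumptions:

```python
import queue
import threading

class ActiveAgent:
    """An 'active object': it owns a thread of control and an inbox, and it
    decides when and how to handle incoming messages."""

    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()      # asynchronous message passing
        self.replies = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self.inbox.put(message)         # the caller does not block or invoke behavior

    def _run(self):
        while True:
            message = self.inbox.get()
            if message == "stop":
                break
            # The agent itself decides how to react to the message
            self.replies.put(f"{self.name} handled: {message}")

agent = ActiveAgent("worker")
agent.send("task-1")
print(agent.replies.get(timeout=1))
agent.send("stop")
```

This is the sense in which agent systems can be built on O-O technology: the message queue plays the role of the agent communication channel, and the private loop gives the object autonomy over its own execution.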

4.7.3. How is an Agent different from other software?

• Agents are autonomous, that is, they act on behalf of the user
• Agents contain some level of intelligence, from fixed rules to learning engines that allow them to adapt
to changes in the environment
• Agents don't only act reactively, but sometimes also proactively
• Agents have social ability, that is, they communicate with the user, the system, and other agents as
required
• Agents may also cooperate with other agents to carry out more complex tasks than they themselves can
handle
• Agents may migrate from one system to another to access remote resources or even to meet other
agents

5. Conclusions
The concept of agent is associated with many different kinds of software and hardware systems. Still, we
found that there are similarities in many different definitions of agents.
Unfortunately, the meaning of the word "agent" still depends heavily on who is speaking.
There is no consensus on what an agent is, but several key concepts are fundamental to this paradigm. We
have seen:
– The main characteristics upon which our agent definition relies
– Several types of software agents
– How an agent differs from other software paradigms

5.1. Discussion

• Who is legally responsible for the actions of agents?
• How many tasks, and which tasks, do users want to delegate to agents?
• How much can we trust agents?
• How can we protect ourselves from erroneously working agents?
6. References

• Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.).
Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-790395-2, http://aima.cs.berkeley.edu/, chapter 2.
• Womack, Jones, Roos (1990). The Machine That Changed the World. Rawson & Associates, New
York; published by Simon & Schuster.
• Bradshaw, Jeff M. (ed.). Software Agents. AAAI Press / The MIT Press.
• Jennings, N. and Wooldridge, M. (eds.). Agent Technology. Springer.
• Müller, Jörg P. The Design of Intelligent Agents. Springer.
• Subrahmanian, V.S., Bonatti, P., et al. Heterogeneous Agent Systems. The MIT Press.
• Kalpakjian, Serope; Schmid, Steven (2005). Manufacturing, Engineering & Technology. Prentice
Hall, pp. 22–36, 951–988. ISBN 0-1314-8965-8.
• Franklin, Stan and Graesser, Art (1996). "Is It an Agent, or Just a Program?: A Taxonomy for
Autonomous Agents." Proceedings of the Third International Workshop on Agent Theories,
Architectures, and Languages. Springer-Verlag.
• Kasabov, N. (1998). "Introduction: Hybrid intelligent adaptive systems." International Journal of
Intelligent Systems, Vol. 6, 453–454.
• Wooldridge, M. and Jennings, N.R. (1999). "Agent theories, architectures, and languages." In
Wooldridge and Jennings (eds.), Intelligent Agents, Springer-Verlag, pp. 1–22.
• Dennett, D.C. (1987). The Intentional Stance. The MIT Press.
• Koller, D. and Pfeffer, A. (1997). "Representations and solutions for game-theoretic problems."
Artificial Intelligence 94(1–2), 167–216.
• Pollack, M.E. (1992). "The use of plans." Artificial Intelligence 57(1), 43–68.
• Nwana, H.S. (1997). "Research and Development Challenges for Agent-Based Systems." IEE
Proceedings on Software Engineering 144(1), 2–10.
• Knapik, M. and Johnson, J. (1998). Developing Intelligent Agents for Distributed Systems. New
York: McGraw-Hill.
