Artificial Intelligence
III Ciclo S2
Artificial Intelligence is the branch of computer science that has aroused the most interest today because of its enormous scope. The search for mechanisms that help us understand intelligence, and for models and simulations of it, has led many scientists to choose this area of research.
The immediate origin of the concept and the development criteria of "AI" date back to the intuition of the English mathematical genius Alan Turing, while the name "Artificial Intelligence" is due to John McCarthy, who organized a conference at Dartmouth College (USA) to discuss the possibility of building "smart" machines. The meeting was attended by researchers of recognized reputation in computer science, such as Marvin Minsky, Nathaniel Rochester, Claude Shannon, Herbert Simon and Allen Newell. As a result of this meeting, the first guidelines of what is now known as Artificial Intelligence were settled, although some related work already existed.
Since its inception, AI has had to deal with the fact that there is no clear and unique definition of intelligence, so it is not surprising that, even today, there is no single definition of AI. Just as psychology has identified different types of human intelligence (emotional, interpersonal, musical, linguistic, kinesthetic, spatial, etc.), the different definitions of artificial intelligence emphasize different aspects, although there are similarities between them. Below are some of the early definitions of the field.
The study of the computations that make it possible for a machine to perceive, reason and act (Winston, 1992).
The new and exciting effort to make computers think... machines with minds, in the full and literal sense (Haugeland, 1985).
Artificial Intelligence was originally built on knowledge and theories that already existed in other areas. Some of the main sources of inspiration and knowledge that nurtured this field are computer science, philosophy, linguistics, mathematics and psychology. Each of these sciences contributed not only the knowledge developed within it, but also its tools and experiences, thus contributing to the creation and development of this new area of knowledge.
Philosophers such as Socrates, Plato, Aristotle and Leibniz, from as early as 400 BC, laid the foundations of artificial intelligence by conceiving of the mind as a machine that operates on knowledge encoded in an internal language, and by holding that thought serves to determine which action is the right one to take. Aristotle, for example, considered the first (around 300 BC) to describe in a structured way how humans produce rational conclusions from a set of premises, contributed the rules known as syllogisms, which are now the basis of one of the approaches to Artificial Intelligence.
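The classic syllogism ("All men are mortal; Socrates is a man; therefore Socrates is mortal") maps directly onto rule-based inference. A minimal sketch in Python, with illustrative facts and a single "all X are Y" rule:

```python
# Facts: known class memberships, e.g. Socrates is a man
facts = {("Socrates", "man")}

# Rules in the syllogistic form "all X are Y", e.g. all men are mortal
rules = [("man", "mortal")]

def infer(facts, rules):
    """Apply each 'all X are Y' rule to every matching fact."""
    derived = set(facts)
    for premise, conclusion in rules:
        for individual, category in facts:
            if category == premise:
                derived.add((individual, conclusion))
    return derived

# infer(facts, rules) now also contains ("Socrates", "mortal")
```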
Philosophy, however, is not the only science that has passed its fruits on to this area; other equally important contributions include the following:
Mathematics provided the tools to manipulate both statements of logical certainty and statements carrying probabilistic uncertainty; it supplied, in turn, hybrid tools that allow the modeling of different kinds of phenomena, and it was also mathematics that paved the way for reasoning algorithms.
Psychology reinforced the idea that humans and other animals can be regarded as information-processing machines; psychologists such as Piaget and Craik developed the theories known as behaviorism and cognitive psychology.
Computer science began shortly before Artificial Intelligence itself. AI theories find in the computer their means of implementation, both as working artifacts and as cognitive models. Artificial intelligence programs are generally large and would not work without the great advances in speed and memory provided by the computer industry.
Linguistics developed in parallel with AI and serves as a basis for knowledge representation (Chomsky). Modern linguistics was born at almost the same time as AI, and the two areas have matured together, to the extent that there is now a hybrid field known as computational linguistics, or natural language processing. Linguists have shown that the problem of understanding language is much more complicated than was assumed in 1957.
Economics, as a field expert in decision making (since decisions entail losses or gains of utility), gave AI a number of theories (decision theory, which combines probability theory and utility theory; game theory, for small economies; Markov decision processes, for sequential decisions; among others) that made the formalization of "good decisions" possible.
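The core idea of decision theory, choosing the action with the highest expected utility, can be shown with a toy calculation; the probabilities and utilities below are invented purely for illustration:

```python
def expected_utility(outcomes):
    """Sum of probability * utility over an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

# Hypothetical actions, each a list of (probability, utility) pairs
actions = {
    "invest": [(0.6, 100), (0.4, -50)],  # expected utility: 60 - 20 = 40
    "do_nothing": [(1.0, 0)],            # expected utility: 0
}

# Decision theory prescribes the action that maximizes expected utility
best = max(actions, key=lambda a: expected_utility(actions[a]))
```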
Finally, neuroscience has contributed to AI the knowledge gathered to date on how the brain processes information.
HISTORY
Since time immemorial, humans have sought to realize the desire to create beings like themselves, through artifacts that reproduce human appearance, movement and even behavior.
The Russian-born writer and historian Isaac Asimov (1920-1992) chronicled objects and situations that in his time were science fiction; with the passage of time, many of them have come true. In his story "Runaround", Asimov described what are known today as the three laws of robotics, and his literary work has served as motivation for scientists and engineers trying to make it a reality.
In the 1950s, a system that met with some success was built: Rosenblatt's Perceptron. This was a visual pattern-recognition system around which efforts to solve a range of problems converged, but those energies were soon diluted.
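The perceptron's learning rule, nudging the weights whenever a prediction is wrong, fits in a few lines. This is a modern software sketch of the idea, not Rosenblatt's original hardware; it is trained here on the logical AND function for illustration:

```python
def predict(w, b, x1, x2):
    """Threshold unit: output 1 when the weighted sum exceeds zero."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias for linearly separable binary data."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            error = target - predict(w, b, x1, x2)
            # The update fires only on misclassified samples (error is then +/-1)
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Logical AND is linearly separable, so the perceptron can learn it
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

The same rule fails on non-separable data such as XOR, which is precisely the limitation that diluted the early enthusiasm.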
Around this time, the English mathematician Alan Turing (1912-1954) proposed a test intended to prove the existence of "intelligence" in a non-biological device. This test, known as the "Turing test", rests on the assumption that if a machine behaves in every respect as if it were intelligent, then it must be intelligent (Turing, 1950). As a result, many researchers of the time focused their efforts on writing linguistic artificial intelligence systems, which marked the birth of the so-called "chatbots" (talking robots). Although research on the design and capabilities of non-biological entities had already been done, it was Turing's 1950 work that focused the interest of the scientific community on the development of "intelligent machines". Two of Turing's most important contributions are the design of the first computer capable of playing chess and the establishment of the symbolic nature of computing (ITAM, 1987).
Later, in 1957, Allen Newell and Herbert Simon, who worked on theorem proving and computer chess, created a program called GPS (General Problem Solver). It was a system in which the user defined an environment from a set of objects and the operators that could be applied to them. The program was written in IPL (Information Processing Language) and is considered the first program in which the information about the problem was separated from the strategy used to solve it. GPS built on the authors' earlier work on logical machines, and although it could solve puzzles such as the "Towers of Hanoi", it could not handle real-world problems, such as medical diagnosis, or make important decisions. GPS applied heuristic rules that led toward the desired goal by trial and error (Newell and Simon, 1961).
Several years later, in the 1970s, a team of researchers led by Edward Feigenbaum began a project to solve problems of daily life (more specific problems), giving rise to what are known as expert systems.
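The "Towers of Hanoi" puzzle mentioned above has a classic recursive solution; a minimal Python sketch (GPS itself searched over generic objects and operators rather than using a hand-written recursion like this):

```python
def hanoi(n, source, target, spare):
    """Return the list of moves that transfers n disks from source to target."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest, then stack them back on top
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

moves = hanoi(3, "A", "C", "B")  # 2**3 - 1 = 7 moves
```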
The first expert system was DENDRAL, a mass spectrogram interpreter built in 1967, but the most influential proved to be MYCIN (1974). MYCIN was able to diagnose blood disorders and prescribe the appropriate medication, such an achievement for its time that its variants were even used in hospitals (for example PUFF, a MYCIN variant in common use at the Pacific Medical Center in San Francisco, USA).
In the 1980s, special languages such as LISP and PROLOG were developed for use in Artificial Intelligence. It is at this time that more sophisticated expert systems appeared, such as EURISKO, a program that refines its own body of heuristic rules automatically, by induction.
Also worth highlighting are the contributions of Arthur Samuel, who developed a checkers-playing program able to learn from its own experience, and Selfridge, who studied visual recognition by computer.
From this initial group, two major "schools" of AI were formed. Newell and Simon led the team at Carnegie Mellon University, intending to develop models of human behavior with apparatus whose structure resembled the brain as closely as possible (which led to the "connectionist" position and artificial "neural networks").
McCarthy and Minsky formed another team at the Massachusetts Institute of Technology (MIT), focusing instead on getting the products of processing to qualify as intelligent, without worrying about whether the workings or structure of the components resembled those of humans.
Both approaches, however, pursue the same priority objectives of AI: "to understand natural human intelligence, and to use intelligent machines to acquire knowledge and solve problems considered intellectually difficult".
The history of AI has witnessed cycles of success and unjustified optimism, followed by the disappearance of enthusiasm and financial support. There have also been cycles characterized by the introduction of new, creative approaches and the systematic refinement of the best ones. Because of its implications for areas such as medicine, psychology, biology, ethics and philosophy, this branch of knowledge has had to face strong groups of opponents and critics since its inception; nevertheless, there has always been a group of people interested in the area, which allowed its consolidation as a field of great interest for scientific research.
TURING TEST
The Turing test (Turing, 1950) attempts to provide a definition of artificial intelligence that can be assessed. For a being or machine to be considered intelligent, it must manage to fool an evaluator into believing that it is human, across all the cognitive activities that humans can perform.
If a dialogue takes place and the number of errors in the machine's answers approaches the number of errors that occur in communication with a human being, then, according to Turing, the machine can be deemed "intelligent".
Today, the work involved in programming a computer to pass the test is considerable. The computer would need to be capable of the following:
1. Natural language processing: to establish successful communication, whether in Spanish, English or any other human language.
2. Knowledge representation: to store all the information supplied before or during the interrogation, receiving questions and then storing them, for example in a database.
3. Automated reasoning: to use the stored information to answer questions, draw new conclusions or make decisions.
4. Machine learning: to adapt to new circumstances; self-learning leads to self-evaluation.
To pass the total Turing test, the computer must additionally be equipped with:
1. Vision: the ability to perceive the object in front of it.
2. Robotics: the ability to move the object it has perceived.
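The natural-language capability above is what the early "chatbots" tried to fake with simple keyword matching. A toy ELIZA-style responder, whose keywords and canned replies are invented purely for illustration:

```python
# Keyword -> canned reply; a real chatbot would use far richer patterns
RULES = {
    "hello": "Hello! How are you feeling today?",
    "sad": "Why do you think you feel sad?",
    "mother": "Tell me more about your family.",
}
DEFAULT = "Please, go on."

def respond(message):
    """Return the reply for the first keyword found in the message."""
    lowered = message.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    return DEFAULT
```

Programs of this kind can hold a superficially plausible dialogue without any of the reasoning or learning the full test demands, which is exactly why passing it remains so hard.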
APPLICATIONS AND TOOLS DERIVED FROM ARTIFICIAL INTELLIGENCE
Throughout the history of artificial intelligence, different tools and applications have been developed, among which we can mention the following:
Programming languages: A programming language is an artificial language used to issue instructions to a computer. Although it is feasible to use any computer language to build artificial intelligence tools, languages have been developed whose specific purpose is the development of systems endowed with Artificial Intelligence.
DEVELOPMENT ENVIRONMENTS
Due to the enormous success of expert systems in the 1980s, developments known as shells began to emerge. In computing, a shell is a piece of software that provides an interface for users.
An expert system has, among its components, the following two tools:
1. Knowledge base: the knowledge related to a problem or phenomenon, coded in a specific notation that may include rules, predicates, semantic networks and objects.
2. Inference engine: the mechanism that combines the facts with particular questions, using the knowledge base and selecting the appropriate data and steps to present the results.
In the context of expert systems, a shell is a tool designed to facilitate their development and implementation. In other words, a shell is an "expert system" with an empty knowledge base, accompanied by the tools needed to supply the knowledge base for the domain of discourse of a specific application. A shell also provides the knowledge engineer (who is responsible for collecting the knowledge base) with a tool that integrates some mechanism for knowledge representation, an inference mechanism, elements that help explain the procedure followed or the decision taken by the expert system (the explanatory component) and, sometimes, even a user interface.
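The two core pieces of a shell, an initially empty knowledge base plus a generic inference engine, can be sketched in miniature. A forward-chaining toy in Python; the medical-sounding rules at the bottom are invented, standing in for what a knowledge engineer would supply:

```python
class MiniShell:
    """Toy expert-system shell: empty knowledge base + forward-chaining engine."""

    def __init__(self):
        self.facts = set()   # the knowledge base starts empty
        self.rules = []      # (premises, conclusion) pairs

    def add_rule(self, premises, conclusion):
        self.rules.append((frozenset(premises), conclusion))

    def assert_fact(self, fact):
        self.facts.add(fact)

    def run(self):
        """Forward chaining: fire rules until no new facts can be derived."""
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    changed = True
        return self.facts

# The knowledge engineer fills the empty shell with a domain
shell = MiniShell()
shell.add_rule({"fever", "cough"}, "suspect_flu")
shell.add_rule({"suspect_flu"}, "recommend_rest")
shell.assert_fact("fever")
shell.assert_fact("cough")
```

A production shell adds exactly the extras the paragraph above lists: a richer representation language, an explanatory component that records which rules fired, and a user interface.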
These development environments make it possible to build efficient expert systems even without mastering a programming language, which is why their use has become popular for producing expert systems in all domains of knowledge.
Some development environments for building domain-independent expert systems, the classic shells, are:
EMYCIN (Essential MYCIN): This shell was built at Stanford University as a result of the development of the MYCIN expert system, which diagnosed infectious diseases of the blood. EMYCIN was the basis for the construction of many other expert systems, such as PUFF (which diagnoses lung diseases) and SACON (which works with structural engineering).
OPS5 and OPS83: These shells were developed in C, which is also the language used to write the rule base; the possibility of inserting rules in a language like C, together with the use of forward chaining, are their main contributions. The system is based on the proprietary RETE II algorithm. The RETE algorithm is known to be effective at comparing facts with rule patterns and determining which rules have their conditions satisfied.
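At its core, rule matching means unifying rule patterns containing variables against stored facts. The naive matcher below shows the operation that RETE makes efficient by caching partial matches across cycles instead of recomputing them; the family facts are invented for illustration:

```python
def match(pattern, fact, bindings):
    """Unify one pattern tuple against one fact tuple.

    Symbols starting with '?' are variables; returns extended bindings,
    or None when the fact does not fit the pattern. This per-fact scan is
    what RETE avoids repeating on every inference cycle.
    """
    if len(pattern) != len(fact):
        return None
    bindings = dict(bindings)
    for p, f in zip(pattern, fact):
        if isinstance(p, str) and p.startswith("?"):
            if p in bindings and bindings[p] != f:
                return None
            bindings[p] = f
        elif p != f:
            return None
    return bindings

facts = [("parent", "ana", "luis"), ("parent", "luis", "eva")]
# Which facts satisfy the pattern ("parent", "?x", "luis")?
hits = [b for fact in facts
        if (b := match(("parent", "?x", "luis"), fact, {})) is not None]
```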