
ARTIFICIAL INTELLIGENCE

GENERAL SCIENCE
REPORT ON: ARTIFICIAL INTELLIGENCE

RESEARCH REPORT

ARTIFICIAL INTELLIGENCE
Submitted to: Dr. Akhter Baloch
Submitted by: Nasreen Zehra
Class: M.P.A
Section: A

Letter of Acknowledgement

10-JUNE-2010

Prof. Dr. Akhter Baloch,
Project Incharge/Course Incharge (Research Methodology),
Chairman, Department of Public Administration, University of Karachi.

Respected Sir,

First of all, I would like to thank Almighty ALLAH for giving me the courage to complete this project. I have put my best effort into this report. It is an honor for me to write a final report on Artificial Intelligence, and a great opportunity that you have given me this assignment. Although I had only general knowledge of the subject, this project gave me insight into certain specific areas of artificial intelligence.

Sincerely,
Nasreen Zehra

Agenda
What is Artificial Intelligence?
History and Development of Artificial Intelligence
Characteristics of Artificial Intelligence
Artificial Intelligence Is Everywhere
  In Sports
  In Space
  In Automobiles
Branches of Artificial Intelligence
Artificial Intelligence Problem Solving
Approaches of Artificial Intelligence
Types of Artificial Intelligence
Questionnaire
Responses and Data Interpretation
Conclusion

Research Methodology:

To achieve the objectives of the research, both primary and secondary data collection methods will be used. The primary method will help to find out about practical approaches to AI: a questionnaire consisting of close-ended questions will be administered as the primary source of data collection. The target population of this research includes students from different departments of Karachi University, and the convenience sampling method will be adopted to select respondents. Questionnaires will be filled out by participants and returned to the undersigned by hand. The secondary method will help to find out the existing theories about this topic as well as work from other authors. It will draw on articles, scholarly journals, books, newspapers and business magazines; electronic sources such as the internet will also be used.

INTRODUCTION

AI is generally associated with Computer Science, but it has many important links with other fields such as Mathematics, Psychology, Cognition, Biology and Philosophy among many others. Our ability to combine knowledge from all these fields will ultimately benefit our progress in the quest of creating an intelligent artificial being.

Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert systems. The element that the fields of AI have in common is the creation of machines that can "think".

In order to classify machines as "thinking", it is necessary to define intelligence. To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and relationships? And what about perception and comprehension? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent machines. One of the most challenging tasks facing experts is building systems that mimic the behavior of the human brain, made up of billions of neurons and arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.

Artificial Intelligence:
The branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy, then at Dartmouth College. Artificial intelligence includes:

Games playing: programming computers to play games such as chess and checkers.

Expert systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms).

Natural language: programming computers to understand natural human languages.

Neural networks: systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains.

Robotics: programming computers to see and hear and react to other sensory stimuli.

Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match. In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.

Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another exist, but they are not nearly as good as human translators. There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited: you must speak slowly and distinctly.

In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations. Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.

Turing Test
In 1950, mathematician Alan Turing devised a test to identify whether a machine displayed intelligence. In the Turing Test, two people (A and B) sit in a closed room, while an interrogator (C) sits outside. Person A tries to fool the interrogator about their gender, while person B tries to assist the interrogator in their identification. Turing suggested a machine take the place of person A. If the machine consistently fooled the human interrogator, it was likely to be intelligent.

History and Development Of Artificial Intelligence:


So, can a machine behave like a person? This question underlies artificial intelligence (AI), the study of intelligent behavior in machines. In the 1980s, AI research focused on creating machines that could solve problems and reason like humans.

One of the most difficult problems in artificial intelligence is that of consciousness. Consciousness gives us feelings and makes us aware of our own existence, yet scientists have found it difficult to get robots to carry out even the simplest of cognitive tasks.


Characteristics of Artificial Intelligence:

In order for something to be considered an "Artificial Intelligence," a few different characteristics are required. Some of these characteristics include the following abilities:

The ability to act intelligently, as a human.
The ability to behave following "general intelligent action."
The ability to artificially simulate the human brain.
The ability to actively learn and adapt as a human.
The ability to process language and symbols.

As can be seen from just these few examples, Artificial Intelligence primarily concerns the ability of a computer to mimic human intelligence. That is its key characteristic.

Artificial Intelligence Is Every Where:


Sporting chance:
The RoboCup football championship features robots playing the beautiful game. The tournament has different leagues for different robot types, including one for Sony's Aibos and one for humanoid robots. Despite the mechanical style of play on show, the tournament is proving a popular annual fixture.

In the RoboCup junior championships, independent robots compete one-on-one on a miniature football table with a grayscale pattern. The robots use this dark-to-pale gradient to navigate their way to the opponent's goal. A special ball is used which contains sensors that communicate with sensors in the robot.

In Space:
NASA's twin robot geologists, the Mars Exploration Rovers, launched toward Mars on June 10 and July 7, 2003, in search of answers about the history of water on Mars. They landed on Mars January 3 and January 24 PST, 2004 (January 4 and January 25 UTC, 2004).

The Mars Exploration Rover mission is part of NASA's Mars Exploration Program, a long-term effort of robotic exploration of the red planet.

Primary among the mission's scientific goals is to search for and characterize a wide range of rocks and soils that hold clues to past water activity on Mars. The spacecraft are targeted to sites on opposite sides of Mars that appear to have been affected by liquid water in the past. The landing sites are at Gusev Crater, a possible former lake in a giant impact crater, and Meridiani Planum, where mineral deposits (hematite) suggest Mars had a wet past.

After the airbag-protected landing craft settled onto the surface and opened, the rovers rolled out to take panoramic images. These give scientists the information they need to select promising geological targets that tell part of the story of water in Mars' past. Then the rovers drive to those locations to perform on-site scientific investigations.

These are the primary science instruments to be carried by the rovers:


Panoramic Camera (Pancam): for determining the mineralogy, texture, and structure of the local terrain.

Miniature Thermal Emission Spectrometer (Mini-TES): for identifying promising rocks and soils for closer examination and for determining the processes that formed Martian rocks. The instrument will also look skyward to provide temperature profiles of the Martian atmosphere.

Mössbauer Spectrometer (MB): for close-up investigations of the mineralogy of iron-bearing rocks and soils.

Alpha Particle X-Ray Spectrometer (APXS): for close-up analysis of the abundances of elements that make up rocks and soils.

Magnets: for collecting magnetic dust particles. The Mössbauer Spectrometer and the Alpha Particle X-ray Spectrometer will analyze the particles collected and help determine the ratio of magnetic particles to non-magnetic particles. They will also analyze the composition of magnetic minerals in airborne dust and rocks that have been ground by the Rock Abrasion Tool.

Microscopic Imager (MI): for obtaining close-up, high-resolution images of rocks and soils.

Rock Abrasion Tool (RAT): for removing dusty and weathered rock surfaces and exposing fresh material for examination by instruments onboard.

Before landing, the goal for each rover was to drive up to 40 meters (about 44 yards) in a single day, for a total of up to 1 kilometer (about three-quarters of a mile). Both goals have been far exceeded!

Moving from place to place, the rovers perform on-site geological investigations. Each rover is sort of the mechanical equivalent of a geologist walking the surface of Mars. The mast-mounted cameras are mounted 1.5 meters (5 feet) high and provide 360-degree, stereoscopic, humanlike views of the terrain. The robotic arm is capable of movement in much the same way as a human arm with an elbow and wrist, and can place instruments directly up against rock and soil targets of interest. In the mechanical fist of the arm is a microscopic camera that serves the same purpose as a geologist's handheld magnifying lens. The Rock Abrasion Tool serves the purpose of a geologist's rock hammer to expose the insides of rocks.

In Automobiles:
Fuel injection systems in our cars use learning algorithms. Jet turbines are designed using genetic algorithms, which are both examples of AI, says Dr Rodney Brooks, the director of MIT's artificial intelligence laboratory.

Routing Our Daily Calls:


Every cell phone call and e-mail is routed using artificial intelligence, says Ray Kurzweil, an AI entrepreneur and the author of two books on the subject, The Age of Intelligent Machines and The Age of Spiritual Machines.

Speech recognition
In the 1990s, computer speech recognition reached a practical level for limited purposes. Thus United Airlines has replaced its keyboard tree for flight information by a system using speech recognition of flight numbers and city names. It is quite convenient. On the other hand, while it is possible to instruct some computers using speech, most users have gone back to the keyboard and the mouse as still more convenient.

Understanding natural language


Just getting a sequence of words into a computer is not enough. Parsing sentences is not enough either. The computer has to be provided with an understanding of the domain the text is about, and this is presently possible only for very limited domains.

Computer vision
The world is composed of three-dimensional objects, but the inputs to the human eye and computers' TV cameras are two-dimensional. Some useful programs can work solely in two dimensions, but full computer vision requires partial three-dimensional information that is not just a set of two-dimensional views. At present there are only limited ways of representing three-dimensional information directly, and they are not as good as what humans evidently use.

Expert systems
A "knowledge engineer" interviews experts in a certain domain and tries to embody their knowledge in a computer program for carrying out some task. How well this works depends on whether the intellectual mechanisms required for the task are within the present state of AI. When this turned out not to be so, there were many disappointing results. One of the first expert systems was MYCIN in 1974, which diagnosed bacterial infections of the blood and suggested treatments. It did better than medical students or practicing doctors, provided its limitations were observed. Namely, its ontology included bacteria, symptoms, and treatments and did not include patients, doctors, hospitals, death, recovery, and events occurring in time. Its interactions depended on a single patient being considered. Since the experts consulted by the knowledge engineers knew about patients, doctors, death, recovery, etc., it is clear that the knowledge engineers forced what the experts told them into a predetermined framework. In the present state of AI, this has to be true. The usefulness of current expert systems depends on their users having common sense.
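To make the rule-matching idea concrete, here is a minimal sketch in Python of a hand-coded rule base being matched against reported findings. It is only an illustration under assumed rules and names, not MYCIN or any real medical system.

# Minimal rule-based "expert system" sketch; the rules and findings below
# are hypothetical illustrations, not taken from MYCIN or any real system.
RULES = [
    # (set of required findings, suggested conclusion)
    ({"fever", "stiff_neck"}, "consider bacterial meningitis; recommend culture"),
    ({"fever", "cough"}, "consider respiratory infection; recommend chest X-ray"),
]

def diagnose(findings):
    """Return every conclusion whose conditions are all present in the findings."""
    findings = set(findings)
    return [conclusion for conditions, conclusion in RULES if conditions <= findings]

print(diagnose(["fever", "stiff_neck"]))
# ['consider bacterial meningitis; recommend culture']

Note that the program's "ontology" is exactly the symptoms and conclusions written into RULES, which mirrors the limitation described above.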

Branches of AI

Logical AI
What a program knows about the world in general, the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals.

Search
AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.
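As a small illustration of systematically examining possibilities, here is a breadth-first search sketch in Python over a tiny hypothetical state graph; the graph stands in for chess moves or inference steps, and all names are illustrative assumptions.

from collections import deque

def breadth_first_search(start, goal, neighbors):
    """Explore states level by level until the goal is found; return the path."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Tiny hypothetical state graph standing in for game moves or proof steps.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"], "E": []}
print(breadth_first_search("A", "E", lambda s: graph[s]))  # ['A', 'B', 'D', 'E']

Much of the research mentioned above is about pruning or ordering such searches so that far fewer possibilities need to be examined.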

Pattern recognition


When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face. More complex patterns, e.g. in a natural language text, in a chess position, or in the history of some event are also studied. These more complex patterns require quite different methods than do the simple patterns that have been studied the most.

Representation
Facts about the world have to be represented in some way. Usually languages of mathematical logic are used.

Inference

From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes, but new methods of non-monotonic inference have been added to logic since the 1970s. The simplest kind of non-monotonic reasoning is default reasoning, in which a conclusion is inferred by default but can be withdrawn if there is evidence to the contrary. For example, when we hear of a bird, we may infer that it can fly, but this conclusion can be reversed when we hear that it is a penguin. It is the possibility that a conclusion may have to be withdrawn that constitutes the non-monotonic character of the reasoning. Ordinary logical reasoning is monotonic in that the set of conclusions that can be drawn from a set of premises is a monotonic increasing function of the premises. Circumscription is another form of non-monotonic reasoning.
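The bird/penguin example can be sketched directly in code. The following Python fragment is only an illustration of default reasoning under an assumed exception list, not any particular non-monotonic logic.

# Default reasoning sketch: birds fly by default, unless contrary evidence
# (such as being a penguin) forces the default conclusion to be withdrawn.
EXCEPTIONS = {"penguin", "ostrich", "broken_wing"}   # assumed, illustrative list

def can_fly(animal, facts):
    """Apply the default "birds fly" unless an exception is known for this animal."""
    properties = facts.get(animal, set())
    if "bird" not in properties:
        return False
    return not (properties & EXCEPTIONS)

facts = {"tweety": {"bird"}, "pingu": {"bird", "penguin"}}
print(can_fly("tweety", facts))  # True  (default conclusion)
print(can_fly("pingu", facts))   # False (default withdrawn by the new fact)

Adding the fact "penguin" removes a conclusion that was previously drawn, which is exactly the non-monotonic behavior described above.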

Common sense knowledge and reasoning


This is the area in which AI is farthest from the human level, in spite of the fact that it has been an active research area since the 1950s. There has been considerable progress, for example in developing systems of non-monotonic reasoning and theories of action, but more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.

Learning from experience


Programs do that. The approaches to AI based on connectionism and neural nets specialize in learning from experience. There is also learning of laws expressed in logic. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information.

Planning
Planning programs start with general facts about the world (especially facts about the effects of actions), facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions.
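A toy version of this idea can be written in a few lines. The following Python sketch searches over sets of facts using hand-written actions (name, precondition, effect); the block-picking domain and action names are purely hypothetical.

from collections import deque

# Toy planner sketch: actions are (name, precondition, effect) over sets of facts;
# the planner searches for a sequence of actions that reaches the goal.
ACTIONS = [
    ("go_to_block", frozenset(), frozenset({"at_block"})),
    ("pick_up", frozenset({"at_block"}), frozenset({"holding_block"})),
]

def plan(initial, goal):
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps                      # the strategy: a sequence of actions
        for name, precondition, effect in ACTIONS:
            if precondition <= state:
                new_state = state | effect
                if new_state not in seen:
                    seen.add(new_state)
                    frontier.append((new_state, steps + [name]))
    return None

print(plan(set(), {"holding_block"}))  # ['go_to_block', 'pick_up']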

Epistemology
This is a study of the kinds of knowledge that are required for solving problems in the world.

Ontology
Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology begins in the 1990s.

Heuristics

A heuristic is a way of trying to discover something, or an idea embedded in a program. The term is used variously in AI. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from a goal. Heuristic predicates that compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal, may be more useful.
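For example, a heuristic function for searching on a grid might estimate remaining distance with the Manhattan metric. The sketch below is a generic illustration in Python; the coordinates and the comparison predicate are assumptions made for the example only.

# Heuristic-function sketch: estimate how far a search node seems to be from
# the goal. Manhattan distance is a common choice for grid-based search.
def manhattan(node, goal):
    """Lower values suggest the node is closer to the goal."""
    (x1, y1), (x2, y2) = node, goal
    return abs(x1 - x2) + abs(y1 - y2)

def better(node_a, node_b, goal):
    """A heuristic predicate: does node_a look like more of an advance than node_b?"""
    return manhattan(node_a, goal) < manhattan(node_b, goal)

print(manhattan((0, 0), (3, 4)))       # 7
print(better((1, 1), (0, 0), (3, 4)))  # True: (1, 1) looks closer to the goal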

Genetic programming
Genetic programming is a technique for getting programs to solve a task by mating random Lisp programs and selecting the fittest over millions of generations.
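The evolutionary loop is easier to see on a simpler representation. The Python sketch below evolves bit strings toward a target rather than mating Lisp programs, but the cycle of selecting the fittest, crossing over and mutating is the same idea; the target, population size and rates are arbitrary assumptions.

import random

# Genetic-algorithm sketch: evolve bit strings toward an all-ones target.
random.seed(0)
TARGET = [1] * 20

def fitness(candidate):
    return sum(1 for bit, want in zip(candidate, TARGET) if bit == want)

def crossover(a, b):
    cut = random.randrange(len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("generation", generation, "best fitness", fitness(best), "/", len(TARGET))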

Problem Solving

The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems. These consist of particular traits or capabilities that researchers would like an intelligent system to display. The traits described below have received the most attention.

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that humans were often assumed to use when they solve puzzles, play board games or make logical deductions. By the late 1980s and '90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources; most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem-solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
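As a drastically simplified illustration of the neural-net idea, here is a single artificial neuron (a perceptron) learning the logical AND function in Python. The task, initial weights and learning rate are assumptions for the example, not anything specific to the research described above.

import random

# Perceptron sketch: one artificial neuron trained with the classic
# perceptron learning rule to reproduce logical AND.
random.seed(1)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
rate = 0.1

def predict(x):
    total = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if total > 0 else 0

for _ in range(50):  # repeatedly nudge weights toward correct outputs
    for x, target in data:
        error = target - predict(x)
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]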

Knowledge representation


Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology, of which the most general are called upper ontologies.

Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem


Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969 as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.

The breadth of commonsense knowledge



The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering; they must be built, by hand, one complicated concept at a time. A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.


The subsymbolic form of some commonsense knowledge


Much of what people know is not represented as "facts" or "statements" that they could actually say out loud. For example, a chess master will avoid a particular chess position because it "feels too exposed," or an art critic can take one look at a statue and instantly realize that it is a fake. These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically. Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.

Planning


Automated planning and scheduling

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.

In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if this is not true, it must periodically check if the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.


Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

Machine learning

Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
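To make the regression case concrete, here is a short Python sketch that recovers a continuous function (a straight line fitted by ordinary least squares) from a handful of numeric input/output examples; the data points are invented for the illustration.

# Supervised regression sketch: learn y = slope*x + intercept from examples.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]        # roughly y = 2x + 1, with noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(f"learned function: y = {slope:.2f}x + {intercept:.2f}")
print("prediction for x = 5:", round(slope * 5 + intercept, 2))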

Motion and manipulation


The field of robotics is closely related to AI. Intelligence is required for robots to be able to handle such tasks as object manipulation and navigation, with sub-problems of localization (knowing where you are), mapping (learning what is around you) and motion planning (figuring out how to get there).

Perception


Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.

Social intelligence
Kismet, a robot with rudimentary social skills.

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Second, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

Computational creativity
TOPIO, a robot that can play table tennis, developed by TOSY.
A sub-field of AI addresses creativity both theoretically (from a philosophical and psychological perspective) and practically (via specific implementations of systems that generate outputs that can be considered creative). A related area of computational research is Artificial Intuition and Artificial Imagination.


General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project. Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?

Cybernetics and brain simulation


There is no consensus on how closely the brain should be simulated.

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England. By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

Symbolic
When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: CMU, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI "good old fashioned AI" or "GOFAI".

Cognitive simulation
Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University, would eventually culminate in the development of the Soar architecture in the middle 80s.

Logic based
Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms. His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning. Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe, which led to the development of the programming language Prolog and the science of logic programming.

"Anti-logic" or "scruffy"
Researchers at MIT (such as Marvin Minsky and Seymour Papert) found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as "scruffy" (as opposed to the "neat" paradigms at CMU and Stanford). Commonsense knowledge bases (such as Doug Lenat's Cyc) are an example of "scruffy" AI, since they must be built by hand, one complicated concept at a time.

Knowledge based
When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications. This "knowledge revolution" led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software. The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

Sub-symbolic
During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background. By the 1980s, however, progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into "sub-symbolic" approaches to specific AI problems.

Bottom-up, embodied, situated, behavior-based or nouvelle AI


Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 50s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Computational Intelligence
Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the middle 1980s. These and other sub-symbolic approaches, such as fuzzy systems and evolutionary computation, are now studied collectively by the emerging discipline of computational intelligence.

Statistical

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI's recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a "revolution" and "the victory of the neats."

Integrating the approaches


Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents are rational, thinking humans. The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works: some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields, such as decision theory and economics, that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.
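A bare-bones agent can be written as a perceive-then-act loop. The thermostat-style agent below is a hypothetical Python sketch of the paradigm: it perceives a temperature and chooses the action expected to move the environment toward its goal; the thresholds and action names are assumptions for the example.

# Minimal intelligent-agent sketch: map percepts to goal-directed actions.
class ThermostatAgent:
    def __init__(self, target=21.0):
        self.target = target   # the goal: keep the room near 21 degrees C

    def act(self, perceived_temperature):
        """Choose the action expected to move the room toward the target."""
        if perceived_temperature < self.target - 0.5:
            return "heat_on"
        if perceived_temperature > self.target + 0.5:
            return "heat_off"
        return "do_nothing"

agent = ThermostatAgent()
for reading in [18.0, 20.8, 23.5]:
    print(reading, "->", agent.act(reading))
# 18.0 -> heat_on, 20.8 -> do_nothing, 23.5 -> heat_off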


Agent architectures and cognitive architectures


Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multiagent system. A system with both symbolic and subsymbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling. Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.

Types of Artificial Intelligence


Strong Artificial Intelligence:

Strong artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that can truly reason and solve problems; a strong form of AI is said to be sentient, or self-aware. In theory, there are two types of strong AI:

Human-like AI, in which the computer program thinks and reasons much like a human mind.

Non-human-like AI, in which the computer program develops a totally non-human sentience, and a non-human way of thinking and reasoning.

Weak Artificial Intelligence:

Weak artificial intelligence research deals with the creation of some form of computer-based artificial intelligence that cannot truly reason and solve problems; such a machine would, in some ways, act as if it were intelligent, but it would not possess true intelligence or sentience. To date, much of the work in this field has been done with computer simulations of intelligence based on predefined sets of rules. Very little progress has been made in strong AI. Depending on how one defines one's goals, a moderate amount of progress has been made in weak AI.

QUESTIONNAIRE
Name: ____________________    Department: ___________________

Q1. What is artificial intelligence?
Ans: It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.
Agree / Disagree

Q2. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
Yes / No

Q3. Is intelligence a single thing, so that one can ask a yes or no question, "Is this machine intelligent?"
Yes / No / It's a mechanism

Q4. When did AI research start?
1950s / 1947 / 1960s

Q5. Does AI aim at human-level intelligence?
Yes / No

Q6. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
Yes / No

Q7. Are computers fast enough to be intelligent?
Yes / No

Responses & Data Interpretation


Q1. Do you agree with the definition of AI?
Agree: 95%, Disagree: 5%
95% of respondents agreed with the definition of AI.

Q2. Isn't there a solid definition of intelligence that doesn't depend on relating it to human intelligence?
Agree: 85%, Disagree: 15%
85% of respondents agreed that intelligence does not depend on human intelligence.

Q3. Is intelligence a single thing, so that one can ask a yes or no question, "Is this machine intelligent?"
Yes: 66%, No: 22%, It's a mechanism: 14%
Most respondents believe that a machine can be intelligent.

Q4. When did AI research start?
Good: 44%, Bad: 56% (knowledge about the history of AI)
44% of respondents knew about the history of AI.

Q5. Does AI aim at human-level intelligence?
Yes: 64%, No: 36%
64% of respondents believe that AI aims at human-level intelligence.

Q6. Might an AI system be able to bootstrap itself to higher and higher level intelligence by thinking about AI?
Yes: 68%, No: 32%
68% of respondents believe an AI system could bootstrap itself to higher levels of intelligence.

Q7. Are computers fast enough to be intelligent?
Yes: 50%, No: 50%
Respondents were evenly split: half think computers are fast enough to be intelligent, while the other half think human intelligence is still faster.

Conclusion:
Artificial Intelligence is the study of the computations that make it possible to perceive, reason, and act. The engineering goal of Artificial Intelligence is to solve real-world problems; the scientific goal of Artificial Intelligence is to explain various sorts of intelligence. Applications of Artificial Intelligence should be judged according to whether there is a well-defined task, an implemented program, and a set of identifiable principles. Artificial Intelligence can help us to solve difficult, real-world problems, creating new opportunities in business, engineering, and many other application areas. Artificial Intelligence sheds new light on questions traditionally asked by psychologists, linguists, and philosophers. A few rays of this new light can help us to be more intelligent.

References:
www.google.com
www.dogpile.com
www.encyclopedia.com
www.wikipedia.com
Acrobat PDF files
