SEMINAR REPORT ON

ARTIFICIAL INTELLIGENCE

Mech Engg.

Recently, the media has devoted an increasing amount of broadcast time to new technology, much of it aimed at the flurry of advances concerning artificial intelligence (AI). What is artificial intelligence, and what is the media talking about? Are these technologies beneficial to our society, or mere novelties among business and marketing professionals? Medical facilities, police departments, and manufacturing plants have all been changed by AI, but how? These questions, and many others, concern a general public left behind by rapidly advancing computer technology. Artificial intelligence is defined as the ability of a machine to think for itself. Scientists and theorists continue to debate whether computers will ever actually be able to think for themselves (Patterson 7); the generally accepted view is that computers do think in a limited sense and will think more in the future. AI has grown rapidly in the last ten years, chiefly because of advances in computer architecture. The term "artificial intelligence" was coined in 1956 by a group of scientists at their first meeting on the topic (Patterson 6). Early attempts at AI were neural networks modeled after those in the human brain; success was minimal at best because of the lack of computing power needed to evaluate such large systems of equations.

AI is achieved using a number of different methods. The more popular implementations include neural networks, chaos engineering, fuzzy logic, knowledge-based systems, and expert systems. Using any one of these design structures requires a specialized computer system; for example, Anderson Consulting applies a knowledge-based system for commercial loan officers using multimedia (Hedburg 121). Even more exotic is the software involved: since very few pre-written AI applications exist, each company has to write its own software for its particular problem. An easier way around this obstacle is to design an add-on.

Fuzzy logic's ability to perform multiple operations allows it to be integrated into neural networks. Two very powerful intelligent structures make for an extremely useful product. This integration takes the pros of fuzzy logic and neural networks and eliminates the cons of both systems (Liebowitz 113). The result is a neural network with the ability to learn using fuzzy logic instead of hard, concrete facts. Allowing fuzzier input to be used in the network, instead of discarding it, greatly decreases the learning time of such a network.

Expert systems have proven effective in a number of problem domains that usually require human intelligence (Patterson 326). They were developed in university research labs in the 1960s and 1970s. Expert systems are used primarily as specialized problem solvers, and the areas they can cover are almost endless: law, chemistry, biology, engineering, manufacturing, aerospace, military operations, finance, banking, meteorology, geology, and more. Expert systems use knowledge instead of data to control the solution process; "in knowledge lies the power" is a theme repeated when building such systems. These systems are capable of explaining the answer to the problem and why any requested knowledge was necessary. Expert systems use symbolic representations for knowledge and perform computations through manipulations of the different symbols (Patterson 329). But perhaps the greatest advantage of expert systems is their ability to recognize their own limits and capabilities.

Now that each type of AI implementation has been discussed, how do we use all this technology? Foremost, neural networks are used mainly for internal corporate applications on various types of problems. For example, Troy Nolen was hired by a major defense contractor to design programs for guiding flight and battle patterns of the YF-22 fighter. His software runs on five on-board computers and makes split-second decisions based on data from ground stations, radar, and other sources. Secondly, fuzzy logic has many applications that hit close to home. Home appliances win most of the ground, with AI-enhanced washing machines, vacuum cleaners, and air-conditioners. Hitachi and Matsushita manufacture washing machines that automatically adjust for load size and how dirty the articles are (Shine 57); such a machine washes until clean, not just for ten minutes. Matsushita also manufactures vacuum cleaners that adjust their suction power according to the volume of dust and the nature of the floor.

Expert systems are prevalent all over the world. This proven technology has made its way into almost every field where human experts work. Expert systems can even show an employee how to be an expert in a particular occupation. A Massachusetts company, Wisdom Simulators, specializes in teaching good judgment to new employees or trainees, selling software that simulates nasty job situations in the business world. The ability to learn before the need arises attracts many customers to this type of software (Nadis 8). Expert systems have also been applied in medical facilities, diagnosis of mechanical devices, planning of scientific experiments, military operations, and teaching students specialized tasks.

The field of artificial intelligence was founded at a conference on the campus of Dartmouth College in the summer of 1956. The first generation of AI researchers believed that it was possible to create a machine as intelligent as (or more intelligent than) a human being, and that it would happen soon, in no more than a few decades.[1] They were given millions of dollars to make this vision come true, but it soon became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism of Sir James Lighthill and ongoing pressure from Congress, DARPA and the British government stopped funding undirected research into artificial intelligence. Seven years later, the Japanese government and American industry would provide AI with billions of dollars, but again the investors would be disappointed, and by the late 80s the funding would dry up again. This cycle of boom and bust, of AI winters and summers, continues to the present day. Undaunted, there are those who make extraordinary predictions even now.

Automatons and early robotics
Machines that acted like thinking human beings were built as early as 800 BCE, when a statue of Amun in the ancient Egyptian city of Napata could raise its arm and speak.[9] Perhaps the most legendary was the Turk, a mechanism built in 1769 that appeared to play chess intelligently. This was an illusion, however: its moves were made not by the machine itself but by a chess master hidden in a cabinet underneath.[10] Automatons without intelligence, such as the Turk, have had little to no effect on the development of modern AI. But they were designed to appear intelligent, and (like the mechanical men of myth and fiction) this indicates a very ancient and pervasive fascination with the idea of a machine that thinks like a human being.[11]

Formal reasoning and logic


In the 17th century, Thomas Hobbes, René Descartes and Gottfried Leibniz explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes famously wrote in Leviathan: "reason is nothing but reckoning". Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."


The ENIAC, at the Moore School of Electrical Engineering. (U.S. Army Photo)

The birth of artificial intelligence 1943-1956

The IBM 702: the machine that Arthur Samuel taught to play checkers.

The first computers cost millions of dollars, filled entire rooms, and had less computing power than a modern clock or thermostat. A number of researchers from many fields (mathematics, psychology, engineering, and even political science) instinctively recognized that a machine that could manipulate numbers could also manipulate symbols, and that the manipulation of symbols could well be the essence of human thought. In 1956, at a conference on the Dartmouth campus, the field of artificial intelligence was born.

Cybernetics and early neural networks


In the early forties, two scientists attempted to create a mathematical description of the human brain. Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network. Pitts and McCulloch were part of a research program called "cybernetics" that lasted from the 1940s (when Norbert Wiener defined the term) until the 1960s. Researchers developed robots, like W. Grey Walter's turtles and the Johns Hopkins Beast, that displayed rudimentary intelligence. These machines did not use computers or digital electronics; they were controlled entirely by analog circuitry. One of the students inspired by Pitts and McCulloch's work was Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC. Minsky would become one of the most important leaders and innovators in AI for the next 50 years.
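The idealized neuron can be sketched in a few lines. This is an illustrative reconstruction of the idea, not code from the original 1943 work: a McCulloch-Pitts unit fires when the weighted sum of its binary inputs reaches a threshold, which is enough to compute simple logical functions such as AND and OR.

```python
# A McCulloch-Pitts neuron fires (outputs 1) when the weighted sum of its
# binary inputs reaches a threshold; otherwise it stays silent (outputs 0).

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# AND: both inputs must be active to reach the threshold of 2.
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

# OR: either active input suffices to reach the threshold of 1.
def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
```

Networks of such units, wired together, are what Pitts and McCulloch showed could implement arbitrary logical propositions.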

Micro-worlds

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on the so-called "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface. This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU, which could communicate in ordinary English sentences, plan operations, and execute them.


The optimism
The first generation of AI researchers made these predictions about their work:

1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."

1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

1970, Marvin Minsky (in Life magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."[51]

The Money
In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund Project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. ARPA continued to provide three million dollars a year until the 70s. ARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963). Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[54] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years. The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, felt that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them. The freewheeling atmosphere at MIT gave birth to the hacker culture. However, this "hands off" approach would soon come to an end.

Boom 1980-1987
In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world, and, in those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.


A Hopfield net with four nodes.

The revival of connectionism


In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart began to work with a new method for training neural networks called "backpropagation" (it had actually been discovered years earlier by Paul Werbos but had been completely ignored at the time it was published). These two discoveries revived the field of connectionism, which had been largely abandoned since 1970. The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.
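The core behavior of a Hopfield net can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Hopfield's original formulation: one bipolar (+1/-1) pattern is stored with the Hebbian rule, and a corrupted copy is recovered by repeatedly updating the network state.

```python
import numpy as np

# Store patterns in a symmetric weight matrix via the Hebbian rule,
# then recall by iterating the update rule until the state settles.

def train(patterns):
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p)
        W += np.outer(p, p)        # Hebbian learning: strengthen co-active pairs
    np.fill_diagonal(W, 0)         # no self-connections
    return W

def recall(W, state, steps=10):
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)   # synchronous threshold update
    return s

stored = [1, 1, -1, -1, 1, -1, 1, 1]
W = train([stored])
noisy = [1, -1, -1, -1, 1, -1, 1, 1]      # one bit flipped
print(recall(W, noisy))                    # recovers the stored pattern
```

The recall dynamics drive the state toward a stored pattern, which is why Hopfield nets are described as content-addressable memories.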

The rise of expert systems


An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students: Dendral, begun in 1965, identified compounds from spectrometer readings, and MYCIN, developed in 1972, diagnosed infectious blood diseases. Together they demonstrated the feasibility of the approach.

Expert systems avoided many of the pitfalls of earlier AI programs by restricting themselves to a small domain of specific knowledge (and thus avoiding the commonsense knowledge problem). An expert system could do only one thing, but it did it well. Their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.
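The rule-based design described above can be sketched as a tiny forward-chaining loop: rules fire whenever their conditions are satisfied, adding new facts until nothing more can be derived. The medical-style rules and facts below are invented for illustration and do not come from any real expert system.

```python
# Each rule is (set of condition facts, conclusion fact). Forward chaining
# repeatedly fires any rule whose conditions are all known, until no new
# facts can be derived. Rules and facts here are purely illustrative.

rules = [
    ({"fever", "rash"}, "suspect_infection"),
    ({"suspect_infection", "recent_travel"}, "order_blood_test"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # fire the rule
                changed = True
    return facts

derived = forward_chain({"fever", "rash", "recent_travel"}, rules)
print(derived)   # includes 'suspect_infection' and 'order_blood_test'
```

Because each conclusion is traceable to the rules that fired, a system built this way can also explain its reasoning, as the report notes real expert systems do.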

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986. Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as Teknowledge and Intellicorp.

Cyc: an encyclopedia of common sense


In 1988, Doug Lenat made this announcement: "I would like to present a surprisingly compact, powerful, elegant set of reasoning methods that form a set of first principles which explain creativity, humor and common sense reasoning ... but, sadly, I don't believe they exist. So, instead, this paper will tell you about Cyc, the massive knowledge base project that we've been working on at MCC for the last four years." Lenat wanted to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. He believed, as many others do, that there is no shortcut: the only way for machines to know the meaning of the concepts we use is to teach them, one concept at a time, by hand. The project was originally expected to take only two person-centuries, but all indications are that it will take much longer.

AI 1993-present
The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power, and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. AI was more cautious, and more successful, than it had ever been.


Garry Kasparov playing against Deep Blue, the first machine to win a chess match against a reigning world champion.


Intelligent agents

A new paradigm called "intelligent agents" became widely accepted during the 90s. Although earlier researchers had proposed modular "divide and conquer" approaches to AI, the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell and others brought concepts from control theory and economics into the study of AI.[114] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete. An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems; the most complicated would be rational, thinking human beings. The intelligent agent paradigm defines AI research as "the study of intelligent agents." This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence to study all kinds of intelligence. The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.
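The perceive-and-act loop at the heart of this paradigm can be sketched very simply. The thermostat-style percepts and utility numbers below are a toy example of our own, not part of any published agent architecture: the agent maps each percept to the action with the highest estimated utility.

```python
# A minimal rational agent: given a percept (a temperature reading),
# estimate a utility for each available action and pick the best one.
# The utility numbers are invented purely for illustration.

def agent(temperature):
    """Choose the action with the highest estimated utility for this percept."""
    utilities = {
        "heat": 1.0 if temperature < 20 else -1.0,
        "cool": 1.0 if temperature > 25 else -1.0,
        "idle": 0.0,
    }
    return max(utilities, key=utilities.get)

print(agent(15))  # heat
print(agent(30))  # cool
print(agent(22))  # idle
```

Richer agents replace the hand-written utility table with learned models of the environment, but the perceive-evaluate-act cycle stays the same.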

AI behind the scenes


Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems, and their solutions proved useful throughout the technology industry: machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis, and Google's search engine, to name a few. The field of AI receives little or no credit for these successes. No longer considered a part of AI, each has been reduced to the status of just another item in the tool chest of computer science. Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough it's not labeled AI anymore." (This is called the "AI effect" and is expressed most succinctly by Tesler's Theorem: "AI is whatever hasn't been done yet.") In fact, many researchers in AI today deliberately call their work by other names, such as informatics, knowledge-based systems or computational intelligence. In part, this may be because they consider their field to be fundamentally different from AI, but the new names also help to procure funding. In the commercial world, at least, the failed promises of the AI Winter continue to haunt AI research; the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."

The rise and fall of AI in public perception


The field was born at a conference on the campus of Dartmouth College in the summer of 1956. Those who attended would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. They and their students wrote programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle 60s their research was heavily funded by DARPA, and they were optimistic about the future of the new field:

1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."

1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."

These predictions, and many like them, would not come true. The researchers had failed to recognize the difficulty of some of the problems they faced: the lack of raw computer power, the intractable combinatorial explosion of their algorithms, the difficulty of representing commonsense knowledge and doing commonsense reasoning, the incredible difficulty of perception and motion, and the failings of logic. In 1974, in response to the criticism of England's Sir James Lighthill and ongoing pressure from Congress to fund more productive projects, DARPA cut off all undirected, exploratory research in AI. This was the first AI Winter. In the early 80s, the field was revived by the commercial success of expert systems, and by 1985 the market for AI had reached more than a billion dollars. Minsky and others warned the community that enthusiasm for AI had spiraled out of control and that disappointment was sure to follow. Minsky was right: beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, more lasting AI Winter began. In the 90s AI achieved its greatest successes, albeit somewhat behind the scenes. Artificial intelligence was adopted throughout the technology industry, providing the heavy lifting for logistics, data mining, medical diagnosis and many other areas. The success was due to several factors: the incredible power of computers today (see Moore's law), a greater emphasis on solving specific subproblems, the creation of new ties between AI and other fields working on similar problems, and above all a new commitment by researchers to solid mathematical methods and rigorous scientific standards.

1961-65 -- A. L. Samuel developed a program which learned to play checkers at Masters level.

1965 -- J. A. Robinson introduced resolution as an inference method in logic.

1965 -- Work on DENDRAL was begun at Stanford University by J. Lederberg, Edward Feigenbaum and Carl Djerassi. DENDRAL is an expert system which discovers molecular structure given only information about the constituents of the compound and mass spectra data. DENDRAL was the first knowledge-based expert system to be developed.

1968 -- Work on MACSYMA was initiated at MIT by Carl Engleman, William Martin and Joel Moses. MACSYMA is a large interactive program which solves numerous types of mathematical problems. Written in LISP, MACSYMA was a continuation of earlier work on SIN, an indefinite integration solving program.

Approaches to AI research

Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. This is also known as symbolic AI, logical AI, neat AI and Good Old Fashioned Artificial Intelligence (GOFAI). (Also see semantics.) Methods include:

Expert systems: apply reasoning capabilities to reach a conclusion. An expert system can process large amounts of known information and provide conclusions based on them.

Case-based reasoning: stores a set of problems and answers in an organized data structure called cases. A case-based reasoning system, upon being presented with a problem, finds the case in its knowledge base most closely related to the new problem and presents its solution as output, with suitable modifications.

Bayesian networks

Behavior-based AI: a modular method of building AI systems by hand.
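The retrieval step of case-based reasoning described above can be sketched as nearest-case lookup: each past case is a feature vector with an attached solution, and a new problem is answered with the solution of the closest stored case. The machine-fault cases below are invented for illustration.

```python
# Case-based retrieval: find the stored case whose features are closest
# (by squared Euclidean distance) to the new problem, and reuse its
# solution. Cases and features here are purely illustrative.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def retrieve(case_base, problem):
    features, solution = min(case_base, key=lambda c: distance(c[0], problem))
    return solution

case_base = [
    ((70, 1, 0), "replace_filter"),     # (temperature, vibration, noise)
    ((90, 0, 1), "check_coolant"),
    ((60, 1, 1), "inspect_bearing"),
]
print(retrieve(case_base, (88, 0, 1)))  # check_coolant
```

A full CBR system would then adapt the retrieved solution to the new problem and store the outcome as a fresh case; only the retrieval step is shown here.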

Computational intelligence involves iterative development or learning (e.g., parameter tuning in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI, scruffy AI and soft computing. Subjects in computational intelligence, as defined by the IEEE Computational Intelligence Society, mainly include:

Neural networks: trainable systems with very strong pattern recognition capabilities.

Fuzzy systems: techniques for reasoning under uncertainty; widely used in modern industrial and consumer product control systems, and capable of working with concepts such as 'hot', 'cold', 'warm' and 'boiling'.
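The fuzzy-set idea behind such systems can be sketched with membership functions: a temperature belongs to concepts like 'cold', 'warm' and 'hot' to a degree between 0 and 1, rather than falling crisply into one category. The triangular membership shapes below are an arbitrary illustrative choice, not taken from any particular product.

```python
# Fuzzification: map a crisp reading to degrees of membership in fuzzy
# concepts. The membership function shapes and ranges are our own choice.

def triangular(x, left, peak, right):
    """Triangular membership: 0 outside [left, right], rising to 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def fuzzify(temp):
    return {"cold": triangular(temp, -10, 0, 15),
            "warm": triangular(temp, 10, 20, 30),
            "hot":  triangular(temp, 25, 35, 50)}

print(fuzzify(18))  # {'cold': 0.0, 'warm': 0.8, 'hot': 0.0}
```

A fuzzy controller then combines these graded memberships with rules like "if warm, reduce heating slightly", which is how appliances reason with vague concepts.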


Evolutionary computation: applies biologically inspired concepts such as populations, mutation and survival of the fittest to generate increasingly better solutions to the problem. These methods most notably divide into evolutionary algorithms (e.g., genetic algorithms) and swarm intelligence (e.g., ant algorithms).
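The evolutionary loop just described can be sketched as a tiny genetic algorithm: a population of candidate bit-strings evolves through selection of the fittest and random mutation. Fitness here simply counts 1-bits (the toy "OneMax" problem), chosen purely for illustration.

```python
import random

# A minimal genetic algorithm: sort the population by fitness, keep the
# fittest half (survival of the fittest), and fill the rest of the
# population with mutated copies of the survivors.

def fitness(bits):
    return sum(bits)                     # OneMax: count the 1-bits

def mutate(bits, rate=0.1):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

def evolve(pop_size=20, length=16, generations=50, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + offspring
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))   # best fitness never decreases, thanks to elitism
```

Real evolutionary algorithms add crossover between parents and more nuanced selection, but the generate-evaluate-select cycle shown here is the common core.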

Applications of artificial intelligence

Business


Banks use artificial intelligence systems to organize operations, invest in stocks, and manage properties. In August 2001, robots beat humans in a simulated financial trading competition (BBC News, 2001). A medical clinic can use artificial intelligence systems to organize bed schedules, make staff rotations, and provide medical information. Many practical applications depend on artificial neural networks, which pattern their organization in mimicry of a brain's neurons and have been found to excel at pattern recognition. Financial institutions have long used such systems to detect charges or claims outside the norm, flagging these for human investigation. Neural networks are also widely deployed in homeland security, speech and text recognition, medical diagnosis (such as in Concept Processing technology in EMR software), data mining, and e-mail spam filtering. Robots have become common in many industries. They are often given jobs that are considered dangerous to humans; they have proven effective in jobs that are very repetitive, where a lapse in concentration may lead to mistakes or accidents, and in jobs which humans may find degrading. General Motors uses around 16,000 robots for tasks such as painting, welding, and assembly. Japan is the world leader in using and producing robots. In 1995, 700,000 robots were in use worldwide, over 500,000 of which were from Japan.

Toys and games


The 1990s saw some of the first attempts to mass-produce domestically aimed types of basic artificial intelligence for education or leisure. This prospered greatly with the Digital Revolution, and helped introduce people, especially children, to a life of dealing with various types of AI, specifically in the form of Tamagotchis and Giga Pets, the Internet (for example, basic search engine interfaces), and the first widely released robot, Furby. A mere year later, an improved type of domestic robot was released in the form of Aibo, a robotic dog with intelligent features and autonomy.


List of applications
Typical problems to which AI methods are applied:

Pattern recognition
    Optical character recognition
    Handwriting recognition
    Speech recognition
    Face recognition
Computer vision, Virtual reality and Image processing
Game theory and Strategic planning
Game artificial intelligence and Computer game bot
Artificial creativity
Natural language processing, Translation and Chatterbots

Other fields in which AI methods are implemented:

Artificial life
Automated reasoning
Automation
Biologically-inspired computing
Colloquis
Concept mining
Data mining
Knowledge representation
Robotics
    Behavior-based robotics
    Cognitive robotics
    Cybernetics
    Developmental robotics
    Epigenetic robotics
    Evolutionary robotics
Hybrid intelligent system
Intelligent agent
Intelligent control
Litigation
Semantic Web

CONCLUSION
Autobot is not a real A.I.; it is entirely hypothetical, but it, and its features, can be valuable as a model for designing artificial intelligence. It has an adequate model of the world, made up of learned and tested information; it has goals which direct its behaviour, and is able to create, modify, and improve those goals; it is capable of problem solving; it is capable of deducing new information; and it is capable of formulating strategies to achieve its goals, and of adjusting those strategies as necessary. We can look at Autobot as a model for designing other A.I.s, which can use the same basic architecture and design features to approach their own tasks and problems.

While it requires an investment in some infrastructure, and in creating the A.I. itself, using Autobot to control traffic networks is hugely more efficient than letting humans control traffic lights and maintenance. It is simply not possible for a team of human controllers to adapt to changing circumstances, or to micromanage things like case-by-case optimal red light lengths, as effectively as Autobot. This is true of an A.I. in any position: it will require some investment in infrastructure to set it up with the sensors needed to retrieve enough information to create an adequate model of the world, but the things it is capable of make it more than worth the expenditure.

There should be no worry about A.I. making humans redundant. The friendliness supergoal of Autobot (or any A.I.) will cause it to value humanity, individual humans, and their right to autonomy. An A.I. coordinating a city could do so in tandem with, not instead of, humans. It is possible for a team of people to control all of the traffic lights in a city, but people get bored and sick and quit, and they need breaks for lunch and cannot work continuously. A machine that controls the traffic signals can operate forever, never takes a day off, and never needs to be paid. Since it respects human autonomy, an A.I. with a friendliness supergoal will only take over the jobs delegated to it. It would be more efficient, and safer, to make Autobot directly responsible for piloting all of the cars (as seen in I, Robot), but humans are not currently willing to relinquish control of their cars, and so Autobot seeks to fulfill its goals within this limitation rather than challenging it.
