
NPTEL

Video Course on Machine Learning

Professor Carl Gustaf Jansson, KTH

Week 1 Introduction to the Machine Learning course

Video 1.2 Foundation of Artificial Intelligence and Machine Learning
Artificial Intelligence as a research area has 62-year-old roots

The area was named and defined at a 6-8 week workshop in the summer of 1956.
This happened little more than a decade after the advent of the first computer.
A small group of computer scientists gathered at Dartmouth College in New Hampshire, US.
Why did this workshop take place at this point in time?

The bulk of the work in the early days of computer science was focused on low-level matters, applying the new computer tools to very standardized tasks.

I think the answer is that the people who were instrumental in the development of computer science had, from the very beginning, the visions and ambitions now articulated in the agenda for artificial intelligence.

Obviously they thought that, after a decade or so of preliminary work, the original ambitions for the use of computers should be put high on the agenda again.
Defining Statement for Artificial Intelligence

"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.

An attempt will be made to find how to make machines:

- use language
- form abstractions and concepts
- solve kinds of problems now reserved for humans, and
- improve themselves.

We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

John McCarthy, at Dartmouth College in 1956


'Founding Fathers' of Artificial Intelligence in 1956

- Claude Shannon, founder of Information and Communication Theory
- D.M. MacKay, British researcher in Information Theory and brain organization
- Julian Bigelow, chief engineer for the von Neumann computer at Princeton in 1946
- Nathaniel Rochester, author of the first assembler for the first commercial computer
- Oliver Selfridge, named 'the father of Machine Perception'
- Ray Solomonoff, inventor of Algorithmic Probability
- John Holland, inventor of Genetic Algorithms
- Marvin Minsky, key MIT researcher in the early development of AI
- Allen Newell, champion of symbolic AI and inventor of central AI techniques
- Herbert Simon, pioneer in decision-making theory and a Nobel Prize winner
- John McCarthy, inventor of the LISP programming language
Early work in Artificial Intelligence (1950-1960)

A. M. Turing, 1950, "Computing Machinery and Intelligence", Mind 59.
https://www.csee.umbc.edu/courses/471/papers/turing.pdf

In 1955 Allen Newell, Herbert A. Simon and Cliff Shaw created the Logic Theorist, the first program deliberately engineered to mimic the problem-solving skills of a human being. It has been called "the first Artificial Intelligence program". It could prove theorems in Whitehead and Russell's Principia Mathematica. It introduced key artificial intelligence techniques such as list processing, means-ends analysis and heuristic search.

In 1958 John McCarthy created the first version of Lisp, based on the Lambda Calculus and built around list processing. Lisp is the second-oldest high-level programming language still in widespread use today; only Fortran is older, by one year.

Oliver Selfridge created the Pandemonium architecture in 1959, one of the first computational models of pattern recognition in images.

In 1959 Simon, Newell and Shaw created the General Problem Solver (GPS), a computer program intended to work as a universal problem-solving machine. Any problem that can be expressed as a set of well-formed formulas (WFFs) or Horn clauses can, in principle, be solved by GPS.
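The core technique behind GPS is means-ends analysis: find a difference between the current state and the goal, pick an operator whose effects remove that difference, and recursively achieve the operator's preconditions first. A minimal sketch of that idea follows; the "shopping" states and operators are hypothetical toy examples, not taken from GPS itself.

```python
# Means-ends analysis sketch: difference reduction with subgoaling.
# States are dicts of facts; each operator has preconditions and effects.

GOAL = {"at_home": True, "has_milk": True}

# Each operator: (name, preconditions, effects) -- illustrative only
OPERATORS = [
    ("go_to_store", {"at_home": True},  {"at_home": False, "at_store": True}),
    ("buy_milk",    {"at_store": True}, {"has_milk": True}),
    ("go_home",     {"at_store": True}, {"at_store": False, "at_home": True}),
]

def achieve(state, key, value, plan, depth=10):
    """Make state[key] == value, subgoaling on operator preconditions."""
    if state.get(key) == value:
        return True
    if depth == 0:
        return False
    for name, pre, eff in OPERATORS:
        if eff.get(key) == value:
            # Subgoal: satisfy this operator's preconditions first.
            if all(achieve(state, k, v, plan, depth - 1) for k, v in pre.items()):
                state.update(eff)
                plan.append(name)
                return True
    return False

def solve(start, goal, rounds=5):
    """Reduce differences until none remain (or give up after a few rounds)."""
    state, plan = dict(start), []
    for _ in range(rounds):
        diffs = {k for k, v in goal.items() if state.get(k) != v}
        if not diffs:
            return plan
        for k in sorted(diffs):
            achieve(state, k, goal[k], plan)
    return None

print(solve({"at_home": True, "has_milk": False}, GOAL))
# → ['go_to_store', 'buy_milk', 'go_home']
```

Note how achieving "has_milk" temporarily undoes "at_home"; the outer loop re-checks differences each round, which is how this sketch copes with interacting goals.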
Early work in Machine Learning (1950-1960)
Arthur Samuel coined the term Machine Learning in 1959. Samuel joined IBM's Poughkeepsie Laboratory in 1956 and pioneered machine learning in the context of programs that played checkers.
Samuel, Arthur L. (1959). "Some Studies in Machine Learning Using the Game of Checkers". IBM Journal of Research and Development.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.368.2254&rep=rep1&type=pdf

McCulloch and Pitts introduced Neural Networks as a model of computation as early as 1943.
Warren McCulloch and Walter Pitts, "A Logical Calculus of the Ideas Immanent in Nervous Activity". Bulletin of Mathematical Biophysics, Vol. 5, 1943.
http://www.cse.chalmers.se/~coquand/AUTOMATA/mcp.pdf
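The McCulloch-Pitts model treats a neuron as a fixed-weight threshold unit: binary inputs, and an output of 1 when the weighted sum reaches a threshold. A minimal sketch of that idea, with illustrative wiring (not taken from the 1943 paper), shows how such units compute Boolean logic:

```python
# A McCulloch-Pitts threshold unit: fires (1) iff the weighted input sum
# reaches the threshold. Weights are fixed by hand; there is no learning.

def mcp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Boolean gates from single threshold units (illustrative wiring):
AND = lambda a, b: mcp_neuron((a, b), (1, 1), threshold=2)
OR  = lambda a, b: mcp_neuron((a, b), (1, 1), threshold=1)
NOT = lambda a:    mcp_neuron((a,),   (-1,),  threshold=0)

print([AND(1, 1), OR(0, 1), NOT(1)])  # → [1, 1, 0]
```

Because networks of such units can realize any Boolean function, McCulloch and Pitts could argue for neural networks as a general model of computation.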

Marvin Minsky and Dean Edmonds built SNARC in 1951, the first neural network machine able to learn.

Frank Rosenblatt invented the Perceptron in 1957, while working at the Cornell Aeronautical Laboratory.
Frank Rosenblatt, 1957, "The Perceptron, a Perceiving and Recognizing Automaton", Project Para, Cornell Aeronautical Laboratory.
https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf
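Unlike the fixed-weight McCulloch-Pitts units, the perceptron adjusts its weights from labelled examples. A minimal sketch of the classic perceptron learning rule follows, on a hypothetical toy problem (logical AND); Rosenblatt's original was a hardware recognizer, so this is only the textbook form of the rule:

```python
# Classic perceptron learning rule: on each mistake, nudge the weights
# toward the misclassified example (labels are +1 / -1).

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Learn weights w and bias b so that sign(w.x + b) matches each label."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Linearly separable toy data: logical AND with -1/+1 labels.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, -1, -1, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1 for x in X]
print(preds)  # → [-1, -1, -1, 1]
```

The rule is guaranteed to converge on linearly separable data, which is exactly the limitation Minsky and Papert would later make famous.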

John Holland introduced Genetic Algorithms in 1960, inspired by Darwin's theory of evolution.
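A genetic algorithm in Holland's spirit evolves a population of candidate solutions through selection, crossover and mutation. The sketch below uses bit strings and a toy fitness function (count the 1s); all details here are illustrative choices, not Holland's original formulation:

```python
# Minimal genetic algorithm: tournament selection, one-point crossover,
# bit-flip mutation, on the toy "OneMax" problem (maximize the number of 1s).
import random

random.seed(0)

def fitness(bits):
    return sum(bits)  # OneMax: more 1s is better

def evolve(pop_size=20, length=16, generations=40, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, length)           # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # typically at or near the maximum of 16
```

Selection pressure plus recombination drives the population toward high-fitness strings; mutation keeps it from losing diversity entirely.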

Ray Solomonoff published work on non-semantic machine learning in 1956; he termed his first contribution an "Inductive Inference Machine". In 1960 he also published his first work on Algorithmic Probability.
http://world.std.com/~rjs/An%20Inductive%20Inference%20Machine1957.pdf

Thanks for your attention!

The next lecture, 1.3, will be on the topic:

Intelligent Autonomous Systems and Artificial Intelligence
