
UNIT 7 - The Social Implications of

Artificial Intelligence and Expert Systems

TABLE OF CONTENTS

CONTENT .......................................................................................................................... 2

1. A BRIEF HISTORY OF ARTIFICIAL INTELLIGENCE ........................................... 2


1.1 THE TWO BRANCHES OF ARTIFICIAL INTELLIGENCE .................................................... 3
1.2 ESSENTIAL ABILITIES FOR INTELLIGENCE .................................................................... 3
2. IS HUMAN INTELLIGENCE A SYMBOL MANIPULATING ACTIVITY?................................... 4
2.1 THE CHINESE ROOM SCENARIO ................................................................................... 4
2.2 SYSTEMS THEORISTS .................................................................................................. 5

3. EXPERT SYSTEMS AND LEGAL PROBLEMS ......................................................... 5


3.1 EXPERT SYSTEMS DEFINITION ..................................................................................... 5
3.2 EXPERT SYSTEMS AND LEGAL PROBLEMS ................................................................... 6
3.3 EXPERT SYSTEMS AND LIABILITY ................................................................................ 7

4. REPLACE HUMANS WITH INTELLIGENT MACHINES? ...................................... 7

5. ARTIFICIAL INTELLIGENCE AND THE DEGRADATION OF THE HUMAN


CONDITION ...................................................................................................................... 8

6. BEHAVIOURAL NEUROSCIENCE............................................................................. 9
6.1 ENCODING ETHICAL PRINCIPLES INTO A CONSCIOUS COMPUTER .................................. 9

7. FURTHER ETHICAL ISSUES INVOKED BY ARTIFICIAL INTELLIGENCE..... 10


7.1 ARTIFICIAL INTELLIGENCE AND MILITARY ESTABLISHMENT FUNDING ....................... 10
7.2 THE THIRD WORLD AND EXPERT SYSTEMS ................................................................ 10

8. SUMMARY................................................................................................................... 11

BIS2061 1 Unit 7
Content

1. A Brief History of Artificial Intelligence


The dream of creating mechanical minds has a long history. European literature
abounds with stories of individuals who have accomplished this goal, including the
Gothic thriller Frankenstein by Mary Shelley. It has haunted philosophy since the early
17th century when Rene Descartes claimed that understanding consists of building
appropriate representations. The first attempt to express the laws of thought on a
logical basis was made by the English mathematician G. Boole in the 19th century.
Boole's reduction of logic to a series of yes / no decisions became a cornerstone of
20th century philosophy and science (Boolean Logic).

At around the same time, C. Babbage invented his difference engine, a device for
tabulating polynomial functions (a forerunner of today's digital computer
systems). Fifty years later, G. Frege, a German mathematician, invented the first
formally defined axiomatic logic system, while Whitehead and Russell wrote their
Principia Mathematica, whose goal was to demonstrate that the roots of all
mathematics lies in the basic laws of logic.

All these efforts culminated in one of the most influential movements of the early
20th century, Logical Positivism. The movement attempted to define the nature of
knowledge with the same formal rigour that was then popular in the world of
mathematics. One very influential work of that time was Wittgenstein's Tractatus
Logico-Philosophicus (1921). The book is a formal argument about the connection
between what can be said and what can be thought or known. To some extent
Wittgenstein already forges a link between human thought and computation. The
initial enthusiasm and optimism of logical positivism soon proved untenable and
Wittgenstein, in his later philosophy, came to reject most of the theses of his
Tractatus.

In 1936, A. Turing, an English mathematician, restated the assertion that thought


could be reduced to computation. Turing devised a formal definition of computation
as programs executed by an automaton (the Turing machine). Since then, Turing
machines have served as our predominant model of computation. Turing and the
mathematician A. Church, independently, also reframed one of Wittgenstein's key
assertions as a hypothesis known as the Church-Turing Thesis:
‘If a problem that could be presented to a Turing machine is not solvable, then it is
also not solvable by human thought’.

From this assumption, it would follow that if humans can solve problems or behave
intelligently, then it should ultimately also be possible to build machines to exhibit the
same kind of behaviour. This is the cornerstone of Artificial Intelligence. Turing also
later proposed an operational definition of intelligence, the so-called *Turing Test (see
below).
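Turing's notion of a machine driven by a finite table of state transitions can be sketched in a few lines of Python. This is a hypothetical toy machine that simply inverts a binary string, not an example from this unit; the transition-table format (state, symbol) -> (new state, symbol to write, move) is an assumption of the sketch.

```python
# A minimal sketch of a Turing machine. The transition table maps
# (state, symbol read) to (new state, symbol to write, head move).

def run_turing_machine(transitions, tape, state="start", steps=1000):
    """Execute the transition table on a tape, bounded by a step limit."""
    tape = dict(enumerate(tape))             # position -> symbol
    pos = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")          # "_" is the blank symbol
        state, write, move = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# A toy machine: invert every bit, moving right, then halt on the blank.
invert = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(invert, "1011"))  # -> 0100
```

On the Church-Turing view, any effectively computable procedure can in principle be expressed as such a table, however large.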

A number of further crucial developments occurred in the late 1940s, when the first
digital computer systems were built, following J. von Neumann's stored program
idea. These machines served to demonstrate how logic and arithmetic could work
together to describe computational procedures. Cybernetics, the investigation of
feedback loops in engineering, science and organisms, was also emerging as a new
field, holding much promise for the study of intelligence.

The birth of artificial intelligence as a research program with an identity of its own is
often linked to the famous Dartmouth Summer School on Artificial Intelligence,
where a number of researchers actively engaged in thinking about "thinking
machines" met in 1956. The four most influential participants were Herbert Simon,
Allen Newell, John McCarthy and Marvin Minsky. While cybernetics centred on the
study of essentially low level phenomena, such as feedback control and neural
networks, artificial intelligence started with a much more ambitious, high level view
of thought as a symbol processing activity performed by the human problem solver.

*The Turing Test


Here two parties are allowed to communicate through some impersonal medium
such as a Teletype terminal. A program may then be called intelligent if it can
fool the human partner into believing that he/she is conversing with another
human. (Kreutzer & McKenzie, 1991)

1.1 The Two Branches of Artificial Intelligence

‘AI consists of two branches of research:


1 One branch which attempts to shed light on the nature of human intelligence by
simulating it, or components of it, with the eventual aim of totally replicating it,
or even surpassing it; and
2 Another branch which attempts to build expert systems that exhibit intelligent
behaviour regardless of its resemblance to human intelligence.’
(Forester and Morrison, 1990)

The latter school is particularly concerned with the construction of intelligent tools for
assisting humans in complex tasks such as medical diagnosis, chemical analysis, oil
exploration and fault diagnosis in machinery. Other activities that fall under these two
branches of endeavour include attempts to build:
§ Systems with visual perception
§ Systems that understand natural language
§ Systems that demonstrate machine learning capabilities
§ Systems that can manipulate objects (robotics)
§ Systems that can provide intelligent tuition
§ Systems that can play games

1.2 Essential Abilities for Intelligence


No one knows where the borderline between non-intelligent behaviour and intelligent
behaviour lies; in fact, to suggest that a sharp borderline exists is probably unwise.
According to Douglas Hofstadter, the essential abilities for intelligence include:

§ To respond to situations very flexibly
§ To take advantage of fortuitous circumstances
§ To make sense out of ambiguous or contradictory messages
§ To recognise the relative importance of different elements of a situation
§ To find similarities between situations despite differences which may separate
them
§ To draw distinctions between situations despite similarities which may link them
§ To synthesise new concepts by taking old concepts and putting them together in
new ways
§ To come up with ideas which are novel

Now do Review Questions 1 and 2

2. Is Human Intelligence a Symbol Manipulating Activity?

Many AI researchers are adamant that human intelligence is a symbol manipulating


activity that can be simulated (at least in part) by computational means. In other
words, as intelligent beings, humans have internal symbols or processes that have
external referents and associated meanings, and by manipulating those symbols in rule
governed ways humans can come to exhibit meaningful behaviour in a dynamic
environment. One of the most powerful arguments against this view is John Searle's
so called Chinese Room scenario.

2.1 The Chinese Room Scenario

Searle, a philosopher and long time sceptic of the claims for AI, proposes the
following thought experiment:
‘Suppose that a man who knows no Chinese is inside a room, which has a gap under
the door, and through this gap he receives sheets of paper from someone outside. No
other form of communication is possible. The sheets of paper have Chinese symbols
written on them, and the task before this individual is to produce appropriate Chinese
symbols in reply. To do this, he simply looks up the incoming symbols in a rule book
and writes down the symbols that the rules prescribe. He then passes these under the
door to the person waiting outside.’

Searle's claim is that although the man in the room has manipulated symbols in such a
way that, to the person outside, the replies appear to come from someone fluent in
Chinese, in no sense could the man be said to understand Chinese. He has simply
followed rules in order to change one particular input format into a desired output
format, and this is essentially what digital computers do. Hence, any claim that rule
governed symbol manipulation can allow a computer to understand language, or more
broadly, exhibit intelligence is totally without foundation. Humans may manipulate
symbols, but in communicating or demonstrating intelligence in other ways they must
be doing other things as well.

One of the founding principles of AI is that programs are formal representations and
are therefore executable on any form of computational equipment. Furthermore, since
AI supporters argue that the brain is simply a form of computational device, then the

program that executes within the brain (the product of which is the human mind) must
also be able to be executed on other forms of computational machinery, such as
digital computers. Therefore, in order to replicate the mind, all we need to do is
discover the nature of the program that executes within the brain - we can then run it
on a digital computer and replicate a human mind.

Searle's criticism of this principle is that it essentially disregards the physical


architecture on which the program executes. In his view:
‘Our mental states are outcomes of the physiology of the brain: the mind is not a
program that can be executed on any computer, instead, our mind emerges as a result
of the particular neurophysiological properties of our brain.’

2.2 Systems Theorists

Searle is not saying that human intelligence cannot be recreated. Systems theorists
advocate thinking of the mind itself as an emergent property of the brain, i.e. as a
physical organ in action. A good analogy is to argue:
‘That whirlpools, steam, ice, raindrops, snowflakes and sleet are phenomena, which
emerge from the physical properties and characteristics of the water molecule.
Similarly, a human mind is a phenomenon, which emerges from the structural,
electrophysiological and chemical qualities and processes of the human brain.’

Now do Review Question 3

3. Expert Systems and Legal Problems

3.1 Expert Systems Definition

In essence, expert systems are programs that encapsulate an expert's, or several


experts' knowledge of a particular domain in a computer processable form. From this
knowledge base, inferences may then be drawn which may equal or exceed the quality
of similar inferences made by human experts. (Forester and Morrison, 1990)

Other definitions of expert systems include (Alex Goodall):


‘An expert system is a computer system that uses a representation of human expertise
in a specialist domain in order to perform functions similar to those normally
performed by a human expert in that domain. The system operates by applying an
inference mechanism to a body of specialist expertise represented in the form of
knowledge.’

Such systems have been applied to many problem areas, for example:
§ The analysis of chemical compounds
§ The diagnosis and treatment of infectious diseases
§ The configuration of computer systems for shipment
§ Identifying areas for mineral exploration and mining

For the most part, expert systems are collections of rules that have been extracted
from an expert by a knowledge engineer and take the form of IF...THEN
statements, semantic networks, frames or predicate logic. Forester and Morrison
(1990) provide the example of an expert system constructed for fault diagnosis of jet
engines that utilises IF...THEN statements:

IF the engine stalled in flight


AND the aircraft's wing was at a high or excessive angle of attack at low
speed
AND the engine subsequently restarted at a normal angle of attack
THEN the engine may have suffered a compressor stall due to inadequate airflow into
the engine, caused by the aircraft being close to stalling

Regardless of the form in which knowledge is stored in an expert system's
knowledge base, the essential nature of expert systems, the application of deductive
and often inductive methods to a body of knowledge, remains unchanged. The real benefit of an
expert system occurs in applications of great complexity where such systems can
supply the appropriate intervention, therapy or repair procedures for a particular case
in hand. In order to achieve this, expert systems not only use a knowledge base and an
inference engine to operate on that knowledge, but they also usually provide an
explanatory interface that justifies their conclusion. This is achieved by explaining the
system's line of reasoning with relevant probabilities for each of the conclusions it
draws.
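The jet engine rule above can be sketched as a toy forward-chaining matcher that also records which conditions justified each conclusion, a crude version of the explanatory interface just described. The rule wording follows the unit's example, but the code itself is an illustrative sketch, not taken from any real expert-system shell, and the function and fact names are invented for the purpose.

```python
# A minimal sketch of forward-chaining rule evaluation. Each rule is a pair
# (conditions, conclusion); a rule fires when all of its conditions are
# present in the set of known facts.

def diagnose(facts, rules):
    """Return every fired conclusion paired with its justifying conditions."""
    conclusions = []
    for conditions, conclusion in rules:
        if all(c in facts for c in conditions):
            conclusions.append((conclusion, conditions))
    return conclusions

# The jet engine rule from the unit text, paraphrased as conditions.
rules = [
    (["engine stalled in flight",
      "high angle of attack at low speed",
      "engine restarted at normal angle of attack"],
     "possible compressor stall due to inadequate airflow"),
]

facts = {"engine stalled in flight",
         "high angle of attack at low speed",
         "engine restarted at normal angle of attack"}

for conclusion, why in diagnose(facts, rules):
    print(conclusion, "-- because:", "; ".join(why))
```

A full expert system would add certainty factors or probabilities to each conclusion, as the text notes, but the knowledge base / inference engine split is already visible in this sketch.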

3.2 Expert Systems and Legal Problems

Note:
Negligence is a failure to act as a reasonable person would under the same
circumstances.
Malpractice is a failure to demonstrate the minimum level of competence required of
a professional.

A general case that has implications for all expert systems is where a professional
uses an expert system containing the codified knowledge of another professional. For
example, imagine that a doctor uses a medical diagnosis expert system and, as a result
of the system's faulty knowledge, the patient is misdiagnosed and dies.
Consider:

§ The doctor who supplied the knowledge could foresee that other doctors would
use it, yet at the same time, an attribution of liability could effectively discourage
any doctor from helping to construct such expert systems. In these circumstances,
experts may require software companies to indemnify them against liability for errors
or other inadequacies in the knowledge they supply
§ The doctor using this system could be liable, especially if it was discovered that
he/she failed to exercise professional judgement, or if he/she used the system
contrary to the manufacturer's instructions
§ A doctor may also be liable if he/she failed to use such a system should it be
available, especially if it can be demonstrated that it would have improved patient
care

3.3 Expert Systems and Liability

The doctrine of strict liability requires that one who sells a product in a defective
condition, that is, unreasonably dangerous to the user, is subject to liability for the
physical harm caused to the ultimate user. Injured parties can thereby claim
compensation from the manufacturer or any other party in the chain of distribution.
This removes the need to demonstrate that the manufacturer or distributors acted
negligently. Only the defect, which rendered the product unreasonably dangerous,
need be demonstrated.

However, this doctrine of strict liability is problematic when applied to expert systems.
First, the doctrine only applies to physical harm to persons and property. Some
applications of expert systems will not involve this, for example, a faulty mineral
exploration and mining program.
Second, the doctrine does not apply to services, only to products. Hence, in the
example of the medical expert system, the patient is obtaining a service, so strict
liability cannot be invoked in this instance.

Now do Review Question 4

Activity 1 – Expert Systems and Professional Issues

4. Replace Humans with Intelligent Machines?


A way in which AI could be seen as an improper goal for society is by asking the
question of whether we really need to replace humans with intelligent machines? Do
demands for productivity require that intelligent computers of some description
replace thousands of workers, for example, replacing bank clerks with automatic teller
machines or replacing shop floor workers with robots in the car industry?

The counter argument is best represented by the following anecdote (Forester and
Morrison, 1990):

‘A union leader looking over a quarry site bemoans the fate of his workers. He
approaches the quarry owner and says ‘If it wasn't for those steam shovels, we'd be
employing 500 men with shovels.’To which the quarry owner replies ‘If it wasn't for
your 500 quarry men with shovels we'd be employing 10,000 men with thimbles.’’

Perhaps the message of this anecdote is not just that technological change demands
changes in the nature of work, but that work can also be dangerous, dirty and simply
degrading to human beings. In that case, Forester and Morrison (1990) argue that
perhaps the design of work for human beings in conjunction with intelligent
technology is what we require. Beyond that, the job reducing potential of technology
needs to be managed more effectively by the provision of training programs, incentive
schemes and appropriate government policies.

Now do Review Question 5

5. Artificial Intelligence and the Degradation of the Human


Condition
There is an argument that AI is demeaning to human beings simply because it
degrades the human condition itself. For example, experts in robotics have proposed
that
‘The specifically human characteristics of emotion, free will, moral responsibility,
creativity and ethical awareness can be accommodated by the doctrine of the robotic
man.’

Historically, most cultures have come to regard human beings as apart from the
animals, the supreme pinnacle of creation and/or evolution. Humanists in particular
have felt uncomfortable with the notion of consciousness as a mechanical process, or
indeed any process which can be decomposed, understood and recreated. For them,
this denies human beings their mystery, or the possibility of an essence or soul that
exists beyond the physical plane.

On the other hand, AI proponents such as Margaret Boden (Forester and Morrison,
1990) argue that this reaction arises simply because we have such a limited, and
perhaps demeaning, view of machines, that stems from 19th century images of
clockwork and gears. She argues that such preconceptions do not encompass
‘The potential richness and subtlety that machines can possess.’

David Bolter (Forester and Morrison, 1990) argues that the metaphor of the computer
leads us to view humanity in finite terms as opposed to the infinite view of human
consciousness popular during medieval and renaissance periods.

Should we regard humans as portrayed by Bertrand Russell in his statement:


‘Is man what he seems to the astronomer, a tiny lump of impure carbon and water
impotently crawling on a small and unimportant planet?’ (Russell, 1961)

Alternatively, should we regard man as the supreme creation, at the apex of


evolution? This anthropocentric view is best illustrated by the following quote from
William Shakespeare's Hamlet:
‘What a piece of work is a man! How noble in reason! how infinite in faculty! In
form, in moving, how express and admirable! In action how like an angel! in
apprehension how like a god! The beauty of the world! The paragon of animals!’

The anthropocentric philosophy is also illustrated by Michelangelo Buonarroti's


The Creation of Adam, c.1510 in the Sistine Chapel:
The Creation of Adam depicts, for the first time in the history of this subject in
painting, God the Father horizontal, mirroring the reclining figure of Adam. The
characters in the depiction are all arranged and fixed in the horizontal plane. This
could be interpreted to have two significant implications. First, it demonstrates a
certain arrogance, that humanity should promote itself to a position where it is in

the same plane as God the Father. The figures could just as easily have been arranged
and fixed in the vertical plane so that God the Father reached down to Adam and
conversely Adam reached up to his creator. This latter arrangement could have
preserved mankind's humble relationship with God. Second, art critics argue that the
horizontal arrangement of the figures implies that Michelangelo took the
words of the Bible literally: ‘So God created Man in his own image’ (Genesis 1:27).
This can be argued to be a very anthropocentric view. Why should God, who has
created the entire universe, be in the form of a man?

6. Behavioural Neuroscience
Rita Carter (1998), a renowned medical writer, illustrates how, via the latest brain
scans, our thoughts, memories, even our moods, can be revealed as clearly as an X-ray
reveals our bones. She enthuses on how
‘We can observe individuals' brains literally light up in a specific area when they
register a joke and in contrast with how dully it glows in another when they recall an
unhappy memory.’

Something as individual as our personality is, on this view, nothing more than a
reflection of the biological mechanisms underlying thought and emotion. Even
behavioural
eccentricities may be traced to abnormalities in the geography of the brain. Carter
states
‘Obsessions and compulsions, for example, seem to be caused by a stuck neural
switch in a brain area which monitors the environment for danger; addiction, eating
disorders and alcoholism stem from dysfunction in the brain's reward system.’

Carter suggests that even belief in God has been linked to activity in a particular
region of the human brain. It is possible to locate and observe the mechanisms of
rage, violence and misperception, and even to detect the physical signs of complex
qualities of mind like kindness, humour, heartlessness, gregariousness, altruism,
mother-love and self-awareness.

If these biological mechanisms could be formalised then artificial intelligence has the
basis for replicating human behaviour. When each minute brain component has been
located, its function identified and its interactions with each other component made
clear, the resulting description will contain all there is to know about human nature
and experience.

6.1 Encoding Ethical Principles into a Conscious Computer

Boole, George (1815-1864)


The publication of The Mathematical Analysis of Logic has come to be regarded as
the first substantial step towards modern mathematical logic. Earlier mathematicians,
in particular Gregory and Peacock, had shown that algebraic methods can be used to
represent relations between entities other than numbers. The basic idea of Boole's
logic is the use of methods substantially equivalent to those of ordinary algebra, to
operate on variables, x, y, z… standing for classes and the symbols 1 and 0 standing
respectively for the universal class and the empty class.

In Boole's symbolism, if x represents a class, say the class of red objects, then (1 - x)
stands for the complementary class of objects that are not red. Operations
corresponding to addition, subtraction and multiplication in ordinary algebra are
introduced. If x stands for the class of red objects, and y stands for the class of square
objects, then xy stands for the product of the two classes, the objects that are red and
square. And x + y stands for the class of objects that are either red or square but not
both.

With this notation we can represent a limited class of statements of logical


importance. For example, if we want our conscious computer to treat all men as
equal:

All men are equal


Let the class of men = x
Let the class of equality = y

Hence,
x (1 - y) = 0 (equivalently, xy = x)

The conscious computer will recognise the class of men who are not equal as empty
and hence will treat all men as equal. The symbols 1 and 0, standing respectively for
the universal class and the empty class, map directly onto the binary representation
used in computers.
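Boole's class algebra can be sketched with Python sets, letting a small fixed universe play the role of the symbol 1 and the empty set the role of 0. The names placed in the universe below are invented purely for illustration; only the operations (complement as set difference, product as intersection) come from the text.

```python
# A sketch of Boole's class algebra using Python sets. The universe stands
# for Boole's symbol 1 and the empty set for his symbol 0.

universe = {"alice", "bob", "carol", "square_block"}

men   = {"alice", "bob", "carol"}        # the class x
equal = {"alice", "bob", "carol"}        # the class y

complement = universe - equal            # Boole's (1 - y)
product    = men & complement            # Boole's x(1 - y)

# 'All men are equal' holds exactly when x(1 - y) is the empty class.
print(product == set())  # -> True
```

The same check fails, as it should, the moment any member of the class of men is removed from the class of equal beings.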

Activity 2 – Encoding Ethical Principles into a Conscious Computer

7. Further Ethical Issues Invoked by Artificial Intelligence


7.1 Artificial Intelligence and Military Establishment Funding

AI could be considered an improper goal for society due to its funding base and clear
links with the military establishments of both the US and the UK. For example, via
funding from the US Defence Advanced Research Projects Agency (DARPA), AI
researchers under the Reagan administration embarked upon a huge spending spree to
develop key weapons or weapons related systems that formed part of the Strategic
Computing Initiative. Current research includes an intelligent pilot's assistant that can
help a fighter pilot under the stress of high-G manoeuvres to plan target approaches
and exits, take evasive action and monitor threats in a hostile aerial environment.
Similarly, prototypes have been developed of autonomous reconnaissance vehicles
that can head out into enemy territory, evade enemy attacks and transmit tactical
information back to a computerised HQ. In addition, expert systems could assist
generals to make correct
decisions in the face of the enormous complexity, conflicting reports and speed that
characterises modern conflicts. Recent conflicts in Iraq and Serbia have highlighted
the adoption of technology by the Western military machine to assist in successful
campaigns.

7.2 The Third World and Expert Systems

AI enthusiasts claim that we need to provide Third World countries with expert
systems for medical diagnosis, agricultural advice and geological analysis because
these countries lack substantial human expertise in such areas.
Forester and Morrison (1990) view this claim as:
‘.... Another technofix that attempts to fix the symptoms without addressing the cause’

Many scholars argue that the reasons for these inadequacies can be traced to
exploitation by the developed world via irresponsible loan practices, trade cartels and
cash-crop economies perpetuated by Western involvement. The strategy suggested is
for Third World countries to eliminate these root problems rather than bootstrap their
economies by technological means.
‘After all, what use is an expert system in a country that does not have a regular
power supply, the parts or people to maintain it, or the expertise to tailor it to local
conditions and local needs? Of what benefit is expert system advice that improves, for
example, agricultural output if the output merely pays off a foreign debt - a debt that
was incurred buying weapons for a civil war brought about by colonial powers
deciding that disparate racial groups should become a country? What purpose is there
to an agricultural surplus that goes into the pockets of a political elite - an elite
maintained by one or other power bloc which happens to need bases or a strategic
buffer zone? From such an analysis, perhaps we are inevitably led to the conclusion
that the most appropriate line of attack for solving the problems of these countries lies
at a non technological level rather than through a computerised technological fix.’

8. Summary
This unit has introduced some of the key concepts and social and ethical issues
invoked by Artificial Intelligence. You have seen what AI is about and why it raises
such passionate debate amongst scientists, ethicists, and philosophers.

