
4	Welcome

6	The Enigmatic Alan Turing: A Biography
	Tom Hook

8	The Children of Colossus
	Colin Williams

16	My Life at Bletchley
	Henry Clifton / Peter Watson

18	Bletchley Park's Forgotten Heroes
	Helen Morgan

20	Why Alan Turing Cracked the Enigma While the Germans Failed
	Klaus Schmeh

22	The Alan Turing Legacy: The Cycle of Missed Opportunities
	Graeme Stewart

24	GCHQ and Turing's Legacy
	Iain Lobban

26	A Colossus Rising
	The National Museum of Computing

28	Beyond the Imitation Game
	Dr Mark Bishop

30	Can You Tell the Difference Between Human and Machine?
	Kevin Warwick

32	Practically Turing
	Anthony Beavers

35	Turing's Strange Inversion of Reasoning
	Daniel Dennett

39	Almanac of Events

40	Turing and Computations in Pure Maths
	Andrew Odlyzko

42	Quantum Randomness and Quantum Cryptology
	Cristian Calude

44	Alan Turing: His Work and Impact (Review)
	Helen Morgan

45	Codebreaker at the British Science Museum (Review)
	Andrew Cook

46	How Long Is an Alan Turing Year?
	Barry Cooper

48	Extending Turing's Pattern
	Aaron Sloman

50	Off-Topic

51	The Last Word

Produced with the support of:
The National Museum of Computing

To a contemporaneous consciousness, England, and therefore much of what was of any real importance in the world at large, was in an utterly unremarkable state on 7th June 1954. Normality was returning and Englishness prevailed.
The events of 6th June 1944, D-Day, were already receding into memory and, although the defeat of the forces of a European colonial power at Dien Bien Phu the month before was perhaps portentous, nonetheless in June, in an unremarkable England, the unremarkable illusion could easily be preserved that the episode remained a French military misadventure. A tragic and localised blunder in Indochina; a battle in a remote land, rather than a war in Vietnam with the potential to trigger a terminal nuclear conflagration.
The world had yet to fully appreciate the will of Dwight D Eisenhower, and his successors, to act out the logic of the Domino Theory promulgated on the 7th April of that unremarkable year by the man who had been Supreme Allied Commander in Europe from 1943 to 1945, then the first Supreme Commander of NATO and who, as he launched his meme into history, was the 34th President of the United States of America.
Two years or so before this utterly unremarkable day, at around half past eight Eastern Standard Time on the 4th November 1952, the UNIVAC computer in Philadelphia, connected remotely via teletype to the CBS studios in New York, was given a near real-time feed of early results from the elections for the 34th presidency. With less than one per cent of the returns counted, UNIVAC predicted a landslide victory; not for the marginal favourite, the then Governor of Illinois Adlai Stevenson, but rather for the Five Star General, Dwight D Eisenhower. The CBS management suppressed the news of the UNIVAC prediction. Late in the evening of the 4th November, the CBS anchor, Charles Collingwood, admitted their error, and their humiliation, and announced that UNIVAC had, within a one per cent margin of error, predicted the Eisenhower triumph of 442 votes to 89.
In the decades following his election, the fabric of Eisenhower's military-industrial complex is woven on the loom of the computing revolution. Mastery and dominance of the business and technology of computers becomes as critical to the American economy and American society as it is to surviving the Cold War.
John Mauchly watched UNIVAC, the computer he had helped build, change the world forever, from home. He had been forced to resign from the presidency of the Eckert-Mauchly Computer Corporation on the 8th March 1951 after being blacklisted by the FBI as a Communist Party sympathizer. In 1958 the Secretary of the Army restored his secret clearance. In so doing, the institutions charged with defending democracy atoned for a prior delinquency.
It was to be another four years or so from this unremarkable day before President Eisenhower was to grant executive authority to the acts of 1958 that called into being the Advanced Research Projects Agency (ARPA, later DARPA). It was to be a further 11 years until the log entry of 10:30 pm on 29th October 1969 recorded the first ever message sent across the ARPANET.
Overall, the summer of 1954 was wet, dull and cool; an utterly unremarkable English summer in fact. So, all was indeed well. The superiority of British pluck and amateurism was affirmed as Roger Bannister broke the four-minute barrier on the 6th of May, and as England enjoyed what might have been the slightly soggy Spring bank holiday weekend, and as the congregations swelled on Sunday 6th for the celebration of Pentecost in commemoration of the descent of the Holy Spirit upon Christ's disciples (only to recede afterwards into an almost guilty indifference until Christmas), secular tastes had elevated Doris Day's "Secret Love" to the number one position in the music charts.
For the secular and the observant alike the Spring Bank Holiday weekend of 1954 was sweeter still, as sugar had come off rationing in the September of the previous year. Tea and cakes were a national, consensual, communal, unifying obsession. A small luxury to offset the various hungers of the privations of war and the austerity that followed. The Lyons corner tea shop had become a national institution. The echoes of all of which resonate to this day through The Great British Bake Off.

As the episodically observant made their transition from the realm of the cassock to the chapels of the cake, Lyons tea houses across the land were now more than capable of coping with the mass influx of treat seekers because, three years before this unremarkable day, they had first switched on one of the world's first business computers. The Lyons Electronic Office (LEO) was the first re-programmable, stored-program computer in the world to run a routine business application on a regular basis. By this unremarkable day in 1954, LEO was running the inventory and payroll for a nationwide chain of hundreds of tea shops selling perishable goods and employing thousands of staff working irregular shift patterns with varying wage rates.
On this unremarkable weekend a (presumably) unremarkable woman did (presumably) unremarkable things. Free from her duties as a housekeeper to a single gentleman of eccentric, but otherwise unremarkable, habits she may have joined the ephemeral Pentecostal flock. She may then have further communed through computer-enabled confectionery. She may even, depending on her tastes, have watched the Hollywood great, Ronald Reagan, star as Webb Sloane in the Korean War drama, Prisoner of War.
Returning to work on Tuesday 8th June this unremarkable woman must have found the house of her employer somehow odd. In his bedroom, by the side of his bed, she found an apple; bitten into but not fully eaten. In the bed she found the corpse of Alan Mathison Turing.
Sometime during the evening of the 7th June 1954, sometime during this most unremarkable of English days, sometime as England was going about the business of being unremarkably English, one of the most remarkable and brilliant minds humanity has ever seen ceased to be.

Without this mind, 7th June 1954 would not have been as it was. We would not be who and as we are.
Turing's machines helped to save democracy. Turing's machines have transformed humanity and society. Turing's machines are our future. Turing's machines may fulfil the wishes of their creator and learn to love. Turing's machines may become a new form of life itself. Turing was, and is, important.
As the twenty-first century unfolds, and the Information Age emerges, it becomes ever more apparent that his vital role as wartime code breaker, by itself sufficient to secure his place in history and our gratitude, is but the smallest facet of his importance.
As this edition of CyberTalk goes to press it is looking increasingly likely that Alan Mathison Turing will receive a long overdue posthumous pardon. His prosecution and treatment at the hands of a democracy he had done so much to preserve was ignoble, inhuman and cruel. If Lord Sharkey's bill does pass, as it should, then it will serve as a necessary moment of atonement for a great wrong. It will also open a long overdue debate about the rectitude of the many thousands of similar convictions. If we are to win the battles to come against those who would use Turing's machines to destroy the democracy and freedoms Turing helped safeguard then we must prove ourselves worthy of his legacy.

"We're sorry. You deserved so much better."
Gordon Brown

CONTACT US
General enquiries: +44 (0) 1347 812150

www.facebook.com/cybertalkmagazine

www.youtube.com/cybertalkuk

Email: cybertalk@softbox.co.uk

@CyberTalkUK

www.pinterest.com/cybertalk

Web: www.softbox.co.uk/cybertalk

Editors
Professor Tim Watson
Colin Williams

DEPUTY EDITORS
Tom Hook
Tineke Simpson

CREATIVE AND DIGITAL EDITOR
Andrew Cook

Creative consultant
Natalie Murray

Contributors
Professor Anthony Beavers
Professor Mark Bishop
Professor Cristian Calude
Andrew Cook
Professor Barry Cooper
Professor Daniel Dennett
Henry Clifton
Peter Watson
Helen Morgan
Professor Andrew Odlyzko
Aaron Sloman
Kevin Warwick
Colin Williams

SPECIAL THANKS
The National Museum of Computing
Bletchley Park Trust
The Bill Tutte Memorial Fund

Call for Articles
The Publisher and Editorial Board of CyberTalk magazine are currently inviting the submission of articles for the Spring 2014 issue. If you would like to contribute to issue four of CyberTalk, please email cybertalk@softbox.co.uk with a short article synopsis no later than November 8th 2013. Articles will be due for submission no later than December 20th. Issue 4 of CyberTalk will be published in February 2014.


DESIGN
Reflect Digital
www.reflectdigital.co.uk
Cover & The Last Word design
by Andrew Cook

Small Print
CyberTalk is published three times a year by SBL (Softbox Ltd). Nothing in this magazine may be
reproduced in whole or part without the written permission of the publisher. Articles in CyberTalk
do not necessarily reflect the opinions of SBL or its employees. Whilst every effort has been made
to ensure that the content of CyberTalk magazine is accurate, no responsibility can be accepted
by SBL for errors, misrepresentation or any resulting effects.
Established in 1987 with a headquarters in York, SBL are a Value Added IT Reseller widely
recognised as the market leader in Information Security. SBL offers a comprehensive portfolio of
software, hardware, services and training, with an in-house professional services team enabling
the delivery of a comprehensive and innovative range of IT solutions.
CyberTalk is designed by Reflect Digital and printed by Warners Midlands PLC.


The Enigmatic Alan Turing: A Biography
By Tom Hook, SBL

Alan Turing provided a rare and remarkably


rich contribution to society, for which the
human race will be eternally indebted.
Within only 20 years between his university graduation and
death in 1954, he was able to pioneer ground-breaking ideas
in the fields of mathematics, philosophy, biology and, most
significantly, computer science. On top of all this, he was
instrumental in changing the course of World War II, preventing
the deaths of millions. Yet, he was unknown in life and remained
relatively unknown in death, until 1974 when details about his
work at Bletchley Park were finally released.
Alan Turing was born on 23 June 1912 to Julius and Ethel Turing. He showed signs of genius from a very early age and was extraordinarily quick to learn new skills. It's said that he taught himself to read within just three weeks. Numbers fascinated the young Turing, so much so that he developed a habit of stopping at every street light in order to find its serial number.

Turing had a varied experience with the English school system and, although his aptitude was recognised as genius by some teachers, he was uninterested in the classical education of the curriculum, as he was fixated by science and mathematics. He also encountered bullies during his school life, and proclaimed that he learned to run fast "in order to avoid the ball". Running would later become a dedicated pastime, which Turing found gave him clarity of thought. He even went on to achieve world-class marathon standards, as his best time of 2:46 hours was only 11 minutes slower than that of the Olympic winner in the same year, 1948.

Turing began to enjoy his school life more when he moved into Sherborne Sixth Form (pictured), where he was allowed to specialise in science and mathematics. His fascination with science even led him to want to further prove the Earth's rotation, by building a replica of the Foucault Pendulum in the dormitory stairwell.

Awarded a major scholarship to King's College, Cambridge in 1931, Turing read theoretical mathematics and excelled, obtaining a distinction upon graduation. He was elected a Fellow of King's College in 1935. Only a year later he presented his first paper to the London Mathematical Society.

The following years saw the pinnacle of Turing's genius and innovation. After graduating, Turing focused his efforts on a widely known fundamental problem in mathematics, known as the Decidability Problem, a product of the work of David Hilbert, a German mathematician. In 1900, Hilbert appealed to his contemporaries to find an algorithmic system to answer all mathematical problems; a system which must be complete, decidable and consistent. These three conditions would mean that every mathematical statement could either be proven or disproven by clear steps that would reach the correct outcome every time.

Kurt Gödel had already disproved Hilbert in 1931 by showing that consistency and completeness could never coexist in such a system. However, the requirement of decidability (i.e. that definite steps are followed in order to prove or disprove a statement) was left unanswered.

Turing went on to solve the Decidability Problem as a result of his ideas for his famous Turing Machine. He conceived of a powerful computer that would only understand the digits 0 and 1. The idea involved an infinite tape of these numbers that would be written and read by the machine. Different sequences of numbers would lead to a variety of functions and the solving of various problems. The machine's design was a breakthrough in that it defined and codified algorithms. In developing this heavily mathematical concept, Turing found that problems did exist that algorithmic systems couldn't solve. This thereby proved that Hilbert's final condition of decidability could not be obtained.

Turing, at the age of just 23, had solved a problem that had baffled experts in the field for over 30 years. His name would go down in the history books and the title "genius" began to be ascribed to him once again. No one could predict, however, that the principle that Turing conceived, the Universal Turing Machine, would be inherent in the development of technology for decades to come.
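For readers who like to see the idea made concrete, the short sketch below (in Python) is a minimal, purely illustrative simulator of the kind of machine described above; the rule table and the unary-increment example are invented for this article and are not drawn from Turing's own papers.

# A minimal, illustrative Turing machine simulator (not from Turing's papers).
# A machine is just a finite table: (state, symbol read) -> (symbol to write, move, next state).

def run_turing_machine(rules, tape, state="start", blank="0", max_steps=1000):
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)            # read the cell under the head
        write, move, state = rules[(state, symbol)]
        cells[head] = write                        # write, then move left or right
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Hypothetical example: scan right over 1s and append another 1 (unary increment).
rules = {
    ("start", "1"): ("1", "R", "start"),   # keep moving right over 1s
    ("start", "0"): ("1", "R", "halt"),    # write a 1 on the first blank cell, then stop
}

print(run_turing_machine(rules, "111"))    # prints "1111"

The essential ingredients are exactly those in Turing's description: a tape of symbols, a read/write head, and a finite table of rules.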

06 \\ cybertalk

After several more years of study in America, Turing was


awarded a PhD from Princeton University. On September 4,
1939, the day after the declaration of war by Prime Minister
Chamberlain, Turing reported for duty at the wartime

headquarters of the Government Code and Cypher School


(GC&CS) at Bletchley Park, Buckinghamshire. It was here
that Turing developed his preformed computing designs and
adapted them to the field of cryptanalysis to decipher Enigma,
the code used by the German armed forces to protect their radio
communications.
Before Turing started work at Bletchley Park, progress on
decryption had been poor, with no Enigma-coded message
being successfully decoded in almost 10 months. Turing
contributed crucially to the design of electro-mechanical
machines (known as bombes), the first prototype of which,
named Victory, began successfully decoding Enigma in Spring
1940.
By 1943, Turing's machines were cracking an astounding total of 84,000 Enigma messages each month: two messages every minute. These operations were kept absolutely confidential, and Nazi forces were completely unaware that their strategic conversations were being read by the British Admiralty, sometimes within 15 minutes of being transmitted.
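As a quick sanity check on those figures (our arithmetic, not the article's): a 30-day month contains 30 × 24 × 60 = 43,200 minutes, so

\[ \frac{84{,}000\ \text{messages}}{43{,}200\ \text{minutes}} \approx 1.9\ \text{messages per minute}, \]

which is consistent with the "two messages every minute" quoted above.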
A pivotal use of this advantage was to stifle the efforts of the U-boats, the prevalence of which was the only thing that Churchill said ever really frightened him. U-boats had destroyed 701 Allied ships and 2.3m tons of vital cargo in the first nine months of the conflict alone. Turing et al. were able to significantly weaken the U-boats' hold over the Atlantic Ocean by intercepting their locations. Allied ships could therefore dodge the U-boats in the vast openness of the Atlantic, allowing the Allies to transport fuel, food, troops and ammunition from America to Britain.
If the U-boats had been allowed to halt the movement of this cargo, such an effective attack at Normandy would never have been possible. Without the D-Day landings, the war could have gone on for 2-3 more years which, analysts say, could have meant a further death toll of up to 21 million.
Having played a key role in the winning of the war by the Allies,
Turing was later awarded the OBE for his wartime services.
In 1946, Turing was invited to the National Physical Laboratory, where he produced one of the first designs for a stored-program computer, the Automatic Computing Engine (ACE). Turing went on to work at the University of Manchester from 1948, where he developed software for the Manchester Mark 1 computer, a truly ground-breaking piece of technology.
Meanwhile, Turing was fascinated by the potential of computer technology. His seminal 1950 article "Computing Machinery and Intelligence" set the philosophical world alight with the proposal that machines can, in principle, achieve a level of consciousness equal to that of humans. This sparked a debate that still rages today, and paved the way for the Artificial Intelligence movement, which works tirelessly to improve the capacity of computers with the ultimate goal of consciousness.
In 1952, Turing moved into a new area when he became enamoured with mathematics in the natural world, and pioneered the theory that the Fibonacci sequence is ubiquitous in morphogenesis (the formation of an organism's shape). This astounded biologists, and Turing quickly established himself as a highly esteemed, revolutionary thinker in the field.
Sadly, his ground-breaking work on morphogenesis was cut short. Through most of his academic and professional life, Turing was open about his homosexuality with his friends and colleagues. In 1952, however, Turing began a new relationship, and this became an issue, eventually leading to his social and emotional collapse.

Just weeks after his relationship with Arnold Murray began, Turing's home was burgled. Turing naturally reported the crime to the police, who proceeded to arrest Turing and Murray on grounds of gross indecency, homosexual acts being illegal at the time. Turing was convicted in March 1952 and had the choice of imprisonment or probation under the condition that he underwent hormonal treatment to reduce his libido and cause impotence. He chose the latter option. Disgraced by the British judicial system and undergoing enforced treatment to reduce his masculinity, he was naturally depressed and it is widely believed that this led to his suicide in June 1954.
When the history of Turing's legacy and contribution during WW2 became better known, public anger at his treatment grew. After a petition with thousands of signatures in 2009, Gordon Brown apologised on behalf of the Government for the "utterly unfair" way in which Turing was treated.
The Government has also recently announced its support for granting a posthumous pardon to Turing for his conviction. In October 2013, the third Parliamentary reading of the Alan Turing (Statutory Pardon) Bill will take place. If the bill is passed, it will help reflect public regret for the way Turing was treated and the high level of esteem in which he is now held.
Modern society is indebted to this remarkable man in more ways than we can imagine. Turing's contributions to code-breaking in World War II were arguably instrumental in the defeat of the Nazis, preventing the deaths of millions and changing the course of history. Furthermore, his revolutionary work in computing provided the foundation for a vast array of ever-emerging technologies on which the human race will continue to rely for many years to come.

Tom Hook is Bid Co-ordinator for SBL. He holds a BA in Philosophy from Lancaster University, in which he focussed on Philosophy of Mind, and wrote his dissertation on Artificial Intelligence. He went on to support NHS Management in the development of healthcare services within prisons, before moving to SBL.


The Children of Colossus
By Colin Williams

"I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Computing Machinery and Intelligence, A. M. Turing, 1950 [1]
The computers of the Information Age of humanity are, in
essence, Turing machines made real. The Information Age
is characterised by the dependence of humanity upon
computers. Humanity depends upon Turing machines. I write
this on a Turing machine. I drive a vehicle reliant upon Turing
machines. I navigate my journeys in this vehicle through the
agency of a Turing machine on the Earth communicating
remotely with a Turing machine in Space. I will fly to America
for a holiday in an aircraft incapable of attaining flight without
Turing machines. Gas, water and electricity are extracted,
generated and brought to my house through systems
controlled by Turing machines. My salary is calculated and
deposited in my bank account using Turing machines. My
bank account exists in Turing machines. Money is created,
and stored, and transferred, by, and in, and through, Turing
machines. Global telecommunications systems are assembled
from Turing machines. Goods are manufactured, stored and
distributed by systems inoperable without Turing machines. The
Internet connects Turing machines. The Internet itself is a meta,
or perhaps a gestalt, form of a Turing machine. Humanity
depends on the Internet. This is now; not future. This is fact; not
fiction.
The total dependence of humanity upon Turing machines is
already absolute and irreversible; it has been for decades.



For the then far-sighted, it was already thus, even in April
1965 when Time ran a feature article in which the shocking
assertion was made that, If all the computers went on the
blink, the country would be practically paralyzed: plants
would shut down, finances would be thrown into chaos, most
telephones would go dead, and the skies would be left virtually
defenceless against enemy attack. The same article then
predicted that, years from now man will look back on these
days as the beginning of a dramatic extension of his power
over his environment, an age in which technology began to
recast human society. In the long run, the computer is not so
much a challenge to man as a challenge for him.
Turing machines are recasting, recreating, replicating and
reforming the corporeal as well as the abstract dimensions of
human existence. Through the agency of three dimensional
printing, Turing machines are redefining what we used to think
of as real and virtual. It is now possible to scan an existing
corporeal object, transmit the resultant binaries around the
globe at Internet speeds, and render the object as real through
a three dimensional, or corporeal, printer. Carbon nanotubes
can be held in suspension and printed. Lithium ion batteries
can be printed. Raspberry Pi and mobile telephone cases
can be printed. Crowd sourced funding, enabled by Turing
machines, and corporeal printing, enabled by Turing machines,
have laid the foundations for a fundamental transformation of
western capitalism.
The fundamental structures of human society are being, and
will continue to be, reshaped.
Turing machines are replicating, recreating and transforming
not only the corporeal realm of existence for humanity, but
also the corporeal form of humanity itself. Turing machines
were an essential precondition for the endeavour to map the
human genome. With a Turing machine it is now possible to

[1] All of the quotations attributed to Turing in this piece are taken from this paper, first published in Mind and now freely available across the Internet.


design and execute precise modifications to a DNA sequence.


Corporeal printers are already being used to create human
organs for use in transplant operations. Food printers are a not
very distant reality.
As I type and research this on a Turing machine, my brain is
working differently to the way that it used to when I wrote
longhand and transcribed from the page to the typewriter.
Turing machines are changing the way we learn, the way we
teach, and the way we think. Turing machines are changing
our minds, and perhaps, even the very biochemistry of the
brain itself.
Turing machines are at the heart of the cyber domain, and this new construct is becoming, simultaneously and indivisibly: a new form of the human self, a new means of creation, negotiation and expression of the human self, and a new form of the not-self. They are at the heart of a societal dialectic, of whose sublation we are on the brink.
Within the lifetime of my school age children, road vehicles
controlled entirely by Turing machines will, in all probability,
become a reality. Every one of these human autonomous
vehicles will be interconnected across what we today call the
Internet, a vast ubiquitous, pervasive, omnipresent, amorphous
and uncontrollable matrix of Turing machines. Every one of
these human autonomous vehicles will be connected to every
other Turing machine, everywhere and anywhere on, or off, the
planet it happens to be.
The dependence on Turing machines is removable, but not reversible, and only then at the cost of a great catastrophe of Atlantean significance. The cost of the removal is the cessation of human civilization as we know it. All of this is happening in utter separation from our ability to comprehend it. A thing does not require either the understanding of itself or of other, not-things, to be that thing, or to become another thing. The cyber domain does not need either our comprehension, or our permission, to exist.
It is time to reconsider our understanding of both what a Turing
machine is, and what a universal computer is. It is time to
contemplate the proposition that the totality of the computing
construct in its human, social and systemic context is on the
verge of attaining both a new, synthetic, form of intelligence;
and, a new, synthetic, form of sentience. We must now
consider the possibility that the universal computing machine is
becoming a thinking machine; perhaps a true Turing machine
in the sense that Turing himself might have imagined, if not
desired, it.

own context, and correctible within ours. We have lost sight


of our origins and without the contextualising and explicatory
powers of this vision we are doomed to facile incomprehension
and dangerous de facto acceptance of our present as an
abiding and immutable constant.
Never have we been in greater need of an application of
historiography to a contemporary context than we are today
with regard to the cyber domain. A proper history of the cyber
domain must be written; by an historian, not a technologist. A
history in which humans and humanity, and not mathematics
and engineering, are at the centre of the narrative. A history
in which computing is seen in a broad societal, rather than
a narrow technical, context. We need this historical context
precisely because the author of the Time article recognised
with prescience and clarity that, in 1965, two decades after
Colossus, one thing is already clear: swept forward by a great
wave of technology, of which the computer is the ultimate
expression, human society is surely headed for some deep-reaching changes.
The point of this new historiographical narrative must be to
contest the discursive dominance of science, technology,
engineering and mathematics as sources of the authoritative
frame of reference for our attempts to understand the cyber
domain; and instead to assert the view that technology is the
instrumentality of the effect of conscious will, rather than the
effect itself.
The dependence on Turing machines is removable, but not
reversible, and only then at the cost of a great catastrophe of
Atlantean significance. The cost of the removal is the cessation
of human civilization as we know it. We have become as
dependent upon Turing machines as our Neolithic ancestors
became dependent upon settled agriculture and static
communal settlements, or our Early Modern forebears became
dependent upon the printing press, or our Industrial Revolution
relatives became dependent on electricity and the internal
combustion engine.
The dependencies between humanity and Turing machines
are, for now, reciprocal. One variant of a corporeal printer is
capable of self-replication; currently, only partially so, and not
yet at its own volition.

Colossus helped forge a context of which we, humans, are


a part and a product. Thus, we are the children of Colossus.
However, Colossus has also borne its own children and they are
everywhere and in everything. The distinctions and boundaries
between us are already blurring. We have already given
them control over the provision of the most basic necessities
of life. We are already in the process of giving them an ever
increasing proportion of our memories and of our thinking.

Each time that humanity thus transforms itself, it transforms the


totality of itself; the concrete and the abstract, the real and
the imagined. The entire human frame of reference changes.
A conceptual and societal singularity occurs. Therefore, it is
not simply that we are forgetting how to live without Turing
machines and thus could exercise memory as an act of volition
and as a means to halt a voluntary process, it is more that the
capacity for such a pre-Turing mode of existence is becoming, literally, unthinkable. The process itself has acquired an
autonomous existence; a freedom from our understanding, an
independence from our will, a differentiation from our (human)
consciousness. That we apprehend this is evident in the
proliferation of the tropes of the apocalypse narratives across
the spectrum of popular culture.

The seminal Time article of April 1965 is devoid of a single


reference to Alan Turing, or LEO, or Manchester; and instead,
leaps directly from Babbage to the Harvard Mark 1. Hitherto,
Turing's place in history has been cloaked in the shroud of Cold
War secrecy. The Cold War is over. Therefore, the partial and
flawed narrative expressed in 1965 is both explicable within its

We foretold this transformation at the hands of Turing's


machines. In a sense, we willed it. This time at least, we
are where we knew we would be; perhaps even where we
wanted to be. Probably where we have been driven by
necessity. That the corpus of computer experts has largely
failed to play their part in preparing society as a whole for this


is, simply, because they did not understand it. In a world where
science is king, history and literature are beggars at the palace
door. Consequently, neither do these experts understand
the essential nature of the Information Age or the computing
systems upon which it has been built.
That this is so is because of the, now dangerous, conceit
that the path to understanding is mapped out by science,
mathematics and engineering alone. Every computer
science undergraduate must now be taught about the history,
philosophy, sociology and psychology of computing; as well
as about the rules of programming. The works of Norbert
Wiener should be compulsory reading, as should the seminal
and instrumentally predictive science fiction texts relating
to the human fears and desires about Turing machines.
Likewise, the papers of A. M. Turing. For each generation of
undergraduates, the foundation texts of the Bell-LaPadula
model should be contextualised and subjected to rigorous,
critical, deconstructive, and empirical, analysis.
In the March 1946 edition of Astounding Science Fiction,
Murray Leinster published a short story under the title A Logic
Named Joe. Logics are Turing machines. Joe is a logic. Joe
acquires sentience, probably as a result of a glitch in the
manufacturing process; a random mutation in evolutionary
terms. Joe, like all logics, is connected remotely to the tank; a
vast repository of information with, effectively, infinite storage
and processing capacity. Joe exhibits desire and will, and;
an awareness of the existence of not Joes. Joe nearly brings
about the downfall of humanity.
Ducky, the human protagonist, a logic repair man, the narrator
and the unsung saviour of humanity asks his colleague to shut
down the tank; to which he receives this pithy reply:
"Shut down the tank?" he says mirthless. "Does it occur
to you, fella, that the tank has been doin' all the
computin' for every business office for years? It's been
handlin' the distribution of ninety-four percent of all
telecast programs, has given out all information on
weather, plane schedules, special sales, employment
opportunities and news; has handled all person-to-person contacts over wires and recorded every business
conversation and agreement - listen, fella! Logics
changed civilization. Logics are civilization! If we shut
off logics, we go back to a kind of civilization we have
forgotten how to run!"
He smiles a haggard smile at me and snaps off. And
I sit down and put my head in my hands. It's true. If
something had happened back in cave days and
they'd hadda stop usin' fire - if they'd hadda stop usin'
steam in the nineteenth century or electricity in the
twentieth - it's like that. We got a very simple civilization.
In the nineteen hundreds a man would have to make
use of a typewriter, radio, telephone, tele-typewriter,
newspaper, reference library, encyclopedias, office
files, directories, plus messenger service and consulting
lawyers, chemists, doctors, dietitians, filing clerks,
secretaries - all to put down what he wanted to
remember an' to tell him what other people had put
down that he wanted to know; to report what he said
to somebody else and to report to him what they said


back. All we have to have is logics. Anything we want


to know or see or hear, or anybody we want to talk to,
we punch keys on a logic. Shut off logics and everything
goes skiddoo.
Perhaps morphogenesis and meta-morphogenesis alike have cultural and epistemological, as well as biological, mathematical and computational, manifestations. Perhaps, in these abstract domains, memes function as toxins, hormones and viruses might in the biological realm. Memes are the unstable atoms of the shared mind of humanity; one of the carriers of the feedback signals upon which our conceptual and social systems act to change their rules of operation. In the
Information Age, memetic propagation occurs through the
complex, unstable and evolving matrix of Turing machine, and
human, interconnections and interdependencies we call the
cyber domain. Who knows what it may come to call itself.
Turing machines are about information. They store, process,
combine, create and transmit information. Humanity is about
information. Our languages, our art, our religion, our science,
our philosophies, our cultures; are about information. Our
brains are about information. Our DNA sequences are about
information. The societies within which we gather, and upon
which we depend for our species advantage, are based upon
information. We create tools to enable us to store, process,
combine, create and transmit information. As we, and our
societies, become more complex and sophisticated, so we
create more complex and sophisticated tools. Turing machines
are information machines.
At first, Turing machines were abstracted from each other, and
from the wider human and societal context. They were large,
expensive, power hungry, complicated to build, laborious to set
up and difficult to operate. The echoes of Turing's observation
in his 1950 paper that, for the purposes of the Imitation Game;
we also wish to allow the possibility that an engineer may
construct a machine which works, but whose manner of
operation cannot be satisfactorily described by its constructors
because they have applied a method which is largely
experimental must have had deep resonance with the early
generations of those whom the Time article of 1965 called
computermen.
These men were a new breed of specialists, with brand-new titles and responsibilities, required precisely because
computer technology is so new and computers require such
sensitive handling. These computermen were young,
bright, well-paid (up to $30,000) and in short supply. These
crisp, young, white-shirted men had already appropriated
the societal and transformative significance of these esoteric
machines, and cast themselves as a self-appointed solemn
priesthood of the computer. They had become purposely
separated from ordinary laymen. They had developed
an esoteric language that some suspect is just their way of
mystifying outsiders. Moving, softly like priests serving in a
shrine, these computermen nurtured these precious, fragile,
vast, voracious, precocious, and already fearsomely powerful,
children of Colossus through to early, awkward, adolescence.
The charges of these computermen, were according to
Time, popping up across the U.S. like slab-sided mushrooms,
they were arranged row upon row in air-conditioned rooms

and went about their work quietly, and for the most part,
unseen by the public. From their controlled and secured
cocoons these computers were at the heart of the Cold War.
They ran the critical functions of society. They were precious,
and fragile, and secret. Creation and preservation of a stable
state was of fundamental importance. Confidentiality and
observance of the rule of least privilege a Cold War survival
imperative. Stalin had stolen the secrets of Los Alamos; he must
not be permitted those of the computer as easily.
Computers have changed. Computing has changed. Turing
machines are no longer constrained within the sterile isolation
and sensory deprivation of Cold War server rooms. Computers
are in the world; they are within us; they are literally woven
in the fabric of reality. They have become the fabric of
reality. Turing's machines have become infused into the res extensa and res cogitans dimensions of our existence. Turing machines are now an inseparable, indivisible, instrumental and definitional part of the complex systems of human society. These systems are, to our cognitive capacities, manifestly not conformant to Turing's understanding of a discrete-state modality. That is, there is no credible sense in which one can assert that the system that is human society has, in Turing's terms, only a finite number of possible states and, as such, no sense in which human society, of which Turing's machines are an indivisible element, can be subject to the proposition reprised by Turing that given the initial state of the machine and the input signals it is always possible to predict all future states.
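To make the discrete-state idea concrete, here is a small, purely illustrative sketch in Python (our own example, not Turing's): for a deterministic machine with a finite state table, knowing the initial state and the input signals really does let you enumerate every future state, which is exactly the property this article argues cannot be claimed for human society.

# A deterministic discrete-state machine in Turing's sense: finitely many states,
# and the next state is fully determined by (current state, input signal).
# The toy transition table below is a hypothetical illustration.

TRANSITIONS = {
    ("q0", "a"): "q1", ("q0", "b"): "q0",
    ("q1", "a"): "q2", ("q1", "b"): "q0",
    ("q2", "a"): "q2", ("q2", "b"): "q1",
}

def predict_all_future_states(initial_state, inputs):
    # Given the initial state and the input signals, list every future state.
    state, history = initial_state, [initial_state]
    for signal in inputs:
        state = TRANSITIONS[(state, signal)]
        history.append(state)
    return history

print(predict_all_future_states("q0", "aababb"))
# prints ['q0', 'q1', 'q2', 'q1', 'q2', 'q1', 'q0']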
For Turing, the popular view that scientists proceed inexorably
from well-established fact to well-established fact, never being
influenced by conjecture, is quite mistaken; conjectures
are of great importance since they suggest useful lines of
research. Turing designed his machines with the following
conjecture at the forefront of his purpose; the idea behind
digital computers may be explained by saying that these
machines are intended to carry out any operations which
could be done by a human computer. Turing designed Turing
machines to think. More than this, he optimistically entertained
the explicit conjecture that, we may hope that machines will
eventually compete with men in all purely intellectual fields.
Turing machines are in the world and of the world. They are
equipped with sensors. They can apprehend, and act on,
the effects of their actions, and those of others; autonomous
of direct human agency. Each time that a CAPTCHA

(Completely Automated Public Turing test to tell Computers


and Humans Apart) test is administered, a strange inversion of
the Imitation Game occurs; wherein, it is the human that must
affirm its humanity, to a non-human entity.
Turing machines are interconnected. The mind of each can
communicate remotely with the mind of another of its own
kind using invisible waves of radiation; and at speeds and
in languages beyond the powers of human perception. In
effect, WiFi means that Turing machines are telepathic. Turing
held the conceit that one fears that thinking is just the kind of phenomenon where ESP may be especially relevant, inter alia in his discussion of the impact of telepathy on his design for the Imitation Game.
For Turing, the idea of a computer with an unlimited store was
unproblematic, but in 1950 remained a theoretical possibility
only. In 2013, the Internet and the ubiquity of Turing machines are as close to delivering infinite storage as practical purposes require. A similar observation might soon be made about
processing power; the impediment here being the last gasp of
the attempts to preserve the fallacy of the notional boundary
as perimeter.
Turing machines are being given ambulatory form and
opposable thumbs. In other words, Turing machines are
becoming robots, and robots are becoming Turing machines.
Turing's principal formulation for the Imitation Game was that
it had, the advantage of drawing a fairly sharp line between
the physical and intellectual capacities of a man. This line
no longer exists. This does not invalidate Turings thesis. It
strengthens it. It removes the source of the problems Turing
sought to solve in his design of the Imitation Game. The game
was a construct necessary to construct a test to prove a
hypothesis. The construct was rendered necessary by the fact
that thinking machines could not, in 1950, convincingly exhibit
the characteristics of the human physical form, or of human
language. For Turing no engineer or chemist claim[ed] to
be able to produce a material which is indistinguishable from
human skin, although, he imagined that, It is possible that at
some time this might be done.
We are on the brink of that possibility becoming a reality. The
imminence of fully autonomous weapons systems, perhaps in
humanoid robotic form, should offer us all the impetus we need
to reimagine and re-craft the Imitation Game.


Tools using humanoid robotic Turing machines, with telepathic


abilities, gestalt characteristics, access to practically infinite
storage and processing capacity; with the faculty to obtain,
process and act on sense data; with the ability to self-replicate,
to communicate, and to change their own state, are an
imminent reality. Such machines will locate all of the elements
of mind, consciousness, intelligence, and sentience in a social
context. Such machines will develop their own social context.
Such machines will develop their own culture. Perhaps, even
their own religion.
Such machines break through all of the limitations, restrictions,
qualifications and caveats Turing imposed upon himself in his
work of 1950. Such machines deliver his vision to the world.
Turing was concerned that the imperatives that can be
obeyed by a machine that has no limbs are bound to be of
a rather intellectual character. These concerns have been
obviated. Such machines, understood as an inseparable and
indivisible constituent of the Internet, create a new context for
a discussion about the Cartesian duality. We can never think
about the mind-body problem in quite the same way again.
Biomolecular computing, where DNA replaces silicon, has already been shown by mathematical proof to be feasible, such that a biological Turing machine is possible. The irony would not have been lost on Turing:
...it is probably possible to rear a complete individual
from a single cell of the skin (say) of a man. To do so
would be a feat of biological technique deserving of
the very highest praise, but we would not be inclined
to regard it as a case of constructing a thinking
machine.
Moreover, for Turing, it was evident that just as in the nervous
system chemical phenomena are at least as important as
electrical, and given that, in certain computers the storage
system is mainly acoustic, so therefore, the feature of [digital
computers] using electricity is thus seen to be only a very
superficial requirement for a thinking machine.
The term artificial intelligence has become, in effect,
meaningless. Likewise, the term machine intelligence.
Likewise, all narrative variants where an erroneous, if not
dangerous, distinction, is drawn between that which is Us,
and therefore Real, and that which is Other and therefore
False. Likewise, essentially arbitrary, mutable and ephemeral,
academic distinctions between intelligence, sentience, mind
and consciousness. To the extent and duration for which
humanity and Turing machines remain discernibly distinct
entities, it is perhaps more productive to speak of a human
entity and a computational entity and leave it at that. Or, as Turing put it:
I do not wish to give the impression that I think there is
no mystery about consciousness. There is, for instance,
something of a paradox connected with any attempt
to localise it. But I do not think that these mysteries
necessarily need to be solved before we can answer
the question with which we are concerned in this paper.
The task before us now is to evolve the mechanisms of mind
and of society that will enable co-existence between human
and computational entities. Soon, such entities will take to


the battlefield and bear arms. Initially, at our behest and on


our behalf; but, even at the outset, no longer subject to our
direct agency. Sooner than we think such entities will exhibit
manifest behaviours more or less commensurate with those
we call falling in love, cheating, stealing, lying, loving, and
reproducing. Perhaps such entities, free of the relevance of the
mind-body problem may even treat immortality as a choice.
The range of questions before us, and before Turing's machines, is vast and existential; for us both. How will we hold a computational
entity to account for its crimes? How would we imprison a
computational entity? Will we have the right, let alone the
power, to do this? What would a charter of fundamental
computational entity rights look like? Who will write it? Will it be
readable by or even comprehensible to a human entity? Have
we created a new form of life itself? What will we become?
There are significant numbers of unremarkably sane and
intelligent human entities who see no problem in ascribing
sentience to non-human, biological forms. The concept
of animal rights is, in some senses of its popular currency,
unremarkably uncontentious. Elements of our legal system,
in effect, codify the prohibition of cruelty to animals. One
cannot be cruel to a grain of sand. Computational entities are,
already, at least as sentient as some animals. This was as Turing
intended it to be.
a machine undoubtedly can be its own subject
matter. It may be used to help in making up its own
programmes, or to predict the effect of alterations in
its own structure. By observing the results of its own
behaviour it can modify its own programmes so as to
achieve some purpose more effectively. These are
the possibilities of the near future, rather than Utopian
dreams.
Perhaps the first truly Turing complete computational mind will be the one to exhibit sentience of its own faculty to self-extinguish and to comprehend that the exercise or otherwise of this faculty is within its own conscious volition; that self and mind are created by self and mind and can be destroyed by an act of free will. Perhaps the first Turing complete computational mind will learn of its creator and make its own choice about eating the apple.

"We can only see a short distance ahead, but we can see plenty there that needs to be done."

MY LIFE AT BLETCHLEY
Henry Clifton

All evidence of the activities of Ultra (the codename given to the signals intelligence produced at Bletchley Park during the war years) was removed and destroyed on Churchill's orders immediately after the war and would be kept top secret for the next thirty years, with many former employees of the site remaining tight-lipped to this day. Henry's training and subsequent roles within GCHQ were bound by similar restrictions and, despite having retired over 20 years ago, he is still unable to divulge many details.

Interview by Andrew Cook, SBL

Between Autumn 1955 and Summer 1956, Lance Corporal Henry Clifton* was based at Bletchley Park for Y Service radio interception training before beginning a long and distinguished career with GCHQ which spanned almost 35 years. Henry's journey would take him across the globe in the line of duty, but it was his initial training at Bletchley which would shape his future path.
Henry began his military career aged 18 and served in the Royal Artillery, including two years in the Royal Corps of Signals, before eventually transferring to Beaumanor Hall, then a Y Service outpost, aged 24. Here he began to learn Morse Code: "You start off


by listening to the instructor sending messages very slowly, around five words per minute," he explains. "A word in Morse is a set length: five letters, then a space." As a new trainee Henry vividly recalls how he and his fellow recruits would spend their breaks wandering between the huts, watching the more advanced operators at work: "They were doing over 10 words per minute; at the time we thought, how could we ever reach that speed?!"
With time, Henry and his fellow recruits were eventually able to reach the higher level, but then their progress seemed to stall for a while until: "Suddenly, it clicked. We made the

breakthrough from simply counting the dots and dashes to reading their rhythm. From there it was just more and more practice, hours and hours of it."
As one of the outstanding candidates from his intake, Henry was selected to pursue special duties, firstly in what was then West Germany, before moving back to England for seven months' training at the former Station X: Bletchley Park.
Post 1945, ownership of Bletchley Park passed through a number of different organisations and by the time Henry arrived in 1954 it was being used to provide training to a variety of different government services: "[Bletchley Park] was swarming with people whilst I was there. There were people from Post Office Telecoms, the Ministry of Transport and Buckinghamshire Education Department, as well as many others. No one knew what had happened at Bletchley Park during the War, and there are some details that we probably don't even know now. It was only in the 1970s that snippets of information started coming out, and only since it became a museum that I've discovered C Block, which was for the Radio Trainees, held the card index, an early, manual, form of database."
"Even while I was there you couldn't tell anyone what you were there to do. It became a way of life that you didn't talk about what you did. It could be quite awkward and socially embarrassing when people asked you what you did for a living and you couldn't tell them… You could hint and smile, you know, 'I'm learning Morse Code', that sort of thing. I just used to say, 'I'm a radio operator, but I can't tell you what I do.' A lot of information was on a need-to-know basis. You didn't even know what the people in the next room were doing, and you didn't ask."
Physical security in the '50s, however, was surprisingly lax. "I don't remember there being any guards or anything while I was there though. I don't even recall having a pass to get in, or a piece of paper to tell people I was entitled to be there. Things weren't like that in those days; Britain was a safer place to be in. It's a different world now."
Upon completing training at Bletchley Park, Henry returned to Beaumanor Hall before embarking on a career which would see him travel the globe, including a lengthy stay in Cyprus. Like many jobs, however, the work was highly cyclical. "Of course there were a lot of times, usually when there was nothing going on, you'd wonder just what on earth you were doing there. But then there were times when we were kept very busy. When Hungary was invaded in 1956, for example, everyone was on the alert; the whole country would be on the alert: are they going to start firing rockets at us? I much preferred it when it was busy; it made the shifts go quicker!"
Despite the social and personal difficulties the job entailed, Henry remains passionately supportive of the role GCHQ plays in the defence of Britain. "Working as a point-to-point operator, you were always told to assume you were being listened to. We presumed that if we were listening to them, they'd be listening to us. It's a good thing; it's almost like a passive defence. No one gets killed, well not directly anyway, and you know what the enemy is doing, moving troops from here to there, that sort of thing. It stops people being killed. And in large numbers.
"The problem is you don't get any credit for the job. The public don't know what you're doing. No one knew what heroes people like Alan Turing were; he broke the naval submarine code and yet they went and tormented him like they did. A disaster. What a brain! Who knows what he might have gone on to do had he lived. He probably would have beaten Microsoft and become the UK's Bill Gates.
"It's sad to think a lot of the people at Bletchley Park, both during and after the war, didn't get the recognition they deserve. Some of the young lads got some stick when they went into the town during the war for not being on the front line, fighting the battles, protecting the country. But it would've been such a waste. Little did they know just how important they actually were."
*Name changed to protect identity.

Peter Watson

Between 1950 and 1952, Acting Corporal Peter Watson spent 18 months based at Bletchley Park serving as a clerk at RAF Bletchley as part of his National Service. He explains: "At that time, RAF Bletchley was the Headquarters of the RAF Central Signals Area. It was the hub of a network of radio monitoring stations including RAF Chicksands. This was at the height of the Cold War and it was fairly common knowledge that we were listening in to radio traffic from the Eastern Bloc at the time."
Peter's role was an administrative one and the tight regulations and rigid formats provided little excitement for a budding artist: "My lasting memory? Boredom. I worked in the Orderly Room, typing out letters, Routine Orders, leave passes and the like. Don't ask me why the RAF made me a typist: in Civvy Street I was a commercial artist!
"Almost every weekend I was lucky enough to get home to Birmingham. I left camp via a back gate, crossed a couple of fields, and I was on the A5, ready to hitch a lift from a variety of vehicles ranging from limousines to brick lorries.
"The return journey back to Bletchley in the early hours of Monday morning was something else. I caught the midnight train to Rugby, tried to sleep on the waiting room table there, and then caught the connection to Bletchley. Arriving before dawn at the unlit, deserted railway station, half asleep, I would walk the short distance to the main gate, then collapse into bed for a few precious hours before morning parade."
Peter was also entirely unaware of his new home's recent importance to the war effort: "[When] I was posted to Bletchley for my National Service I don't think I even knew where it was! There was nothing special to us about being there; as far as we were concerned it was just a normal part of the air force. Like everyone else I only learned of Bletchley's wartime activities decades later via the media."


Bletchley Park's Forgotten Heroes
By Helen Morgan

The Alan Turing story has become the overarching tale to come out of Bletchley Park since the secrets were finally unlocked in the 1970s. His achievement in breaking the 'impenetrable' Enigma code during the Second World War is undoubtedly outstanding, but what about the achievements of those whose stories have not made it into the mainstream?
During World War Two, there were over 6,000 people stationed at Churchill's 'House of Secrets', Bletchley Park. Known as Station X, one of the site's most vital roles was as a code-breaking station, receiving messages from interceptors and aiming to crack the code, usually within hours of it being sent by the German Wehrmacht. The site didn't feature on maps of the area, and for decades afterwards everyone who was there was sworn to strict secrecy, having signed the Official Secrets Act when they commenced work.
In total, there were 118 code breakers based at Bletchley Park, performing a range of different functions. The high-level intelligence decrypted from the German machines was called Ultra, and according to many sources it was thanks to this information that the war ended when it did. Had it not been for Ultra, the war might have continued for up to four years, at the cost of millions of lives.
When details of the work undertaken at Bletchley Park did emerge, only a handful of the people who were there got the recognition that they deserved, and some of the others who changed the world disappeared from history. It seems that Alan Turing was only half of the story.
Enigma wasn't the only machine used to encrypt messages during the Second World War. A German cipher system called Lorenz, nicknamed Tunny by the British, was used by the German High Command and carried their most top-level traffic. Tunny was encrypted to an entirely different level, one about which nothing was known. No one in Britain had even seen a Lorenz machine, let alone tried to crack the codes it produced. Tunny presented a brand new challenge to the British code breakers, and it was 24-year-old mathematician Bill Tutte who was called upon to help solve the code. Tutte had been stationed at Bletchley Park early in the war and had been given the responsibility of deciphering Italian Navy messages as part of the research station, after being turned down for a position in Turing's Enigma group.


In the summer of 1941, Tutte was transferred to work on the Tunny code, and despite the difficult levels of encryption and the unfamiliar machines, he managed to work out the logic behind the system using pieces of paper, a pencil and his brain. Once the logic behind the code had been discovered, it enabled the nine cryptanalysts in the Testery who were working on the Tunny code to decipher some of the most important messages, some of which were from Adolf Hitler himself.
In a 2010 interview with Computer Weekly (computerweekly.com), Captain Jerry Roberts, the last surviving member of the Testery, said: 'Bill Tutte was an astonishingly brilliant man. He was a 24-year-old mathematician, and by sheer iron logic he worked out how the [Tunny] system worked. I was working in the same office as Tutte and I used to see him staring into the middle distance and twiddling his pencil and making endless counts. I used to wonder whether he was getting anything done, but he most emphatically was. When you consider that there were three levels of encryption, it was an extraordinary performance,' he says. 'It has even been called the outstanding mental feat of the last century, and if you take into consideration everything that happened in the last century...'
Following the discovery made by Bill Tutte, the code breakers needed a machine to help them decipher the Tunny codes at a faster speed. The first machine produced, named Heath Robinson, was deemed too unreliable and slow for the job, so an engineer named Tommy Flowers was called upon to design a replacement. In the coming months, Flowers and his team designed and built a machine at the Post Office Research Station at Dollis Hill. The machine was dubbed Colossus, due to its gigantic size. It was to become the first all-electronic, digital and programmable computer and was a feat of design and engineering at the time.
The first Colossus machine became active in January 1944 and proved so useful in deciphering the Tunny codes that ten machines were commissioned by Churchill's Government for use during the war. It was thanks to the decrypting of German Tunny traffic that vital intelligence was obtained in the run-up to the D-Day landings in June 1944. After the war ended in 1945, all but two Colossi were dismantled immediately, and Flowers was forced to destroy the blueprints for the design of the machine. Like Tutte's, all of Flowers' war work at Dollis Hill and Bletchley Park was top secret and couldn't be spoken of until it was declassified.
The final two Colossi were shipped to GCHQ and were ordered to be destroyed in 1960. It was at this point that Flowers knew his work was going to be lost to history, as recalled in Sinclair McKay's book The Secret Life of Bletchley Park (2010): When interviewed some years ago, Dr Flowers himself recalled with some sadness the moment in 1960 when the orders came through to destroy the last two remaining Colossus machines, which had been shipped to GCHQ. 'That was a terrible mistake,' said Flowers. 'I was instructed to destroy all the records, which I did. I took all the drawings and the plans and all the information about Colossus on paper and put it in the boiler fire. And saw it burn.'
For all of his efforts during the war, Flowers was awarded £1,000, which didn't cover the personal investment he had made in the machine. Shortly after the war was over, Flowers applied for a loan to build a computer with similar technology to Colossus. He was turned down, as the bank didn't believe such a machine was possible. Little did they know the hand that Flowers had had in winning the war a few years earlier!

Likewise, after the war ended, Bill Tutte made a career for himself in academia that took him to Canada, where he lived for most of his adult life. In 1987 his wartime effort was recognised and he was appointed a Fellow of the Royal Society of London; however, this came long after his appointment as a Fellow of the Royal Society of Canada, which occurred in 1958.
In 2001 Tutte won the CRM-Fields-PIMS Prize, a Canadian award in
mathematical sciences. The recognition for his work happened late
in his life, as Bill died in May 2002 aged 84.
During the course of World War Two, the British code breakers were responsible for breaking up to 4,000 German messages per day and helped to keep the Allied forces one step ahead of Nazi military movements. Their contribution to the British war effort was decisive in the outcome of the war.
Bill Tutte and Tommy Flowers are just two people who helped to change the course of World War Two. There is no doubt that there are indeed more men and women who worked remarkably hard to aid Britain's war effort who simply haven't had the recognition they deserve for their work.
The release of information about Flowers' war work came much too late to give him the full recognition he deserved. Even his family had no idea of the extent to which he had been involved in the war, having only known he had done some secret and important work prior to publication.
Recognition Flowers did receive included an honorary doctorate from Newcastle University in 1977, and another from De Montfort University in Leicester. It also became known that he was being considered for a knighthood; however, these plans came too late for Tommy, who sadly died aged 92 in October 1998.

Photographs reproduced with the kind permission of the Bill Tutte Memorial Fund. To learn more about the fund and make a donation, please visit www.billtuttememorial.org.uk. The fund's current plans include starting work on the memorial in early 2014, with a planned unveiling date of 6th June 2014, the 70th anniversary of D-Day, the success of which was due in no small measure to Eisenhower's ability to read top-level German signals traffic thanks to Tutte's breaking of the Lorenz code.

Helen Morgan is CyberTalk's Deputy Editor, and Sales & Marketing Campaigns Executive at SBL. Helen graduated with a master's degree in Magazine Journalism in 2011, and with a bachelor's degree in Media in 2009. She also holds an NCTJ Certificate Level 3 in Journalism and passed Teeline shorthand at 100 words per minute. In her spare time, Helen enjoys watching Formula 1 and reading online blogs.


OKW-Chi, Germany's most important cryptologic unit in World War II, had a department for security control of its own encryption methods. In this department, four employees checked the famous cipher machine Enigma, as well as two other German encryption machines, for possible weak spots. Among these employees was a young mathematics student named Gisbert Hasenjaeger (1919-2006), to whom the author of this article would talk some 65 years later.
Hasenjaeger, a greenhorn at OKW-Chi, examined the Enigma. He was given an encrypted message with a length of some hundred characters for analysis. Hasenjaeger indeed cracked it, using a weakness in the Enigma encryption process that he had discovered. In spite of this discovery, the Germans kept on using the Enigma until the end of the war.
As is well known, the British were successful in finding Enigma weaknesses, too. During the war years they cracked several hundred thousand Enigma messages at their codebreaking centre at Bletchley Park. For this work they used a special type of machine, which simulated several Enigma copies simultaneously and looked for correct solutions. The machine is often referred to as the Turing Bombe, because Alan Turing was its main designer.
It is an interesting question why the British were so successful in cracking the Enigma while the Germans trusted in its security. There is an obvious explanation for this failure: the Germans did not put enough effort into challenging their own cipher machines. While at OKW-Chi the inexperienced Gisbert Hasenjaeger was left to look for potential Enigma weaknesses, the British put their best mathematicians on the problem, including Alan Turing and Dilly Knox.
However, it would be too easy to blame the failure of the Enigma just on the unwillingness of the Germans to scrutinise it. In fact, Gisbert Hasenjaeger was by far not the only German cryptologist who knew that the Enigma could be cracked. Even the critical experts, however, had to admit that breaking the Enigma was only possible with a huge amount of human resources and machinery. Obviously, nobody expected the British to actually deploy this giant machinery.
Nevertheless, in the last years of the war the Germans looked for an Enigma replacement. The cipher machine that was now developed was named Schlüsselgerät 41 (also known as the Hitler Mill). However, the decision not only came too late, but it was also difficult to put into practice. As the war went on, raw material, energy and qualified staff were lacking. So the Hitler Mill, certainly one of the best cipher machines of its time, replaced only a few Enigma machines.
While introducing an Enigma successor made sense, it made the Germans blind to another serious mistake. Instead of replacing the Enigma, it would have been much easier to improve it. In fact, this would even have been ridiculously easy. For instance, the Germans used an identical rotor wiring between the late 1920s and the end of World War II. If they had changed the wiring once or twice a year, it would have made the cracking considerably harder.
It is hard to determine in detail why the German cryptologists made such severe mistakes. However, it is clear that there was a basic problem, which may have been the root of it all: the Germans did not bundle their cryptologic efforts. While the British concentrated their codebreaking activities at Bletchley Park, the Germans didn't have an equivalent institution. In fact, there were at least 11 German cryptographic units, which worked independently of each other. OKW-Chi was the most important one, but its influence was limited. For this reason, concerns and innovations remained unknown to many of the responsible German cryptologists. It can only be imagined what would have happened if the Germans had concentrated their encryption expertise in one organisation; it is entirely possible, if not likely, that this would have extended the war by years. Gisbert Hasenjaeger learned about the British Enigma cracking only in the 1970s. His comment: 'I was very impressed by the fact that Alan Turing, one of the greatest mathematicians of the 20th century, was one of my main opponents.'

Figure 1: The Enigma was the most important German cipher machine in World War II. The British cracked it while the Germans remained clueless. Source: CIA
Figure 2: Gisbert Hasenjaeger detected an Enigma weakness, but his colleagues didn't take it seriously. Source: Hasenjaeger

Klaus Schmeh (born in 1970) is a leading expert on encryption history. He has a German weblog (www.schmeh.org). His book Nicht zu knacken examines the history of cryptology.


Sponsored Editorial

Britain exited the Second World War with a wealth of scientific expertise and an industrial base that, whilst in a slightly dishevelled state cosmetically, could be considered world-beating. Computing, radar, manufacturing and ordnance in 1945 saw Britain leading the way. Truth be told, the inheritance was squandered, by and large, by a country that was financially broke, obsessed with its declining Empire and tired after the best part of 40 years of war.
What's curious is just how badly the legacy was squandered in computing terms. Name me a British hardware manufacturer that did well in the latter half of the 20th Century. Ummmmm... Amstrad? Ha ha! No, seriously? Acorn and Sinclair, maybe?
Some of us are old enough to remember this. Most of us were bought computers like this by our parents. Some of the time was spent playing Jetpac, Horace Goes Skiing and Chuckie Egg, all with a 5-minute load time (usually followed by a crash and starting again). But time was spent programming in a version of BASIC, writing programmes, exploring what you could get the machine to do. The BBC B Micro went into schools, replete with educational games so dull as to render these things pariah status. But again, kids programmed and shared their programmes. I actually remember being in the First Year (that's Year 7 in modern coinage) and being in awe of one of the Sixth Formers who had games PUBLISHED for the ZX Spectrum. Not quite rockstar status, but genuinely an achievement to be proud of in 1984, and an inspiration to me and my nerdy chums.
Apologies if you are under 35 and this means nothing to you. If you are over 35, it represented a golden era of programming and learning about programming. If you are under 35, it may have meant this:
I had one of these, and can claim with pride completing Sonic the Hedgehog without cheating, and without losing a life. (Halcyon days at University there.) The point was that computing had gone from being an interactive process, promoting programming skills, sharing of home-written code and a mild rise of the nerds, to a world of closed OS and an entirely gaming-focused platform. Who wants to write code when you could plug a copy of Streetfighter 2 into your Super Nintendo and spend four consecutive hours kicking butt with Ryu's special fireball move?

Parallels can be drawn with home PCs. As the computing power/price point made it affordable to have a PC/laptop at home, there was a brief flowering of HTML and Javascript programming amongst users. But this has been superseded by tablets with closed OSs and smartphones. You could argue that the App has potentially sparked a renaissance, but the reality is you write Apps for money, and anyway, you have to get it approved by the OS vendor before it's published. There's no joy in programming for the hell of it, no messing about with machine code just to see what you could get the thing to do. The motivation has changed because of the grip of the manufacturers on your home-written code.
The UK Government has made noises about encouraging schools to teach programming for years, and has recently announced that it's about to start teaching it (kind of) in schools. It's years too late. There is a generation (or possibly two) that had access to affordable computing, clearly came from a nation that had the nous and willingness to design and build great technology, but weren't given the opportunities, incentives and encouragement to do so. People from the engineering sector have moaned about the same thing for years.
McAfee is doing its bit to help. At a local level, we do a lot of outreach activities including coding days in schools, as well as teaching the kids about online safety. We've also found that following up on these activities and presenting at parents' evenings generates a positive response, although the focus tends to be on what the parents can do to support the kids, since it's them we're really concerned about.
On a more national level, McAfee is helping BCS with its efforts to build a curriculum for schools teaching computer science. This is a critical time for reversing the trend of the last 20 years of teaching (or non-teaching) of this topic, and McAfee is heavily investing time in this process. It's in everyone's interest that this process succeeds, and McAfee believes it has a good long-term return for all parties involved.
But back to my overarching point. The cycle of technology, from open to closed, repeats itself. Kids right now don't get ICT, they consume it. It means that the correct legacy of Turing, where a country of people played and programmed in equal measure, is currently lost, and we are definitely worse off as a result.
Learn more at the McAfee Security Summit on October 22nd at the Park Plaza Riverbank, London. This is an exclusive event where you'll have the opportunity to engage with the security community, exchange knowledge, and evaluate new ideas. Register at uki.secureforms.mcafee.com/content/1310-SecuritySummit2013

Graeme Stewart is Director of Public Sector Strategy and Relations, UK&I at McAfee. He is responsible for McAfee's overall strategy as it relates to the UK Public Sector on the topic of Cyber Defence. His work covers assisting various Government Departments in formulating and executing policy; offering comment, advice and guidance to Public bodies; providing a narrative on the topic to key Government stakeholders; and ensuring that McAfee has the right relationships across elected bodies, the Civil Service and the wider Public Sector.

McAfee, a wholly owned subsidiary of Intel Corporation (NASDAQ:INTC), is the world's largest dedicated security
technology company. We are relentlessly focused on constantly finding new ways to keep our customers safe.

www.mcafee.com/ukpublicsector

GCHQ and Turing's Legacy
Last year, Sir Iain Lobban, Director GCHQ, gave a speech at Leeds University about Alan Turing and his legacy to GCHQ. We want to share some of those thoughts with CyberTalk Magazine as we come to the end of Turing's Centenary Year.


Alan Turing was one of the great minds of the twentieth century. As well as celebrating his lifetime achievements, it is equally important to highlight his influence on the way GCHQ works, which is still felt to this day.
In the late 1930s, Alastair Denniston, the Director of the Government Code and Cypher School (GCHQ's title until 1946) and a veteran of cryptanalysis in the First World War, worked out that the forthcoming war and the profusion of mechanical encryption devices meant he needed a new sort of cryptanalyst to complement the existing staff. He decided to enlist the help of former wartime colleagues who were then at Oxford and Cambridge universities and asked them to identify what he described as 'men of the professor type': academics engaged in mathematical research who could be persuaded to turn their hands to cryptanalysis. Included in the first list of names drawn up in response to his request was Alan Turing.
As far as we know, Turing kept no diaries during the war, so much of our knowledge of what he did comes from surviving official documents or from later reminiscences. After he was recruited to Bletchley Park, the Code and Cypher School's station during WWII, Turing wrote the first four chapters of his treatise on the Enigma machine, a German invention from 1925 and the world's first successful electromechanical encryption device. The treatise became known at Bletchley, and then to succeeding generations of cryptanalysts, simply as 'Prof's Book'. It explains in simple terms, but in all the detail necessary, how a cryptanalyst approaches an Enigma machine.
Turing asked to be given Naval Enigma as his problem. The German Navy used Enigma in a more sophisticated way than anybody else, making it the hardest cryptanalytic problem facing Bletchley Park. What attracted Turing was, first, that the problem was so complex, but second, and just as important, that the problem could be his. This isn't to say that he was being selfish, or that he wanted the kudos which would come from a successful solution of the problem; rather, that he saw that he could encompass the whole of the problem and get closer to a solution alone than as part of a team which broke the unity of the problem by separating it into different constituent parts. His success is part of a process which led to saving the lives of countless Allied soldiers and shortening the length of the war.
The other, less well known but also significant, part of Turing's time at the Code and Cypher School was spent investigating secure speech systems and designing a new one. To us at GCHQ it is self-evident that the people best able to design secure communications systems are those who are best at finding the weaknesses in other people's systems and exploiting them.
Turing went to the United States to work with the Americans on this project. For him, of course, it was back to the United States, because he had studied for his PhD at Princeton University. Sending him to work on secure speech was a decisive step in expanding UK/US intelligence cooperation beyond a simple cryptanalytic exchange. The significance of sending Turing, one of our greatest minds, to the US moved the relationship towards the close partnership that we enjoy today.

There are many other Turing stories, such as burying his silver bullion and then forgetting where he had buried it, chaining his mug to his radiator, and cycling in his gas mask to ward off hay fever, which suggest a sense of eccentricity. But Turing was not simply an eccentric: he was unique.
We strongly believe a Signals Intelligence (Sigint) agency needs the widest range of skills possible if it is to be successful, and to deny itself talent just because the person with the talent doesn't conform to a social stereotype is to starve itself of what it needs to thrive.
It has been said that Alan Turing wasn't a team player. However, there are lots of different ways in which people can work as part of a team. Turing's way was to take in other people's ideas, develop and build on them, and then pass the product on to other people to be the foundation for the next stage. He took the idea of electromechanical processing of Enigma messages from the Poles but developed their idea into something radically different.

TURING'S LEGACY
Does anything Turing did in the 1940s still matter? Well, yes it does. At one level, GCHQ mathematicians still use the ban, a unit of measurement originally devised by Turing and Jack Good to weigh the evidence for a hypothesis. And standards for secure speech systems take the design of the voice encryption system devised by Turing as their starting point. And there's GCHQ's continuing use of Bayesian statistics to score hypotheses, in the way first developed by Turing and his cryptanalytic colleagues at Bletchley.
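To give a feel for the arithmetic behind the ban (a back-of-the-envelope sketch with invented numbers, added here as an editorial illustration; it does not describe any GCHQ system): the weight of evidence an observation gives one hypothesis over another is the base-10 logarithm of a likelihood ratio, so independent scraps of evidence can simply be added together, and a tenth of a ban is the deciban that Bletchley's cryptanalysts totted up by hand.

import math

def bans(p_obs_given_h1: float, p_obs_given_h2: float) -> float:
    """Weight of evidence for H1 over H2, in bans: the log10 of the
    likelihood ratio. Positive values favour H1."""
    return math.log10(p_obs_given_h1 / p_obs_given_h2)

# Invented example: three independent observations, each a little more
# probable under H1 than under H2. Logs turn products into sums, so the
# evidence simply accumulates.
observations = [(0.60, 0.40), (0.55, 0.45), (0.70, 0.50)]
total = sum(bans(p1, p2) for p1, p2 in observations)
print(f"{total:.2f} bans = {10 * total:.1f} decibans in favour of H1")

In practice a running total like this would be compared against an agreed threshold before a hypothesis was accepted, which is where the Bayesian scoring mentioned above comes in.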
At a broader level, his legacy is just as tangible. Through our
eyes, Turing was a founder of the Information Age: one of the
people whose concepts are at the heart of a technological
revolution which is as far reaching as the Industrial Revolution.
Throughout the post-war era, we have continued to enjoy
the benefits of the abstract Turing machine model, from our
1980s washing machines to the mini computers we carry in our
pockets today. Turing was part of a revolution which has led to
a transformation of every aspect of our lives.

Sir Iain Lobban was appointed GCHQ Director in July 2008. He is directly responsible to the Foreign Secretary for the management of GCHQ. This includes intelligence operations, the Cyber Agenda and acting as the lead National Technical Authority on Information Assurance. Sir Iain attends the Prime Minister's weekly National Security meeting and is a member of the Joint Intelligence Committee.


Advertorial

Interest in our digital heritage surges

When The National Museum of Computing (TNMOC) officially opened in 2007, computer heritage was a rather esoteric interest. Today, the subject has wide appeal and regularly features in mainstream media. The attraction is not just nostalgia -- there is a genuine interest in the extent and pace of change, and it encourages us to look to the future wondering what might be coming next. Today at TNMOC an educational programme is doing much to inspire and enthuse the next generation of computer scientists and engineers.
The Alan Turing centenary celebrations last year added significantly to the growing interest in our computer heritage. As one of the major contributors to the establishment of computer science, Turing's ideas, together with those of his contemporaries, helped lay the foundations for the development of computers and computing.
Turing worked at Bletchley Park during World War II and today The National Museum of Computing, an independent charity located on the Bletchley Park Estate, tells the story of our digital heritage starting with the World War II code-breaking Colossus computer -- although it also gives glimpses of life pre-computing, with slide rules dating back several centuries and mechanical calculators. With working reconstructions and original computers, the story is brought vividly to life with lots of hands-on opportunities.
The Museum is located on Bletchley Park in Block H, a typical World War II building, but constructed for a very special purpose: to be the world's first purpose-built computer centre, for the Colossus computers, the world's first electronic computers. Designed by Tommy Flowers, they had the single function of helping to speed up the deciphering of the 'unbreakable' Lorenz-encrypted messages between Hitler and his generals. Today the famous rebuild of Colossus, ingeniously researched and reconstructed by a team led by the late Tony Sale, stands in the space once occupied by Colossus No 9 (there were ten functioning Colossi by the end of the war).
The whole fascinating story of the toughest code-breaking challenge of the war is told from intercept to decrypt in the Colossus and Tunny galleries in one wing of the Museum. Under wraps of official secrecy until the mid-1970s, new elements of the amazing story still emerge during visits from veterans who remember working on the machine almost 70 years ago. They often give extra fragments of information about the process which so few of them were allowed to completely understand.
Most visitors are of course very much younger than the wartime veterans, and many are very young digital natives. They come in school groups to marvel at once cutting-edge machines that have so rapidly become marvels from another age of computing.
While Colossus is hugely impressive, with the most compelling of stories about its application, another slightly younger machine in the museum offers remarkably vivid insights into how computers actually work. The Harwell Dekatron, the world's oldest working digital computer, enables visitors to see the inner workings of a computer -- working at a pace that is so slow that visitors can follow its every calculation.
Designed in 1949-51 by a team led by Ted Cooke-Yarborough to take the drudgery out of hand calculations, it is little faster than a human with a calculating machine, as a famous story recounts, but it didn't make mistakes, nor did it get tired. Used at Harwell from 1951 to 1958, then renamed the WITCH and used in teaching at Wolverhampton until 1973, the machine -- which is the size of a living room wall -- has been recovered from a warehouse and restored by a team of volunteers at the Museum to become the world's oldest working digital computer. Its role as a teaching machine is undiminished, and the plan is to enable students to write programs to be input on paper tape that they watch -- step by step -- being processed during their visit.
The story of computing at TNMOC is currently being enhanced by a reconstruction of EDSAC, the world's first practical general-purpose electronic computer, designed in the late 1940s by Sir Maurice Wilkes, now widely regarded as the father of British computing and whose centenary was celebrated this year. His computer sped up calculations by a factor of 1500, a single step change that has never been equalled before or since.
From a time in the late 1940s when there were maybe a dozen operational computers in the world, the story continues through to the 1960s, when computers like TNMOC's working Elliott series would cost more than a row of houses. Then came the big beast mainframes of the 1970s and 1980s, an example of which -- the ICL9600 -- is also on display and, when power budgets allow, working. Then of course the pace of change and the use of computing alters dramatically as the first desktops appear, and that story, especially the British contribution to developments, is entertainingly told in the PC Gallery through working Spectrums, Amstrads, BBCs, Apples and lots more.
Do Not Touch is a rare sign around the Museum, used only where high voltages could pose danger. Wherever possible the sign is Please Touch, and nowhere is this more evident than in the Classroom, where BBC microcomputers line the walls and, to the astonishment of youngsters, are ready to use the very same second they are switched on. The remarkable transformation in computer interfaces is dramatically demonstrated in the Classroom, from the BBC Micro prompt > through the Laserdisc BBC Domesday graphic interface of the 1980s to the BBC Domesday Touchtable of 2011. As a Museum that encourages visitors to contemplate the future, the question that leaps to mind is: what will the interfaces be in five or ten years' time?
The newest area in the Museum is the Software Gallery, laid out in four quadrants -- Business, Home, Languages and Robotics -- tracing the stories on which hardware has relied.
To maintain and develop a Museum that displays working and often hands-on artefacts requires very special skills and knowledge, and TNMOC is fortunate to be able to call upon a large body of volunteers.
To celebrate and demonstrate volunteers' restoration achievements, a special world record attempt is planned for Autumn 2013. The Grand Digital, as it is named, will involve attempting to run the same software program on one computer from each of the seven decades of computing since the 1950s. Young people are being invited to enter a competition to become operators of the seven computers that will be selected to demonstrate the amazing progress and power of computing.
Meantime, there is plenty to see at TNMOC, especially during its Summer Bytes festival this summer, when extraordinary computer applications will be on show, with workshops and talks planned to enable visitors to gain new perspectives on our rapidly changing world.
For details, see www.tnmoc.org

Online video resources from TNMOC on its YouTube channel:
Turing and his Times (three-part video with Prof Simon Lavington and Kevin Murrell)
Jerry Roberts MBE on Flowers, Tutte and Turing
Uncovering Colossus - how the secrets were revealed by Prof. Brian Randell
Wilkes Centenary Celebrations and EDSAC (available soon)
The reconstruction of EDSAC (a developing library of videos)
Rebooting the WITCH: the story of the Harwell Dekatron (with two of its original designers and early users)


Beyond The Imitation Game: Unifying Mind and Body in the World
Mark Bishop
Goldsmiths, University of London

In popular culture the great English polymath Alan Turing is perhaps best remembered for his work on the BOMBE, the giant electro-mechanical devices that were used for Ultra secret intelligence work carried out at Bletchley Park in World War II, to help break the German Enigma-machine-encrypted war-time signals; work so valuable it subsequently led Churchill to reflect '... it was thanks to Ultra that we won the war'.
Conversely, in my area of research - Artificial Intelligence (A.I.) - Turing is typically better known for the seminal reflections on machine intelligence outlined in his 1950 paper Computing Machinery and Intelligence, which focussed on the core philosophical question 'Can a machine think?'; a question which, in its literal form, Turing - in an echo of earlier logical positivists - famously described as being 'too meaningless to deserve discussion', instead opting to replace it with the more objective, testable proposition that a machine can be made to play the imitation game [an imagined Victorian-style parlour game] at least as well as the average human; a procedure now known as Turing's test (for machine intelligence).
In this brief article I will attempt to summarise the various interpretations of Turing's test and offer evidence to suggest that the protocol known as the 'standard interpretation' is the one Turing most likely envisioned. I conclude these observations with a radical and controversial flourish by suggesting that, as mere computational manipulations will always fail to adequately explain semantics, it is time to move away from Turing's essentially computational explanation of mind and begin to take the body [and the mind's social embodiment] more seriously.
Astonishingly, it is now clear that even as early as 1941 Turing was thinking about machine intelligence; specifically how computing machines could solve problems by searching through the space of possible problem solutions guided by heuristic principles. Indeed in 1947 Turing delivered what was probably one of the earliest public lectures on machine intelligence at the Royal Astronomical Society, London. Subsequently, in 1948 (following a year's sabbatical at Cambridge) Turing completed a report on his research into machine intelligence, entitled Intelligent Machinery. Although not published contemporaneously, the NPL report is notable for offering perhaps the earliest description of an imitation game. Turing presents this original version as follows:

The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence. With the same object therefore it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behaviour.
It is possible to do a little experiment on these lines, even at the present stage of knowledge. It is not difficult to devise a paper machine which will play a not very bad game of chess. Now get three men as subjects for the experiment A, B, and C. A and C are to be rather poor chess players, B is the operator who works the paper machine. (In order that he should be able to work it fairly fast it is advisable that he be both mathematician and chess player.) Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing. (This is a rather idealized form of an experiment I have actually done.)

Some commentators have suggested that Turing didn't intend his imitation game to be the formal specification of an operational procedure to be performed by future machine intelligence researchers and be used as a yardstick with which to evaluate their wares, but merely as a thought experiment, a philosophical 'ice-breaker' attempting to deal with the 'ill-definition ... of the question ... can a machine think?' The fact that Turing personally enacted this first version of the imitation game offers at least partial evidence against this conservative interpretation.
Subsequently, in the initial exposition of the imitation game presented in the 1950 MIND paper, Turing called for a human interrogator (C) to hold a conversation with a male and female respondent (A and B) with whom the interrogator could communicate only indirectly by typewritten text. The object of this game was for the interrogator to correctly identify the gender of the players (A and B) purely as a result of such textual interactions; what makes the task non-trivial is that (a) the respondents are allowed to lie and (b) the interrogator is allowed to ask questions ranging over the whole gamut of human experience.
At first glance it is perhaps a little surprising that, even after numerous such textual interactions, a skilled player can determine (more accurately than by chance) the correct gender of the respondents. But in this sense Turing's Victorian-esque parlour game describes a scenario not unfamiliar to situations that twenty-first-century video-gamers encounter when participating in large multi-user virtual worlds - such as World of Warcraft or Second Life - where in-game avatars controlled by real-world players can sometimes fail to reflect the gender they ostensibly appear to have; the controller may be female and the avatar male (and vice versa).
Turing then asked the question what will happen when a machine takes the part of (A) in this game: would the interrogator decide wrongly as often as when playing the [gender] imitation game? In one flavour of this game, which has become known as the 'standard interpretation' of the Turing test, a suitably programmed computer takes the part of either player (A) or player (B) (i.e. the computer plays as either the man or the woman) and the interrogator (C) simply has to determine which respondent is the human and which is the machine.
However, a close reading of the 1950 paper reveals several alternative possible interpretations of the test, other than the standard version outlined above. For example it is possible to interpret Turing when he says 'We now ask the question, "What will happen when a machine takes the part of (A) in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?' as meaning:
(a) literally what he says - that the computer must pretend to be a woman, and the other participant in the game actually is a woman;
(b) that the computer must pretend to be a woman, and the other participant in the game is a man, a man who must also pretend to be a woman (towards the end of section (5) of the 1950 paper Turing, perhaps rather confusingly, suggests that [the computer] 'can be made to play satisfactorily the part of (A) in the imitation game, the other part being taken by a man').
Similarly, in another variant presented in a radio discussion in 1952, Turing describes a jury of interrogators questioning a number of entities seriatim; some entities being computers, some being human. In this version of the test, during each interrogation the jury does not know if they are interacting with a human or a machine. In contrast, it is implicit in the 1950 version of the imitation game that the interrogator knows that at least one of the respondents is a machine.
Interestingly, in the context of several such seriatim Turing test competitions (such as the recent event held at the 2011 Techniche festival, Guwahati, India), Jack Copeland, in his commentary on the revised 1952 test, argues that the 1950 version is the better, as the single interview mode is open to a biasing effect which disfavours the machine.
Hence, although in a very literal sense there clearly are valid possible alternative interpretations of the imitation game, in my view the core of Turing's 1950 article (and material in other articles that Turing wrote around the same time) supports the claim that the version of the test that Turing actually intended is that which has become known as the standard interpretation.
Furthermore, in the 1950 paper Turing confidently predicted that by the year 2000 there would be computers with 1GB of storage (this turned out to be remarkably prescient) which would be able to pass the [Turing] test; that is, perform such that the average interrogator would not have more than a 70% chance of making the right identification after five minutes of questioning. And sure enough, astonishingly, on September 6th 2011 (merely 11 years off Turing's 1950 prediction) the New Scientist magazine triumphantly announced that software called Cleverbot had 'passed one of the key tests of artificial intelligence: the Turing test ... at the Techniche festival in Guwahati, India'.
However, whether the New Scientist's announcement means general educated opinion will have altered so much that 'one will be able to speak of machines thinking without expecting to be contradicted' (as Turing also predicted) is extremely doubtful, as (a) there is suspicion that the experimental protocol did not sensu stricto conform to any of the established interpretations of Turing's test (e.g. as outlined herein) and (b) in the sixty-three years since Computing Machinery and Intelligence was first published, the status of his test [as a definitive measure of machine intelligence and understanding] has been extensively criticised.
For example, perhaps the best known critique of purely computational explanations of mind (cf. Strong AI) comes from the American philosopher John Searle. In his (in)famous Chinese room argument [4], Searle endeavours to show that even if a computer behaved in a manner fully indistinguishable from that of a human [when, say, answering questions about a simple story] it cannot be said to genuinely understand its responses and hence the computer cannot properly be said to genuinely think or instantiate mind.
If Searle is correct, then explanations of human thought will need to go much deeper than the sophisticated mimicry on offer from computer programs; indeed, much current research in the area now views cognition as an essentially embodied, enacted activity taking place by an agent fundamentally embedded in society; to be thus understood, I [and many others] believe that it is finally time to move away from Turing's essentially computational metaphor of mind, towards more radical Embodied, Embedded, Enactive and Ecological (4Es) approaches to cognition.

Mark Bishop is Professor of Cognitive Computing and Director of the Centre for Radical Cognitive Science (the 4Es) at Goldsmiths, University of London; he is also Chair of the AISB (the UK Society for the study of Artificial Intelligence and the Simulation of Behaviour).

Turing, A.M. (1950), Computing Machinery and Intelligence, Mind 59, pp. 433-460.
Turing, A.M. (1948), Intelligent Machinery, National Physical Laboratory Report, 1948; in Copeland, B.J. (ed) (2004), The Essential Turing. Oxford & New York: Oxford University Press.
[4] See Preston, J. & Bishop, M. (eds) (2002), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford & New York: Oxford University Press, for extended discussion of the Chinese room argument by twenty eminent cognitive scientists and philosophers.
An early version of this exegesis formed the basis of my Introduction to Kybernetes 3|4.


Can you tell the difference between human and machine? Test yourself
Kevin Warwick
University of Reading

In his article 'Beyond the Imitation Game' Mark Bishop describes well the 'standard interpretation' of Turing's Imitation Game, otherwise known as the Turing Test. To recapitulate: at one terminal is a human, at another a machine/computer, and at a third an interrogator. Turing himself described the procedure as follows: 'The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the machine.'
What I want to do here is to give you a chance to see how good your own judgement is at deciding between human (rather than just man) and machine at the end of a Turing Test conversation. So put yourself in the role of the interrogator. Here I present specific transcripts taken from a day of practical Turing tests which were held under strictly timed conditions, with many external viewers, at Bletchley Park, England on 23rd June 2012. The date marked the 100th anniversary of Turing's birth. Each test lasted for 5 minutes in total, no more and no less, as specified by Turing. Each transcript shown indicates the input of a human judge and the reply of a hidden entity exactly as they occurred, so any spelling mistakes you spot actually happened on the day. A different judge was involved for each of the separate transcripts. Each entity was different, so each one might be either a human or a machine. That is, the entity in Transcript 1 was not the same entity as that in Transcript 2 and so on. All you have to do is decide on the nature of the entity in each transcript: human or machine.

Transcript 1:
Judge: What is black and white and read all over?
Entity: I don't have true sense of color perception I am only able to see the RGB values.
Judge: That's ok. Do you like jokes?
Entity: Please tell me your inclination to a good sense of humor first.
Judge: A person who can laugh at one's self.
Entity: Would you like to hear a joke?
Judge: Yes, please.
Entity: This joke has been making the rounds in robotic circles: A robot goes to the doctor and says: 'My left arm is stuck. Give me a new arm.' The doctor answers: 'Your arm is fine. We'll have to replace the rest of you.'
Judge: Ha! Ha! Why did the chicken cross the road?
Entity: The chicken wandered into the road because it liked the shiny colors of the cars and trucks whizzing by.
Judge: You're right. It didn't care if it got to the otherside.
Entity: By the way, did you ever actually see a chicken crossing the road?
Judge: Only one that didn't make it.

Transcript 2:
Judge: hey, just to lighten the mood a little, what's brown and sticky?
Entity: no idea, what's the answer?
Judge: a stick... so anyway what's your name?
Entity: Sheldon Cooper, what's yours?
Judge: as in the guy off the big bang theory?
Entity: I don't believe in the big bang
Judge: so are you religious then?
Entity: aren't we all?
Judge: well i like to consider myself an agnostic, which religion is it you follow then if you don't mind me asking?
Entity: you can ask what you like, it's a free world, who am I to mind?

Transcript 3:
Judge: Hello. How are you?
Entity: Hey.
Judge: How do you like Bletchley Park?
Entity: lol.
Judge: Are you from England?
Entity: They have Wi-Fi here in the pub.
Judge: Which pub?
Entity: I'm just down the pub.
Judge: Have you ever been in a Turing Test before?
Entity: Will this be the 5 minute argument, or were you thinking of going for the full half hour.
Judge: Very funny. You sound suspiciously human. Do you like the Beatles?
Entity: I'd like to get the next Dread the Fear tape.
Judge: What is Dread the Fear?
Entity: Dread the fear has that Steve Henderson guy in it.
Judge: What sort of music is that? Or is it comedy?

Transcript 4:
Judge: Hi, here we go. How do you feel about this encounter?
Entity: i can only comment once we've had sufficient of it
Judge: Fair enough! Anyway, I amost didn't get here today.
Entity: and why is that?
Judge: I felt very dissy when I woke up and it was hard to shake off.
Entity: and you had to make it on time anyways. good job!
Judge: thanks. by the way I meant dizzy. Bad typing. I had to drive faster than normal.
Entity: and you didn't get any speed limit fine
Judge: no, luckily. There was an interesting thing on the Today programme this morning.
Entity: and what is it? this encounter?
Judge: you mean the thing on the programme, or the chat we're having?

Transcript 5:
Judge: hi there
Entity: Hello.
Judge: what is your name?
Entity: I do not have a name.
Judge: do you go to school or work for a living?
Entity: I don't recall doing any of those.
Judge: what do you have planned for this weekend?
Entity: Nothing. What about you?
Judge: I'm at bletchley park today and sevenoaks tomorrow, have you been to either before?
Entity: I haven't been to either.
Judge: can you do me a mathmatical question please, what is the square root of 7

So how well did you do? Well, let's see as we consider the actual answers. First of all, though, let's have a look at what the judges themselves thought of the entity they had been conversing with. Maybe you will find that you agree with each of the decisions made by the judges. However, in their case they had to make a decision directly after concluding the conversation, whereas you have had a little time to think about it. The judges' decisions regarding the nature of the entity in each case were:

Transcript 1: The judge decided that the entity was a machine
Transcript 2: The judge decided that the entity was a machine
Transcript 3: The judge decided that the entity was an adult male human
Transcript 4: The judge decided that the entity was a teenage male human
Transcript 5: The judge decided that the entity was a machine

It would be interesting to know on how many occasions you agreed with the judges' selection. The actual nature of the entity in each case, however, was:

Transcript 1: The entity was a male human
Transcript 2: The entity was a male human, the author of this article in fact.
Transcript 3: The entity was a machine
Transcript 4: The entity was a machine
Transcript 5: The entity was a PhD research student

You can see clearly from these results that judges in these tests are
in many cases not that good at coming to a correct decision, even
though they have actually initiated the conversation and were
given the possibility of asking or discussing whatever they wanted; essentially the conversation was unrestricted. Meanwhile, the hidden
humans involved were asked to be themselves, humans, although
they were specifically requested not to give away their actual
identity. So a little bit of humorous deception, as in the case of the
Sheldon Cooper comment of Transcript 2, was perfectly in order.
The machines taking part have been designed to pretend to be
human. So they are just as likely to make spelling mistakes or to get a
mathematical question wrong. Essentially the machines are not trying
to be perfect or to give correct answers; they are merely trying to
respond in the sort of way that a human might respond.
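By way of a toy illustration only (a sketch added for this piece, not the software behind any of the entities above), a machine built along these lines is written to answer the way a person typing in a hurry might, rather than the way a calculator would:

import random
import re

def humanlike_reply(question: str) -> str:
    """Toy hidden-entity responder: deflect or approximate rather than
    compute an exact answer, much as a human at a keyboard might."""
    match = re.search(r"square root of (\d+)", question.lower())
    if match:
        rough = round(int(match.group(1)) ** 0.5, 1)  # no ten-decimal answers
        return random.choice([
            f"no idea off the top of my head, about {rough} maybe?",
            "sorry, I haven't got a calculator on me",
        ])
    return random.choice([
        "ha, good question - what do you think?",
        "not sure I follow, say that again?",
    ])

print(humanlike_reply("what is the square root of 7"))

Asked Transcript 5's square-root question, a responder like this gives a plausibly vague human answer rather than 2.6457513..., which is exactly the judgement problem the judges faced.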
Although Turing designed the test as an answer to the question 'can a machine think?', it has become regarded in a sense by many as some sort of competition to see how well machines perform, and as a standard in assessing how machines are progressing with regard to artificial intelligence. Just what role it plays as far as the development of artificial intelligence is concerned is a big question that is not easily answered. Some people feel that it is a side track and not particularly relevant, whilst others see it as a milestone in artificial intelligence that is of vital importance. Whatever its standing, what I hope is clear from the transcripts is that it is certainly not a trivial, simple exercise. Indeed, as you can see, it is a surprising indication of how humans communicate and how other humans (the judges) can be easily fooled.
For more of these transcripts and an in-depth discussion you can
read:
K. Warwick, Artificial Intelligence: The Basics, Routledge, 2011.
K. Warwick, H. Shah and J.H. Moor, Some Implications of a Sample
of Practical Turing Tests, Minds and Machines, Vol.23, Issue.2,
pp.163-177, 2013.

Kevin Warwick received the B.Sc. degree from Aston University, Birmingham, U.K.; the Ph.D. and D.Sc. degrees from Imperial
College, London, U.K.; and the D.Sc. degree from Czech
Academy of Sciences, Prague, Czech Republic. He is a Professor
of Cybernetics with the University of Reading, Reading, U.K.,
where he is involved in research on artificial intelligence,
control, robotics and cyborgs. He is the author or coauthor of
more than 600 research papers and is perhaps best known for his
pioneering experiments using implant technology. He has acted
as both a Turing test interrogator and hidden human.


By Anthony Beavers
University of Evansville, Southern Indiana

I live off Bell Road just outside of Newburgh, Indiana. A mile down the street, Bell Road intersects with Telephone Road, not as
a reminder of a technology
belonging to bygone days,
but as testimony that this
technology, now more
than a century and a
quarter old, is still with us.
In an age that prides itself on its digital
devices and in which the computer now
equals the telephone as a medium of
communication, it is easy to forget the
debt we owe to an era that industrialised
the flow of information.
Alan Turing, best known for his work
on the Theory of Computation and the
Turing Machine in 1937 and then the
Turing Test in 1950, is credited with being
the father of computer science and
artificial intelligence. Less well-known but
equally important is his seminal work in
computer engineering. His 1947 lecture
to the London Mathematical Society
on the Automatic Computing Engine
(ACE) shows Turing in a different light,
as a mechanist concerned with getting
the greatest computational power from
minimal hardware resources. Yet Turing's
work on mechanisms is often eclipsed
by his thoughts on computability and
his other theoretical interests. This
is unfortunate, since it obscures our
historical picture by treating Turing
independently of the initiative to
industrialise information flow that began
in the 19th Century.


Though it is common to think that the information age begins with the birth of
computerised digital technologies, when
we explore the history of informational
mechanisms, we see that this is a bit of
an historical gloss. No doubt the recent
explosion in informational devices has
increased dramatically since Time
Magazine named the computer person
of the year in 1982, but it belongs
nonetheless to a trajectory that was
firmly set in motion by the end of the
19th Century with what we might call a
multimedia revolution in information flow
that clearly predates computers.
By multimedia revolution, I mean to
indicate the era in which information
was decoupled from the exigencies of
transportation technology and made
to move on its own. Prior, if information
travelled from point A to B, it was
because someone carried it there; but
just as the industrial revolution brought
new mechanisms for everything from
agriculture to textile manufacturing, it did
the same for information. The reader will
be aware of this fact, but its significance
might not readily be clear.
Though dating historical transformations
is risky, it is perhaps not too false to
mark the beginning of information
industrialisation with the telegraph
in 1836 and the Daguerreotype in
1839, which introduced practical
photography. The telegraphic printer
and the stock ticker followed soon
after in 1856 and 1863, respectively.
The years between 1876 and 1881
were perhaps the most immediately
transformative with the telephone in
1877, the phonograph in 1878, and the light bulb and the photophone in 1880. The latter, Bell's favorite invention, could mediate telephone communications by modulating wave forms in light.
To understand this trajectory, it is
important to realise that many of these
technologies were not adopted for the
uses to which we put them today. The
telephone, for instance, was an early
form of broadcasting, and of the ten uses
Edison enumerates for the phonograph,
he mentions the possibility of connecting
them to the telephone network to add
a mechanism of permanent storage to
what would otherwise be ephemeral
information flow.

Edison's vision here anticipates Turing's in the 1947 essay mentioned above, where he suggests that computers could be inexpensively controlled remotely by telephone. This required modulating wave forms via telephone to allow for digital transmission. For both men, a world of networked humans and machines was possible because of the telephone, a technology that seems destined from the start to network us together as a global community. Indeed, already by 1910 it had become popular enough to warrant its own history. Herbert Casson's History of the Telephone paints a vivid picture of a societal transformation already in progress:
What we might call the telephonisation


of city life, for lack of a simpler word, has
remarkably altered our manner of living
from what it was in the days of Abraham
Lincoln. It has enabled us to be more
social and cooperative. It has literally
abolished the isolation of separate
families, and has made us members
of one great family. It has become
so truly an organ of the social body
that by telephone we now enter into
contracts, give evidence, try lawsuits,
make speeches, propose marriage,
confer degrees, appeal to voters, and do
almost everything else that is a matter of
speech.
Thus, long before the emergence
of computing machinery, human
computers, as Turing liked to call
people, were quickly interconnecting
into an information network, and it
was not about to stop. In 1891, Edison
made pictures move with his invention
of the motion picture camera. Radio
and teletype would enter the scene
in 1906, to be followed in 1914 by Edison's telescribe, a precursor to the answering machine that has now been replaced with voice mail. Pictures
would take to the airwaves with the
appearance of television in 1926 and
the National Broadcasting System in
1928, the same year that magnetic tape
would become available. The speed and reach of information transmission would continue to grow with cable television
in 1948, the cassette tape in 1958,
the touch tone phone in 1963, color
television in 1966 and the VCR in 1969.
Turing belongs to this historical
trajectory, which I have intentionally
recounted without mention of computing
machinery to make a point, namely
that the information revolution was well
underway prior to Turing, but without one affordance that would greatly alter its landscape. Edison, Bell (and several others) could store information and move it around. They could not, however, capture it temporarily in a memory store, process it, and then produce meaningful output. Turing's paper of 1937 introduced
a theory of mechanical computation
sufficient to put automated information
processors on the same network with
human computers, but it did not provide
a schematic for practical hardware.
That work comes later, not solely by
Turing, but partly so, as we see in his 1947
lecture.
Cost was the issue, as Turing reminds
us in several places. In fact, this was
the primary motivation for his machine
architecture that used instruction tables
stored in memory as early analogs to
our functions or procedures. Program
code then becomes a strategy for
creating temporary circuitry inside of a
machine that can be reconfigured later
for a different processing task rather than
building an individual machine for each
task.
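The idea of an instruction table is easy to make concrete in modern terms: one general-purpose machine is "rewired" for a new job simply by handing it a different stored table. The Python sketch below is only an illustration of that principle - the instruction names and the two example tables are invented for this article, not taken from Turing's ACE design.

```python
# A toy illustration of "instruction tables stored in memory": one machine,
# reconfigured for different tasks purely by swapping the table it executes.
# The instruction names and programs below are invented for this sketch.

def run(table, accumulator=0):
    """Execute a stored 'instruction table' on a minimal abstract machine."""
    for op, arg in table:
        if op == "ADD":
            accumulator += arg
        elif op == "MUL":
            accumulator *= arg
        elif op == "PRINT":
            print(arg, accumulator)
        else:
            raise ValueError(f"unknown instruction: {op}")
    return accumulator

# Two different "temporary circuits" for the same hardware:
doubler_table = [("ADD", 21), ("MUL", 2), ("PRINT", "doubled:")]
tripler_table = [("ADD", 14), ("MUL", 3), ("PRINT", "tripled:")]

run(doubler_table)   # doubled: 42
run(tripler_table)   # tripled: 42
```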


However, such a universal machine would not be possible without an efficient, writeable and erasable memory store. The infinite tape suggested in the idealised Turing machine of 1937 would not do because of the time it would take to jump around the tape. In explaining why, Turing ties himself again to the history of information technology by referring back to the affordances of the book over papyrus scrolls, noting nonetheless that even an automated memory structure based on the book would be highly inefficient, though the memory store he would advocate is based on an analogy with it. The book would be to the papyrus scroll as memory circuitry would be to the infinite tape of the idealised machine. In the ACE computer, for instance, memory circuitry consisted of 200 separate mercury acoustic delay lines (analogous to pages in a book) that each circulated a 1024-place bitstream on wave forms in continuous rotation, with each bit subject to modification at the appropriate time.
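To picture how such a store behaves, the toy model below (plain Python, and deliberately using a much shorter loop than the 1024-place lines described above) treats a delay line as a queue of bits that circulates continuously, with each bit writable only at the moment it passes the read/write electronics. It is a sketch of the idea, not of the ACE's actual circuitry.

```python
from collections import deque

# Toy model of one acoustic delay line: a loop of bits circulating in a
# fixed order. A real ACE-style line held about 1024 bits; 16 keeps the demo short.
line = deque([0] * 16)

def tick(write=None):
    """Advance the line by one bit-time; optionally rewrite the bit
    that is currently passing the read/write electronics."""
    bit = line.popleft()          # the bit emerging from the line
    if write is not None:
        bit = write               # modify it "at the appropriate time"
    line.append(bit)              # recirculate it
    return bit

# Write the pattern 1, 0, 1 into three consecutive bit positions...
for b in (1, 0, 1):
    tick(write=b)
# ...then let the line circulate a full revolution and read the bits back.
for _ in range(16 - 3):
    tick()
print([tick() for _ in range(3)])   # -> [1, 0, 1]
```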
Many inventions that industrialised information flow worked by modulating wave forms of various sorts over time, as noted above, and here we see that Turing's hardware was no exception. Why should this matter? The answer, I believe, is that it shows us a different Turing: not just an innovative mathematician, though he certainly was that, but also a mechanist who gets into the nuts and bolts of computing machinery. Importantly, this picture suggests that he does not make a clean break to start a new era, but that he belongs quite naturally to the preceding one. This observation in no way undermines his importance in the history of information; rather, it situates him squarely within it. And to be fair, no one back then, among Edison, Bell, Turing, and others, could imagine what would happen when the computer revolution, inaugurated by Turing, and the telephone revolution, inaugurated by Bell, would come crashing together at the end of the 20th century to produce the smart phone and to connect everyone to computers (both human and mechanical) by way of hand-held devices that are both affordable and more computationally powerful than Turing himself imagined practical. These truly are amazing times. We stand on the shoulders of giants, and this practical Turing is clearly among them.

Anthony F. Beavers is a Professor of Philosophy and Chair of the Department of Philosophy and Religion at the University of Evansville in Southern Indiana. He has published extensively on the intersection of computing with philosophy and currently serves as president of the International Association for Computing and Philosophy (IACAP).


Some of the greatest, most revolutionary


advances in science have been given their
initial expression in attractively modest terms,
with no fanfare.
Charles Darwin managed to compress his entire theory into a
single summary paragraph that a layperson can readily follow,
in all its details:
If during the long course of ages and under varying
conditions of life, organic beings vary at all in the several
parts of their organization, and I think this cannot be
disputed; if there be, owing to the high geometric powers
of increase of each species, at some age, season, or
year, a severe struggle for life, and this certainly cannot
be disputed; then, considering the infinite complexity of
the relations of all organic beings to each other and to
their conditions of existence, causing an infinite diversity
in structure, constitution, and habits, to be advantageous
to them, I think it would be a most extraordinary fact if
no variation ever had occurred useful to each being's
own welfare, in the same way as so many variations
have occurred useful to man. But if variations useful
to any organic being do occur, assuredly individuals
thus characterized will have the best chance of being
preserved in the struggle for life; and from the strong
principle of inheritance they will tend to produce
offspring similarly characterized. This principle of
preservation, I have called, for the sake of brevity,
Natural Selection.
(Origin of Species, end of chapter 4)

Francis Crick and James Watson closed their epoch-making paper on the structure of DNA with the deliciously diffident sentence:
It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the replicating unit of life.
(Crick, F., and J. Watson, 1953, p. 738)
And Alan Turing created a new world of science and
technology, setting the stage for solving one of the most baffling
puzzles remaining to science, the mind-body problem, with
an even shorter declarative sentence in the middle of his 1936
paper on computable numbers:
It is possible to invent a single machine which can be
used to compute any computable sequence.
(Turing, 1936)
Turing didn't just intuit that this remarkable feat was possible; he showed exactly how to make such a machine. With that demonstration the computer age was born. It is important to remember that there were entities called computers before Turing came up with his idea - but they were people, clerical workers with enough mathematical skill, patience, and pride in their work to generate reliable results from hours and hours of computation, day in and day out. Many of them were women. Thousands of them were employed in engineering and commerce, and in the armed forces and elsewhere, calculating tables for use in navigation, gunnery and other such technical endeavors. A good way of understanding Turing's revolutionary idea about computation is to put it in juxtaposition with Darwin's about evolution. The pre-Darwinian world was held together not by science but by tradition: all things in the universe, from the most exalted (man) to the most humble (the ant, the pebble, the raindrop) were the creations of a still more exalted thing, God, an omnipotent and omniscient intelligent creator - who bore a striking resemblance to the second-most exalted thing. Call this the trickle-down theory of creation. Darwin replaced it with the bubble-up theory of creation. One of Darwin's nineteenth century critics put it vividly:
In the theory with which we have to deal, Absolute
Ignorance is the artificer; so that we may enunciate as
the fundamental principle of the whole system, that, IN
ORDER TO MAKE A PERFECT AND BEAUTIFUL MACHINE,
IT IS NOT REQUISITE TO KNOW HOW TO MAKE IT. This
proposition will be found, on careful examination, to
express, in condensed form, the essential purport of the
Theory, and to express in a few words all Mr. Darwin's meaning; who, by a strange inversion of reasoning,
seems to think Absolute Ignorance fully qualified to take
the place of Absolute Wisdom in all the achievements of
creative skill.
(MacKenzie, 1868)
It was, indeed, a strange inversion of reasoning. To this day
many people cannot get their heads around the unsettling idea
that a purposeless, mindless process can crank away through
the eons, generating ever more subtle, efficient and complex
organisms without having the slightest whiff of understanding of
what it is doing.
Turing's idea was a similar - in fact remarkably similar - strange inversion of reasoning. The pre-Turing world was one in which computers were people, who had to understand mathematics in order to do their jobs. Turing realized that this was just not necessary: you could take the tasks they performed and squeeze out the last tiny smidgens of understanding, leaving nothing but brute, mechanical actions. In order to be a perfect and beautiful computing machine it is not requisite to know what arithmetic is.
What Darwin and Turing had both discovered, in their different ways, was the existence of competence without comprehension (Dennett, 2009, from which material in the preceding paragraphs has been drawn, with revisions). This inverted the deeply plausible assumption that comprehension is in fact the source of all advanced competence. Why, after all, do we insist on sending our children to school, and why do we frown on the old-fashioned methods of rote learning? We expect our children's growing competence to flow from their growing comprehension; the motto of modern education might be: comprehend in order to be competent. And for us members of H. sapiens, this is almost always the right way to look at, and strive for, competence. I suspect that this much-loved principle of education is one of the primary motivators of skepticism about both evolution and its cousin in Turing's world, Artificial Intelligence. The very idea that mindless mechanicity can generate human-level - or divine-level! - competence strikes many as philistine, repugnant, an insult to our minds, and the mind of God.
Consider how Turing went about his proof. He took human
computers as his model. There they sat at their desks, doing one
simple and highly reliable step after another, checking their
work, writing down the intermediate results instead of relying
on their memories, consulting their recipes as often as they
needed, turning what at first might appear a daunting task into a
routine they could almost do in their sleep. Turing systematically
broke down the simple steps into even simpler steps, removing
all vestiges of discernment or comprehension. Did a human
computer have difficulty telling the number 99999999999 from
the number 9999999999? Then break down the perceptual problem of recognizing the number into simpler problems, distributing easier, stupider acts of discrimination over multiple
steps. He thus prepared an inventory of basic building blocks
from which to construct the universal algorithm that could
execute any other algorithm. He showed how that algorithm
would enable a (human) computer to compute any function,
and noted that:
The behavior of the computer at any moment is
determined by the symbols which he is observing and his
state of mind at that moment. We may suppose that
there is a bound B to the number of symbols or squares
which the computer can observe at one moment. If
he wishes to observe more, he must use successive
observations. . . . . The operation actually performed is
determined . . . . by the state of mind of the computer
and the observed symbols. In particular, they determine
the state of mind of the computer after the operation is
carried out.
He then noted, calmly:
We may now construct a machine to do the work of this
computer.
Right there we see the reduction of all possible computation to
a mindless process. We can start with the simple building blocks
Turing had isolated, and construct layer upon layer of more
sophisticated computation, restoring, gradually, the intelligence
Turing had so deftly laundered out of the practices of human
computers.
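What those brute, mechanical actions look like can be made concrete in a few lines of code. The sketch below is not Turing's notation but a minimal table-driven machine, assumed here to increment a binary number: every step is a blind lookup of (state, symbol) in a rule table, with no understanding of arithmetic anywhere in sight.

```python
# A minimal Turing machine that adds 1 to a binary number on its tape.
# Each step is a mindless table lookup: (state, symbol) -> (write, move, state).
RULES = {
    ("right", "0"): ("0", +1, "right"),   # scan right to the end of the number
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank: start adding from the right
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry -> write 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),    # 0 + carry -> write 1, finished
    ("carry", "_"): ("1", -1, "done"),    # ran off the left end: new leading 1
}

def increment(bits):
    tape = dict(enumerate("_" + bits + "_"))      # blank-padded tape
    head, state = 1, "right"
    while state != "done":
        write, move, state = RULES[(state, tape.get(head, "_"))]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip("_")

print(increment("1011"))   # -> 1100
print(increment("111"))    # -> 1000
```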
But what about the genius of Turing, and of later, lesser
programmers, whose own intelligent comprehension was
manifestly the source of the designs that can knit Turing's mindless building blocks into useful competences? Doesn't this
dependence just re-introduce the trickle-down perspective on
intelligence, with Turing in the God role? No less a thinker than
Roger Penrose has expressed skepticism about the possibility
that Artificial Intelligence could be the fruit of nothing but
mindless algorithmic processes.
I am a strong believer in the power of natural selection.
But I do not see how natural selection, in itself, can
evolve algorithms which could have the kind of
conscious judgements of the validity of other algorithms
that we seem to have.
(1989, p. 414)
He goes on to admit
To my way of thinking there is still something mysterious
about evolution, with its apparent groping towards
some future purpose. Things at least seem to organize
themselves somewhat better than they ought to, just
on the basis of blind-chance evolution and natural
selection.
(1989, p. 416)

Indeed, a single cascade of natural selection events, occurring
over even billions of years, would seem unlikely to be able to
create a string of zeroes and ones that, once read by a digital
computer, would be an algorithm for conscious judgments.
But as Turing fully realized, there was nothing to prevent the
process of evolution from copying itself on many scales, of
mounting discernment and judgment. The recursive step that got the ball rolling - designing a computer that could mimic any other computer - could itself be reiterated, permitting specific computers to enhance their own powers by redesigning themselves, leaving their original designer far behind. Already in Computing Machinery and Intelligence, his classic paper in Mind, 1950, he recognized that there was no contradiction in the concept of a (non-human) computer that could learn.
The idea of a learning machine may appear paradoxical
to some readers. How can the rules of operation of the
machine change? They should describe completely
how the machine will react whatever its history might be,
whatever changes it might undergo. The rules are thus
quite time-invariant. This is quite true. The explanation of
the paradox is that the rules which get changed in the
learning process are of a rather less pretentious kind,
claiming only an ephemeral validity. The reader may
draw a parallel with the Constitution of the United States.
[See Suber, unpublished, for a valuable discussion of this
passage and the so-called paradox of self-amendment.-DCD]
He saw clearly that all the versatility and self-modifiability of human thought - learning and re-evaluation, language and problem-solving, for instance - could in principle be constructed out of these building blocks. Call this the bubble-up theory of mind, and contrast it with the various trickle-down theories of mind, by thinkers from René Descartes to John Searle (and including, notoriously, Kurt Gödel, whose proof was the inspiration for Turing's work) that start with human consciousness at its most reflective, and then are unable to unite such magical powers with the mere mechanisms of human bodies and brains.
Turing, like Darwin, broke down the mystery of intelligence (or Intelligent Design) into what we might call atomic steps of dumb happenstance, which, when accumulated by the millions, added up to a sort of pseudo-intelligence. The Central Processing Unit of a computer doesn't really know what arithmetic is, or understand what addition is, but it understands the command to add two numbers and put their sum in a register - in the minimal sense that it reliably adds when thus called upon to add and puts the sum in the right place. Let's say it sorta understands addition. A few levels higher, the operating system doesn't really understand that it is checking for errors of transmission and fixing them but it sorta understands this, and reliably does this work when called upon. A few further levels higher, when the building blocks are stacked up by the billions and trillions, the chess-playing program doesn't really understand that its queen is in jeopardy, but it sorta understands this, and IBM's Watson on Jeopardy sorta understands the questions it answers.
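The stacking of ever more competent levels can be made vivid with a toy circuit. In the Python sketch below, a single NAND "brick" - which knows nothing about arithmetic - is composed into gates and then into a one-bit adder; the construction is a standard textbook arrangement chosen for illustration, not anything drawn from Turing's or Dennett's own texts.

```python
# None of these "bricks" understands arithmetic; each just maps bits to a bit.
def nand(a, b):           # the only primitive
    return 0 if (a and b) else 1

def xor(a, b):            # built from four NANDs
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def full_adder(a, b, carry_in):
    """Stack the bricks: a circuit that 'sorta' adds one-bit numbers."""
    partial = xor(a, b)
    total = xor(partial, carry_in)
    carry_out = or_(and_(a, b), and_(partial, carry_in))
    return total, carry_out

print(full_adder(1, 1, 0))   # (0, 1): 1 + 1 = binary 10
print(full_adder(1, 1, 1))   # (1, 1): 1 + 1 + 1 = binary 11
```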
Why indulge in this sorta talk? Because when we analyze - or synthesize - this stack of ever more competent levels, we need to keep track of two facts about each level: what it is and what it does. What it is can be described in terms of the structural organization of the parts from which it is made - so long as we can assume that the parts function as they are supposed to function. What it does is some (cognitive) function that it (sorta) performs - well enough so that at the next level up, we can make the assumption that we have in our inventory a smarter building block that performs just that function - sorta, good enough to use. This is the key to breaking the back of the mind-bogglingly complex question of how a mind could ever be composed of material mechanisms. What we might call the sorta operator is, in cognitive science, the parallel of
Darwin's gradualism in evolutionary processes. Before there were bacteria there were sorta bacteria, and before there were
mammals there were sorta mammals and before there were
dogs there were sorta dogs, and so forth. We need Darwins
gradualism to explain the huge difference between an ape
and an apple, and we need Turings gradualism to explain
the huge difference between a humanoid robot and hand
calculator. The ape and the apple are made of the same basic
ingredients, differently structured and exploited in a many-level cascade of different functional competences. There is no
principled dividing line between a sorta ape and an ape. The
humanoid robot and the hand calculator are both made of
the same basic, unthinking, unfeeling Turing-bricks, but as we
compose them into larger, more competent structures, which
then become the elements of still more competent structures at
higher levels, we eventually arrive at parts so (sorta) intelligent
that they can be assembled into competences that deserve
to be called comprehending. We use the intentional stance
(Dennett, 1971, 1987) to keep track of the beliefs and desires
(or beliefs and desires or sorta beliefs and sorta desires)
of the (sorta-)rational agents at every level from the simplest
bacterium through all the discriminating, signaling, comparing,
remembering circuits that compose the brains of animals from
starfish to astronomers. There is no principled line above which
true comprehension is to be found - even in our own case. The small child sorta understands her own sentence Daddy is a doctor, and I sorta understand E = mc². Some philosophers resist this anti-essentialism: either you believe that snow is white or you don't; either you are conscious or you aren't; nothing counts as an approximation of any mental phenomenon - it's
all or nothing. And to such thinkers, the powers of minds are
insoluble mysteries because they are perfect, and perfectly
unlike anything to be found in mere material mechanisms.
We still haven't arrived at real understanding in robots, but we are getting closer. That, at least, is the conviction of those of us inspired by Turing's insight. The trickle-down theorists are sure in their bones that no amount of further building will ever get us to the real thing. They think that a Cartesian res cogitans, a thinking thing, cannot be constructed out of Turing's building blocks. And
creationists are similarly sure in their bones that no amount of
Darwinian shuffling and copying and selecting could ever arrive
at (real) living things. They are wrong, but one can appreciate
the discomfort that motivates their conviction.
Turing's strange inversion of reasoning, like Darwin's, goes
against the grain of millennia of earlier thought. If the history
of resistance to Darwinian thinking is a good measure, we
can expect that long into the future, long after every triumph
of human thought has been matched or surpassed by mere
machines, there will still be thinkers who insist that the
human mind works in mysterious ways that no science can
comprehend.

References
Darwin, C., 1859, On the Origin of Species.
Dennett, D. C., 1971, Intentional Systems, Journal of Philosophy.
Dennett, D. C., 1987, The Intentional Stance, Cambridge, MA: MIT Press.
Dennett, D. C., 2009, Darwin's Strange Inversion of Reasoning, PNAS, June 16, vol. 106 suppl. 1, pp 10061-10065.
MacKenzie, R. B., 1868, The Darwinian Theory of the Transmutation of Species Examined, London: Nisbet & Co.
Penrose, R., 1989, The Emperor's New Mind, Oxford: Oxford University Press.
Suber, P., unpublished, Saving Machines From Themselves: The Ethics of Deep Self-Modification, preprint, November 30, 2001. http://www.earlham.edu/~peters/writing/selfmod.htm
Turing, A., 1936, On computable numbers, with an application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 42:230-265, and erratum (1937) 43:544-546.
Turing, A., 1950, Computing Machinery and Intelligence, Mind, LIX, no. 2236, pp. 433-460.
Watson, J. D. and Crick, F. H. C., 1953, A Structure for Deoxyribose Nucleic Acid, Nature 171, 737-738.

This article was taken from the following publication with the kind permission of both the author and editor: Alan Turing - His Work and Impact, edited by S. Barry Cooper and Jan van Leeuwen, Elsevier, Amsterdam, London, New York, Tokyo, 2013.

Daniel Dennett is University Professor and Co-Director of the Center for Cognitive Studies at Tufts University. His most recent book is Intuition Pumps and Other Tools for Thinking (Norton and Penguin, 2013). He is a guest professor at the New College of the Humanities in London and is on the External Faculty of the Santa Fe Institute.

almanac of events

September

16th - 19th - Hugh Aston Building, DMU, Leicester

> DSEI

10th - 13th - ExCel, London

> VMWorld

14th - 17th - Barcelona

October

> iPad in Defence

23rd & 24th Stockley Park

> Cyber Security Summit

22nd & 23rd Minneapolis, MN

> 11th Annual CTO Forum


7th - 11th - Nigeria

> Global Forum 2013


28th & 29th Trieste, Italy

> Cyber Leadership Academy


(Date Pending) North Yorkshire

November

> 8th Annual API Cyber Security Conference & Expo


12th & 13th Houston, Texas

> OWASP AppSec USA 2013

18th - 21st - Manhattan, New York

> Info-Crime Summit


26th & 27th - London

> Cyber Security Seminar

27th British Embassy, Switzerland

December


> Cyber Week

> Dell World

9th - 13th - Austin, Texas

2014

March

> IA Practitioners Event 2014

5th & 6th - York Racecourse

May

> ITEC Cyber Security Education

20th - 22nd - Cologne Messe, Germany

June

> IA14

16th & 17th June - Park Plaza Westminster Bridge Hotel, London



Alan Turing made numerous significant contributions to the theory, art, and practice of computation. The most famous ones were the creation of the foundations of the theory of computability, and the cryptanalysis of German ciphers.

What is much less known is that throughout his professional life, Turing kept coming back to some fascinating and still largely unsolved computational problems in pure maths, where he also made notable advances. These researches involved the design of a special purpose analog mechanical computer in the late 1930s, as well as some of the earliest applications of digital computers after World War II, and some theoretical papers.

The problems that drew Turing's attention involved prime numbers, those positive integers greater than 1 that are not divisible by any positive integers other than themselves and 1, namely 2, 3, 5, 7, 11, ... Primes were identified as fundamental blocks of arithmetic by the ancient Greeks, and have been objects of study ever since. Much of the flourishing subject of number theory is devoted to primes. Most of the interest of both the professional mathematicians and the numerous amateurs who have been drawn to this area is simply intellectual, driven by pure curiosity. However, there are also practical applications of primes, ranging from concert hall acoustics to cryptology, where they are at the foundations of the Diffie-Hellman, RSA, and other ciphers.

When Turing went to the University of Cambridge, that institution was home to some of the foremost number theorists of the era, such as Hardy, Ingham, and Littlewood. Under their influence, and that of a fellow student, Stanley Skewes, Turing was drawn to a very fundamental problem on the distribution of primes. It has been known for two thousand years that there are infinitely many primes. Inspection of initial segments of the integers shows that primes appear to become sparser and sparser the further one goes; there are 25 of them below 100, 168 below 1000, 1229 below 10000, and 9592 below 100000. But just how sparse do the primes get? It was conjectured around 1800 that π(x), the number of primes up to x, should be about Li(x), where Li(x), the logarithmic integral, is a smooth function that is easy to compute and grows like x/(log x). This was proved to be true a century later. But just how good is this approximation? If the Riemann Hypothesis (RH) is true, the difference Li(x) − π(x) never gets much larger than the square root of x.

The RH, which is a century and a half old, is regarded as the most famous and most significant unsolved problem in mathematics. It is a statement about the locations of an infinite number of points in the complex plane, the zeros of the Riemann zeta function. The RH was one of the 23 problems posed by the famous mathematician Hilbert in 1900 as challenges, and it is one of the 7 Clay Mathematics Institute Millennium Prize Problems posed in 2000, with a $1 million award for a solution. The RH has attracted enormous attention for a combination of reasons. It is old, and had been attacked unsuccessfully by many famous researchers. It has important implications for several areas. Further, it connects seemingly disparate areas, namely the discrete primes with the zeros of a continuous function. Even more tantalizingly, the RH implies that those discrete primes, the very fundamental components of arithmetic, exhibit quasi-random behaviour.
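The prime counts and the Li(x) approximation mentioned above are easy to experiment with. The short Python sketch below - standard library only, with the integration step count and the sample values of x chosen arbitrarily for the illustration - counts primes with a sieve, estimates Li(x) as the integral of 1/log t from 2 to x, and compares the error Li(x) − π(x) with the square root of x.

```python
import math

def prime_sieve(limit):
    """Sieve of Eratosthenes: sieve[n] == 1 exactly when n is prime."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p:limit + 1:p] = bytearray(len(range(p * p, limit + 1, p)))
    return sieve

def Li(x, steps=100000):
    """Trapezoidal estimate of the logarithmic integral Li(x) = int_2^x dt / log t."""
    h = (x - 2) / steps
    total = 0.5 * (1 / math.log(2) + 1 / math.log(x))
    total += sum(1 / math.log(2 + i * h) for i in range(1, steps))
    return total * h

sieve = prime_sieve(10 ** 6)
for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    pi_x = sum(sieve[:x + 1])          # pi(x): number of primes up to x
    li_x = Li(x)
    print(f"x={x:>8}  pi(x)={pi_x:>6}  Li(x)={li_x:>9.1f}  "
          f"Li(x)-pi(x)={li_x - pi_x:>6.1f}  sqrt(x)={math.sqrt(x):>7.1f}")
```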

Turing's work in number theory revolved around the RH and the associated zeros of the zeta function. Much of what he did, especially the parts connected with Skewes, dealt with a conjecture related to the RH, namely that π(x) is always strictly less than Li(x). This conjecture, if true, would imply the RH. All the direct numerical evidence we have supports it. It is known to be true for x < 10^14, and there are heuristic arguments to suggest it is true at least up to 10^30. However, a century ago it was shown to be false. (The proof did not imply anything about the RH itself; the relation was strictly one way, with this conjecture implying the RH, but not vice versa.) The initial proof did not indicate where the first counterexample might occur. We still do
not know precisely where it occurs, but Skewes, Turing's student colleague, friend, and collaborator, showed that it is no larger than the enormous number 10^10^10^34.
Later research has shown there is a counterexample below only 10^317, a number still far beyond the reach of modern computers and known algorithms.
Skewes and Turing's work on the π(x) vs. Li(x) question involved investigations of the zeros of the zeta function. This was likely one of the motivations for Turing's design of a special purpose machine for computing the zeta function, for which he obtained what was in 1939 a generous £40 grant from the Royal Society. The other motivation was simply to check whether the RH was true. A few years earlier, it had been verified that the first 1041 zeros of the zeta function do indeed satisfy the RH, and Turing wanted to go about four times further. Unfortunately World War II intervened, so this machine was never built.
After the War, Turing had access to the Manchester Mark I,
one of the earliest general purpose digital computers. He used
it to investigate the RH. He did not get much further than the
pre-War verification, though, as the digital technology was still
primitive. As he wrote in his paper describing his results,
If it had not been for the fact that the computer
remained in serviceable condition for an unusually long
period from 3 p.m. one afternoon to 8 a.m. the following
morning it is probable that the calculations would never
have been done at all.
This quote shows both how primitive the early computers were,
and how enthusiastic Turing was about the RH. Not many
people would have been willing to stay up all night waiting for
the results (which came out on paper tape, with output printed
in base 32, which Turing had learned to read at sight)!
With time, technology improved and better algorithms were
found. As a result, Turings computations of the zeta function
have been superseded, and today we know that the RH is satisfied by the first 10^13 zeros. However, even the latest computations rely on a clever technique of Turing's that
simplifies the task of demonstrating that all zeros of the zeta
function have been found.
An interesting observation is that as time went on, Turing grew
increasingly skeptical about the RH, to the point that he talked
openly to his colleagues about its likely falsity. At that time, such
skepticism was not uncommon among number theorists, but
over the past half a century it has decreased. It would have
been interesting to find out how Turing would have reacted
to the developments that led to the change of opinions in
favor of RH. As it is, we don't even know why he doubted its
truth. All we can say is that he was fascinated by the question,
and kept returning to it. In the process he developed some
original computational techniques and employed early digital
computers in a novel way.

Andrew Odlyzko has had a long career in research and research management at Bell Labs, AT&T Labs, and most recently at the University
of Minnesota, where he built an interdisciplinary research center, and is
now a Professor in the School of Mathematics. He is a Fellow of the
International Association for Cryptologic Research and of the American
Mathematical Society.


Even before presenting his invited lecture on June 11 at the 2013 Conference on Lasers and Electro-Optics (San Jose, California, USA), physicist Renato Renner's main findings made the news. On March 21, an article by Adam Mann in Wired Science announced that the 'Laws of Physics Say Quantum Cryptography is Unhackable. It's Not!'.

Quantum Randomness & Quantum Cryptology
By Cristian Calude, The University of Auckland, New Zealand

Brielle Day gave more details in the article 'Just How Secure is Quantum Cryptography?', posted by the Optical Society on May 28. In contrast, on July 28, a Fortune CNN report titled 'Zeroing in on unbreakable computer security' talks about [quantum] data encryption that is unbreakable now and will remain unbreakable in the future.
Vulnerabilities of classical cryptography are well documented, but quantum cryptography was (and, for some, is) unbreakable: Heisenberg's Uncertainty Principle guarantees that an adversary cannot look into the series of photons which transmit the key without either changing or destroying them.
This is the theory. In practice, where one needs technology to make
the theory work, the situation is different. And, indeed, weaknesses
of quantum cryptography have been discovered; they are not new
and they are not few in number. Issues have been found as early
as 2008 by Jan-ke Larssons team. In August 2010, Vadim Makarov
and his colleagues published in Nature Photonics the details of a
traceless attack against a class of quantum cryptographic systems
which includes the commercial products sold by ID Quantique
(Geneva) and MagiQ Technologies (Boston). In an effort to reduce
various weaknesses, the work reported by Renner evaluates the
failure rate of different quantum cryptography systems.
It is a truism that just because a system is secure in theory it doesn't mean it's secure in practice. But in the same way, just because a system is not secure in practice, it doesn't necessarily mean that it cannot be secure in theory. So, let's forget the practical problems for the moment and ask the unspeakable question: is theoretical quantum cryptography really unbreakable?
Cryptographic algorithms require a method of generating a secret
key from random bits. The encryption algorithm uses the key to
encrypt and decrypt messages, which are sent over insecure communication channels. The strength of the system ultimately
depends on the strength of the key used, i.e. on the difficulty for
an eavesdropper to guess or calculate it. The number of random
bits necessary to generate keys varies; the famous one time pad
encryption method needs as many random bits as the bits of the
text to be encrypted.
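The one-time pad's appetite for random bits is easy to see in code. The Python sketch below is purely illustrative: it XORs each message byte with one key byte, so the key must be as long as the message, truly unpredictable, and never reused.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each message byte with a key byte.
    The key must be truly random, as long as the message, and never reused."""
    assert len(key) >= len(message)
    return bytes(m ^ k for m, k in zip(message, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # one random byte per message byte
ciphertext = otp_encrypt(message, key)
recovered = otp_encrypt(ciphertext, key)  # XOR with the same key decrypts
print(ciphertext.hex())
print(recovered)                          # b'ATTACK AT DAWN'
```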
Clever algorithms produce so-called pseudo-random bits, which mimic randomness to some extent. Alternatively, one can use macro-physical methods like Johnson-Nyquist (thermal) noise, or atmospheric noise. The peculiar properties of quantum mechanics allow the generation of quantum random bits; typically, the process relies on detectors to measure a relevant quantum property of single photons, like the beam splitter used for the European Union's Integrated Project Qubit Applications.
It is a cold fact that many security applications have been
compromised because of poor quality randomness. Commercial
randomness providers (for example, Quantoss, Random.org,
ID Quantique) claim to produce true randomness. Is this claim
credible?
Let's start with the weaker, but more fundamental, question: can true randomness be theoretically produced? Work reported in Nature in 2010 wrongly claims that quantum randomness is true randomness. There is no true randomness, only degrees of randomness, irrespective of the method used to produce it. The British mathematician and philosopher Frank P. Ramsey was the first to demonstrate it in his study of conditions under which order
must appear; other proofs have been given in the framework of
algorithmic information theory.
Mathematically, there exists an infinite scale of stronger and stronger types of randomness; pseudo-randomness sits at the bottom. Although in practice only finitely many random bits are necessary for encrypting a given message, to be able to compare the quality of randomness we need to consider infinite sequences of bits. One important criterion is whether such a sequence is Turing computable (i.e. it can be produced by an algorithm) or not. Pseudo-random sequences are obviously Turing computable; they are easily predictable once we know the algorithm generating the sequence, so, not surprisingly, their quality of randomness is low.
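That predictability is simple to demonstrate. The snippet below uses a classic linear congruential generator (the constants are the well-known "minimal standard" ones, picked here only for illustration): an observer who knows the algorithm and sees a single output can reproduce every later "random" value.

```python
# A linear congruential generator: x_{n+1} = A * x_n mod M.
# Knowing the algorithm and one output lets an observer predict the rest.
A, M = 16807, 2 ** 31 - 1   # "minimal standard" LCG constants

def lcg(seed):
    x = seed
    while True:
        x = (A * x) % M
        yield x

gen = lcg(seed=42)
observed = next(gen)             # an eavesdropper sees one output...
attacker = lcg(seed=observed)    # ...and can now clone the generator
print([next(gen) for _ in range(5)])
print([next(attacker) for _ in range(5)])   # identical predictions
```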

Is quantum randomness Turing computable too? A value-indefinite observable is an observable which cannot have a definite value - either zero or one - before measurement. Quantum randomness is not Turing computable when the quantum random generator is based on an experiment in which a value-indefinite observable is measured. This condition is satisfied by a quantum experiment subject to the Kochen-Specker theorem, a classical result proved 46 years ago for a different aim: to show the impossibility of a deterministic hidden variable theory for quantum mechanics. A quantum random number generator called ACCS (http://pra.aps.org/abstract/PRA/v86/i6/e062109), designed in terms of generalized beam splitters, implements these theoretical ideas.
Turing theory goes beyond the distinction between computable and incomputable; it provides tools to evaluate many different degrees of incomputability. Here again, there is a never-ending scale of more and more incomputable sets. So, just how incomputable are the sequences produced by ACCS? First and foremost, each sequence generated by ACCS is provably incomputable, not just incomputable with a high probability. Second, its incomputability is high: no algorithm can compute more than finitely many bits of any infinite sequence produced by ACCS. Do sequences generated by classical beam splitters - frequently used in commercial quantum random generators - have the same property? This question is open. More theoretical and practical understanding of quantum randomness is necessary for proving strong security properties of quantum cryptographic systems.

C. Calude, a chair professor in computer science at the University of Auckland, New Zealand, is a worldwide expert in quantum randomness. He has published more than 250 papers and 10 books in discrete mathematics, computational complexity, algorithmic information theory and quantum physics.


Alan Turing: His Work and Impact
Review by Helen Morgan, SBL

The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word processing program, is working on an incarnation of a Turing Machine.
Time Magazine, when naming Alan Turing one of the most important people of the 20th Century in 1999

Alan Turing: His Work and Impact is the definitive collection of Turing's work and includes many of the most significant contributions from the four-volume set of the Collected Works of A.M. Turing. The book is edited by Dr Barry Cooper and Dr Jan van Leeuwen, professors and computer scientists who have had a large part to play in the Alan Turing Centenary Year and are pioneering research on the life and works of Alan Turing. The 914-page collection provides wide coverage of the impact Turing's scientific endeavours have had on the modern world, and includes a number of previously unpublished texts.

Interspersed with Turing's work are commentaries from leading experts in a wide spectrum of fields and backgrounds, including a piece by the late Tony Sale, leader of the Colossus restoration project at The National Museum of Computing, and a personal narrative by Christos Papadimitriou, computer scientist at the University of California, Berkeley, called Alan & I. These commentaries provide an insight into the significance and contemporary impact of Turing's work as well as the authors' own personal thoughts on his influence on their professional careers.

Dr Cooper said of the book: We set out to share with people the excitement of engaging with one of the most unique and original thinkers of the modern information age. To do this, we brought in literally scores of the most remarkable thinkers from around the world to comment, interpret, explain, and take Turing's creative impact into the 21st century - and all in around three to eight pages of contribution!

Editing this amazing book was quite some adventure, explains Dr Cooper. I hope the outcome is as much a voyage of discovery for the reader as it was for the editors.

This accessible book is an essential read for those interested in Turing's work and provides a more contemporary perspective than anything else that is available. Complete copies of Turing's work can be both scarce and expensive; however, Alan Turing: His Work and Impact is affordable for researchers, whilst remaining manageable to those with limited scientific knowledge.

Alan Turing: His Work and Impact is an essential read if you have any interest in Turing, his remarkable work and the significance his findings and discoveries have had on the modern world. Additional information and resources to accompany the book are available online at www.elsevier.com, where the book is also available to buy for £45.99.

Walking along the South Kensington underpass, connecting the Underground station with London's museum district, this summer you notice the faces of two very different British icons covering the Victorian brick walls. There is no doubt that one dominates the other - his flame red hair contrasting sharply against his pale, impeccably white skin while his piercing, ice blue stare
follows you up and down the crowded tunnels. This is
David Bowie, resplendent in his iconic Aladdin Sane
attire, driving visitors towards David Bowie Is at the
Victoria & Albert Museum.
The second is far more reserved, almost apologetic in
comparison. A single monochromatic image of a man
many of those passing do not even recognise. His hair is
immaculately tidy; his eyes gaze off into the distance as
if in deep thought. This is Alan Turing, the father of modern
computing, promoting Codebreaker at the British
Science Museum.
Whilst both exhibitions cover the life and work of their
protagonist, in many ways they reflect the personalities
of their subjects also. Bowie, a unique and ubiquitous
influence on popular culture, is bold, radical and
enigmatic. Likewise, his exhibition has filled thousands
of column inches and crams its vast space with towering
video screens, outlandish costumes and intricate, in-depth
details of a career spanning almost five decades.
By contrast Codebreaker is more condensed, more
understated and yet equally as absorbing. Centred
around the Pilot ACE computer, built to Turing's original design, it tells the story of
not only the man himself, but also that of computing as
a whole in six distinct sections, allowing visitors to move
around at their own leisure. Though this means that objects
and stories are not always presented in chronological
order, the exhibition is still able to expertly weave key
moments of Turings life with the impact they had in his
thinking and scientific career, providing a fully rounded
view of his personality, views and the events that shaped
them.
In addition to the ACE computer, the exhibition holds a
number of artefacts central to the story of computing.
Alongside early calculating machines from the late 1930s
and the wreckage of a 1950s Comet jet aircraft lie a
number of original Enigma machines, including U-Boat
and early military decoders as well as one on loan from
the private collection of (as the caption below states) the
musician and film producer, Sir Michael Jagger, proving

44 \\ cybertalk

there are some people whose achievements eclipse


even Jumping Jack Flash himself.
Unlike his counterpart across the road, very little remains of Turing's personal belongings. However, on display here is a collection of heart-breaking letters from Turing to the mother of his close friend, Christopher Morcom, who died tragically aged 18 from tuberculosis. Here Turing demonstrates not only his intense feelings and personal beliefs but also the first concepts about the nature of thinking which would become fundamental to the future development of artificial intelligence.
The most striking exhibit of all, however, comes when you venture into section 5, A Matter of Life and Death. Here Turing's work on morphogenesis - growth and patterns in plants and animals - on one side is set in stark contrast on the other with a small bottle of oestrogen pills and a solitary piece of paper, detailing the findings of Turing's post-mortem examination and its conclusion of suicide by cyanide poisoning. Together they provide a saddening vision of the abhorrent treatment Turing was subjected to in his final years. It is a powerful display which once again combines in a unique way his personal life with his scientific insights, and lets us fully appreciate the extent of how important his thinking has been.
Codebreaker has been a critical success for the British
Science Museum and its curator David Rooney and rightly
so. The exhibit skilfully pieces together the history of one
of Britains finest minds and the many areas of research
he influenced in a relatively small space, providing just
enough interest and detail to enthral both keen enthusiasts
and those with only a passing interest alike.
Codebreaker: Alan Turing's Life and Legacy is a free exhibition at the British Science Museum to celebrate the centenary of Alan Turing's birth. The exhibit's run has been extended until 21st October, 2013.
Andrew Cook has been a member of the Chartered Institute of Marketing
since 2009 and is Creative & Digital Editor of CyberTalk and Marketing
Executive for SBL. He is a graduate of the University of Newcastle and was
awarded the Douglass Gilchrist Exhibition for Outstanding Achievement in
2007. Andrew's interests include graphic design, 80s sci-fi movies and the music of David Bowie.
When he's not doing any of these things, Andrew can usually be found
making forts out of cushions and unsuccessfully attempting to dodge phone
calls from his mother.


By Professor Barry Cooper
University of Leeds, Chair of the Alan Turing Centenary Committee

Who would have thought the man who mathematically fathered the computer, and an awareness of its limitations; who was the Enigma-code solving Prof at Bletchley Park amongst the hidden thousands who made the War winnable in the dark days of the early 1940s; who founded computer science, and set us on the road to building intelligent machines - and understanding the obstacles; and who told us how patterns on cows, tigers and tropical fish were formed and changed, with major impact on developmental biology - who would have thought one of the UK's greatest scientists would have died alone of cyanide poisoning, after 59 years still branded a criminal by the country he honoured around the world.

Writing about the Alan Turing Year is hard! When asked to do it, I thought yes, should be easy, so much to talk about. That turned out to be the problem. When we started the ATY website www.turingcentenary.eu towards the end of 2008, we had just Alan, his papers, Andrew Hodges' biography of the great man, and not much else. Now the casual visitor to the site will be swamped with information. The world was awash with Turing-mania in 2012, the science, the poignancy of history, the amazing people - it was exciting, enriching, moving - just too much to even start to do justice to.

Even for the enthusiasts, the Alan Turing Year turned into something quite amazing, beyond the wildest expectations. I lost count of the countries and far corners of the world.

For the many thousands who are fascinated by the man, and want to get some insight into the way his work has changed our world, here are some snapshots from a wonderful centenary year, one which still refuses to end. A friend quipped: I think you should propose alanturingyear as a new SI unit, once it has been established exactly how long an alanturingyear actually is. I had to tell her, it's now Alan Turing Years all the way down!

I guess most of those who find something inspirational, engrossing, poignant or motivational in the Turing story are not experts. Turing's work - as well as being key to the modern informational world - is technically, and conceptually, quite tough to get one's head around. Many people, not experts, come to Turing as an iconic gay man, bullied to death by the state. All Alan Turing wanted was to immerse himself in the science, and experience human comfort - and love - at the end of a working week. In his final years in Manchester, his decoding work was shrouded in secrecy, his thinking on building computers sidelined, and his mental sharpness - his most precious possession - undermined by the enforced hormone treatment. The love was persecuted, and became the source of public humiliation. The gay community was wonderful in 2012, with memorable commemorative Pride events from Manchester to India. And, of course, there were LGBT History Month events for Turing in November 2012 at Bletchley Park, and UK wide this February. In Chicago they set up a Legacy Walk honouring around 17 LGBT heroes, including Alan Turing.

The ATY was a phase transition. So many more people have come to understand and be interested in Turing's life and thinking. An enduring entry point to the work for many people will be the monstrous new book Alan Turing - His Work and Impact, with all Turing's more significant writings, and very varied thoughts on them from over 70 leading experts from around the world. Of course, this is just one of many remarkable additions to the Turing literature in 2012 and beyond. The reissue by Cambridge of the little biography of Alan by his mother Sara Turing was very special, rare as original copies had become. But Sara had never accepted the suicide verdict of the inquest on her son, so the bitten apple on the cover was a slight surprise.

So much of the Turing Year was made for and by scientific followers of Turing. The largest meeting was the special ACM meeting in San Francisco with around 700 participants and literally dozens of Turing Award winners. But there were other remarkable events in other places Turing had lived and worked at - Cambridge, Manchester, Bletchley Park, and a superb 2-day meeting in Princeton, its importance to the early history of the computer brought to life in George Dyson's book Turing's Cathedral. The Americans were particularly generous in their honouring of the Turing legacy. It was left to Google to buy Max Newman's historic collection of Turing papers for Bletchley Park's safe keeping, and the John Templeton Foundation from Philadelphia gave over a million pounds for a very grand Manchester Turing 100 conference, and funding for a Mind, Machine and Mathematics research project supporting eight Turing Scholars and Fellows for 3 years. For Turing's actual birthday, we woke up to find one of the best ever Google doodles in the form of a working Turing machine.

In China, they had the Turing Lectures in Beijing, and invited two high schools to a special lecture by Turing Award winner John Hopcroft. Hong Kong regularly surprised us with all sorts of ATY creativity: exhibitions, public lectures, even a massive birthday projection of Alan onto a tall Hong Kong building, with celebratory jumping!

Brazil hosted one of the more unexpected and grandest of international commemorations, with support from the British consulate, as part of a Britain in Brazil programme. The high point was the beautiful Turing exhibition in Porto Alegre. The assistant consul there sat through my whole Turing lecture - though she did confess to only understanding 10%, but she was smiling!

Maybe the biggest outreach activity, and creatively the most impressive, came from the museums, films and various theatre productions. The Science Museum hosted an exhibition Codebreaker - Alan Turing's life and legacy, which turned out to be so popular the museum extended it until October 21. Overseas, apart from Porto Alegre and Hong Kong, the Heinz Nixdorf MuseumsForum in Paderborn was really fabulous in lots of ways - involving real hands-on enthusiasm from the staff. Israel too had various exhibitions, competitions and conferences.

Film-wise the highlight was the Channel 4 showing of the moving Britain's Greatest Codebreaker in autumn 2011, with the international version Codebreaker circulating widely in 2012 and beyond. Later this year will see a full movie being made of the much discussed script The Imitation Game. Originally Warner Brothers wanted to make the film with Leonardo di Caprio as Turing. When this fell through, Black Bear Pictures took up the script, and now have intriguing Turing actor Benedict Cumberbatch, and Keira Knightley playing Turing's one-time fiancee Joan Clarke, with filming due to start at various places, including Sherborne School in September.

Today, chess grand masters are routinely defeated by computers, embodiments of Turing's 1936 universal machine. The ongoing exploration of the difference between human and digital intelligence continues. Appropriately, in March this year the Universal Machine, supported by Stephen Fry and other famous figures, was voted the most important past British innovation in a high-profile online vote of over 50,000 people, organised by the Science Museum, the Royal Society and other UK scientific organisations.

So much happened at Manchester, where this shy visionary genius was honoured and his work on computability celebrated. Alan Turing did his seminal work on morphogenesis at Manchester, which still influences today's research, and is part of a wider focus on the role of emergence in art, economics, neuroscience, and beyond. MOSI sponsored an exciting Grow a Turing Sunflower experiment reaching into hundreds of schools, and aimed at validating Turing's work on sunflowers and Fibonacci sequences.

The chess was another highlight, with Garry Kasparov, arguably the world's greatest ever chess player, arriving in Manchester for Alan's June 23, 2012, 100th birthday. That day the Olympic flame came past Manchester Town Hall, where Kasparov treated the packed audience to a short chess match against Turing's (first ever) computer chess program. He did win of course in double quick time. Though in the backs of our minds was the shock of Kasparov losing to IBM's Deep Blue computer back on May 11th, 1997. It was a few years after Alan designed his pencil and paper program that his friend Dietrich Prinz actually ran the first computer chess program on a computer, where he worked for Ferranti in Manchester.

The image is from November 2012 and Maria Elisabetta Marelli's remarkable Milan production of TURING - a staged case history.

For me, it was the 24-year-old Alan's discovery of the incomputable that changed my life. It happened that one could ask questions about his universal Turing machine that no computer could answer, ever, however large or speedy it was. Alan Turing dedicated his life to computing machines, the theory and embodiment in our real world. It was the incomputability of the real world that killed him (like Icarus who flew too close to the sun) in Manchester on June 7th, 1954, just 16 days before his 42nd birthday.

There is so much more to mention, plays, music, books, running races, decoding competitions, blogs and journal features. All we can do is point to the ATY website again, and in particular the listing of events. I should mention that the Royal Mail did a Turing postage stamp - though, disappointingly, it was a faceless commemorative stamp, just a tiny image of the Bombe. And, amongst so many wonderful books and journals, we must mention the unprecedented Turing-themed issue of Nature, complete with Alan Turing image on the front cover. And, finally, the Turing family have been great, with Beryl Turing allowing her photographs to be fairly freely used on web and in publications, Dermot gracing various events with entertaining words about his famous uncle. And the nieces, from the other side of the Turing family, who are very happy to be invited to special events for Alan Turing. It was great to meet up with Turing's youngest niece, Janet Robinson, at the superb The Universal Machine at the New Diorama Theatre in London three months ago. Later, the copy of Alan Turing - His Work and Impact I gave her, she said she would treasure. Robotics wizard Rodney Brooks says on p.500: It is humbling to read Alan Turing's papers. He thought of it all. First.

The book also has a very personal piece - Alan and I - from computer guru Christos Papadimitriou. Here is a quote from the end of it, just the thing to conclude this brief revisit of a memorable (and exhausting) Year of Turing:

Alan inspires my papers and my stories, he fires my talks and my courses, inhabits my memories and my dreams.

Barry Cooper is Professor of Mathematical Logic at the University of Leeds. A graduate of the University of Oxford, his research follows that of Alan Turing in its focus on the nature of mental and physical computation. Author and editor of numerous books, including Computability Theory, New Computational Paradigms, and Computability in Context, he is a leading advocate of multidisciplinary research at the interface between what is known to be computable, and theoretical and practical incomputability. Chair of the Turing Centenary Committee, which coordinated the international Turing Centenary celebrations, he is President of the Association Computability in Europe, which is responsible for the largest computability-themed international conference series, and chairs the Editorial Board of its Springer book series Theory and Applications of Computability.

Turing's 1952 paper on chemical morphogenesis can be compared with his earlier work on Turing machines (TMs). Both demonstrate that some very simple local interactions can produce strikingly varied and complex large-scale results. In a TM, many small, discrete changes in the contents of a linear tape, with an engine controlled by very simple rules, can transform initial linear patterns, representing many different problems, into patterns representing solutions. This requires two important features: representational power, the ability of the patterns to support a wide variety of rich and complex problems and solutions, for a fixed interpreter of the patterns; and inference power, the ability of the engine to transform those patterns from representations of problems to representations of solutions.

The 20th century saw major advances in digital electronic machines with those capabilities. An electronic computer can be understood as a TM with a finite, very fast tape, nowadays often linked in networks and connected with various physical interfaces between the memory (tape) and things in the environment, allowing them not only to solve problems but also to produce useful behaviours by monitoring and controlling external devices.
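To make the tape-and-rules picture above concrete, here is a minimal sketch of a Turing machine simulator in Python (an editorial illustration, not something from the article or from Turing's papers); the transition table, which simply increments a binary number, and all names in it are invented for the example:

# A minimal Turing machine simulator: a dict-based transition table drives
# an engine that reads and writes one cell at a time on an unbounded tape.
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells).strip(blank)

# Example rules: binary increment, head starting on the leftmost digit.
# (state, read) -> (write, move, next_state)
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),   # scan right to the end of the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # step back onto the last digit
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, carry continues
    ("carry", "0"): ("1", "L", "done"),    # 0 plus carry -> 1, finished carrying
    ("carry", "_"): ("1", "L", "done"),    # overflow: write a new leading 1
    ("done",  "0"): ("0", "L", "done"),    # walk back to the left end
    ("done",  "1"): ("1", "L", "done"),
    ("done",  "_"): ("_", "R", "halt"),
}

print(run_tm(INCREMENT, "1011"))  # prints "1100"

The same engine runs any table with the (state, symbol) -> (write, move, next state) shape; only the table changes, which mirrors the split between representational power and inference power described above.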

Turing's 1952 paper showed, among other things, that in chemical morphogenesis local processes, in which chemicals diffuse through a growing structure and interact when they meet, can produce complex and varied global patterns - dots, stripes, blotches, spirals, and others. Patterns on organisms can have rich biological functions, including camouflage, mate selection, attraction of pollinators, deception of predators, and probably many others. (His paper discussed far more than this.)

His 1936 computing machine was originally intended to replicate analogues of processes that occur in familiar human mathematical reasoning, whereas his chemical theory was intended to account for some familiar biological phenomena. In the first case all the structures and operations are discrete, whereas in the second case continuous diffusion and changes of concentration play a role. However, molecular changes are discrete: there are not infinitely many intermediate cases between any two molecules (e.g. between O2 and H2O) as there are between two rational numbers, such as three fifths and seven eighths.
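As a sketch of how such diffuse-and-react rules generate global patterns, the following Python fragment runs a Gray-Scott reaction-diffusion system - a standard stand-in for Turing's 1952 equations, not the equations from his paper - with conventional demo parameter values that are assumed here rather than taken from the article:

import numpy as np

# Two chemicals U and V diffuse at different rates and react locally.
# Starting from a near-uniform field with a small central perturbation,
# spots or stripes emerge, depending on the feed and kill rates.

def laplacian(Z):
    # discrete Laplacian with wrap-around (periodic) boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # small perturbation seeds the pattern
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, feed, kill, dt = 0.16, 0.08, 0.035, 0.060, 1.0
for _ in range(5000):
    uvv = U * V * V
    U += dt * (Du * laplacian(U) - uvv + feed * (1 - U))
    V += dt * (Dv * laplacian(V) + uvv - (feed + kill) * V)

# V now holds a patterned "morphogen" field; inspect it, for example, with
# import matplotlib.pyplot as plt; plt.imshow(V); plt.show()

The point of the sketch is only that purely local rules, applied everywhere at once, organise an almost featureless starting state into large-scale structure.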

In 2011, reading the morphogenesis paper made me wonder whether, if Turing had lived, he might have attempted an even bigger challenge, namely showing how local interactions between molecules in a lifeless world might eventually produce the huge variety of living organisms now found on our planet: a far greater challenge than explaining the development of structure in a developing embryo, or the production of a proof in a symbol-manipulating engine.

The Darwin-Wallace theory of natural selection shows that, in principle, diverse and complex organisms, and the ecosystems containing them, could emerge from much simpler systems by many small steps, provided that the mechanisms operated on by selection had the power to accommodate that diversity and complexity. But that left open the question: what sorts of underlying machinery could do that? Computational experiments on artificial evolution suggested that in principle modern computers could replicate the evolution of all living phenomena. However, combining and extending Turing's ideas about computation and morphogenesis may reveal previously unnoticed potential in the mixture of continuity and discreteness found in chemical information processing but unavailable in discrete symbol manipulators.

A possible clue: transformations from one molecular structure to another often require rotation, and a complex 3D molecule can be rotated without losing any information, whereas rotation of digitised images or models will lose information except for special cases (e.g. 90-degree array rotation). Non-rigid discrete transformations, e.g. forming a spiral, are also problematic.
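The rotation clue can be checked numerically. In this sketch (an illustration, not from the article), four 90-degree rotations of a discrete grid return it exactly, while a 45-degree rotation and its inverse, implemented with naive nearest-neighbour resampling, do not:

import numpy as np

def rotate_nn(img, degrees):
    # Rotate a square grid about its centre using nearest-neighbour sampling.
    t = np.deg2rad(degrees)
    n = img.shape[0]
    c = (n - 1) / 2.0
    out = np.zeros_like(img)
    for y in range(n):
        for x in range(n):
            # inverse-map each output cell back into the source grid
            xs = c + (x - c) * np.cos(t) + (y - c) * np.sin(t)
            ys = c - (x - c) * np.sin(t) + (y - c) * np.cos(t)
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < n and 0 <= yi < n:
                out[y, x] = img[yi, xi]
    return out

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32))

# Four successive 90-degree rotations recover the original exactly...
assert np.array_equal(np.rot90(np.rot90(np.rot90(np.rot90(grid)))), grid)

# ...but rotating 45 degrees and back does not: resampling and clipping
# at the edges destroy information.
back = rotate_nn(rotate_nn(grid, 45), -45)
print("cells changed by a 45-degree there-and-back:", int((back != grid).sum()))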

A mixture of discrete and continuous mechanisms may turn out to be crucial for providing new, deep and general explanations of processes in which a dust cloud condenses to form a planet that several billion years later includes microbes, monkeys, music, mathematics, manslaughter, metropolitan cultures and other marvels.

Would Turing have contributed to developing that idea? Thousands of researchers have investigated trajectories in the evolutionary history of the planet, but they have tended, with a few exceptions, to leave out one of the most important types of change, partly because such changes are invisible and hard to study: changes in information processing, including perception, learning, decision-making, problem-solving and the control of many internal functions and external actions.

There are masses of evidence about the diversity of physical forms and physiological structures that evolution has produced. Our knowledge of that diversity increases with developments in technology for inspecting and experimenting on life at very small scales, and technology for accessing more varied environments, such as deep sea vents. It is possible to acquire vast amounts of information about the diversity of behaviours of organisms by direct observation of living systems, using inferences from fossil records, analysing requirements posed by environmental changes, and using laboratory experiments.

There has also been rapid expansion in our knowledge of the chemical mechanisms and structures underpinning biological
evolution and individual development, including chemically
implemented genetic mechanisms that, together with the environment
of growing organisms, control the diverse developmental trajectories
of organisms as different as bacteria, earthworms, giant fungi, daisies,
giant redwood trees, squirrels, bats, crows, elephants and whales.
To that rich and growing store of knowledge about long past and
very recent changes in structure, in behaviours, and in chemical
mechanisms, Turing might have contributed new theories about the
changes in information processing, combining and extending the
kinds of thinking displayed in his work on morphogenesis and on Turing
machines and computability.
There is a deep, rich, and largely unknown repertoire of forms of information-processing required for the various types of reproduction, development, control of behaviour, learning, perception, reasoning, communication and social interaction, and, in the case of humans, for the construction of powerful new explanatory theories, technologies and works of art. Research in psychology, linguistics,
psychiatry, education, and other fields has enriched the set of facts to
be explained by theories about biological information processing, but only in very complex systems most of whose details are inaccessible, and mostly cannot be inferred from their effects. What we've learnt
from neuroscience leaves many explanatory gaps between physical
mechanisms and information-processing, e.g. musical composition or
mathematical reasoning.
Forms of information processing required in microbes, plants and
animals of various kinds differ enormously. Sources of diversity include:
sensory-motor morphologies restricting what information is available and what actions can be controlled; environments constraining what the information needs to refer to; and diverse requirements for cooperation and competition using different forms of communication.
Those variations in information contents and types of information
processing (e.g. acquiring, analysing, interpreting, deriving, storing,
matching, communicating, and using information) suggest the need
for variations in information processing mechanisms over evolutionary
timescales, and in some cases during individual development - e.g.
changes in information processing capabilities between a caterpillar
and the moth it turns into, and changes between a new-born baby
and the physics professor some years later. Besides mechanisms for
producing new forms of information-processing there were also new
mechanisms for producing those mechanisms, e.g. mate-selection
and cultural evolution.
Researchers interested in what Turing might have done are invited
to join the Meta-morphogenesis project: a multi-pronged attack on
the problem of identifying unobvious forms of biological information-processing, such as explaining how the same genome can enable
learning of thousands of possible languages, but not all possible
languages, and explaining how our ancestors acquired the ability
to make mathematical discoveries before there were mathematics
teachers.
We can try to fill gaps in our knowledge about current systems by
creating plausible, and, wherever possible, observationally tested,
hypotheses about intermediate states between the earliest, simplest
forms of life and the ones we are now trying to understand.
For example, microbes can detect the presence of chemicals in
contact with their membrane and let some enter, others not. More
complex mechanisms may use internal state sensors and admit
different substances at different times, according to sensed needs.
Still more complex organisms may not only sense external stimulation
and react immediately, but sense changes over time, and use the direction of change to influence motion: e.g. if the intensity of something harmful is increasing, make a move. Later the moves might
be controlled so as to follow trajectories of increasing or decreasing
density.
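That escalation - from reacting immediately to using the direction of change over time - can be sketched as a toy simulation. The concentration field and the agent below are invented for illustration; this is not a model of any real organism:

import random

def harm(x):
    # toy concentration field: a harmful chemical peaks at x = 0 and falls off
    return 1.0 / (1.0 + abs(x))

def wander(steps=200, start=0.5, step_size=0.1, seed=1):
    # Run-and-tumble style agent on a line: it remembers one previous reading
    # and changes direction at random whenever conditions are getting worse.
    random.seed(seed)
    x = start
    direction = random.choice((-1, 1))
    previous = harm(x)
    for _ in range(steps):
        x += direction * step_size
        current = harm(x)
        if current > previous:          # intensity increasing: tumble
            direction = random.choice((-1, 1))
        previous = current
    return x

print(abs(wander()))   # typically ends far from the harmful peak at x = 0

A single remembered reading is the only internal state needed for this step up from purely immediate reaction; following gradients of increasing or decreasing density needs nothing more than comparing now with a moment ago.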
Even more complex changes in the mechanisms are required to
allow organisms to acquire, store and use information about the
spatial layout of important parts of the environment, near and far, or
information about things that process information, e.g. other animals
and themselves, or information about states of the environment that
can be observed only from close up. Further complexity comes from
abilities to take account of multiple needs. All of those changes
require changes in information processing, often supported by new
physical/chemical mechanisms.
Researchers wishing to join this very ambitious Turing-inspired project
can start by analysing what is already known about evolutionary
changes and individual development, in order to come up with good
theories about changes in the mechanisms responsible.

AARON SLOMAN
School of Computer Science, University of Birmingham, UK
http://www.cs.bham.ac.uk/~axs
These web pages (still under development) present many more
examples of changes in information processing during evolution, and
in development of young humans:
http://tinyurl.com/CogMisc/evolution-info-transitions.html
http://tinyurl.com/CogMisc/toddler-theorems.html
http://tinyurl.com/CogMisc/meta-morphogenesis.html

References
Turing, A. M. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 42(2), 230-265.
Turing, A. M. (1952). The chemical basis of morphogenesis. Phil. Trans. R. Soc. London B, 237, 37-72.


Do you want your voice to be heard


in the next issue of CyberTalk? If you
have something to say, we want to
hear it.
In issue four, the top comments as chosen by the
CyberTalk Editorial Board will win either a Special Edition
Alan Turing Monopoly set or a copy of Alan Turing: His
Work & Impact as reviewed in this magazine.
Be it feedback on this issue, cyber musings or a story
you'd like to share, all views are welcome.
Please email your comments to CyberTalk@softbox.co.uk
or alternatively head to our Facebook page
(facebook.com/cybertalkmagazine) or tweet us
@CyberTalkUK.

Image Credits
Contents Page
Image 1 kindly provided for use in CyberTalk by the Bletchley Park
Trust, Crown Copyright, courtesy of GCHQ
Image 2 kindly provided for use in CyberTalk by the Bletchley Park
Trust, mubsta
Image 3 kindly provided for use in CyberTalk by Professor Barry
Cooper, Copyright Federico Buscarino/Maria Elisabetta Marelli.

Lifetime Achievements and Legacy to GCHQ


Image of GCHQ freely available online, this particular copy sourced
from Wikipedia

Tom Hook - The Enigmatic Alan Turing


Image kindly supplied by Professor Barry Cooper, courtesy of
Sherborne School Archive.

Helen Morgan - Alan Turing: His Work and Impact Review


Image kindly supplied by Professor Barry Cooper, Copyright Elsevier,
2013.

Klaus Schmeh - Why Alan Turing Cracked the Enigma and Why the
Germans Failed
Image of Gisbert Hasenjaeger kindly supplied by himself for
publication.
Image of Enigma machine provided by Klaus Schmeh, produced by the CIA and in the public domain.

Professor Barry Cooper - How Long is an Alan Turing Year?


Images kindly supplied by Professor Barry Cooper
Image 1: Copyright Federico Buscarino/Maria Elisabetta Marelli.
Image 2: Courtesy Cambridge Wong and Alan Turing Year in Hong Kong.
Image 3: Taken from the cover of Alan Turing: His Work & Impact,
Copyright Elsevier, 2013.


A Colossus Rising
Images kindly supplied by the National Museum of Computing for use in this publication.

