
Technological Singularity

Seminar Report
Submitted By
Ansil S Shajil
B120205EC
In Partial Fulfillment of the Requirements
for the Award of the Degree of Bachelor of Technology

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
NATIONAL INSTITUTE OF TECHNOLOGY CALICUT
KERALA, INDIA
February 2016

NATIONAL INSTITUTE OF TECHNOLOGY CALICUT


DEPARTMENT OF ELECTRONICS AND COMMUNICATION
ENGINEERING

CERTIFICATE
This is to certify that the report titled Technological Singularity is a
bona fide record of the seminar presentation made by Ansil S Shajil (Roll
No. B120205EC), under my supervision and guidance, in partial fulfillment
of the requirements for the award of the Degree of Bachelor of Technology in
Electronics and Communication Engineering from the National Institute of
Technology Calicut.

Dr. G. Abhilash
Associate Professor

Dr. Elizabeth Elias
Head of the Department

Place: NIT Calicut
Date: 19 February 2016

Acknowledgement

I express my deepest gratitude to Dr. G. Abhilash, Associate Professor,
Department of Electronics and Communication Engineering, for his support in
the selection of the topic for this paper and his guidance in bringing it to
completion. I would like to thank Dr. Elizabeth Elias, Head of the Department,
Department of Electronics and Communication Engineering, for providing all
the facilities for the seminar. I would also like to thank the NITC Library for
the various resources that helped me in my search for further knowledge. I
would also like to express my sincere gratitude to the other faculty members
for the knowledge they have imparted to me over the past years, which has
helped me grow as an engineer. Last but not least, I would like to praise and
thank God for the grace, love and guidance he has showered on me throughout
the completion of this paper.

Abstract

The technological singularity is a hypothetical event related to the advent of
genuine artificial general intelligence. Such a computer, computer network, or
robot could surpass humans in all intellectual activities and would theoretically
be capable of recursive self-improvement (redesigning itself), or of designing
and building computers or robots better than itself on its own. In other words,
machines would become more intelligent than humans, and the first
ultra-intelligent machine would be the last invention man ever needs to make.
The futurist Ray Kurzweil predicts that the singularity will occur around 2045.
A major feared consequence is that the human race could end up enslaved,
with these machines as our masters.

Contents

1 Acknowledgement
2 Introduction
3 Singularity: The Origin
4 The Six Epochs
5 Singularity Scenario
  5.1 AI scenario
      5.1.1 Moore's law
  5.2 IA scenario
  5.3 Biomedical Scenario
  5.4 Internet Scenario
  5.5 The Digital Gaia Scenario
6 Seed AI
  6.1 Self-modification
  6.2 Self-improvement
  6.3 Recursive Self-Improvement
7 Primary Building Blocks of Singularity
8 Technologies that aid to reach singularity
  8.1 Carbon Nanotubes
  8.2 Computing with Molecules
  8.3 Self-Assembly
  8.4 Emulating Biology
  8.5 DNA Computing
  8.6 Computing with Spin (Spintronics or Fluxtronics)
  8.7 Computing with Light (Optical or Photonic Computing)
  8.8 Quantum Computing
9 Limits of Computation
  9.1 Energy Requirement
  9.2 Reversible Computing
  9.3 Memory and Computational Efficiency
10 Technical aspects of singularity
  10.1 A reliable and long-lasting power source
  10.2 Faster and more efficient chips
  10.3 Memory backup
11 How singularity can transcend our biology
12 Nearing singularity?
  12.1 ExoHiker
  12.2 Japan's HAL 5
  12.3 MIT Exoskeleton
  12.4 Big Dog
13 Fears of Technological Singularity
14 Conclusion
15 References

Introduction

Intelligence is the ability to comprehend, understand and profit from experience.
Humans are the most intelligent organisms on this planet. The human brain is
impressive in many ways, but it has certain limitations. The brain's massive
parallelism (one hundred trillion interneuronal connections working
simultaneously) lets it quickly recognize subtle patterns. However, because
neural transactions are slow compared to electronic circuits, our thinking
process is slow, which limits our ability to process information relative to the
exponential growth of human knowledge.
Machines were invented by man to assist with human tasks. As years passed,
newer and more capable machines were invented. One among them was the
robot, a mechanical machine controlled by a computer program or an electronic
circuit. Robotics developed, and autonomous robots followed. In 1955, John
McCarthy coined the term artificial intelligence (AI). The main aims of AI are
reasoning, knowledge representation, planning, learning, natural language
processing, perception, and the ability to move and manipulate objects.
Now, what is the singularity? It can be described as a future era in which the
pace of technological change will be so rapid, and its impact so deep, that human
life will be irreversibly transformed. Researchers today are, with some success,
making machines that are more intelligent and better at solving real-world
problems. Robotics departments are trying to build robots that understand their
environment and act according to the situation. Great strides have been made in
artificial balancing for these robots, though it is still not comparable to the
human balancing system. Above all, artificial intelligence is growing day by day
and runs through the veins of modern science. Still, we are a very long way from
understanding how consciousness arises in a human brain. We are even a long
way from the much simpler goal of creating autonomous, self-organizing and
perhaps even self-replicating machines.

Singularity: The Origin

In 1982, Vernor Vinge proposed that the creation of an intelligence smarter than
human intelligence would represent a breakdown in humans' ability to model
their future. Vinge's argument was that authors cannot write realistic characters
who surpass the human intellect, as the thoughts of such an intellect would be
beyond the ability of humans to express. Vinge named this event the Singularity.
He compared it to the breakdown of the then-current model of physics when it
was used to model the gravitational singularity beyond the event horizon of a
black hole. In 1993, Vinge associated the Singularity more explicitly with I. J.
Good's intelligence explosion, and tried to project the arrival time of artificial
intelligence (AI) using Moore's law, which thereafter came to be associated with
the Singularity concept. The futurist Ray Kurzweil generalizes singularity to
apply to the sudden growth of any technology, not just intelligence. He argues
that singularity, in the sense of sharply accelerating technological change, is
inevitably implied by a long-term pattern of accelerating change that generalizes
Moore's law to technologies predating the integrated circuit, including material
technology, medical technology, and others. Aubrey de Grey has applied the
term the Methuselarity to the point at which medical technology improves so
fast that expected human lifespan increases by more than one year per year.
Robin Hanson, taking singularity to refer to sharp increases in the exponent of
economic growth, lists the agricultural and industrial revolutions as past
singularities. Extrapolating from such past events, Hanson proposes that the
next economic singularity should increase economic growth by between 60 and
250 times. An innovation that allowed for the replacement of virtually all human
labor could trigger this event.

A graph taken from Ray Kurzweil's The Singularity Is Near plots many
significant technological and biological developments over time. It essentially
shows how rapidly things are changing now: in the early history of life, things
evolved slowly, but the trend is exponential, which suggests that more
developments will occur in the next two decades than in the past two decades.

The Six Epochs

Evolution is the process by which something passes by degrees to a different
stage (especially a more advanced or mature stage). We can categorise evolution
into six epochs or stages.

1. Physics and Chemistry: The origin of life can be traced back to a state that
represents information in its basic structures: patterns of matter and energy.
2. Biology and DNA: Carbon-based compounds became more and more intricate
until complex aggregations of molecules formed self-replicating mechanisms,
and life originated. Molecules called DNA were used to store information.
3. Brains: DNA-guided evolution produced organisms that could detect
information with their own sensory organs and process and store that
information in their own brains and nervous systems.
4. Technology: Humans started creating technology to ease their work. This
started out with simple mechanisms and developed into automated machines.
5. The merger of technology with human intelligence: the merger of vast human
knowledge with the vastly greater capacity, speed, and knowledge-sharing
ability of our technology.
6. The universe wakes up: The universe becomes saturated with intelligent
processes and knowledge.

Figure 1: The Six Epochs

Singularity Scenario

According to Vernor Vinge, the Singularity is expected to arrive through some
combination of the following scenarios.

5.1 AI scenario

This scenario involves creating superhuman artificial intelligence in computers,
where databases and computers become sufficiently effective to be considered a
superhuman being. AI research is highly technical and specialized and is deeply
divided into subfields that often fail to communicate with each other. The main
aims of AI researchers are reasoning, knowledge representation, planning,
learning, natural language processing, perception, and the ability to move and
manipulate objects.
5.1.1 Moore's law

Moore's law is the observation that the number of transistors in an integrated
circuit doubles roughly every two years; in other words, the processing
capability of ICs keeps increasing year after year. Many argue that the effect of
Moore's law is saturating because of the limitations of silicon, but technology has
always had paradigm shifts, and a newer method is likely to be found that makes
computation more powerful.

Figure 2: Moore's law

Advancements in digital electronics are strongly linked to Moore's law:
quality-adjusted microprocessor prices, memory capacity, sensors and even the
number and size of pixels in digital cameras are all improving at exponential
rates.

5.2 IA scenario

IA (intelligence amplification) is the improvement of human intelligence through
the efficient use of information technology. The term was first put forward in the
1950s by cybernetics and early computer pioneers. IA is sometimes contrasted
with AI, that is, the project of building a human-like intelligence in the form of an
autonomous technological system such as a computer or robot.

5.3 Biomedical Scenario

We directly increase our brainpower by improving the neurological actions of
our brains.

5.4 Internet Scenario

Humanity, together with its networks, computers, and databases, becomes
sufficiently effective to be considered a superhuman being.

5.5 The Digital Gaia Scenario

The network of embedded microprocessors becomes sufficiently effective to be
considered a superhuman being.

Seed AI

Seed AI refers to a highly intelligent machine's ability to improve its own
program recursively. Software capable of improving itself has been a dream of
computer scientists since the inception of the field. Since the early days of
computer science, visionaries anticipated the creation of self-improving
intelligent systems, frequently as an easier pathway to the creation of true
artificial intelligence. As early as 1950, Alan Turing wrote: "Instead of trying to
produce a programme to simulate the adult mind, why not rather try to produce
one which simulates the child's? If this were then subjected to an appropriate
course of education one would obtain the adult brain." Let an ultra-intelligent
machine be defined as a machine that can far surpass all the intellectual
activities of any man, however clever. Since the design of machines is one of
these intellectual activities, an ultra-intelligent machine could design even better
machines; there would then unquestionably be an "intelligence explosion," and
the intelligence of man would be left far behind. Thus the first ultra-intelligent
machine is the last invention that man ever needs to make. Once a program with
a genuine capacity for self-improvement has been devised, a rapid revolutionary
process will begin. As the machine improves both itself and its model of itself,
there begins a phenomenon associated with the terms consciousness, intuition
and intelligence itself. Self-improving software can be classified by the degree of
self-modification it entails. In general we distinguish three levels: modification,
improvement (weak self-improvement) and recursive improvement (strong
self-improvement).

6.1 Self-modification

Self-modification does not produce improvement and is typically employed for
code obfuscation, to protect software from being reverse engineered or to
disguise self-replicating computer viruses from detection software. While a
number of obfuscation techniques are known to exist (e.g. self-modifying code,
polymorphic code, metamorphic code, diversion code), none of them are
intended to modify the underlying algorithm.

6.2 Self-improvement

Self-improvement, or self-adaptation, is a desirable property of many types of
software products and typically allows for some optimization or customization
of the product to the environment and users it is deployed with. Common
examples include Genetic Algorithms or Genetic Programming, which optimize
software parameters with respect to some well-understood fitness function, and
which may work over a highly modular programming language to ensure that all
modifications result in software that can be compiled and evaluated. Omohundro
proposed the concept of efficiency drives in self-improving software. Because of
one such drive, the balance drive, self-improving systems will tend to balance
the allocation of resources between their different subsystems. While the
performance of the software may be improved as a result of such optimization,
the overall algorithm is unlikely to be modified into a fundamentally more
capable one.

6.3 Recursive Self-Improvement

Recursive self-improvement (RSI) is the only type of improvement that has the
potential to completely replace the original algorithm with a completely different
approach and, more importantly, to do so multiple times. At each stage, the
newly created software should be better at optimizing future versions of the
software than the original algorithm was. At the time of this writing it is a purely
theoretical concept, with no working RSI software known to exist. However, as
many have predicted that such software might become a reality in the 21st
century, it is important to analyze the properties such software would exhibit.
Self-modifying and self-improving software systems are already well understood
and quite common, so we concentrate exclusively on RSI systems. In practice,
the performance of almost any system can be trivially improved by allocating
additional computational resources such as more memory, higher sensor
resolution, a faster processor or greater network bandwidth for access to
information. This linear scaling does not fit the definition of recursive
improvement, as the system does not become better at improving itself. To fit
the definition, the system would have to engineer a faster type of memory, not
just purchase more memory units of the type it already has access to. In general,
hardware improvements are likely to speed up the system, while software
improvements (novel algorithms) are necessary for achieving
meta-improvements.
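As a toy numerical caricature of this distinction, the sketch below contrasts a
system that merely receives extra resources with one whose rate of improvement
itself improves; the growth numbers are arbitrary assumptions chosen only to
show the qualitative gap, not a model of any real system.

```python
# Toy caricature (assumed numbers): "linear scaling" adds a fixed amount of
# capability per step, while "recursive self-improvement" also grows the
# improvement rate itself, so capability eventually compounds explosively.

def linear_scaling(steps: int, capability: float = 1.0, gain: float = 1.0) -> float:
    for _ in range(steps):
        capability += gain          # more hardware, same improvement process
    return capability

def recursive_improvement(steps: int, capability: float = 1.0, rate: float = 0.1) -> float:
    for _ in range(steps):
        capability *= (1 + rate)    # a better system ...
        rate *= (1 + rate)          # ... that is also better at improving itself
    return capability

if __name__ == "__main__":
    for n in (5, 10, 15, 20):
        print(n, f"{linear_scaling(n):.3g}", f"{recursive_improvement(n):.3g}")
```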
It is believed that AI systems will have a number of advantages over human
programmers, making it possible for them to succeed where we have so far
failed. Such advantages include: longer work spans (no breaks, sleep, vacation,
etc.); omniscience (expert-level knowledge in all fields of science, absorbed
knowledge of all published works); superior computational resources (brain vs.
processor, human memory vs. RAM); communication speed (neurons vs. wires);
increased serial depth (the ability to perform sequential operations far in excess
of the roughly one hundred a human brain can manage); duplicability (intelligent
software can be instantaneously copied); editability (source code, unlike DNA,
can be quickly modified); goal coordination (AI copies can work towards a
common goal without much overhead); improved rationality (AIs are likely to be
free from human cognitive biases); new sensory modalities (native sensory
hardware for source code); blending of deliberative and automatic processes
(management of computational resources over multiple tasks); introspective
perception and manipulation (the ability to analyze low-level hardware, e.g.
individual neurons); addition of hardware (the ability to add new memory,
sensors, etc.); and advanced communication (the ability to share underlying
cognitive representations for memories and skills).


Primary Building Blocks of Singularity

The human genome contains the complete genetic information of the organism as
DNA sequences stored in 23 chromosomes, structures built from DNA and protein.
The Singularity will unfold through three overlapping revolutions:
1. Genetics (G)
2. Nanotechnology (N)
3. Robotics (R)
These are the primary building blocks of the impending Singularity as Ray
Kurzweil sees them. He calls them the three overlapping revolutions, and he says
they will characterize the first half of the twenty-first century, which we are in
now. He goes on to say that these (GNR) will usher in the beginning of the
Singularity. We are in the early stages of the 'G' revolution today. By
understanding the information processes underlying life, we are starting to learn
to reprogram our biology to achieve the virtual elimination of disease, dramatic
expansion of human potential, and radical life extension.
Regarding nanotechnology, Kurzweil says the 'N' revolution will enable us to
redesign and rebuild, molecule by molecule, our bodies and brains and the world
with which we interact, going far beyond the limitations of biology.
Of the three (GNR), Kurzweil believes the most powerful impending revolution is
the 'R' revolution. Human-level robots with their intelligence derived from our
own, but redesigned to far exceed human capabilities, represent the most
significant transformation, because intelligence is the most powerful force in the
universe. Intelligence, if sufficiently advanced, is smart enough to anticipate and
overcome any obstacles that stand in its path.

Technologies that aid to reach singularity

Technologies that will help achieve the singularity include molecular
three-dimensional computing with nanotubes and nanotube circuitry, molecular
computing, self-assembly in nanotube circuits, biological systems emulating
circuit assembly, computing with DNA, spintronics (computing with the spin of
electrons), computing with light, and quantum computing. Many of these
independent technologies can be incorporated into computational systems that
will, in the long run, approach the theoretical maximum capacity of matter and
energy to perform computation and will far outpace the computational
capacities of a human brain.

8.1 Carbon Nanotubes

In the last few decades there has been a nearly constant exponential growth in
the capabilities of silicon-based microelectronics (Moore's law). When silicon
layers become as thin as about five atoms, thermodynamic and quantum
mechanical effects take over. Because of the fundamental physical limitations of
silicon, which prevent current designs from functioning reliably at the nanometer
scale, a limit will be reached, while at the same time exponentially rising
fabrication costs will make it prohibitive to raise integration levels further. This
is where carbon nanotubes become important. Carbon nanotubes, allotropes of
carbon with a cylindrical structure, have molecules organized in three
dimensions to store memory bits and to act as logic gates, and they are the most
likely technology to lead us into the era of three-dimensional molecular
computing. The chip design company Nantero builds nanotube memory that
provides random access as well as non-volatility (data is retained when the
power is off), meaning that it could potentially replace all of the primary forms
of memory: RAM, flash, and disk. These devices are ultra fast compared to
conventional ones. Nantero produces RAMs named NRAMs (Nano RAMs) using
this carbon nanotube technology. Chips based on this fast, dense technology can
be used in a wide array of markets such as mobile computing, wearables,
consumer electronics, space and military applications, enterprise systems,
automobiles, the Internet of Things, and industrial markets. In the future,
Nantero expects to be able to store terabits of data on a single memory chip,
enabling that chip to store hundreds of movies, or millions of songs, on a mobile
device.

Figure 3: Structure of a carbon nanotube

8.2 Computing with Molecules

In addition to nanotubes, major progress has been made in recent years in
computing with just one or a few molecules. The idea of computing with
molecules was first suggested in the early 1970s by IBM's Avi Aviram and
Northwestern University's Mark A. Ratner. At that time we did not have the
enabling technologies, which required concurrent advances in electronics,
physics, chemistry, and even the reverse engineering of biological processes, for
the idea to gain traction. One type of molecule that researchers have found to
have desirable properties for computing is called a rotaxane, which can switch
states by changing the energy level of a ring-like structure contained within the
molecule. Rotaxane memory and electronic switching devices have been
demonstrated, and they show the potential of storing one hundred gigabits
(10^11 bits) per square inch. The potential would be even greater if organized in
three dimensions. Rotaxanes are mechanically interlocked molecular
architectures consisting of a dumbbell-shaped molecule, the axle, that threads
through a ring called a macrocycle. Because the rings can spin around and slide
along the axle, rotaxanes are promising components of molecular machines.
While most rotaxanes have been entirely organic, the physical properties
desirable in molecular machines are mostly found in inorganic compounds.
Working together, two British groups, at the University of Edinburgh and the
University of Manchester, have bridged this gap with hybrid rotaxanes, in which
inorganic rings encircle the organic axles.

Figure 4: Structure of a rotaxane

8.3 Self-Assembly

Self-assembly of nanoscale circuits is another key enabling technique for
effective nanoelectronics. Self-assembly allows improperly formed components
to be discarded automatically and makes it possible for the potentially trillions of
circuit components to organize themselves, rather than be painstakingly
assembled in a top-down process. Conventional assembly technology picks and
places devices by taking microchips from a wafer and placing them on the
substrate, but this technique encounters speed and cost constraints. In addition,
when the size of chips is at the micro scale, there is a serious sticking problem
due to electrostatic forces, van der Waals forces, and surface forces. It is also
important that nanocircuits be self-configuring. The large number of circuit
components and their inherent fragility (due to their small size) make it
inevitable that some portions of a circuit will not function correctly. It will not be
economically feasible to discard an entire circuit simply because a small number
of transistors out of a trillion are not functioning.

8.4 Emulating Biology

The idea of building electronic or mechanical systems that are self-replicating
and self-organizing is inspired by biology, which relies on these properties.
There are self-replicating proteins, such as prions, which could be used to
construct nanowires.

8.5 DNA Computing

The term refers to computation using DNA, not computing on DNA. The field was
initiated by Leonard Adleman of the University of Southern California in 1994.
DNA is nature's own nanoengineered computer, and its ability to store
information and conduct logical manipulations at the molecular level has already
been exploited in specialized DNA computers. Instead of using electrical signals
to perform logical operations, these DNA logic gates rely on DNA code. They
detect fragments of genetic material as input. Each such strand is replicated
trillions of times using a process called the polymerase chain reaction (PCR).
These pools of DNA are then put into a test tube. Because DNA has an affinity to
link strands together, long strands form automatically, with sequences of the
strands representing the different symbols, each of them a possible solution to
the problem. Since there will be many trillions of such strands, there are multiple
strands for each possible answer. The next step of the process is to test all of the
strands simultaneously. This is done using specially designed enzymes that
destroy strands that do not meet certain criteria. The enzymes are applied to the
test tube sequentially, and by designing a precise series of enzymes the
procedure will eventually obliterate all the incorrect strands, leaving only the
ones with the correct answer. There is a limitation, however, to DNA computing:
each of the many trillions of computers has to perform the same operation at the
same time (although on different data), so the device is a single instruction,
multiple data (SIMD) architecture. A gram of DNA can hold about 10^14 MB of
data. With bases spaced 0.35 nm apart along the DNA, the data density is over a
million Gbits per square inch, compared to about 7 Gbits per square inch in a
typical high-performance HDD.
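The generate-and-filter procedure described above can be mimicked in ordinary
software. The following Python sketch (an illustrative analogy, not Adleman's
actual experiment) treats candidate bit strings as strands and applies successive
enzyme-like filters to a toy constraint problem.

```python
from itertools import product

# Toy analogue of DNA computing's generate-and-filter approach: enumerate all
# candidate "strands" (bit strings), then apply successive filters ("enzymes")
# that destroy strands violating each constraint. Problem and constraints are
# invented for illustration only.

def all_strands(n_bits: int):
    """Generate every possible candidate solution, like the initial DNA pool."""
    return [bits for bits in product((0, 1), repeat=n_bits)]

def enzyme(pool, constraint):
    """Keep only the strands that satisfy one constraint (one enzyme pass)."""
    return [s for s in pool if constraint(s)]

if __name__ == "__main__":
    pool = all_strands(4)
    # Example constraints: exactly two bits set, and the first bit must be 1.
    pool = enzyme(pool, lambda s: sum(s) == 2)
    pool = enzyme(pool, lambda s: s[0] == 1)
    print(pool)  # surviving "strands": [(1,0,0,1), (1,0,1,0), (1,1,0,0)]
```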

8.6 Computing with Spin (Spintronics or Fluxtronics)

In addition to their negative electrical charge, electrons have another property
that can be exploited for memory and computation: spin. According to quantum
mechanics, electrons spin on an axis, similar to the way the Earth rotates on its
axis. This is a theoretical notion, because an electron is considered to occupy a
point in space, so it is difficult to imagine a point with no size that nonetheless
spins. However, when an electrical charge moves, it creates a magnetic field,
which is real and measurable. An electron can spin in one of two directions,
described as up and down, so this property can be exploited for logic switching
or to encode a bit of memory. The spin of an electron can be transported without
any loss of energy, or dissipation. Furthermore, this effect occurs at room
temperature in materials already widely used in the semiconductor industry,
such as gallium arsenide. That is important because it could enable a new
generation of computing devices. The potential, then, is to achieve the
efficiencies of superconducting (that is, moving information at or close to the
speed of light without any loss of information) at room temperature. It also
allows multiple properties of each electron to be used for computing, thereby
increasing the potential for memory and computational density.

8.7 Computing with Light (Optical or Photonic Computing)

Ordinary computers use transistors, which rely on the motion of electrons. As
the size of transistors decreases (to integrate more transistors in an IC),
quantum mechanical effects come into play. So instead of electrons we can use
photons. Photons travel roughly a thousand times faster than electrons drifting
through a semiconductor, and since light encounters no electrical resistance,
there is little power dissipation and hence less heating. Another approach to
SIMD computing is to use multiple beams of laser light in which information is
encoded in each stream of photons. Optical components can then be used to
perform logical and arithmetic functions on the encoded information streams.
SIMD technologies such as DNA computers and optical computers will have
important specialized roles to play in the future of computation. The replication
of certain aspects of the functionality of the human brain, such as processing
sensory data, can use SIMD architectures. For other brain regions, such as those
dealing with learning and reasoning, general-purpose computing with multiple
instruction, multiple data (MIMD) architectures will be required. For
high-performance MIMD computing, we will need to apply the
three-dimensional molecular-computing paradigms described above. Optical
fibres will be used in these computers. Instead of the voltage pulses used as
signals in today's computers, they use light pulses, with lasers converting binary
code into light. The figure below shows a simple building block of an optical
computer: it is like a transistor that emits light when it is on and emits no light
when it is off. HP has introduced something similar, the HP silicon microring
resonator, which absorbs light when a beam of light passes near the ring; the
absorption can be turned off by applying a small voltage. The smallest ring that
can be made is about 3 microns, so device size grows along with speed. A
solution to this is nanoscience: a metal nanoparticle acts like an antenna,
resonating with a specific frequency of light (the electrons in the metal oscillate
in resonance with the light). This can be used to control and channel light well
below the diffraction limit. If we put nanoparticles in a row and shine light at
one end, photons travel along this path, which is much faster than an electron
diffusing through a semiconductor.

Figure 5: Building block of optical computing

8.8 Quantum Computing

Quantum computing is an even more radical form of SIMD parallel processing. A
quantum computer contains a series of qubits, which are essentially zero and
one at the same time. The qubit is based on the fundamental ambiguity inherent
in quantum mechanics. A number of physical objects can serve as a qubit: a
single photon, a nucleus, an electron, and so on. In a quantum computer, the
qubits are represented by a quantum property of particles, for example the spin
state of individual electrons. When the qubits are in an entangled state, each one
is simultaneously in both states. In a process called quantum decoherence, the
ambiguity of each qubit is resolved, leaving an unambiguous sequence of ones
and zeroes. If the quantum computer is set up in the right way, that decohered
sequence will represent the solution to a problem; essentially, only the correct
sequence survives the process of decoherence. In quantum mechanics, the state
of a set of qubits is a superposition (weighted sum) of all the possible basis
states. Consider two qubits: there are four combinations of states (00, 01, 10,
11), and the state of the qubits is a superposition of these four. In other words,
N qubits can represent a superposition over 2^N classical bit patterns. As with
the DNA computer described in the previous section, a key to successful
quantum computing is a careful statement of the problem, including a precise
way to test possible answers. The quantum computer effectively tests every
possible combination of values for the qubits, so a quantum computer with one
thousand qubits would explore 2^1000 bit patterns at once. D-Wave is the main
company in this field; it produced the first commercially available quantum
computer in 2011. Quantum computers cannot replace classical computers:
they reduce the number of steps considerably in certain complex operations,
but they do not make an individual step any faster. Therefore, for simple tasks
like playing a video or browsing the internet, classical computers are better
than quantum computers. A joint initiative by Google and NASA, the Quantum
Artificial Intelligence Lab (QuAIL), researches how quantum computing can
solve complex computational problems.
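To make the 2^N bookkeeping concrete, the following sketch (a hand-rolled toy,
not a real quantum programming library) stores the state of N qubits as a vector
of 2^N amplitudes and applies Hadamard gates to place two qubits into an equal
superposition of all four basis states.

```python
import numpy as np

# Toy state-vector illustration: N qubits need 2**N complex amplitudes.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    op = np.array([[1]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

if __name__ == "__main__":
    n = 2
    state = np.zeros(2 ** n)
    state[0] = 1.0                      # start in |00>
    state = apply_single_qubit_gate(state, H, target=0, n_qubits=n)
    state = apply_single_qubit_gate(state, H, target=1, n_qubits=n)
    print(state)   # four equal amplitudes of 0.5 over |00>, |01>, |10>, |11>
```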

Limits of Computation

9.1 Energy Requirement

From the graph below we can see that the power required per MIPS (millions of
instructions per second) is falling. However, the number of MIPS delivered by
computing devices has been growing exponentially. The degree to which
improvements in power usage keep pace with processor speed depends on the
extent to which we use parallel processing. A larger number of less powerful
computers can inherently run cooler because the computation is spread out over
a larger area. Processor speed is related to voltage, and the power required is
proportional to the square of the voltage, so running a processor at a slower
speed significantly reduces power consumption.

Figure 6: Plot of Watts per MIPS vs Years
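As a quick numerical check of the voltage-scaling claim, the sketch below applies
the standard dynamic-power relation P = C V^2 f to compare one fast core with
two slower, lower-voltage cores; the component values are assumed for
illustration, not taken from the plot.

```python
# Rough dynamic-power comparison, P = C * V^2 * f, with assumed illustrative
# values for capacitance, voltage and frequency (not measurements).

def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    return c_eff * voltage ** 2 * freq_hz

if __name__ == "__main__":
    C = 1e-9  # assumed effective switched capacitance, farads
    one_fast = dynamic_power(C, voltage=1.2, freq_hz=3e9)
    two_slow = 2 * dynamic_power(C, voltage=0.9, freq_hz=1.5e9)
    print(f"one 3 GHz core  : {one_fast:.2f} W")
    print(f"two 1.5 GHz cores: {two_slow:.2f} W")  # same total throughput, less power
```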

9.2 Reversible Computing

Ultimately, organizing computation with massive parallel processing, as is done
in the human brain, will not by itself be sufficient to keep energy levels and the
resulting thermal dissipation at reasonable levels. The current computing
paradigm relies on what is known as irreversible computing, meaning that we
are, in principle, unable to run software programs backward. At each step in the
progression of a program, the input data is discarded (erased) and the results of
the computation pass to the next step. Programs generally do not retain all
intermediate results, as that would use up large amounts of memory
unnecessarily. This selective erasure of input information is particularly true for
pattern-recognition systems. Vision systems, for example, whether human or
machine, receive very high rates of input (from the eyes or visual sensors) yet
produce relatively compact outputs (such as the identification of recognized
patterns). This act of erasing data generates heat and therefore requires energy.
When a bit of information is erased, that information has to go somewhere.
According to the laws of thermodynamics, the erased bit is essentially released
into the surrounding environment, thereby increasing its entropy, which can be
viewed as a measure of information (including apparently disordered
information) in an environment. This results in a higher temperature for the
environment (because temperature is a measure of entropy).

Landauer's principle asserts that there is a minimum possible amount of energy
required to erase one bit of information, known as the Landauer limit:

E = kT ln 2 (about 2.75 zJ, or 0.0172 eV)

where k is Boltzmann's constant and T is the absolute temperature of the
environment. Ongoing research in this field is trying to make computation a
reversible process so that it becomes more energy efficient.
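The Landauer bound is easy to evaluate numerically. The sketch below computes
kT ln 2 at two temperatures, showing how the quoted zeptojoule figure depends
on the temperature assumed.

```python
import math

# Landauer limit E = k * T * ln(2): minimum energy to erase one bit.
K_BOLTZMANN = 1.380649e-23          # J/K (exact, 2019 SI definition)
EV_PER_JOULE = 1.0 / 1.602176634e-19

def landauer_limit_joules(temperature_k: float) -> float:
    return K_BOLTZMANN * temperature_k * math.log(2)

if __name__ == "__main__":
    for t in (290.0, 300.0):
        e_j = landauer_limit_joules(t)
        print(f"T = {t:.0f} K: {e_j * 1e21:.2f} zJ  ({e_j * EV_PER_JOULE:.4f} eV)")
```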

9.3 Memory and Computational Efficiency

With the limits of matter and energy to perform computation in mind, two useful
metrics are the memory efficiency and the computational efficiency of an object.
Our brains have evolved significantly in their memory and computational
efficiency compared with pre-biological objects, and matching that efficiency is
going to be a difficult task.

10 Technical aspects of singularity

10.1 A reliable and long-lasting power source

Solar cells are well known for their use as power sources for satellites, in
environmentalist green-energy campaigns and in pocket calculators. In robotics,
solar cells are used mainly in BEAM robots (Biology, Electronics, Aesthetics and
Mechanics). Commonly these consist of a solar cell which charges a capacitor
and a small circuit which allows the capacitor to be charged up to a set voltage
level and then discharged through the motor(s), making the robot move. For a
larger robot, solar cells can be used to charge its batteries. Such robots have to
be designed around energy efficiency, as they have little energy to spare.
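The charge-then-discharge behaviour of such a solar engine can be sketched as a
simple simulation; the capacitance, thresholds and charging current below are
assumed round numbers, not values from any particular BEAM circuit.

```python
# Toy simulation of a BEAM-style "solar engine": a capacitor charges from a
# small solar current and dumps its energy into a motor each time the voltage
# crosses a trigger threshold. All component values are assumed for illustration.

CAPACITANCE = 0.1        # farads (large storage capacitor)
CHARGE_CURRENT = 0.005   # amps from the solar cell
TRIGGER_V = 3.0          # voltage at which the circuit fires the motor
RESET_V = 1.0            # voltage left after the motor pulse
DT = 0.1                 # simulation step, seconds

def simulate(seconds: float) -> int:
    voltage, pulses, t = 0.0, 0, 0.0
    while t < seconds:
        voltage += (CHARGE_CURRENT / CAPACITANCE) * DT   # dV = (I / C) * dt
        if voltage >= TRIGGER_V:
            pulses += 1          # dump stored charge through the motor
            voltage = RESET_V
        t += DT
    return pulses

if __name__ == "__main__":
    print("motor pulses in 10 minutes:", simulate(600))
```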

10.2 Faster and more efficient chips

Carbon-based transistors have attracted significant interest due to their
versatility and high intrinsic mobility. Carrier mobility in graphitic forms of
carbon such as nanotubes and thin graphite sheets can be very high. An
alternative form of carbon is graphene, a horizontally extended single atomic
layer of graphite. Recently, graphene devices have been built on thin exfoliated
sheets of highly oriented pyrolytic graphite. Graphene has many extraordinary
properties: it is about 207 times stronger than steel by weight, conducts heat
and electricity efficiently, and is nearly transparent. Graphene holds great
promise for future electronic technology. It has excellent thin-film properties;
films as thin as 0.4 nm have been shown to have high mobility. This is in contrast
to silicon, where mobility rapidly degrades as a function of thickness at the
nanometer scale.

10.3 Memory backup

The search for new nonvolatile universal memories is propelled by the need to
push power-efficient nano-computing to the next level. As a potential choice for
the next memory technology, the recently found "missing fourth circuit element,"
the memristor, has drawn a great deal of research interest. The basic circuit
elements, resistance, capacitance, and inductance, describe the relations between
the fundamental electrical quantities: voltage, current, charge and flux.
Resistance relates voltage and current (dv = R di), capacitance relates charge and
voltage (dq = C dv), and inductance relates flux and current (dφ = L di). However,
there is a missing link between flux and charge, which Leon Chua called
memristance. In the linear case memristance is constant and behaves simply as a
resistance; if the φ-q relation is nonlinear, the element is a genuine memristor,
which can be charge-controlled. Memristance is given by

M(q) = dφ/dq

Prototyped memristor devices can be scaled down to 10 nm or below, and
memristor memories can achieve an integration density of about 1000 Gbits/cm³,
a few times higher than today's advanced flash memory technologies. In addition,
the nonvolatile nature of memristor memory makes it an attractive candidate for
the next-generation memory technology. The switching power consumption of a
memristor can be 20 times smaller than that of flash. Memristor memories are
non-volatile, so computers could start without rebooting. Moreover, the
memristor has unique characteristics that can be used for self-programming: its
value varies according to the current passing through it, and it remembers that
value even after the current has disappeared.
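To illustrate the charge-controlled relation M(q) = dφ/dq, the sketch below uses a
simple linear-drift-style model in which resistance moves between two limits as
charge flows through the device; every parameter is an illustrative assumption
rather than a measurement of a real device.

```python
# Minimal charge-controlled memristor sketch (linear-drift-style model):
# the resistance moves between R_OFF and R_ON as charge flows through the
# device and stays put when the drive is removed.

R_ON, R_OFF = 100.0, 16000.0   # ohms (assumed limits)
Q_MAX = 1e-4                   # charge (C) needed to sweep fully from OFF to ON

def memristance(q: float) -> float:
    """M(q): interpolate between R_OFF and R_ON as charge accumulates."""
    x = min(max(q / Q_MAX, 0.0), 1.0)      # normalized internal state
    return R_OFF - (R_OFF - R_ON) * x

def drive(q: float, voltage: float, duration: float, dt: float = 1e-5) -> float:
    """Integrate i = v / M(q) and dq = i dt under a constant applied voltage."""
    t = 0.0
    while t < duration:
        q += (voltage / memristance(q)) * dt
        t += dt
    return q

if __name__ == "__main__":
    q = drive(0.0, voltage=1.0, duration=0.3)      # write: push charge through
    print("after write:", round(memristance(q), 1), "ohms")
    q = drive(q, voltage=0.0, duration=1.0)        # rest: no current flows
    print("after rest :", round(memristance(q), 1), "ohms (state retained)")
```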

11 How singularity can transcend our biology

Augmentation
Many people are born disabled or become disabled through accidents. With the
help of robotics we could create prostheses that resolve problems caused by
deficiencies of the human body; we could engineer our way around such
limitations and make these people's lives easier.
Control our body
If we understood how cancer works at the molecular level, we could turn
processes off when they start to go wrong.
Backing up the human brain
Since all of our functions are controlled by the brain, we could back up our brain
every day to computers or machines that can simulate brain function. If we back
ourselves up every morning, it would not matter if we died later that day; in
other words, humans could become immortal. Below is an image of the TIME
magazine cover from February 2013 illustrating this possibility.

Figure 7: TIME magazine cover, February 2013

Leaving the human body
This is another possibility of the technological singularity. If our body becomes
unsuitable for life, for example because it has a deadly disease, one could leave
the human body and continue living in another substrate. This substrate could
be a machine, or even a human body made from one's own DNA.

12 Nearing singularity?

Fueled by creative imagination coupled with technological expertise, wearable
robotic applications like exoskeletons are moving out of the realm of science
fiction and into the real world. Military applications can turn ordinary people
into super soldiers with the ability to carry far heavier loads faster, farther and
for longer periods of time than is possible for humans alone. Exoskeletons can
also protect wearers from enemy fire and chemical attack. By increasing speed,
strength and protection, these wearable robots can help rescue workers dig
people out from under rubble after earthquakes more effectively, or carry them
from burning buildings, while protecting the rescuers from falling debris and
collapsing structures.

12.1 ExoHiker

A recent force driving exoskeleton development has been a U.S. Defense
Advanced Research Projects Agency (DARPA) program known as Exoskeleton
for Human Performance Augmentation (EHPA). One example is the ExoHiker,
which weighs 31 pounds including the power unit, batteries, and onboard
computer, and operates with virtually imperceptible noise. With lithium
polymer batteries, the device can travel 42 miles per pound of battery at a speed
of 2.5 miles per hour; with a small pack-mounted solar panel its mission time
would be unlimited. It enables wearers to carry 150 pounds without feeling the
load on their shoulders and features retractable legs that permit unfettered
driving while wearing the device.

12.2 Japan's HAL 5

A research team led by a professor in the Department of Intelligent Interaction
Technologies has developed the Robot Suit Hybrid Assistive Limb (HAL)
exoskeleton for applications in physical training support, activities of daily
living, heavy labor support for workers, and rescue support for emergency
disaster personnel. HAL can magnify a person's strength by two times or more.
The suit detects faint bio-signals on the surface of the skin when the wearer's
brain tries to move a limb; when the robot suit detects the signal it helps the
user to move, and this information is then relayed back to the brain.

12.3 MIT Exoskeleton

The Massachusetts Institute of Technology (MIT) Media Lab Biomechatronics
Group has developed an exoskeleton that can support a load of up to 80 pounds
and requires only two watts of electrical power during loaded walking. The
quasi-passive design does not use any actuators to add power at the joints;
instead it relies completely on the controlled release of energy stored in springs
during the (negative power) phases of the walking gait. The quasi-passive
elements in the exoskeleton were chosen based on an analysis of the kinetics
and kinematics of human walking.

12.4 Big Dog

Big Dog is a dynamically stable robot funded by DARPA in the hope that it will
be able to serve as a robotic pack mule to accompany soldiers in terrain too
rough for conventional vehicles. Instead of wheels or treads, Big Dog uses four
legs for movement, allowing it to move across surfaces that would defeat
wheels. The legs contain a variety of sensors, including joint position and ground
contact. Its walking pattern is controlled with four low-friction hydraulic
cylinder actuators that power the joints.

13 Fears of Technological Singularity

Extinction
This is the most feared aspect of the technological singularity: highly intelligent
machines could overthrow the human race.
Slavery
Another possibility is that humans become slaves of these machines, just as
animals are today effectively slaves of humans.
War
The First and Second World Wars were fought between humans. A future war
might instead be fought between humans and machines.
Economic collapse
Machines would replace humans in jobs, thereby creating unemployment, and
much higher rates of production could also trigger economic collapse.
Moving away from nature
When we live in a global society where everything is mass-produced by robots,
our manufactured civilization will sever the last connection to the natural world
and we will lose our last bit of respect for Mother Nature.
Matrioshka Brains
A Matrioshka brain is a hypothetical megastructure of immense computational
capacity. Based on the Dyson sphere, the concept derives its name from the
Russian matryoshka doll and is an example of a solar-powered computer on an
astronomical scale, capturing the entire energy output of a star. To form the
Matrioshka brain, all the planets of the solar system are dismantled and a vast
computational device, inhabited by uploaded or virtual minds inconceivably
more advanced and complex than us, is created. The idea is that eventually, one
way or another, all matter in the universe will be smart. All dust will be smart
dust, and all resources will be utilized to their optimum computing potential.
There will be nothing left but Matrioshka brains and/or computronium.

14 Conclusion

When greater-than-human intelligence drives progress, that progress will be
much more rapid. In fact, there seems no reason why progress itself would not
involve the creation of still more intelligent entities on a still shorter timescale.
The best analogy is with our evolutionary past. Animals can adapt to problems
and make inventions, but often no faster than natural selection can work. We
humans have the ability to internalize the world and run "what ifs" in our heads;
we can solve many problems thousands of times faster than natural selection.
Smarter-than-human intelligence, faster-than-human intelligence, and
self-improving intelligence are all interrelated. If you are smarter, it is easier to
figure out how to build fast brains or improve your own mind. In turn, being able
to reshape your own mind is not just a way of starting up a slope of recursive
self-improvement; having full access to your own source code is, in itself, a kind
of smartness that humans don't have. Self-improvement is far harder than
optimizing code; nonetheless, a mind with the ability to rewrite its own source
code can potentially make itself faster as well. And faster brains also relate to
smarter minds: speeding up a whole mind doesn't by itself make it smarter, but
adding more processing power to the cognitive processes underlying
intelligence is a different matter. Who would have believed, 100 years ago, that
the following technological advances would be possible?
- Moving pictures of events around the world
- Instantaneous wireless global communication
- Portable computing devices that can store trillions of words and execute
billions of instructions
- A human landing on the Moon and an international manned space station
Similarly, who knows whether in the next 50 years an intelligence superior to
human intelligence will come into existence, one that may even question the
very existence of humans on this planet.

15 References

1. Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology.
Penguin, 2005.
2. Vinge, Vernor. "Signs of the Singularity." IEEE Spectrum 45.6 (2008): 76-82.
3. Yampolskiy, R. V. "From Seed AI to Technological Singularity via Recursively
Self-Improving Software." arXiv preprint arXiv:1502.06512, 23 February 2015.
4. Chang, Chia-Shou, et al. "Self-assembly of microchips on substrates."
Proceedings of the 56th Electronic Components and Technology Conference,
IEEE, 2006.
