
Modern robotics


Are we nearly there yet?

Robots
Flywheels are potentially a great way to store power, but they can fail spectacularly

As with many robot sensors, artificial noses tend to do one thing well, whereas the human nose can detect many different smells.

A battery of problems
Small, lightweight power sources are a problem for many devices, but they're a particular impediment for an android that has to be mobile. The reason Honda's Asimo looks and moves like a top-heavy astronaut is that it has to carry the giant batteries that provide its power on its back. Tirelessness is an essential feature of the robots of our dreams, and yet all industrial robots are permanently linked to mains power, which is no good for our robot butlers. Lithium-ion batteries are the best we have at the moment, and while they're charging your robot is out of operation. Solar cells are light and provide a constant flow of power, but they require sunlight and produce very little power. The best option at present is the fuel cell. These combine hydrogen and oxygen to produce energy, with only water (and heat) as a by-product. Daimler's director of research and development, Thomas Weber, says his team has brought down the cost of fuel cells so that they're ready for mass production whenever the first network of hydrogen filling stations is ready. Possible future power sources range from the bizarre to the extremely bizarre. Radioisotopic thermoelectric generators provide low power over very long periods, for example, but can't be used near people as they draw their power from radioactive decay. Meanwhile, waste-digesting robots powered by microbial fuel cells have been developed by the Bristol Robotics Lab. These are

For navigating through environments, a more specialised sensor is far more effective. Something like the Kinect motion-detection system for Microsoft's Xbox games console, or the IR sensors on car parking systems, can allow the robot to understand the physical dimensions of its surroundings more easily. Kinect builds up a 3D image by firing out thousands of small infrared dots and judging depth from how the pattern appears to shift and spread. Such methods give the robot far better three-dimensional data to work with than a confusingly detailed high-resolution image. Even with a pair of high-resolution eyes to produce a 3D image, your brain does a lot of work and makes a lot of assumptions to give you a perception of depth. There's still a lot of work to be done here, and even though it's perfectly possible to build a robot today that can navigate safely around your home, teaching it the location and identity of all the objects you'd want it to work with would be a long, painstaking task, and even then it would still make errors.
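The geometry behind dot-projection depth sensing is simple triangulation: the projector and camera sit a known distance apart, so the sideways shift (disparity) of each dot encodes how far away it landed. A minimal sketch, with illustrative numbers rather than Kinect's real calibration values:

```python
# Depth from structured light: depth = focal length x baseline / disparity.
# focal_px and baseline_m below are made-up but plausible figures,
# not the actual parameters of any shipping sensor.

def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulated depth in metres for a dot shifted by disparity_px pixels."""
    if disparity_px <= 0:
        raise ValueError("dot not matched; no depth estimate")
    return focal_px * baseline_m / disparity_px

near = depth_from_disparity(43.5)    # large shift: about 1.0 m away
far = depth_from_disparity(21.75)    # half the shift: about 2.0 m away
```

The inverse relationship is why such sensors are precise close up but increasingly coarse at range: a dot a long way off barely shifts at all.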

Sentient robots are a staple of science fiction, but nothing like them has yet appeared in real life. Dan Griliopolus finds out what's taking the scientists so long

Other senses
Microphones are sensitive across a wider range of frequencies than the human ear, but again it's the interpretation of audio that's particularly difficult for computers. Anyone who's tried dictation software will know that accents, speech patterns, ambient noise and even the common cold can completely defeat it. Dragon claims that its NaturallySpeaking software can recognise 160 words a minute with 95 per cent accuracy (which still means around eight misrecognised words every minute), although it's not as good as a human at interpreting the missing words. You might think this means robots are in a better position with sound than they are when deciphering visual input, but this is only true for speech. Recognising a wide range of audio cues and reacting to them is another matter entirely, and again it would require painstaking teaching and considerable processing power. Artificial noses are still relatively primitive. The human nose contains more than 100 million individual receptors, whereas a typical artificial nose might have fewer than 10. Researchers at the University of Warwick recently found that an artificial nose's sensitivity can be improved by coating it in a layer of artificial mucus, so if our android were to malfunction, it might end up with a runny nose.

Illustration: Chris Robson

When we think of a robot, most of us probably imagine something like a mechanical man. He'd be a tireless, flexible humanoid assistant who can do everything a man can do, only better: in short, an android. Such an android should be entirely subservient to our commands, and yet also be able to show initiative and ingenuity. It's a tricky mix. Essentially, we want to create the ultimate slave, a device that can free humankind from the drudgery of day-to-day chores. Much of this vision comes from I, Robot, Isaac Asimov's classic short story collection, which was published over 60 years ago. So are we nearing the creation of a practical android? Or will the robots of the near future be much like they are today: large, immobile, single-task devices that live next to conveyor-belt lines? In this feature we'll look at the key capabilities of the human body, such as our senses, our ability to manipulate objects, to move around and even to make decisions, that any successful android would have to master. We'll take each in turn and see how close science has come to replicating them.

One vision
Starting at the top, the human head is packed with sensors, making it easily the most complex part of our body. It has audio inputs and outputs, taste, smell and, of course, vision. The maximum acuity of the human eye is estimated at between 100 and 600 megapixels. It's a complex system: our visual apparatus builds up an image of our surroundings by making rapid sweeps of the area. This visual data is then stored in our memory, which is very handy, as the eyes see in high detail only in the centre of our vision. Much of the rest is filled in by our brains. Digital video cameras can capture only a tiny fraction of this detail.

The CMUcam 3 provides both a camera and image processor in a single package for around 150

The simplest artificial muscles work like a spring, such as Oscar Pistorius's lower legs

Computer Shopper, Issue 282, August 2011

Image: Elvar Pálsson

The human eye's huge advantage over a robot eye is more than its extreme sensitivity and immense resolution, though. Our eyes don't suffer from wide-angle distortion (despite having around a 155° viewing angle), vignetting, chromatic aberration or colour distortion. Despite this, digital cameras have some advantages over the eye, such as a variable zoom and the ability to keep an entire scene in focus at maximum aperture. They adjust rapidly to low-light scenes, although they don't have as great a dynamic range as the human eye: think of the amount of data a camera loses in shadows or highlights compared with what you can see. What's more, robot eyes can detect a much wider range of light than the human eye, including infrared, ultraviolet and X-rays. Even after weighing up the pros and cons, the human eye will always come out on top, as it's backed up by the brain. A major stumbling block for robots isn't seeing objects but identifying them. It would be hard for a robot to find and pick up a vase on command, say, as vases come in a bewildering range of shapes and sizes. Even if the robot had learnt to recognise all the vases in your home, they may be partly obscured by other objects, which would confuse its vision. At present, the best way to recreate human vision is to use a range of separate sensors rather than a single eye. A camera and processor is still a good start, though. The open-source CMUcam project, for example, can be used to recognise and track predefined objects. It's inexpensive and low-power, but then its resolution is just 352x288 at 26fps.
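The kind of tracking a CMUcam-class device performs is essentially colour-blob detection: mark every pixel that falls inside a colour window, then report the blob's centre and bounding box. A toy sketch of the idea, using a hand-built 4x6 "frame" rather than real camera output:

```python
# Colour-blob tracking in miniature. A pixel is a hit if each of its
# RGB channels falls inside the [lo, hi] window; the tracker reports
# the centroid and bounding box of the hits.

def track_blob(frame, lo, hi):
    """Return ((cx, cy), (xmin, ymin, xmax, ymax)) for in-range pixels, or None."""
    hits = [(x, y)
            for y, row in enumerate(frame)
            for x, px in enumerate(row)
            if all(l <= c <= h for c, l, h in zip(px, lo, hi))]
    if not hits:
        return None
    xs, ys = [x for x, _ in hits], [y for _, y in hits]
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    return centroid, (min(xs), min(ys), max(xs), max(ys))

R, B = (200, 30, 30), (20, 20, 180)           # "red object" vs blue background
frame = [[B, B, B, B, B, B],
         [B, B, R, R, B, B],
         [B, B, R, R, B, B],
         [B, B, B, B, B, B]]
centroid, box = track_blob(frame, lo=(150, 0, 0), hi=(255, 80, 80))
# centroid == (2.5, 1.5), box == (2, 1, 3, 2)
```

Steering a robot towards the blob is then just a matter of comparing the centroid with the frame's centre, which is why this approach runs happily on very modest hardware.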



also unlikely to be welcome near people, for less lethal (but no less rational) reasons. Perhaps our best hope is to store rotational energy in vacuum-sealed flywheels. These can operate like batteries and store tremendous amounts of energy. However, they can have strange gyroscopic effects, they still need to be either charged or swapped, and they can be subject to a catastrophic but exciting-sounding 'flywheel explosion'. Either way, those in the know predict that battery technology is about to race forwards, spurred on by the demands of electric cars. Ian Robertson of BMW recently said that we'll see batteries advance more in five years than they have in the last 100.
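To get a feel for why a flywheel failure is so dramatic, it helps to work out how much energy one holds: the stored energy is E = ½Iω², with I = ½mr² for a solid disc. The 100kg, 0.5m, 30,000rpm figures below are purely illustrative, not taken from any real product:

```python
import math

# Rotational energy stored in a solid-disc flywheel.
# E = 1/2 * I * omega^2, where I = 1/2 * m * r^2 for a solid disc.

def flywheel_energy_j(mass_kg, radius_m, rpm):
    inertia = 0.5 * mass_kg * radius_m ** 2      # solid-disc moment of inertia
    omega = rpm * 2 * math.pi / 60               # spin speed in rad/s
    return 0.5 * inertia * omega ** 2

e = flywheel_energy_j(100, 0.5, 30_000)
# ~6.2e7 J, about 17 kWh, all of which is released at once in a burst failure
print(f"{e / 3.6e6:.1f} kWh")
```

Because energy grows with the square of spin speed, doubling the rpm quadruples the stored energy, which is exactly why high-speed flywheels are run inside armoured, vacuum-sealed containment.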

See you later, actuator


Actuators are the mechanical equivalent of muscles. There are huge developments going on in this field, as designers realise that the power-supply limitations won't go away and that they need much more efficient actuators than standard DC motors. The simplest actuators are hydraulic or pneumatic, using compressed

A dog that walks itself on the beach? Someone's missing the point, maybe?

liquids or gases, or elastic, using built-in springs to balance forces, like disabled sprinter Oscar Pistorius's replacement blade legs. There are also miniature internal combustion engines, but we'll discount these as they require large quantities of dangerous and bulky fuel. Air muscles, which have been in use since the 1950s, work in opposed pairs, like our own muscles, and consist of pneumatic bladders filled with pressurised air. They're very light, but they require heavy air compressors to be attached and aren't very accurate. Another option, Muscle Wire, is made from memory metals, which are easily deformed and revert to their original shape when heated. Meanwhile, electro-active polymers are typically elastic materials made from ceramics that change in shape and size when subjected to an electrical field. Sadly, most robotic muscles, such as those of the HDT MK-1 robot arm (see our Muscling in box on page 117), use DC electric motors, and this is still the most likely option for a modern android. The degree of potential in this field is impressive. As with power sources, a lot of future actuators sound like something straight out of science fiction. When you hear about transparent elastic carbon nanotube actuators that are 8mm wide and can produce the same power as a human biceps, visions of super-thin, super-powerful, super-flexible robots seem realistic. Nanotubes represent the best hope for the future. They can operate at extremes of temperature (from -196°C to 1,538°C) and consist of transparent aerogel built from aligned carbon nanotubes. Their only problem is that they require high voltages to activate.
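What separates a DC motor joint from a crude open-loop mechanism is the feedback loop wrapped around it. A minimal proportional-derivative position controller on a toy first-order motor model gives the flavour; all the gains and motor constants here are invented for illustration:

```python
# A PD controller drives a simulated motor joint towards a target angle:
# torque is proportional to the remaining error, minus a damping term
# on velocity so the joint doesn't oscillate forever.

def simulate_pd(target, kp=8.0, kd=1.5, dt=0.01, steps=400):
    """Euler-integrate a unit-inertia joint for steps*dt seconds; return final angle."""
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        torque = kp * (target - pos) - kd * vel   # PD control law
        vel += (torque - 0.5 * vel) * dt          # crude motor with viscous friction
        pos += vel * dt
    return pos

final = simulate_pd(1.0)   # settles close to the 1.0 rad target after 4 s
```

Raising the proportional gain makes the joint snappier but more prone to overshoot; the derivative term is what tames that, which is the same trade-off every servo designer juggles.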

Walking the walk


In terms of energy efficiency, wheels are the best way to get around, so most mobile robots are wheel- or track-based. However, our ideal android has to be able to walk on two legs. Honda's Asimo shows that it's possible to build a bipedal robot, and there are more naturalistic walking robots such as Anybots' Dexter, which constantly adjusts its pace to stop itself falling. While Asimo can handle some stairways, neither of these robots can deal with rough terrain. Boston Dynamics' quadruped BigDog shows that you can program something that moves completely naturally, can handle rough terrain and can even right itself if its legs are kicked away. However, a quadruped isn't ideal for working in many homes and offices, with their tight spaces and high objects that need to be reached. Once your robot is up and running, it has to work out where to go. It's worth remembering that modern aeroplanes are essentially semi-autonomous flying robots that use a variety of sensors to orientate themselves. A bipedal robot would use the visual and audio sensors we discussed earlier for short-range movement, but for longer distances it needs radar and GPS, as well as navigation software to bring it all together. Google's driverless cars have already travelled across America and Asia, proving that this is possible at a prototype level. On the smooth, well-mapped streets of the West,

it's unlikely to develop these abilities in the near future. Taking itself to the service centre is about as much as we can hope for.

Skin deep
Not only does a robot's outer skin need to be durable, but every part of its structure must be able to withstand abuse. Consider how fragile electronics are and the level of protection required to ruggedise, say, a laptop, and then imagine doing that to an entire android. Humans are fragile too, but we have the advantage that our senses can detect threats, such as heat, and our fast reflexes enable us to avoid serious damage. Building such failsafes into an android is certainly possible, but you'd need yet more processing power to monitor and react to all those additional sensory inputs. One way to negate the problem of damage is to have multiple robots. A robot swarm consists of hundreds of smaller robots that act as one; these communicate wirelessly and can fill gaps in the swarm if a robot fails. For our android, however, a better model might be DASH, a super-light robot that's built to replicate the resilience and mobility of an insect. We've seen it survive a 28m fall from a building intact. This leaves us with the problem of incorporating the muscle power and battery life needed for a robotic butler into a device that weighs so little.

DASH is incredibly resistant to damage, thanks to its lightweight design

it's not inconceivable that robots could travel easily in the near future.
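The longer-range route-finding that driverless cars layer on top of GPS and radar is usually framed as graph search over a map. A minimal A* search on a toy obstacle grid shows the core idea; the grid and coordinates are invented for illustration:

```python
import heapq

# A* over a grid of 0 (free) and 1 (blocked) cells. The Manhattan-distance
# heuristic steers the search towards the goal without ever overestimating
# the remaining cost.

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, seen = [(h(start), 0, start, [start])], {start}
    while frontier:
        _, cost, pos, path = heapq.heappop(frontier)
        if pos == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (pos[0] + dx, pos[1] + dy)
            x, y = nxt
            if 0 <= y < len(grid) and 0 <= x < len(grid[0]) \
                    and grid[y][x] == 0 and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None   # no route exists

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
route = astar(grid, (0, 0), (0, 2))   # detours right, around the wall
```

A real navigation stack swaps the toy grid for an occupancy map built from sensor data and re-plans continuously, but the search at its heart is much the same.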

Reproduction line
Humans are capable of making new humans, and to some extent we can tend to our own injuries. However, with general-purpose manipulators rather than hands, our android would have a hard time replacing worn-out parts, let alone making a copy of itself. That said, robot reproduction has been demonstrated in principle. A rapid prototyping machine called RepRap has been designed specifically to be able to make components for itself or for building another RepRap machine (see our feature on page 118), while a modular robot arm has been designed that can assemble copies of itself, but only from pre-produced pieces. It isn't possible for our android to repair and reproduce itself at the moment, though, and

Handroid
The human hand is a fantastic evolutionary achievement that incorporates subtle touch, strength, dexterity and flexibility. There are

Faster than the AI: developing chatbots


He may be better known for the range of adventure games he's produced with Telltale Games, but programmer Bruce Wilcox has an interesting sideline: he creates chatbots. Wilcox won the annual $3,000 Loebner Prize in 2010 with Suzette, a chatbot so kookily human that she managed to convince one of the four judges that she was human, which is not an easy trick. We caught up with Suzette at http://ai.bluemars.com/chat and also talked to her creator. Before joining Telltale Games, Wilcox was hired as a consultant by MMO developer Avatar Reality. The company wanted users' avatars in the game to continue to interact with other players while their owners were offline. He proposed designing chatbots that could learn from what users said to them and to other users, so that a chatbot could simulate a player while he was offline. His chatbot, Suzette, went on to win the Loebner Prize.

Were you surprised that Suzette won by fooling a judge?
Absolutely. I thought she had a good chance of winning against the other programs on points. But with 25 minutes to compare human against computer, I didn't expect a judge to fail.

Why is it that chatbots can't recall earlier parts of the conversation?
Some chatbots can memorise things said earlier, but it's a lot of work, because mostly chatbots don't understand meaning. So if you say you have a dog and later ask if you have a pet, this isn't easy to answer. Parroting back 'you have a dog' if you ask 'do I have a dog?' is a lot easier.

Do you think chatbots will ever advance beyond a gimmick?
Of course. Chatbots serve as customer service agents online for many companies, answering questions about the company or its products. That's a real use, and chatbots are a fine interface to a query system. Chatbots can also be just fun to talk to, like a game. Games, after all, are merely gimmicks in some sense.
Bruce Wilcox is looking to incorporate his work with Suzette into upcoming titles from Telltale Games, the creators of the recent Back to the Future games

What do you think it will take for something to pass the Turing Test?
That would involve a huge amount of work. Humans know tens of millions of facts and can infer more, bringing to bear an understanding of cause and effect, common sense, simulation knowledge of typical human lives, a passable understanding of the meaning of the input, and so on.

What is the state of AI outside the sphere of chatbot programming?
It depends on what you mean by AI. Game AI is generally not mainstream AI. Game AI seeks to create the illusion of a competent opponent by whatever means (often using smoke and mirrors). Mainstream AI actually tries to create techniques whereby problems are solved as if by a human, and covers a wide range of things. Every time they succeed, the technique is swallowed up by mainstream computing and is no longer considered AI. Speech recognition is a good example.

Did you use any of your chatbot expertise in your games?
Telltale Games hired me because it is interested in natural language interfaces. I'm currently researching a game similar to Scribblenauts (a Nintendo DS game where you can create in-game objects by entering words). Scribblenauts accepts nouns and adjectives; our game would accept nouns and verbs and act out a story in some fashion.
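Wilcox's dog-and-pet example captures the gap neatly: parroting back a stored sentence is trivial, while answering "do I have a pet?" needs a category fact the user never stated. A toy bot with a tiny hand-written is-a table makes the difference concrete (everything here, including the vocabulary, is invented for illustration):

```python
# A minimal fact-memory chatbot. Direct questions are answered by string
# matching against remembered statements; category questions need one
# inference step through the hand-coded IS_A table.

IS_A = {"dog": "pet", "cat": "pet", "ford": "car"}   # tiny toy ontology

class ToyBot:
    def __init__(self):
        self.has = set()

    def hear(self, sentence):
        words = sentence.lower().rstrip(".!?").split()
        if len(words) >= 4 and words[:3] == ["i", "have", "a"]:
            self.has.add(words[3])                   # remember the raw fact
            return "Nice."
        if len(words) >= 5 and words[:4] == ["do", "i", "have", "a"]:
            thing = words[4]
            if thing in self.has:                    # parroting: direct match
                return f"Yes, you have a {thing}."
            for owned in self.has:                   # one step of is-a inference
                if IS_A.get(owned) == thing:
                    return f"Yes, your {owned} is a {thing}."
            return "Not that you've told me."
        return "Tell me more."

bot = ToyBot()
bot.hear("I have a dog")
bot.hear("Do I have a dog?")   # easy: direct parroting
bot.hear("Do I have a pet?")   # needs the is-a fact
```

The hard part, of course, is that a real system can't hand-write the is-a table: it has to acquire millions of such facts and know which one is relevant, which is exactly the "huge amount of work" Wilcox describes.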

A sample chat with Suzette soon goes off the rails: she deflects "How old are you?", offers lists of topics ("booze, mock combat, shopping, taxation, chocolate"), volunteers food trivia about jellied blood and buried fish heads, and admits, "You can't trust what I say. I suffer from Multiple Personality Disorder."

A swarm robot is formed from simple robots in large numbers that result in emergent behaviour, just like insect swarms

Spouting facts is one tactic



robotic manipulators designed to replicate human motion, and newer models (such as the Shadow Hand, left) incorporate feedback from tactile sensors. However, it's easier to program for non-human hand shapes, which can then carry out automated routines extremely quickly. Using Mindstorms, Lego's DIY robot kit, you can build a robot that's able to complete a Rubik's Cube in under 12 seconds. Touch is actually the easiest of the senses to replicate. SynTouch, a technology currently in development, uses a conductive fluid pouch that encloses an array of electrodes; the impedance of these changes rapidly when the pouch is pressurised. It also has thermistors, which sense heat, and a pressure sensor to detect vibrations. All this is mounted on a rigid substructure, ruggedised to survive daily use, and looks curiously like a human finger. Of all the parts of the android, this component seems the most ready, although its level of accuracy is still a concern.

All in the mind


The human mind relies on massive parallel processing, with billions of neurons acting in concert to make a ridiculously complex and powerful processor. While it's possible to recreate this raw parallel computing power, bringing it all together to make an artificial intelligence (AI) is a tough challenge. The classic test of AI is the Turing Test, which was introduced by Bletchley Park code breaker and computer pioneer Alan Turing in 1950. In the test, a human judge has to guess whether responses to his written questions are coming from a machine or a real human being. Every year, the Loebner Prize is contested in this way to find the best
Shadow Hand is a robot hand that responds to feedback and touch

conversational AI. The main prize of $100,000 has never been claimed, as no-one has come close to producing an AI that's convincing in the long run (see our interview with the creator of the most recent prize winner, Suzette, on page 115). IBM's Watson is a good example of the current state of the art in AI. Having taken on the world's best chess players in 1997 with Deep Blue, the company wanted a more subtle and complex challenge, so it built a question-answering machine designed specifically for the US general knowledge quiz show Jeopardy! Watson doesn't have any general intelligence, but it possesses a huge data processing rate (500GB/s) and an enormous database, and it managed to beat the all-time best human contestants with ease. It competes on as level a playing field as possible, with no internet connection, and receives the questions as text at the same time as its opponents. It's startling to see it take in and respond to complex questions in a fraction of a second. IBM hopes to use the technology to assist in other research-heavy areas, helping to answer questions in subjects such as medicine and law. AI is so complex that practitioners in different sub-disciplines have little idea about the other elements: creativity researchers don't tend to work with perception researchers, for example. Few people are working towards the ultimate aim of general intelligence, but we've spoken to one man who is (see Machines that create below). AI is progressing in several fields, such as perception, natural language processing, and some forms of learning and deduction, but general intelligence is a long way off. Cramming something as powerful as Watson into a mobile unit would currently be impossible. With cloud computing, however, a robot could operate autonomously with far less built-in processing power while the more complicated computing went on in the cloud. This looks to be

IBM's Watson defeated the best players at the popular US quiz show Jeopardy!

the only way we'll get considerable processing power into our android for now, although the obvious problem is that it would have to rely on a wireless network or mobile connection, which could severely hamper its reaction speeds.

At what price?
To build our ideal android, we'd need a compact power source, a very powerful artificial brain and a wide range of sensors and general-purpose manipulators, all combined in a mobile and resilient chassis. Many of the physical aspects of our android aren't beyond our current capabilities, although its range would probably be limited and it would need a constant wireless connection to run from a remote computer. The problem is that today's android wouldn't be able to perform tasks that it hadn't been programmed to do. We simply haven't created an artificial intelligence that can respond instantly to the vast range of human whims.

Answering questions is one thing, and retrieving objects is possible, but cooking and serving a wide range of meals from a standard kitchen, for example, involves too many variables. A key factor here is that human labour is far cheaper. Science fiction such as Star Trek imagines a future where mankind has been freed from the drudgery of everyday chores, but anyone who had the funds to purchase and run a state-of-the-art android today could get far superior home help from a human being. Financial inequalities in our own society, let alone the world, give the android butler some stiff competition when it comes to pricing. This has made it hard for serious android development to progress. Our android would need to be superior to a human in some respects to justify its existence, and at present it simply isn't. We're close to matching human capabilities in some areas, but we need to go beyond this to justify the limitations any android will also have. Such limitations stem from replacing the human brain with a computing system that can take commands and work out the best way to perform them. The brain is a powerful tool, and its ability to learn and extrapolate solutions from past events is key to making a practical android. Our best hope of such advances lies with experimental work such as Dr Thaler's, where androids learn from their own mistakes. Using virtual environments, they could be given the time and space to do this before they were unleashed in our homes. Until then, if you want home help, we recommend hiring a human being. Pick the right one, and they can be surprisingly capable and reliable.

Machines that create: developments in artificial intelligence


Dr Stephen Thaler, evolutionary AI researcher and CEO of Imagination Engines, isn't afraid of making bold claims. According to Dr Thaler, his Creativity Machines (CMs), each a pair of neural networks, are not only the future of AI but are fundamentally thinking creatures. He's demonstrated this in a way that's easy to understand, with robots that have learned to walk by trial and error. They do this by remembering and improving on the best of previous random strategies. He's also involved in virtual design, where robot designs are refined in a virtual environment. This can speed up the AI learning process immensely, as designs can be tested without the need to build expensive prototypes. The techniques use evolutionary principles such as trial and error, mutation and survival of the fittest. We may have a future where designers advance android design without ever having to build an android at all.

Apart from your Creativity Machines, what is the state of the art in AI?
I'm not really seeing much out there except hype. Large companies and universities are catering to the public misconception that size, speed and complexity will somehow lead to human-level machine intelligence, or the kind of AI anticipated by science fiction. That's why the press typically jumps on the announcement that some monstrous corporation is about to embark on building a human brain simulation. It's also why so much attention is given to projects such as Watson. Put a HAL-like voice behind an impressive database search and 90 per cent of the population mistakenly thinks: wow, with just a few more computers, the thing will become conscious. Certainly, these are not the AI systems anticipated by science fiction. They would require constant repair and updates, and would lack the intuition and creativity characteristic of the brain. So, slightly side-stepping your question, the Creativity Machine is not only the state of the art in AI, it is the fundamental building block of future machine intelligence.

Pull out the schematic for any soon-to-be-created contemplative and creative AI system and you will be able to circle the two primary ingredients of the Creativity Machine paradigm: at least one noise-irritated (random element) neural network generating potential ideas, [and] another network [that]

It's not something I believe. It's something I know. They are truly dreaming. Whether we're talking about the brain or CMs composed of artificial neural networks, memories are stored in the same way: through the connection weights that grow between neurons. Add a little noise and they relive such memories. Add a little more noise and their memories degenerate into false memories (in other words, confabulations) of things and events they have never directly experienced. This progression of activation patterns (thoughts) is called dreaming, and that's exactly what we see when we watch a brain dreaming via MRI: an evolution of neural activation patterns that seem to come from nowhere. Under the hood, the brain is housecleaning, interrogating some of its nets with noise while others take note of which memories or ideas are worthy of reinforcement. Once again, this process is the Creativity Machine paradigm at work. The admission that synthetic neurons can dream is one of those ultimate scientific truths that breaks away from the subjective, comforting and societally reinforced delusion that they don't.
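The generate-and-critique loop Dr Thaler describes can be reduced to a bare skeleton: one component perturbs the best known strategy with noise to propose candidates, and another scores each candidate and keeps improvements, much like his robots remembering the best of previous random strategies. In this sketch the "critic" is a stand-in fitness function rather than a neural network, and the target gait parameters are entirely made up:

```python
import random

# Noise-driven generate-and-critique search. The generator perturbs the
# best strategy so far with Gaussian noise; the critic scores candidates
# and only improvements are kept.

def critic(strategy):
    """Toy objective: closer to the (0.2, 0.5, 0.8) 'ideal gait' is better."""
    target = (0.2, 0.5, 0.8)
    return -sum((s - t) ** 2 for s, t in zip(strategy, target))

def creativity_loop(steps=2000, noise=0.1, seed=42):
    rng = random.Random(seed)                 # seeded, so runs are repeatable
    best = [rng.random() for _ in range(3)]   # random initial "idea"
    for _ in range(steps):
        candidate = [b + rng.gauss(0, noise) for b in best]   # noisy variation
        if critic(candidate) > critic(best):                  # critique and keep
            best = candidate
    return best

gait = creativity_loop()   # converges near the target parameters
```

Real Creativity Machines put neural networks on both sides of this loop, but the division of labour, a noisy generator disciplined by a critic, is the part this sketch is meant to show.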

Muscling in: Dr Tom Van Zoren, creator of the HDT MK-1 robotic arm
Dr Tom Van Zoren's HDT MK-1 robot arm is based on technology developed under the DARPA Revolutionizing Prosthetics program. It's completely portable, uses sophisticated actuators and can lift 23kg.

Why is this arm so impressive?
We have developed a dexterous, lightweight, compact, high-load, mobile and modular robotic manipulator arm, which can do many of the tasks and use many of the tools that a human arm can, and do so remotely.

Does the arm have any AI?
Right now it is purely user-controlled through tele-operation. In the future we will have some autonomous behaviour built in.

What do you see it being used for?
Initial applications will be military explosive ordnance disposal and law enforcement bomb disposal. After that, any application in which remote projection of human dexterity is useful will benefit from it.

Is it possible to incorporate touch-feedback technology in artificial limbs?
Yes. The DARPA program from which HDT's arm technology was spun off has developed an arm that incorporates touch feedback.

What great leaps forward in technology will we need to reach the classical android?
Autonomous behaviour is the biggest challenge.

Dr Thaler is designing AI machines that not only think, but teach themselves to think

critiques and guides the stream of consciousness. This fundamental architecture will then be able to query other non-creative, non-contemplative but fast computational systems, such as Watson.

You've spoken in rather human terms about your CMs dreaming; is this just for marketing, or is it something you believe?

The HDT MK-1 is a mobile arm that can lift 50lb (23kg) and use many human tools

