
NEURAL NETWORKS

ARTIFICIAL HAND

BY K. ASMATH BASHA
C. ABDUL HAKEEM COLLEGE OF ENGG & TECH
E-MAIL: s_asmath@yahoo.co.in

ABSTRACT
The loss of hand function following an injury or amputation of the arm can severely affect a person's quality of life. Artificial hands are used to serve such handicapped persons. Ideally, any artificial hand should be capable of emulating the natural hand in terms of grasping and gripping objects of varying geometries and physical properties. However, despite many years of research, the most commonly used prosthetic hand is the claw hook. Recent technological advances and innovations have led to the development of sophisticated artificial hands, which serve only a limited number of users. More importantly, most of the artificial hands developed so far have failed to address the problems of achieving versatile grasp and grip. Our goal is to design and develop an artificial hand that provides versatile grasp, high grip and high artificial sensibility. The best remedy to bring a variety of actions to the prosthetic hand is through NEURAL NETWORKS. Here we have used hydraulic pumps to provide strength to the prosthetic hand. The sensors provided in the hand sense its mechanical activity. As a muscle contracts, the ANN produces a specified voltage, which gives an exclusive command to the prosthetic hand for the specified action.

INTRODUCTION:
The overall objective is to develop a new strategy for motor control of functional hand prostheses based on electrical signals generated from multiple muscle electrodes or microchips implanted in the peripheral or central nervous system. The use of an Artificial Neural Network (ANN) is essential to fulfil this purpose. The purpose is also to develop systems for artificial sensibility to be applied to such hand prostheses and to patients with loss of sensory nerve function. We can register nerve signals via the chip after electrical stimulation of the nerve roots. In future experiments the influence of chip design on regeneration success will be determined. We have also demonstrated that central nervous axons are capable of growing into a chip if attracted by pieces of peripheral nerve. An ANN has been used to recognize complex muscle signals from multiple surface electrodes in order to associate specific signal patterns with specific movements of a virtual hand. These experiments indicate that it is possible to create artificial sensibility in a prosthesis or a hand with sensory dysfunction.

NEURAL NETWORKS:

Neural networks are composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. ANNs, like people, learn by example. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is true of ANNs as well. Neural networks do not perform miracles, but if used sensibly they can produce some amazing results.

NEURAL NETWORKS FOR PROCESSING:

Why use neural networks? A trained neural network can be thought of as an "expert" in the category of information it has been given to analyze. This expert can then be used to provide projections given new situations of interest and answer "what if" questions. Neural networks process information in a similar way to the human brain. Neural networks learn by example; they cannot be programmed to perform a specific task. The examples must be selected carefully, otherwise useful time is wasted or, even worse, the network might function incorrectly.

History of Neural Networks


As their name implies, neural networks take a cue from the human brain by emulating its structure. Work on neural networks began in the 1940s with McCulloch and Pitts and was followed by the advent of Frank Rosenblatt's Perceptron. The neuron is the basic structural unit of a neural network. In the brain, a neuron receives electrical impulses from numerous sources. If there are enough agonist signals, the neuron fires and triggers all of its outputs. A neural network neuron functions similarly. A neuron receives any number of inputs that possess weights based on their importance. Just as in a real neuron, the weighted inputs are summed and, based on a threshold function, an output is sent to every neuron downstream. A barrage of positive inputs will produce a positive output and vice versa. The original Perceptron received two inputs and gave a single output. Although this system worked well for simple problems, Minsky demonstrated in 1969 that non-linear classifications, such as exclusive-or (XOR) logic, were impossible for it. It wasn't until the 1980s that training algorithms for multi-layered networks were introduced to solve this problem, restoring faith in neural networks.

A multi-layered network consists of numerous neurons arranged into levels. Each level is interconnected with the one above and below it. The first layer receives external inputs and is aptly named the input layer. The top layer provides the classification solution and is called the output layer. Sandwiched between the input and output layers are any number of hidden layers. It is believed that a three-layered network can accurately classify any non-linear function. Multi-layered networks commonly use more sophisticated threshold functions such as the sigmoid function. This is advantageous because the sigmoid function's output is bounded, which prevents any individual output from becoming too large and overpowering the network.
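To make the layered structure concrete, here is a minimal sketch of a forward pass through a small three-layer network with a sigmoid threshold function; the layer sizes, weights and thresholds are hypothetical illustrative values and are not taken from the design described in this paper.

import math

def sigmoid(x):
    # Standard logistic function; squashes any input into the range (0, 1),
    # so no single unit's output can grow without bound.
    return 1.0 / (1.0 + math.exp(-x))

def layer_output(inputs, weights, thresholds):
    # Each unit sums its weighted inputs, subtracts its threshold,
    # and passes the result through the sigmoid.
    outputs = []
    for unit_weights, threshold in zip(weights, thresholds):
        total = sum(w * x for w, x in zip(unit_weights, inputs)) - threshold
        outputs.append(sigmoid(total))
    return outputs

# Illustrative three-layer network: 3 inputs -> 2 hidden units -> 1 output unit.
hidden_w = [[0.5, -0.2, 0.8], [0.3, 0.7, -0.6]]   # hypothetical weights
hidden_t = [0.1, -0.1]                            # hypothetical thresholds
output_w = [[1.2, -0.9]]
output_t = [0.2]

x = [0.0, 1.0, 0.5]                               # external inputs (input layer)
hidden = layer_output(x, hidden_w, hidden_t)      # hidden layer activities
y = layer_output(hidden, output_w, output_t)      # output layer activity
print(hidden, y)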

EMG ELECTRODES:
EMG stands for electromyography. EMG electrodes are used to sense the electric field generated at the muscles by the charge separation in the electrolytes and the resulting movement of ions. Using silver-chloride electrodes on the skin, coupled with a conducting gel, we can sense the voltage at that location.

INSTRUMENTATION AMPLIFIER:
The magnitude of the voltage is related to how much a subcutaneous muscle contracts. The problem that remains is that the electrode produces a very small signal, at best a few millivolts. The instrumentation amplifier is necessary to provide the high input impedance, high common-mode rejection ratio and gain needed to extract the biopotential signal produced by the contracting muscles.

ANALOG TO DIGITAL CONVERTER:
The signals from the instrumentation amplifier are analog. In order to provide digital inputs for the neural network, and for accurate control of the artificial hand, we need to convert the amplifier output into digital form through an ADC. In this project we use a successive-approximation ADC.

SERVOMOTORS AND HYDRAULIC ACTUATORS:
A servomotor is an electromechanical device in which an electrical input determines the position of the armature of the motor. Here, small servomotors are used to apply force to the oil-filled hydraulic actuators for the specified action.
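As a rough software illustration of how a successive-approximation ADC digitizes the amplified EMG voltage, the following Python sketch performs a bit-by-bit binary search against a reference voltage; the 8-bit resolution and 5 V reference are assumed values, not specifications from this design.

def sar_adc(v_in, v_ref=5.0, bits=8):
    # Successive approximation: test one bit at a time, from the most
    # significant bit down, keeping a bit only if the trial code's
    # equivalent voltage does not exceed the input voltage.
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)
        v_trial = v_ref * trial / (1 << bits)   # DAC equivalent of the trial code
        if v_in >= v_trial:
            code = trial
    return code

# Example: a 1.8 V envelope sample becomes an 8-bit code in the range 0..255.
print(sar_adc(1.8))   # 92 for a 5 V reference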

OPERATION:
Three surface electrodes sense the muscle contraction voltages. Two surface electrodes are mounted close together above the muscle; the third electrode is the ground reference. The instrumentation amplifier is constructed with a high CMRR. It was chosen because it can extract the very small signal difference between the two signal electrodes (electrodes 1 and 2) while significantly attenuating common-mode noise and other signals common to both electrodes. However, a motion artifact can still occur due to relative motion between the electrodes and the tissue.

Relative motion can produce a voltage sufficient to saturate the second-stage amplifier. The frequencies of the motion artifact are usually at the low end of the bandwidth of the EMG signal. Therefore, the 2 Hz high-pass filter on the input of the second amplifier stage can be used to reduce these artifacts. At this point the EMG signal observed on an oscilloscope would look like the following figure, where the large-amplitude bursts are associated with muscle contractions.

This is a rather high-frequency signal with components between a few Hz and 250 Hz. To make this signal more useful for control purposes, we need to extract the envelope of the signal between 0 V and its maximum positive amplitude. We can accomplish this with a rectifier and a low-pass filter. A normal silicon diode would not be satisfactory to rectify the signal since it requires a 0.7 V turn-on voltage, which is larger than the amplitude of the input signal. Because the signal is very small, we must use a precision rectifier circuit that more closely approximates the action of an ideal diode. The precision-rectified EMG and the resulting low-pass-filtered signal are shown in the corresponding figure.
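The conditioning chain just described (a 2 Hz high-pass filter, full-wave precision rectification, then low-pass filtering to recover the envelope) can be prototyped offline in software. The following Python/SciPy sketch is only an illustration; the 1 kHz sampling rate and the 5 Hz envelope cutoff are assumed here and are not taken from the hardware design.

import numpy as np
from scipy import signal

fs = 1000.0   # assumed sampling rate in Hz

def emg_envelope(raw, fs=fs):
    # 2 Hz high-pass to suppress motion artifacts (low-frequency drift)
    b_hp, a_hp = signal.butter(2, 2.0 / (fs / 2), btype='highpass')
    filtered = signal.filtfilt(b_hp, a_hp, raw)

    # Full-wave rectification (software analogue of a precision rectifier)
    rectified = np.abs(filtered)

    # Low-pass filter the rectified signal to extract the envelope;
    # a 5 Hz cutoff is an illustrative choice.
    b_lp, a_lp = signal.butter(2, 5.0 / (fs / 2), btype='lowpass')
    return signal.filtfilt(b_lp, a_lp, rectified)

# Synthetic test: a burst of 100 Hz activity riding on slow drift
t = np.arange(0, 2.0, 1.0 / fs)
burst = (t > 0.5) & (t < 1.0)
raw = 0.2 * np.sin(2 * np.pi * 0.5 * t) + burst * 0.05 * np.sin(2 * np.pi * 100 * t)
print(emg_envelope(raw).max())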

After rectification the analog signal is sampled and quantized by the ADC0804 chip.

Mechanical construction:
A single actuator element consists of a feeding channel for the pressurized air or liquid and a chamber which is connected to the two movable parts of a joint. During inflation of the actuator element by air or liquid, the volume of the element expands and the height of the element perpendicular to the flexible wall of the chamber increases. This change of distance between the opposite lateral surfaces is called the expansion behaviour. During this process the volume energy is converted into deformation energy.

Joint Structure:
By using the single actuator elements described above, different joint structures can be realized. By combining many fluidic actuator elements, structures with very complex flexibility can be created, making many different and unusual movements possible. For the effective design of such complex structures it is necessary to derive mathematical models for the expansion behaviour of the actuator elements. Such models enable the deformation properties and the possible force behaviour of a potential structure to be determined.
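As a simple numerical illustration of the energy conversion mentioned above, the sketch below integrates the pressure-volume work W = ∫ p dV for an assumed linear inflation and treats it (losses neglected) as the stored deformation energy; the pressure and volume figures are purely illustrative and do not model the actual actuator.

import numpy as np

# Assumed linear inflation: chamber pressure rises from 0 to 200 kPa (gauge)
# while the chamber volume expands by 2 cm^3.
volume = np.linspace(0.0, 2.0e-6, 200)      # expansion in m^3
pressure = np.linspace(0.0, 200e3, 200)     # pressure in Pa

# Volume energy delivered by the fluid, W = integral of p dV (trapezoidal rule),
# which, neglecting losses, is stored as deformation energy in the chamber wall.
work = float(np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(volume)))
print(f"Stored deformation energy: {work:.3f} J")   # about 0.2 J for these values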

Mechanism and Design:
A conventional powered prosthetic hand usually consists of an energy source, one or two actuators, a simple control unit and the mechanical construction. All components except for the myoelectric sensors and the energy source have to be in the hand itself, because very little space is left in the socket. So we integrated a total of 18 miniaturized flexible fluidic actuators into the mechanical construction of the fingers and the wrist of the hand. Our aim was to mimic as closely as possible the geometry of an adult male human hand. The new hand can be divided into 2 (+1 optional) sections.

Fingers: They contain the flexible fluidic actuators that produce flexion of the fingers, together with flex sensors.

Wrist: It contains flexible fluidic actuators that bend the wrist. Extension of the joints is done passively by elastic spring elements.

Self-adaptability: The flexible fingers of the new hand are able to wrap around objects of different sizes and shapes. Because of the elastic properties of the actuators, the contact force is spread over a greater contact area. Additionally, the surface of the fingers is soft, and the silicone-rubber glove that covers the artificial hand increases the friction coefficient. The result is that a reduced grip force is needed to hold an object. As a side effect of the softness and elasticity of the hand, it feels more natural when touched than a hard robotic hand, and the risk of injury in direct interaction with other humans is minimized.

Artificial neuron:

An artificial neuron is a device with many inputs and one output. The neuron has two modes of operation: the training mode and the using mode. In the training mode, the neuron can be trained to fire (or not) for particular input patterns. In the using mode, when a taught input pattern is detected at the input, its associated output becomes the current output. If the input pattern does not belong to the taught list of input patterns, the firing rule is used to determine whether to fire or not. The inputs are weighted; the effect that each input has on decision making depends on the weight of that particular input. The weight of an input is a number which, when multiplied with the input, gives the weighted input. These weighted inputs are then added together and, if they exceed a pre-set threshold value, the neuron fires. In any other case the neuron does not fire.
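A minimal sketch of the weighted-sum-and-threshold behaviour described above; the weights, threshold and input patterns are hypothetical.

def neuron_fires(inputs, weights, threshold):
    # Multiply each input by its weight, sum the results, and fire only if
    # the total exceeds the pre-set threshold.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return weighted_sum > threshold

# Hypothetical example: three inputs with different importances.
print(neuron_fires([1, 0, 1], [0.7, 0.2, 0.5], threshold=1.0))   # True  (1.2 > 1.0)
print(neuron_fires([0, 1, 0], [0.7, 0.2, 0.5], threshold=1.0))   # False (0.2 <= 1.0)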

Complicated neuron:
In mathematical terms, the neuron fires if and only if

X1W1 + X2W2 + X3W3 + ... > T

The addition of input weights and of the threshold makes this neuron a very flexible and powerful one. The complicated neuron has the ability to adapt to a particular situation by changing its weights and/or threshold. Various algorithms exist that cause the neuron to 'adapt'; the most used ones are the Delta rule and back-propagation of error. A simple firing rule can be implemented by using the Hamming distance technique. The rule goes as follows: take a collection of training patterns for a node, some of which cause it to fire (the 1-taught set of patterns) and others which prevent it from doing so (the 0-taught set). Patterns not in the collection cause the node to fire if, on comparison, they have more input elements in common with the 'nearest' pattern in the 1-taught set than with the 'nearest' pattern in the 0-taught set. If there is a tie, the pattern remains in the undefined state.
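The Hamming-distance firing rule can be sketched as follows; the taught sets are hypothetical binary patterns used only to illustrate the comparison.

def hamming_distance(a, b):
    # Number of positions at which two binary patterns differ.
    return sum(x != y for x, y in zip(a, b))

def firing_rule(pattern, taught_1, taught_0):
    # Fire (1) if the pattern is closer to the nearest 1-taught pattern,
    # don't fire (0) if closer to the nearest 0-taught pattern,
    # and return None (undefined) on a tie.
    d1 = min(hamming_distance(pattern, p) for p in taught_1)
    d0 = min(hamming_distance(pattern, p) for p in taught_0)
    if d1 < d0:
        return 1
    if d0 < d1:
        return 0
    return None

# Hypothetical taught sets for a 4-input node.
taught_1 = [(1, 1, 1, 0), (1, 1, 0, 0)]
taught_0 = [(0, 0, 0, 1), (0, 0, 1, 1)]
print(firing_rule((1, 0, 1, 0), taught_1, taught_0))   # 1: closer to the 1-taught set
print(firing_rule((0, 1, 0, 1), taught_1, taught_0))   # 0: closer to the 0-taught set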

NEURAL NETWORKS IN ACTION

The Back-Propagation Algorithm:

Units are connected to one another. Connections correspond to the edges of the underlying directed graph. There is a real number associated with each connection, which is called the weight of the connection. We denote by Wij the weight of the connection from unit ui to unit uj. It is then convenient to represent the pattern of connectivity in the network by a weight matrix W whose elements are the weights Wij. Two types of connection are usually distinguished: excitatory and inhibitory. A positive weight represents an excitatory connection whereas a negative weight represents an inhibitory connection. The pattern of connectivity characterizes the architecture of the network.

A unit in the output layer determines its activity by following a two-step procedure. First, it computes the total weighted input xj, using the formula

xj = Σi yi Wij

Where yi is the activity level of the ith unit in the previous layer and Wij is the weight of the connection between the ith and the jth unit. Next, the unit calculates its activity yj using some function of the total weighted input. Typically we use the sigmoid function

yj = 1 / (1 + e^(-xj))

Once the activities of all output units have been determined, the network computes the error E, which is defined by the expression

E = 1/2 Σj (yj - dj)^2

Where yj is the activity level of the jth unit in the top layer and dj is the desired output of the jth unit.

The back-propagation algorithm consists of four steps:

1. Compute how fast the error changes as the activity of an output unit is changed. This error derivative (EA) is the difference between the actual and the desired activity.

2. Compute how fast the error changes as the total input received by an output unit is changed. This quantity (EI) is the answer from step 1 multiplied by the rate at which the output of a unit changes as its total input is changed.

3. Compute how fast the error changes as a weight on the connection into an output unit is changed. This quantity (EW) is the answer from step 2 multiplied by the activity level of the unit from which the connection emanates.

4. Compute how fast the error changes as the activity of a unit in the previous layer is changed. This crucial step allows back propagation to be applied to multilayer networks. When the activity of a unit in the previous layer changes, it affects the activities of all the output units to which it is connected. So to compute the overall effect on the error, we add together all these separate effects on output units. But each effect is simple to calculate. It is the answer in step 2 multiplied by the weight on the connection to that output unit.

By using steps 2 and 4, we can convert the EAs of one layer of units into EAs for the previous layer. This procedure can be repeated to get the EAs for as many previous layers as desired. Once we know the EA of a unit, we can use steps 2 and 3 to compute the EWs on its incoming connections.
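To trace the four steps for a single training example, here is a minimal Python sketch of back-propagation through one hidden layer, using the sigmoid activation and the squared-error E defined earlier; the network size, weights, inputs and targets are all hypothetical.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical network: 3 inputs -> 2 hidden units -> 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(2, 3))      # weights into the hidden layer
W_output = rng.normal(size=(2, 2))      # weights into the output layer

x = np.array([0.5, -1.0, 0.25])         # input pattern
d = np.array([1.0, 0.0])                # desired outputs

# Forward pass: total weighted input xj, then activity yj = sigmoid(xj).
hidden_out = sigmoid(W_hidden @ x)
y = sigmoid(W_output @ hidden_out)

# Error E = 1/2 * sum_j (yj - dj)^2
E = 0.5 * np.sum((y - d) ** 2)

# Step 1: EA, how fast E changes with each output activity.
EA_out = y - d
# Step 2: EI, EA times the rate of change of the sigmoid output, y * (1 - y).
EI_out = EA_out * y * (1.0 - y)
# Step 3: EW, EI times the activity of the unit the connection comes from.
EW_output = np.outer(EI_out, hidden_out)
# Step 4: EA for the previous (hidden) layer, summing the effects on all outputs.
EA_hidden = W_output.T @ EI_out
# Repeat steps 2 and 3 for the hidden layer's incoming connections.
EI_hidden = EA_hidden * hidden_out * (1.0 - hidden_out)
EW_hidden = np.outer(EI_hidden, x)

print(E, EW_output, EW_hidden, sep="\n")

The EW values would then be used to adjust the weights in the direction that reduces E, typically by subtracting a small multiple (the learning rate) of each EW from the corresponding weight.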

ADVANTAGES:
Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience.

Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.

Real-Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

OVERVIEW OF NEURAL NETWORKS: [figure]

Conclusion:
The prosthetic hand presented here is able to grasp different objects, and its movements appear natural because of the flexible actuators. These very compact and lightweight actuators have been integrated completely into the fingers of the artificial hand. The palm of the hand remains empty and provides enough space for the micro pump. The self-adapting properties of the fingers while grasping different objects enable the development of a low-mass prosthetic hand with high functionality.

BIBLIOGRAPHY:
1. Aleksander, I. and Morton, H. An Introduction to Neural Computing, 2nd edition.
2. Croall, I.F. and Mason, J.P. Industrial Applications of Neural Networks (Esprit research reports).
3. DARPA Neural Network Study (October 1987 - February 1989). MIT Lincoln Laboratory.
4. Davalo, E. and Naim, P. Neural Networks.
5. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986). Learning Internal Representations by Error Propagation.
6. Rojas, R. (1996). Neural Networks: A Systematic Introduction. Springer, Berlin.
