BY
ADEBAYO OLAMIDE
SUBMITTED TO
ENGINEERING DEPARTMENT
EKITI STATE.
ENGINEERING.
JUNE 2016.
ABSTRACT
A brain-computer interface (BCI) is a system that allows direct communication between a computer
and the human brain, bypassing the body’s normal neuromuscular pathways. The signals recorded
by the system are processed and classified to recognize the intent of the user. Though the main
application for BCIs is in rehabilitation of disabled patients, they are increasingly being used in other
application scenarios as well. One such application is the control of wheelchair movement.
In this project, an effective BCI application is developed to help the physically challenged lead a
more independent life using their brain signals, acquired through non-invasive techniques.
DEDICATION
This report is dedicated to the almighty God, for his grace, mercies, protection and divine strength
from above. I also dedicate it to my family, most especially my father and mother for their moral and
financial support.
ACKNOWLEDGEMENTS
Firstly, I am forever grateful to God for his mercy, grace and guidance to finish this project. I
would like to thank my supervisors, Dr. T. Adegboola and Engr. S.O. Akinola, for their valuable
help and advice throughout this project. They encouraged me and helped me stay focused and
motivated.
I am also thankful to the lecturers of the engineering college, who gave up their valuable time to
provide useful ideas, requirements, feedback, and suggestions.
CERTIFICATION
This is to certify that this report was prepared and presented by ADEBAYO, OLAMIDE of the Department
of Engineering, Afe Babalola University, Ado-Ekiti, Nigeria, during the 2015/2016 academic session
under my supervision.
_______________________ _______________________
_______________________ _______________________
_______________________ _______________________
_______________________ _______________________
_______________________ _______________________
TABLE OF CONTENTS
ABSTRACT ..........................................................................................................................................ii
DEDICATION .....................................................................................................................................iii
ACKNOWLEDGEMENTS ................................................................................................................. iv
CERTIFICATION ................................................................................................................................. v
CHAPTER ONE.................................................................................................................................... 1
INTRODUCTION ................................................................................................................................. 1
2.2.1 Infra-Low.............................................................................................................................. 6
2.2.6 Gamma Waves .................................................................................................................... 8
2.3 Neurofeedback............................................................................................................................. 8
METHODOLOGY .............................................................................................................................. 22
3.3.1 LabVIEW ...................................................................................................................... 25
REFERENCES .................................................................................................................................... 44
APPENDIX A ..................................................................................................................................... 47
APPENDIX B...................................................................................................................................... 51
LIST OF TABLES
LIST OF FIGURES
Figure 3.4 LabVIEW front panel used to interact with Mindwave headset………………………....26
Figure 3.8 Sketch showing exact code for feature extraction and translation…….…………..……..33
Figure 3.9 Sketch showing code for controlling Arduino pins based on attention levels……...……..34
CHAPTER ONE
INTRODUCTION
1.1 Background
Until recently, the ability to control digital devices through human thought belonged to the realm
of science fiction. Advances in technology, however, have brought a whole new reality: today, humans
can use brainwaves to interact with devices or even affect their environments. The emerging field of
BCI technology may allow individuals unable to speak and/or use their limbs to once again
communicate or operate assistive devices.
A BCI is a computer-based system that acquires brain signals, analyzes them, and translates them
into commands that are relayed to an output device to carry out a desired action. A mind-controlled
interface thus provides a direct communication pathway between the user's brain and an external
device, and has the potential to augment or even repair human cognitive or sensory-motor functions.
In order to acquire these brain signals, a basic understanding of these signals and the anatomy of the
brain is highly essential. Generally when the brain neurons fire, certain signals are emitted called
brainwaves. These brainwaves are produced by synchronized electrical pulses from masses of
neurons communicating with each other. They are also divided into bandwidths to describe their
functions. Various kinds of brainwaves include alpha waves (8-12 Hz), beta waves (12-38 Hz),
gamma waves (38-42 Hz), theta waves (3-8 Hz), and delta waves (<3 Hz) (Lang & Bradley, 2007).
Alpha brainwaves are dominant during quietly flowing thoughts and in some meditative states.
Alpha is the resting state for the brain. Alpha waves aid overall mental coordination, calmness,
alertness, mind/body integration and learning. Beta brainwaves dominate our normal waking state of
consciousness when attention is directed towards cognitive tasks and the outside world. Beta is a
'fast' activity, present when we are alert, attentive, engaged in problem solving, judgement and
decision making, and engaged in focused mental activity (Symphonic Mind LTD, 2009).
As noted above, a BCI acquires brain signals, analyzes them, and translates them into commands
relayed to an output device. Thus, BCIs do not
use the brain's normal output pathways of peripheral nerves and muscles. This definition strictly
limits the term BCI to systems that measure and use signals produced by the central nervous system
(CNS). Furthermore, an electroencephalogram (EEG) machine alone is not a BCI because it only
records brain signals but does not generate an output that acts on the user's environment.
BCIs do not read minds in the sense of extracting information from unsuspecting or unwilling users
but enable users to act on the world by using brain signals rather than muscles. The user and the BCI
work together. The user, often after a period of training, generates brain signals that encode
intention, and the BCI, also after training, decodes the signals and translates them into commands to
an output device that accomplishes the user's intention.
In 2006, a microelectrode array was implanted in the primary motor cortex of a young man with
complete tetraplegia after a C3-C4 cervical injury. Using the signals obtained from this electrode
array, a BCI system enabled the patient to open simulated e-mail, operate a television, open and
close a prosthetic hand, and perform rudimentary actions with a robotic arm (Hochberg, et al., 2006).
This actually reflects a rich promise of BCIs. They may be used eventually to routinely replace or
restore useful function for people severely disabled by neuromuscular disorders; they might also
improve rehabilitation for people with strokes, head trauma, and other disorders. At the same time,
this exciting future can come about only if BCI researchers and developers engage with and solve
problems in critical areas such as signal acquisition, validation, and reliability.
1.2 Problem Statement
BCI is still an active area of research and many limitations remain. In Nigeria today, only a
handful of people have knowledge of BCI. This project looks at how BCIs can be trained to decode
brain signals from people with limited mobility and translate them into control of assistive
devices; in particular, from people who cannot move their muscles or do not have full control over
their bodies, but are able to modulate signals produced by the central nervous system (CNS). This
project will hopefully devise a promising way for paralyzed patients to control devices like
computers or wheelchairs: by wearing a headset and undergoing training, users learn to control a
device such as a wheelchair by imagining that they are moving a part of their body, or by triggering
commands with their attention levels.
1.3 Aim and Objectives
1.3.1 Aim
The aim of this project is to design and implement a brain-computer interface that helps restore
useful function to people severely disabled by neuromuscular disorders, via directional movements of
a robot.
1.3.2 Objectives
1.4 Scope of Project
This project consists of a BCI system which measures mainly alpha and beta brainwaves and sends them
to a processing system. This processing system translates the EEG data into output commands to move
the robot. The robot will be programmed to move backward and forward. Most sensors designed to read
delta, gamma, and theta brainwaves are highly expensive; thus, only alpha and beta waves are used in
this project.
1.5 Methodology
The Neurosky Mindwave BCI (NeuroSky, 2016) sends brainwave and debugging information
wirelessly through Bluetooth communication to the computer system. LabVIEW communicates with
the BCI and displays the signals on the computer screen. LabVIEW is a visual programming
environment that seamlessly performs signal processing and permits easy development of complex
features along with error handling capabilities. The processed signal is then sent as an output
command that drives the robot.
Another option is for the Mindwave BCI to send brainwave information wirelessly through Bluetooth to
an Arduino board; the signal is processed in the Arduino IDE, and the Arduino then sends the
processed signal out to other digital objects.
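As an illustration of this second path, the sketch below parses the headset's serial stream and pulls out the 0-100 "attention" value. The packet layout (two 0xAA sync bytes, a payload-length byte, the payload, and an inverted 8-bit checksum) and the 0x04 attention code follow NeuroSky's published ThinkGear protocol; treat this as an illustrative reading of that specification, not vendor code, and note that multi-byte payload rows (codes 0x80 and above, e.g. raw EEG) are skipped here for brevity.

```python
# Minimal ThinkGear packet parser: extract the eSense "attention" value.
# Assumes single-byte payload rows only (codes below 0x80).

ATTENTION_CODE = 0x04  # payload row carrying attention (0-100)

def parse_attention(packet: bytes):
    """Return the attention value found in one ThinkGear packet, else None."""
    if len(packet) < 4 or packet[0] != 0xAA or packet[1] != 0xAA:
        return None  # missing sync bytes
    plen = packet[2]
    if len(packet) < plen + 4:  # 3 header bytes + payload + checksum
        return None
    payload = packet[3:3 + plen]
    if (~sum(payload)) & 0xFF != packet[3 + plen]:
        return None  # failed checksum: discard corrupted packet
    i = 0
    while i + 1 < len(payload):
        code, value = payload[i], payload[i + 1]
        if code == ATTENTION_CODE:
            return value
        i += 2  # codes below 0x80 carry exactly one value byte
    return None
```

The same walk can be written in C on the Arduino side; Python keeps the sketch testable on a PC reading the headset's serial port.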
CHAPTER TWO
LITERATURE REVIEW
2.1 Introduction
For many years it has been speculated that electroencephalographic activity or other
electrophysiological measures of brain function might provide a new non-muscular pathway for
sending messages and commands to the external world – a brain–computer interface (BCI). Over the
past 15 years, productive BCI research programs have arisen. This was encouraged by new
understanding of brain function, the advent of powerful low-cost computer equipment and by
growing recognition of the needs and potentials of people with disabilities. Research has
concentrated on developing new communication and control technology for those with severe
neuromuscular disorders.
BCIs today determine the intent of the user from a variety of different electrophysiological signals.
These signals include slow cortical potentials, P300 potentials, and mu or beta rhythms recorded
from the scalp, and cortical neuronal activity recorded by implanted electrodes. They are translated
in real-time into commands that operate a computer display or other device. Successful operation
requires that the user encode commands in these signals and that the BCI derive the commands from
the signals. Thus, the user and the BCI system need to adapt to each other both initially and
continuously in order to maintain stable performance.
Current BCIs have maximum information transfer rates up to 10–25 bits/min (Allison, et al., 2007).
This limited capacity can be valuable for people whose severe disabilities prevent them from using
conventional augmentative communication methods. At the same time, many possible applications
of BCI technology, such as neuroprosthesis control, may require higher information transfer rates.
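Figures such as the 10-25 bits/min quoted above come from Wolpaw's standard information-transfer-rate formula, which depends on the number of possible targets N, the selection accuracy P, and the selection rate. It can be computed directly; the example values in the test (4 targets, 90% accuracy, 10 selections per minute) are illustrative, not taken from a specific study.

```python
import math

# Wolpaw information transfer rate: bits per trial as a function of the
# number of targets (n) and the probability (p) of a correct selection.

def bits_per_trial(n_targets: int, accuracy: float) -> float:
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)  # perfect accuracy: full log2(N) bits per trial
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def bits_per_minute(n_targets, accuracy, trials_per_minute):
    return bits_per_trial(n_targets, accuracy) * trials_per_minute
```

For instance, 4 targets at 90% accuracy yield about 1.37 bits per trial, so 10 selections per minute give roughly 13.7 bits/min, in line with the range cited above.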
Future progress will depend on recognition that BCI research and development is an
interdisciplinary problem, involving neurobiology, psychology, engineering, mathematics, and
computer science (Shih, et al., 2012). Development of BCI technology will benefit from greater
collaboration among these disciplines. BCI technology could provide an important new communication
and control option for those with motor disabilities and might also give those without disabilities
a supplementary control channel or a control channel useful in special circumstances.
2.2 Brainwaves
At the root of all our thoughts, emotions and behaviors is the communication between neurons
within our brains. Brainwaves are produced by synchronized electrical pulses from masses of
neurons communicating with each other. Brainwaves are detected using sensors placed on the scalp.
They are divided into bandwidths to describe their functions but are best thought of as a continuous
spectrum of consciousness; from slow, loud and functional - to fast, subtle, and complex.
It is a handy analogy to think of Brainwaves as musical notes - the low frequency waves are like a
deeply penetrating drum beat, while the higher frequency brainwaves are more like a subtle high
pitched flute. Like a symphony, the higher and lower frequencies link and cohere with each other
through harmonics. Our brainwaves change according to what we’re doing and feeling. When
slower brainwaves are dominant we can feel tired, slow, sluggish, or dreamy. The higher frequencies
are dominant when we feel wired or hyper-alert. The descriptions that follow are only broad
descriptions - in practice things are far more complex, and brainwaves reflect different aspects
when they occur in different locations in the brain.
Brainwave speed is measured in Hertz (cycles per second) and they are divided into bands
delineating slow, moderate, and fast waves (Symphonic Mind LTD, 2009).
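The band divisions above amount to a small lookup table. The sketch below classifies a frequency into its conventional band; the cut-offs are illustrative, since boundaries overlap somewhat across the literature.

```python
# Conventional EEG bands as (name, lower Hz inclusive, upper Hz exclusive).
# Edges follow the ranges quoted in this chapter; other sources shift them
# by a hertz or two.
EEG_BANDS = [
    ("delta", 0.5, 3.0),
    ("theta", 3.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 38.0),
    ("gamma", 38.0, 100.0),
]

def classify_frequency(freq_hz: float) -> str:
    """Return the name of the band containing freq_hz."""
    for name, lo, hi in EEG_BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"
```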
2.2.1 Infra-Low
Infra-Low brainwaves are thought to be the basic cortical rhythms that underlie our higher brain
functions. Very little is known about infra-low brainwaves. Their slow nature makes them difficult
to detect and accurately measure, so few studies have been done. They appear to take a major role in
brain timing and network function.
2.2.2 Delta Waves
Delta brainwaves are slow, loud brainwaves (low frequency and deeply penetrating, like a drum
beat). They are generated in deepest meditation and dreamless sleep. Delta waves suspend external
awareness and are the source of empathy. Healing and regeneration are stimulated in this state,
which is why deep restorative sleep is so essential to the healing process.
2.2.3 Theta Waves
Theta brainwaves occur most often in sleep but are also dominant in deep meditation. Theta acts as our
gateway to learning and memory. In theta, our senses are withdrawn from the external world and
focused on signals originating from within. It is that twilight state which we normally only
experience fleetingly as we wake or drift off to sleep. In theta we are in a dream; vivid imagery,
intuition and information beyond our normal conscious awareness. It is where we hold our 'stuff',
our fears, troubled history, and nightmares.
2.2.4 Alpha Waves
Alpha brainwaves are dominant during quietly flowing thoughts, and in some meditative states.
Alpha is ‘the power of now’, being here, in the present. Alpha is the resting state for the brain. Alpha
waves aid overall mental coordination, calmness, alertness, mind/body integration and learning.
2.2.5 Beta Waves
Beta brainwaves dominate our normal waking state of consciousness when attention is directed
towards cognitive tasks and the outside world. Beta is a ‘fast’ activity, present when we are alert,
attentive, engaged in problem solving, judgment, decision making, and engaged in focused mental
activity. Beta brainwaves are further divided into three bands: Lo-Beta (Beta1, 12-15 Hz) can be
thought of as a 'fast idle', or musing; Beta (Beta2, 15-22 Hz) is high engagement, or actively
figuring something out; Hi-Beta (Beta3, 22-38 Hz) is highly complex thought, integrating new experiences,
high anxiety, or excitement. Continual high frequency processing is not a very efficient way to run
the brain, as it takes a tremendous amount of energy.
2.2.6 Gamma Waves
Gamma brainwaves are the fastest of brain waves (high frequency, like a flute), and relate to
simultaneous processing of information from different brain areas. Gamma passes information rapidly,
and as the most subtle of the brainwave frequencies, the mind has to be quiet to access it. Gamma
was dismissed as 'spare brain noise' until researchers discovered it was highly active in states of
universal love, altruism, and the 'higher virtues'.
Gamma is above the frequency of neuronal firing, so how it is generated remains a mystery. It is
speculated that Gamma rhythms modulate perception and consciousness, and that a greater presence
of Gamma relates to expanded consciousness and spiritual emergence (Symphonic Mind LTD,
2009).
2.3 Neurofeedback
Neurofeedback is a way to quantify and train brain activity; it is brainwave biofeedback. The basic
principles of how neurofeedback works are deceptively simple. Communication between groups of
cells in the brain generates thoughts, sensations, actions and emotions. This activity is detectable
in your brainwaves.
During a neurofeedback session, sensors detect your brainwaves to see your brain in action. A
computer compares your brain activity to targets or goals for you to reach. Sounds and images tell
you immediately when your brain reaches your goal and when not - when you are activating or
suppressing the target area of the brain. Through this simple method, you learn how to quiet
brainwaves associated with low performance and increase brainwaves associated with optimal brain
function. Much like physical exercises strengthen and develop specific muscles, the more your brain
is exercised into reaching a new more comfortable, more efficient position, the better and stronger it
gets at it (neuroplasticity). Like any new skill, it simply requires feedback and practice
(Symphonic Mind LTD, 2009).

Electroencephalography (EEG) is an electrophysiological monitoring method used to record the
electrical activity of the brain. It is typically noninvasive, with the electrodes placed along the scalp, although
invasive electrodes are sometimes used in specific applications. EEG measures voltage fluctuations
resulting from ionic current within the neurons of the brain. In clinical contexts, EEG refers to the
recording of the brain's spontaneous electrical activity over a period of time as recorded from
multiple electrodes placed on the scalp. Diagnostic applications generally focus on the spectral
content of EEG, that is, the type of neural oscillations (popularly called "brain waves") that can be
observed in EEG signals. EEG is most often used to diagnose epilepsy, which causes abnormalities
in EEG readings. It is also used to diagnose sleep disorders, coma, encephalopathies, and brain
death. EEG used to be a first-line method of diagnosis for tumors, stroke and other focal brain
disorders, but this use has decreased with the advent of high-resolution anatomical imaging
techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Despite
limited spatial resolution, EEG continues to be a valuable tool for research and diagnosis,
especially when millisecond-range temporal resolution is required.
Because of its high time resolution, noninvasiveness, ease of acquisition, and cost effectiveness,
EEG is the dominant signal-acquisition method in BCI research. It is the most studied non-invasive
interface, mainly due to its
fine temporal resolution, ease of use, portability and low set-up cost. The technology is somewhat
susceptible to noise however. EEG sensors are used to monitor physiological processes reflected in
brain waves which are then translated into control signals for external devices or machines. EEG
sensors have been incorporated into gaming systems that enable a player to control what happens
onscreen with a headset, EEG-controlled exoskeletons translate users’ brain signals into movements,
and implanted electrodes enable patients to control bionic limbs (Niedermeyer & Lopes da Silva).
Mental activities and their corresponding EEG correlates are termed electrophysiological sources of
control (Nunez & Srinivasan, 1981). The main sources are listed below:
2.4.1 Sensorimotor Rhythms
Mu and beta rhythms (8-12 Hz and 13-30 Hz respectively) originate in the primary sensorimotor
cortex area of the brain and are more prominent when a person is not engaged in processing
sensorimotor inputs or producing motor outputs. Preparation for movement, or motor imagery, results
in a desynchronization in the mu and beta bands which is termed 'event-related desynchronization'
(ERD). It begins in the contralateral rolandic region of the brain about 2 seconds prior to the
onset of a movement and becomes bilateral before execution of the movement. After the movement, the
power in these bands increases again ('event-related synchronization').
The imagination of motor movements, in particular limb movements is used in several BCIs which
identify the type of motor imagery (right/left hand/foot-movement) using a classification algorithm
that takes as features the power in the mu and beta bands at electrodes located over the primary
sensorimotor cortex. For convenience, we refer hereafter to BCIs relying on motor imagery as motor
imagery BCIs.
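To make the classification step concrete, the toy sketch below applies a nearest-centroid classifier to (C3 mu power, C4 mu power) feature vectors. Real motor imagery BCIs use stronger classifiers (e.g. LDA or SVM), and the feature values here are invented purely for illustration: mu power drops over the contralateral hemisphere during imagery, so left- and right-hand trials separate in this 2-D feature space.

```python
# Toy nearest-centroid classifier over band-power feature vectors.
# Training values are fabricated to mimic contralateral mu-power ERD.

def nearest_centroid(train, sample):
    """train: {label: [feature vectors]}; return the label of the closest
    class centroid to `sample` (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    centroids = {
        label: [sum(col) / len(vecs) for col in zip(*vecs)]
        for label, vecs in train.items()
    }
    return min(centroids, key=lambda lbl: dist2(centroids[lbl], sample))

# (C3 mu power, C4 mu power): the hemisphere opposite the imagined hand
# shows suppressed (lower) mu power.
training = {
    "left_hand":  [(4.1, 9.8), (4.3, 9.5)],
    "right_hand": [(9.7, 4.2), (9.4, 4.5)],
}
```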
2.4.2 P300
Infrequent, significant auditory or visual stimuli, when interspersed with frequent stimuli,
typically evoke a positive peak at about 300 milliseconds after stimulus presentation in the EEG
over the parietal cortex. This peak is called the P300. A P300 appears in the user's EEG when her/his selected
choice is highlighted. Detecting the choice for which a P300 was elicited allows the BCI to know the
user’s selected choice and execute the corresponding action. The P300 detection algorithm takes as
features the signal samples (after band-pass filtering and subsampling) at parietal electrode sites.
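The selection logic just described can be sketched as follows: average the epochs recorded after each highlighted choice, and pick the choice whose average shows the largest positive deflection around 300 ms. The sampling rate and scoring window below are illustrative assumptions, not values from a specific system.

```python
# Toy P300 chooser: epoch averaging plus a mean-amplitude score in an
# assumed 250-450 ms post-stimulus window at an assumed 250 Hz rate.

FS = 250  # Hz, assumed sampling rate
P300_WINDOW = (int(0.25 * FS), int(0.45 * FS))  # sample indices ~250-450 ms

def average_epochs(epochs):
    """Element-wise mean of equal-length epochs (lists of samples)."""
    return [sum(col) / len(epochs) for col in zip(*epochs)]

def select_choice(epochs_by_choice):
    """epochs_by_choice: {choice: [epoch, ...]}; return the choice whose
    averaged epoch has the largest mean amplitude in the P300 window."""
    lo, hi = P300_WINDOW
    def score(choice):
        window = average_epochs(epochs_by_choice[choice])[lo:hi]
        return sum(window) / len(window)
    return max(epochs_by_choice, key=score)
```

Averaging is the key step: the P300 is small relative to background EEG, and averaging repeated epochs suppresses activity not time-locked to the stimulus.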
2.4.3 Steady-State Visual Evoked Potentials
When subjects are presented with repetitive visual stimuli at a rate greater than 5 Hz, a continuous
oscillatory response at the stimulation frequency and/or harmonics is elicited in the visual cortex.
This response is termed steady-state visual evoked potential (SSVEP). SSVEP based BCIs operate
by presenting the user with a set of repetitive visual stimuli at different frequencies which are
associated with actions. To select a desired action, the user needs to focus her/his attention on the
corresponding stimulus. The SSVEP corresponding to the focused stimulus is more prominent and
can be automatically detected by the BCI. Detection of SSVEPs in current BCIs relies on the
application of spatial filters (across electrodes) and temporal filters (Friman, et al., 2007).
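A minimal frequency-detection stage for an SSVEP system can be sketched with the Goertzel algorithm, which measures signal power at a single frequency. The stimulus frequencies, action mapping, and sampling rate below are illustrative assumptions; a practical system would add the spatial and temporal filtering cited above (Friman, et al., 2007).

```python
import math

# Toy SSVEP detector: score each candidate stimulus frequency with the
# Goertzel algorithm and return the action mapped to the strongest one.

FS = 250  # Hz, assumed sampling rate
STIMULI = {7.5: "left", 10.0: "forward", 12.0: "right"}  # Hz -> action

def goertzel_power(signal, freq_hz, fs=FS):
    """Power of `signal` at the DFT bin nearest freq_hz (Goertzel)."""
    n = len(signal)
    k = round(n * freq_hz / fs)      # nearest DFT bin
    coeff = 2 * math.cos(2 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in signal:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

def detect_action(signal):
    """Return the action whose stimulus frequency dominates the signal."""
    best = max(STIMULI, key=lambda f: goertzel_power(signal, f))
    return STIMULI[best]
```

Goertzel is a natural fit here because only a handful of known frequencies need testing, so a full FFT is unnecessary.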
2.4.4 Slow Cortical Potentials
Slow cortical potentials (SCPs) are non-movement potential changes voluntarily generated by the
subject. They reflect
changes in cortical polarization of the EEG lasting from 300ms up to several seconds. Operation of
SCP based BCIs is often of binary nature and relies on the subject's ability to voluntarily shift
cortical polarization in the required direction.
2.4.5 Non-Movement Mental Tasks
BCI systems based on non-movement mental tasks assume that different mental tasks (e.g., solving
a multiplication problem, imagining a 3D object, and mental counting) lead to distinct, task-specific
distributions of EEG frequency patterns over the scalp (Friman, et al., 2007).
2.5 Brain-Computer Interfaces
The advance of technology has brought a new reality: Today, humans can use the electrical signals
from brain activity to interact with, influence, or change their environments. The emerging field of
BCI technology may allow individuals unable to speak and/or use their limbs to once again
communicate or operate assistive devices for walking and manipulating objects. Brain-computer
interface research is an area of high public awareness. Videos on YouTube as well as news reports in
the lay media indicate intense curiosity and interest in a field that hopefully one day soon will
dramatically improve the lives of many disabled persons affected by a number of different disease
processes.
A brain-computer interface (BCI), sometimes called a mind-machine interface (MMI) or direct neural
interface, is a direct communication pathway between an enhanced or wired brain and an external
device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human
cognitive or sensory-motor functions (Shih, et al., 2012).
Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a
grant from the National Science Foundation, followed by a contract from DARPA (Vidal, 1973).
The papers published after this research also mark the first appearance of the expression brain–
computer interface in scientific literature (Vidal, 1977). The field of BCI research and development
has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing,
sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from
implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector
channels (Levine, et al., 2000). Following years of animal experimentation, the first neuroprosthetic
devices implanted in humans appeared in the mid-1990s.
A BCI is a computer-based system that acquires brain signals, analyzes them, and translates them
into commands that are relayed to an output device to carry out a desired action. Thus, BCIs do not
use the brain's normal output pathways of peripheral nerves and muscles. This definition strictly
limits the term BCI to systems that measure and use signals produced by the central nervous system
(CNS); a voice-activated or muscle-activated communication system, for example, is not a BCI.
Furthermore, an electroencephalogram (EEG) machine alone is not a BCI because it only records
brain signals but does not generate an output that acts on the user's environment. Brain-computer
interfaces do not read minds in the sense of extracting information from unsuspecting or unwilling
users but enable users to act on the world by using brain signals rather than muscles. The user and
the BCI work together. The user, often after a period of training, generates brain signals that encode
intention, and the BCI, also after training, decodes the signals and translates them into commands to
an output device that accomplishes the user's intention (Wolpaw & Wolpaw, 2012).
A question was posed by Vidal in 1977: "Can observable electrical brain signals be put to work as
carriers of information in man-computer communication, or for the purpose of controlling external
apparatus such as prostheses?" (Vidal, 1977). His Brain-Computer Interface Project was an early attempt to
evaluate the feasibility of using neuronal signals in a person-computer dialogue that enabled
computers to be a prosthetic extension of the brain. Although work with monkeys in the late 1960s
showed that signals from single cortical neurons can be used to control a meter needle (Fetz, 1969),
initial progress in human BCI research was slow, limited by computer capabilities and our own
knowledge of brain physiology. In 1980, Elbert et al. (Elbert, et al., 1980) demonstrated that persons given
biofeedback sessions of slow cortical potentials in EEG activity can change those potentials to
control the vertical movements of a rocket image traveling across a television screen.
In 1988, Farwell and Donchin (Farwell & Donchin, 1988) showed how the P300 event-related potential could be used to
allow normal volunteers to spell words on a computer screen. Since the 1950s, the mu and beta
rhythms (i.e., sensorimotor rhythms) recorded over the sensorimotor cortex were known to be
associated with movement or movement imagery (Gastaut, 1952). In the late 1970s, Kuhlman (Kuhlman, 1978)
showed that the mu rhythm can be enhanced by EEG feedback training. Starting from this
information, Wolpaw and colleagues (Wolpaw & Wolpaw, 2012; Wolpaw, et al., 2002) trained volunteers to control
sensorimotor rhythm amplitudes and use them to move a cursor on a computer screen accurately in 1
or 2 dimensions.
In 2011, Krusienski and Shih (Krusienski & Shih, 2011) demonstrated that signals recorded directly
from the cortical surface (electrocorticography, ECoG) can be used for BCI operation. The field is
advancing at a rapid rate, as evidenced by the number of peer-reviewed publications in this field
over the past 10 years.
The purpose of a BCI is to detect and quantify features of brain signals that indicate the user's
intentions and to translate these features in real time into device commands that accomplish the
user's intent. To achieve this, a BCI system consists of 4 sequential components (Wolpaw, et al.,
2002): (1) signal acquisition, (2) feature extraction, (3) feature translation, and (4) device output.
These 4 components are controlled by an operating protocol that defines the onset and timing of
operation, the details of signal processing, the nature of the device commands, and the oversight of
performance. An effective operating protocol allows a BCI system to be flexible and to serve the
specific needs of each user.
Signal acquisition is the measurement of brain signals using a particular sensor modality (e.g., scalp
or intracranial electrodes for electrophysiological activity, fMRI for metabolic activity). The signals
are amplified to levels suitable for electronic processing (and they may also be subjected to filtering
to remove electrical noise or other undesirable signal characteristics, such as 60-Hz power line
interference). The signals are then digitized and transmitted to a computer for processing.
Feature extraction is the process of analyzing the digital signals to distinguish pertinent signal
characteristics (i.e., signal features related to the person's intent) from extraneous content and
representing them in a compact form suitable for translation into output commands.
These features should have strong correlations with the user's intent. Because much of the relevant
brain activity is either transient or oscillatory, the most commonly extracted signal features in
current BCI systems are time-triggered EEG or ECoG response amplitudes and latencies, power
within specific EEG or ECoG frequency bands, or firing rates of individual cortical neurons.
Environmental artifacts and physiologic artifacts, such as electromyographic signals, are avoided or
removed so that they are not mistaken for brain-signal features.
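A minimal illustration of this feature-extraction step: estimate power in the mu (8-12 Hz) and beta (12-38 Hz) bands of one EEG channel with a plain discrete Fourier transform. Production systems would use Welch's method or band-pass filtering instead, and the sampling rate here is an assumption.

```python
import math, cmath

# Band power via a direct DFT over the bins falling inside each band.
# O(bins * n): fine for short windows, illustrative only.

FS = 250  # Hz, assumed sampling rate

def band_power(signal, f_lo, f_hi, fs=FS):
    """Sum of DFT power over bins whose frequency lies in [f_lo, f_hi)."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n  # frequency of bin k
        if f_lo <= f < f_hi:
            coef = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                       for i, x in enumerate(signal))
            total += abs(coef) ** 2 / n
    return total

def extract_features(signal):
    """Return the band-power feature vector used for translation."""
    return {"mu": band_power(signal, 8, 12), "beta": band_power(signal, 12, 38)}
```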
The resulting signal features are then passed to the feature translation algorithm, which converts the
features into the appropriate commands for the output device (i.e. commands that accomplish the
user's intent). For example, a power decrease in a given frequency band could be translated into an
upward displacement of a computer cursor, or a P300 potential could be translated into selection of
the letter that evoked it. The translation algorithm should be dynamic to accommodate and adapt to
spontaneous or learned changes in the signal features and to ensure that the user's possible range of
movement is covered.
The commands from the feature translation algorithm operate the external device, providing
functions such as letter selection, cursor control, robotic arm operation, and so forth. The device
operation provides feedback to the user, thus closing the control loop.
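The translation and output stages above can be sketched as a simple mapping from extracted band powers to commands, here the cursor example from the text, with a dead-band so that small fluctuations produce no command. The thresholds are illustrative and would be calibrated per user in practice.

```python
# Toy feature-translation stage: relative change in mu-band power versus a
# calibrated baseline is mapped to a cursor command.

def translate(features, baseline, dead_band=0.2):
    """Return 'up', 'down', or 'hold' from the fractional change in mu power.

    A power decrease maps to an upward cursor displacement, matching the
    example in the text; `dead_band` suppresses small fluctuations."""
    change = (features["mu"] - baseline) / baseline
    if change < -dead_band:
        return "up"
    if change > dead_band:
        return "down"
    return "hold"
```

In a closed-loop system this function would run on every processing window, and the resulting command (the device output stage) would drive the cursor, robot, or speller while the user watches the result as feedback.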
Figure 2.1: Basic design and operation of a BCI system. Source: (Levine, et al., 2000)
Figure 2.1 shows the basic design and operation of a BCI system. Signals from the brain are
acquired by electrodes on the scalp or in the head and processed to extract specific signal features
(e.g. amplitudes of evoked potentials or sensorimotor cortex rhythms, firing rates of cortical
neurons) that reflect the user’s intent. These features are translated into commands that operate a
device (e.g. a simple word processing program, a wheelchair, or a neuroprosthesis). Success depends
on the interaction of two adaptive controllers, user and system (Wolpaw & Wolpaw, 2012).
2.7 The Future of BCIs
BCI research generates excitement among scientists, engineers, clinicians, and the general public.
This excitement reflects the rich promise of BCIs. They
may eventually be used routinely to replace or restore useful function for people severely disabled
by neuromuscular disorders; they might also improve rehabilitation for people with strokes, head
trauma, and other disorders.
At the same time, this exciting future can come about only if BCI researchers and developers engage
and solve problems in 3 critical areas: signal-acquisition hardware, BCI validation and
dissemination, and reliability.
2.7.1 Signal-Acquisition Hardware
All BCI systems depend on the sensors and associated hardware that acquire the brain signals.
Improvements in this hardware are critical to the future of BCIs. Ideally, EEG-based (noninvasive)
BCIs should have electrodes that do not require skin abrasion or conductive gel (i.e., so-called dry
electrodes); be small and fully portable; have comfortable, convenient, and cosmetically acceptable
mountings; be easy to set up; function for many hours without maintenance; perform well in all
environments; operate by telemetry instead of requiring wiring; and interface easily with a wide
range of applications.
In principle, many of these needs could be met with current technology, and dry electrode options
are beginning to become available (e.g. from NeuroSky). The achievement of good performance in all
environments and for all users, however, remains a challenge.
2.7.2 BCI Validation and Dissemination
As work progresses and BCIs begin to enter actual clinical use, two important questions arise: how
good a given BCI can get (e.g. how capable and reliable) and which BCIs are best for which
purposes. To answer the first question, each promising BCI should be optimized and the limits on
users' capabilities with it should be defined. Addressing the second question will require consensus
among research groups in regard to which applications should be used for comparing BCIs and how
performance should be assessed. The most obvious example is the question of whether the
performance of BCIs that use intracortical signals is greatly superior to that of BCIs that use ECoG
signals, or even EEG signals. For many prospective users, invasive BCIs will need to provide much
better performance to be preferable to noninvasive BCIs. It is not yet certain that they can do so. The
data to date do not give a clear answer to this key question (Shih et al., 2012).
On the one hand, it may turn out that noninvasive EEG- or fNIR-based BCIs are used primarily for
basic communication, while ECoG- or neuron-based BCIs are used for complex movement control.
On the other hand, noninvasive BCIs may prove nearly or equally capable of such complex uses,
while invasive BCIs that are fully implantable (and thus very convenient to use) might be preferred
by some people even for basic communication purposes. At this point, many different outcomes are
possible, and the studies and discussions necessary to select among them have just begun.
The development of BCIs for people with disabilities requires clear validation of their real-life value
in terms of efficacy, practicality (including cost-effectiveness), and impact on quality of life. This
depends on multidisciplinary groups able and willing to undertake lengthy studies of real-life use in
complicated and often difficult environments. Such studies, which are just beginning, are an essential step if BCIs are to realize their promise. The validation of BCIs for rehabilitation after strokes or in other disorders will also be demanding and will require careful comparisons with the results of conventional therapies.
2.7.3 Reliability
The future of BCI technology certainly depends on improvements in signal acquisition, clear validation studies and viable dissemination models. These issues, however, pale next to the problem of reliability. At present, no matter the recording method, the signal type, or the signal-processing algorithm, BCI reliability for all but the simplest applications remains poor. BCIs suitable for real-life use must be as reliable as natural muscle-based actions. Without major improvements, the real-life usefulness of BCIs will, at best, remain limited to only the most basic communication applications.
Solving this problem depends on recognizing and engaging three fundamental issues: the central role of adaptive interactions in BCI operation; the desirability of designing BCIs that imitate the distributed functioning of the normal CNS; and the importance of incorporating additional brain signals and sensory feedback.
The acquisition and maintenance of BCI-based skills, such as reliable multidimensional movement control, rests on the effective interaction of two adaptive controllers, the CNS and the BCI. The BCI must adapt so that its outputs correspond to the user's intent. At the same time, the BCI should encourage and facilitate the CNS plasticity that improves the precision and reliability with which the brain signals encode the user's intent. In summary, the BCI and CNS must work together to acquire and maintain a reliable partnership under all circumstances. The work needed to achieve this partnership has just begun; it involves fundamental neuroscientific questions and may yield important new insights into how the CNS works.
Finally, current BCIs provide mainly visual feedback, which is relatively slow and often imprecise.
In contrast, natural muscle-based skills rely on numerous kinds of sensory input (e.g. proprioceptive,
cutaneous, visual, and auditory). Brain-computer interfaces that control applications involving high-speed complex movements (e.g. limb movement) are likely to benefit from sensory feedback that is
faster and more precise than vision. Efforts to provide such feedback via stimulators in cortex or
elsewhere have begun. The optimal methods will presumably vary with the BCI, the application, and
the user's disability (e.g. peripheral inputs may often be ineffective in people with spinal cord
injuries).
In conclusion, many researchers throughout the world are developing BCI systems that a few years
ago were in the realm of science fiction. These systems use different brain signals, recording
methods, and signal-processing algorithms. They can operate many different devices, from cursors
on computer screens to wheelchairs to robotic arms. A few people with severe disabilities are
already using a BCI for basic communication and control in their daily lives. With better signal-
acquisition hardware, clear clinical validation, viable dissemination models, and, probably most
important, increased reliability, BCIs may become a major new communication and control
technology for people with disabilities—and possibly for the general population also.
CHAPTER THREE
METHODOLOGY
3.1 Introduction
This chapter states the methodology used in design and implementation of a BCI for directional
movements of a robot. It consists of a hardware phase and a software phase, which are explained below.
3.2 Hardware
The major hardware used for this project includes a computer system, an HC05 Bluetooth module, an EEG headset (NeuroSky Mindwave Mobile) and an Arduino Uno board.
3.2.1 Arduino Uno
The Arduino Uno is a microcontroller board based on the ATmega328, with 32KB of flash memory and a clock speed of 16MHz. It has 14 digital input/output pins (of which 6 provide PWM output) and 6 analog inputs. It also has an on-board 16 MHz ceramic resonator, a USB connection, a power jack,
and a reset button. It contains everything needed to support the microcontroller; simply connect it to
a computer with a USB cable or power it with an AC-to-DC adapter or a battery to get started.
Figure 3.1: An Arduino Uno board showing its properties. Source: (SparkFun, 2016)
3.2.2 An EEG Headset
The EEG headset used in this project is provided by NeuroSky and is called the NeuroSky Mindwave Mobile (NeuroSky, 2016). Measuring EEG activity has traditionally required complex and expensive equipment, but after extensive research NeuroSky created an embeddable biosensor. This accurate, portable, noise-filtering EEG biosensor collects electrical signals (not actual thoughts) from the brain to translate brain activity into action. The device amplifies and digitizes raw analog brain signals to deliver concise input. It features a direct connection to a dry electrode, one EEG channel (plus reference and ground), extremely low-level signal detection, an advanced filter with high noise immunity, and raw EEG transmission at 512 Hz. The device is also able to read low alpha, high alpha, low beta and high beta waves.
The device consists of eight main parts: ear clip, flexible ear arm, battery area, power switch, adjustable head band, sensor tip, sensor arm and the internal ThinkGear chipset. Figure 3.2 presents the device design. The principle of operation is quite simple. Two dry sensors are used to detect and filter the EEG signals. The sensor tip detects electrical signals from the forehead. At the same time, the sensor picks up ambient noise generated by human muscle, computers, light bulbs, electrical sockets and other electrical devices. The second sensor, the ear clip, serves as ground and reference, which allows the ThinkGear chip to filter out the electrical noise. The device measures the raw signal, power spectrum (alpha, beta, delta, gamma, theta), attention level, meditation level and blink detection. The raw EEG data is received at a rate of 512 Hz, while the other measured values are updated every second. Raw EEG data is therefore the main source of information on EEG signals from the Mindwave.
Figure 3.2 A NeuroSky Headset (BCI device)
3.2.3 HC05 Bluetooth Module
The HC05 is a Bluetooth serial module used for converting a serial port to Bluetooth. The module has two modes, master and slave. The main function of a Bluetooth serial module is to replace a wired serial connection, typically in one of two configurations:
1. Two MCUs (microcontroller units) want to communicate with each other. One connects as the Bluetooth master device while the other connects as the slave device. Their connection can then be built once the pair is made. This Bluetooth connection is equivalent to a wired serial port connection, including the RXD and TXD signals, and can be used by the MCUs just like a wired serial link.
2. When the MCU is a Bluetooth slave module, it can communicate with the Bluetooth adapter of a computer or smart phone. This creates a virtual serial port link between the MCU and the computer or phone.
Most Bluetooth devices on the market are slave devices, such as Bluetooth printers and Bluetooth GPS units. The HC05 can switch between master and slave, which allows it to pair and communicate with slave devices. Communication between two Bluetooth modules requires at least two conditions: (1) the communication must be between a master and a slave, and (2) the password must be correct.
3.3 Software
The major software used for this project includes the Arduino IDE and LabVIEW.
3.3.1 LabVIEW
LabVIEW is a system-design platform and development environment for a visual programming language from National Instruments. LabVIEW is commonly used for data acquisition, instrument control and industrial automation.
LabVIEW ties the creation of user interfaces (called front panels) into the development cycle.
LabVIEW programs/subroutines are called virtual instruments (VIs). Each VI has three components:
a block diagram, a front panel and a connector panel. The last is used to represent the VI in the block
diagram of other calling VIs. The front panel is built using controls and indicators. Controls are inputs
– they allow a user to supply information to the VI. Indicators are outputs – they indicate or display
the results based on the inputs given to the VI. The back panel, which is a block diagram, contains the graphical source code. All of the objects placed on the front panel appear on the back panel as terminals. The back panel also contains structures and functions which perform operations on controls and supply data to indicators. The structures and functions are found on the Functions palette.
A VI can either be run as a program, with the front panel serving as the user interface, or, when dropped as a node onto the block diagram of another VI, the front panel defines the inputs and outputs for that node through the connector panel.
LabVIEW includes extensive support for interfacing to instruments, cameras and other hardware. Users interface to hardware either by writing direct bus commands or by using high-level, device-specific drivers that provide native LabVIEW function nodes for controlling the device.
Figure 3.3 LabVIEW front panel used to interact with the Mindwave headset. Source: (NI, 2016)
3.3.2 Arduino IDE
The Arduino Integrated Development Environment makes it easy to write code and upload it to any board. It runs on Windows, Mac OS X and Linux. The environment is written in Java and based on Processing and other open-source software. The Arduino IDE contains a text editor for writing code, a message area, a text console, a toolbar with buttons for common functions and a series of menus. It connects to the Arduino and Genuino hardware to upload programs and communicate with them.

Programs written using Arduino are called sketches. These sketches are written in the text editor and are saved with the file extension [.ino]. The editor has features for cutting, pasting, searching and replacing text. The message area gives feedback while saving and exporting, and also displays errors. The console displays text output by the Arduino software, including complete error messages and other information. The bottom right-hand corner of the window displays the configured board and serial port. The toolbar buttons allow users to verify and upload programs, and to create, open and save sketches. They can also be used to open the serial monitor. (Arduino Inc, 2016)
Figure 3.4 An Arduino IDE window showing a sketch
The BCI system was created using two methods: 1) LabVIEW and 2) Arduino.
1) LabVIEW Method
As described earlier, LabVIEW is a graphical programming environment that performs all sorts of
operations from data acquisition to industrial automation. After the installation of LabVIEW, the
next step is to create a block diagram (BD), which contains the graphical code, and then the front panel (FP), which allows a user to interact with a virtual instrument (VI). Figure 3.3 above provides a good visualization of neurofeedback, which is a very useful tool for the user.
Figure 3.5 Blink occurring in the second waveform. Source: (NI, 2016)
After the Mindwave headset has been paired with the computer system, communication begins. A specific dynamic link library (DLL) called the ThinkGear Communications Driver allows LabVIEW to communicate with the headset by listening to a specific virtual port and acquiring any information from that port; this is the signal acquisition phase. The raw EEG data sent from the headset is a summation of various waves, which LabVIEW sees as a cluster. The NeuroSky driver for LabVIEW enables this cluster to be broken down into smaller data; this is the feature extraction phase. Some notable information that has been extracted includes alpha waves, beta waves and EOG signals (eye blinks). The alpha waves are translated to meditation data and the beta waves to attention data. Blinking causes a spike in the raw EEG of appreciable magnitude compared to the base signal without the blink artifact. This enables eye blinks to be detected by thresholding the EEG signal; this is the feature translation phase. Figure 3.5 shows the raw wave as opposed to when a blink occurred.
The translated data is then used as a control output, which can then be sent to a device such as the robot's microcontroller.
2) Arduino Method
This method first involves interfacing the Mindwave headset with the Arduino board. For this to happen, the Arduino board needs Bluetooth functionality, which is where the HC05 serial Bluetooth module comes into play. The HC05 is programmed to act as the master device while the Mindwave headset acts as the slave device. To pair these two devices, the HC05 initiates a Bluetooth connection. This is done by sending certain AT commands to the HC05 using the Arduino serial monitor. AT commands are instructions used to control a modem. (Figure 3.6) below shows the sketch uploaded to the Arduino to program the HC05.
After the sketch is uploaded to the Arduino, the following commands can then be used to give the HC05 instructions:
AT+ROLE=0 [Set the module to slave mode (the default); AT+ROLE=1 sets it to master mode.]
AT+CMODE=0 [Connect only to the Bluetooth module whose address has been specified with AT+BIND.]
AT+CMODE=1 [Connect to any Bluetooth module (no fixed address). This is the preferred mode; the module will search for the last paired module until it is found.]
AT+PSWD=XXXX [Set the module pairing password. The password must be four digits.]
The HC05 can now initiate a Bluetooth connection to the Mindwave headset, establishing a successful link, and the Arduino Uno can now receive raw EEG data from the headset. This raw data, however, is not directly readable by the Arduino. For the data to be processed, certain computation and programming is done to carry out feature extraction and feature translation. Sample code that performs feature extraction is provided licence-free by (NeuroSky, 2016). Based on this sample code, an overall sketch was created to implement the feature extraction and translation stages. This sketch is then used for control output.
Figure 3.7 Sketch showing initial declaration of variables
(Figure 3.7) shows the initial declaration of variables needed for the whole BCI system.
(Figure 3.8) shows the exact code used in parsing the raw EEG data coming from the headset. This code would ordinarily live in a library and be called from there, but due to certain difficulties the code was extracted from the library and implemented directly in the Arduino IDE.
Figure 3.8 Sketch showing exact code for feature extraction and translation
Figure 3.9 Sketch showing code for controlling arduino pins based on attention levels
(Figure 3.9) implements the device output stage; the output device used here was a DC motor.
[Block diagram: the microcontroller driving the DC motors]
[Flowchart: system start-up, power switch check, and EEG signal acquisition]
CHAPTER FOUR
4.1 Introduction
This chapter discusses the various tests carried out in this project. Testing is an important part of any project and is done to uncover unwanted behavior of any sort. Both the hardware and software were tested, and the results of the tests are discussed in this chapter.
4.2 Setup of the Mindwave Headset
The Mindwave headset was first set up using a computer system and the manufacturer's software. It was paired with the computer system over Bluetooth, and some protocols were then followed, based on the manufacturer's software manual, to ensure optimized performance from the headset.
4.3 Testing of Arduino Uno
For total assurance of the Arduino Uno embedded system, it was first tested using the Arduino IDE, a breadboard, a color LED and a resistor. After all the necessary drivers had been installed on the computer system, a simple sketch was uploaded to the board to toggle the LED on and off. A successful operation meant that the embedded system was ready for the next set of operations to be performed.
The HC05 module is a Bluetooth serial module that is expected to provide Bluetooth functionality to the Arduino Uno so that it can interface with the Mindwave headset. The HC05 looks and functions similarly to the HC06 module, so appropriate tests took place to ensure that the right module was being used: while the HC05 can switch between master and slave mode, the HC06 can only function in slave mode.
The module was first programmed from the Arduino so that it could accept AT commands. As AT commands were sent to it from the serial monitor, the device was not able to switch between master mode and slave mode. This test showed that the module in use was not in fact an HC05. A genuine HC05 was later acquired and, after a series of tests and commands, it was able to switch between master and slave mode. After the tests and the series of AT commands sent to the HC05, it was able to pair with the Mindwave headset with ease, proving the testing protocol to be very effective. eSense values were then acquired successfully, as shown in Figure 4.2.
Figure 4.2 eSense readings from the Arduino serial monitor
LabVIEW is a programming environment like the Arduino IDE. To ensure it was up and running, some tests were carried out. LabVIEW was first made to perform basic signal generation and processing tasks. After that was successful, the next test was ensuring that LabVIEW was able to communicate with external devices. The Mindwave headset was first paired to the computer system over Bluetooth, creating a virtual serial communication port, and LabVIEW was expected to listen to the data coming from that port. LabVIEW was not able to listen to the port and therefore failed the test. After debugging and some research, it was discovered that the ThinkGear library had to be loaded inside LabVIEW and that NI-VISA (the LabVIEW driver for communicating with external devices) also had to be installed on the computer system. After these corrections, LabVIEW was able to interact with the Mindwave headset. The NeuroSky driver for LabVIEW was also tested to interact with the Arduino, but it failed because the Arduino Uno was lacking the LabVIEW firmware. After the firmware was uploaded to the Arduino, it became very easy to interact with LabVIEW. This session of tests also proved to be very effective in uncovering setup errors.
Five participants were tested, and the attention meter readings indicate that four of them (participants 1, 2, 4 and 5) surpassed the minimum thresholds of 50% for forward motion and 80% for backward motion respectively. Participant 3 was unable to attain the required attention for the set thresholds (see Appendix B for the amplitude and attention meter readings).
Both the DC and stepper motors were tested for precision and control using the Arduino Uno and an H-bridge motor controller. A stepper motor was initially to be used, but unlike the DC motor, which just requires highs and lows on its control pins, the stepper motor presented too many difficulties in control and hence was not the preferred choice of motor. The DC motors were tested using the Arduino and a motor controller.
Figure 4.3 DC motors with attached tires
After all the individual components had been tested, it was time to test them as a unit. The Arduino method of the BCI communication system was the one mainly implemented in this project. It involves first interfacing the HC05 with the Arduino Uno to make one unit, then interfacing that unit with the Mindwave headset. Once the Mindwave headset is powered on, the Arduino is powered on and the HC05 initiates a connection to the headset using its unique identifier. The HC05 must blink twice per second and the Mindwave headset must flash blue constantly to confirm that both devices have paired. The TX terminal LED on the Arduino should start blinking to show that the headset is transmitting data to the Arduino. The Arduino then controls the motors based on the sketches that have been uploaded to the board. (Table 4.1) shows the thresholds at which the motors operate.
Figure 4.4 Final Project Design
1 | Attention | 1 | 50 - 79 | Forwards
Table 4.1 Mindwave eSense table for attention, meditation and poor quality, based on the final code
CHAPTER FIVE
5.1 Conclusion
This chapter concludes this report. This project has been an interesting piece of work to manage, requiring a substantial amount of effort, but it resulted in learning many skills that will be essential in future. The project demonstrates how to develop a fully functional BCI system from scratch, employing techniques such as embedded systems programming, wireless communication and graphical programming, among others.
5.2 Challenges
This project work presented a number of challenges:
Programming in LabVIEW was quite challenging. It took a lot of time and energy to write the final code to interact with the Mindwave headset, and downloading the setup files was an even bigger challenge.
Interfacing the Mindwave headset with LabVIEW was also difficult, as there was no working driver until the ThinkGear library and NI-VISA were identified and installed.
Interfacing the HC05 with the Mindwave headset was the most challenging of all. Firstly, the HC06 was acquired and tested to interface with the headset, but it did not work. After some testing it was realized that the HC05 was needed. Getting the HC05 was quite hard because it was not available in any market in Nigeria. Another option was to use the RN-42 Bluetooth module, but this was overseas and the shipping cost was even more expensive than the module itself. The HC05 was fortunately acquired and it successfully connected to the Mindwave headset.
The Arduino code was constantly giving compiler errors. It took a while, but after much work and diligent research, the code was compiled and used to achieve the set objectives.
The initial setup of the Mindwave headset was a frustrating experience. Firstly, a fake Bluetooth dongle was acquired for the computer system, after which it was impossible to pair the Mindwave headset with the PC. After that set of frustrating events, a new Bluetooth dongle was acquired, and after a while the Mindwave headset was finally paired.
A lot of resources, including time, were needed to carry out this project. Fake finger batteries had to be bought every day to power the headset, as that was the only available option. Mobile data had to be bought to download the needed software due to the unavailability of a wireless local area network. The motors also carried quite an expensive price tag, along with the other components.
5.3 Recommendation
This project has substantial room for improvement. To make it more robust and suitable for more practical use, the following are recommended:
Signal-acquisition hardware: All BCI systems depend on the sensors and associated
hardware that acquire the brain signals. Improvements in this hardware are critical to the
future of BCIs. Ideally, EEG-based (noninvasive) BCIs should have electrodes that do not
require skin abrasion or conductive gel (i.e., so-called dry electrodes); be small and fully portable; have comfortable, convenient, and cosmetically acceptable mountings; be easy to set up; function for many hours without maintenance; perform well in all environments;
operate by telemetry instead of requiring wiring; and interface easily with a wide range of
applications.
Reliability: The BCI to be used must be very reliable. The future of BCI technology certainly depends on improvements in signal acquisition; the system should be able to acquire more precise data.
Home automation BCI: The processed brain signals can be used as control signals for home appliances.
Control for robot movement in 2D: More sensors can be used to acquire brain signals from different portions of the brain. These signals can then be processed in a similar manner and mapped to control signals to obtain robotic movement in the other dimension.
REFERENCES
Akhtari, M., Bryant, C. H. & Mamelak, A. N., 2000. Conductivities of three-layer human skull. Brain Topography, 13(1), pp. 29-42.
Allison, B. Z., Wolpaw, E. W. & Wolpaw, J. R., 2007. Brain-computer interface systems: progress and prospects. Expert Review of Medical Devices, 4(4), pp. 463-474.
Berger, H., 1929. Über das Elektrenkephalogramm des Menschen. Archiv für Psychiatrie und Nervenkrankheiten, Volume 87, pp. 527-570.
Coenen, A., Fine, E. & Zayachkivska, O., 2014. Adolf Beck: a forgotten pioneer in electroencephalography. Journal of the History of the Neurosciences, 23(3), pp. 276-286.
Elbert, T., Rockstroh, B., Lutzenberger, W. & Birbaumer, N., 1980. Biofeedback of slow cortical potentials. I. Electroencephalography and Clinical Neurophysiology, 48(3), pp. 293-301.
Farwell, L. & Donchin, E., 1988. Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and Clinical Neurophysiology, 70(6), pp. 510-523.
Fetz, E. E., 1969. Operant conditioning of cortical unit activity. Science, 163(3870), pp. 955-958.
Friman, O., Volosyak, I. & Gräser, A., 2007. Multiple channel detection of steady-state visual evoked potentials for brain-computer interfaces. IEEE Transactions on Biomedical Engineering, 54(4), pp. 742-750.
Gastaut, H., 1952. Electrocorticographic study of the reactivity of rolandic rhythm. Revue Neurologique, Volume 87, pp. 176-182.
Heinrich, H., Gevensleben, H. & Strehl, U., 2007. Annotation: neurofeedback - train your brain to train behaviour. Journal of Child Psychology and Psychiatry, 48(1), pp. 3-16.
Hochberg, L., Serruya, M. D. & Friehs, G. M., 2006. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442(7099), pp. 164-171.
Krusienski, D. J. & Shih, J. J., 2011. Control of a visual keyboard using an electrocorticographic brain-computer interface. Neurorehabilitation and Neural Repair, 25(4), pp. 323-331.
Kuhlman, W. N., 1978. EEG feedback training: enhancement of somatosensory cortical activity. Electroencephalography and Clinical Neurophysiology, 45(2), pp. 290-294.
Lang, P. J. & Bradley, M. M., 2007. The Handbook of Emotion Elicitation and Assessment. Oxford: Oxford University Press.
Levine, S. et al., 2000. A direct brain interface based on event-related potentials. IEEE transactions
on rehabilitation engineering : a publication of the IEEE Engineering in Medicine and Biology
Society, 8(2), pp. 180-5.
Michele, B. et al., 2008. System that allows monkeys to control robot arms via brain signals. Pitt Chronicle.
Nunez, P. L. & Srinivasan, R., 1981. Electric Fields of the Brain: The Neurophysics of EEG. Oxford: Oxford University Press.
Schlögl, A., Slater, M. & Pfurtscheller, G., 2002. Presence research and EEG. s.l.: s.n.
Shih, J. J., Krusienski, D. J. & Wolpaw, J. R., 2012. Brain-computer interfaces in medicine. Mayo Clinic Proceedings, 87(3), pp. 268-279.
Symphonic Mind Ltd, 2009. What are brainwaves? [Online] Available at: http://www.brainworksneurotherapy.com/what-are-brainwaves
Vidal, J., 1973. Toward direct brain-computer communication. Annual Review of Biophysics and Bioengineering, 2(1), pp. 157-180.
Vidal, J., 1977. Real-time detection of brain events in EEG. Proceedings of the IEEE, 65(5), pp. 633-641.
Whittingstall, K. & Logothetis, N. K., 2009. Frequency-band coupling in surface EEG reflects spiking activity in monkey visual cortex. Neuron, 64(2), pp. 281-289.
Wolpaw, J. et al., 2002. Brain-computer interfaces for communication and control. Clinical Neurophysiology, 113(6), pp. 767-791.
Wolpaw, J. R. & Wolpaw, E. W., 2012. Brain-computer interfaces: something new under the sun. In: Brain-Computer Interfaces: Principles and Practice. Oxford: Oxford University Press.
APPENDIX A
Arduino program to acquire data from the Mindwave headset and send control output to the respective motors
APPENDIX B
Diagrams showing the attention analysis and corresponding raw data for five different participants
Participant 1
Participant 2
Participant 3
Participant 4
Participant 5