Annals of Biomedical Engineering, Vol. 34, No. 5, May 2006
DOI: 10.1007/s10439-005-9055-7

College of Computing, Georgia Institute of Technology, Atlanta, Georgia; Neil Squire Foundation, 220-2250 Boundary Road, Burnaby, BC, Canada V5M 4L9; and Department of Electrical and Computer Engineering, The University of British Columbia, 2356 Main Mall, Vancouver, Canada V6T 1Z4

(Received 6 May 2005; accepted 13 October 2005; published online 20 April 2006)
CLASSIFICATION TEMPLATE
INTRODUCTION
© 2006 Biomedical Engineering Society
Table 1. Classification template.

SYNOPSIS

Study description
  Study Class: one of
    - safety and tolerance study (of a new biorecording technology and application method)
    - exploratory study (of unknown properties of a signal measured from a biorecording technology, e.g., exploring the time-frequency properties of a neurological phenomenon in EEG)
    - [controlled] technology design study (of technology design options)
    - [controlled] usability study (of technology designs)
    - [controlled] efficacy study (of impact of technology on people)
    - pre-commercialization study (of product designs)
  Objective(s): statement of study objective(s). Examples are: explore the signal characteristics of imagined movement versus actual movement; demonstrate that a recording technique is safe and tolerable; study the usability of a new technology under certain conditions
  Online/Offline: biosignal source class, one of
    - online (evaluation of technology with live biosignals)
    - offline (evaluation of technology with pre-recorded biosignals)
  Subject class: one of
    - human
    - animal
  Experimental variable(s)
    Independent variable(s): name of variable 1: specific values
    Dependent variable(s): name of variable 1: specific values

Assistive Technology
  BI technology design model: description
JACKSON et al.
Table 1. Continued.

Physical environment
  Relation to target environment

BI Transducer
  Location
  Target application
  Type
  Target population
  Neurological phenomenon
  Inputs
  Biorecording technology

Control Interface
  Dimensionality
  Control type
  Semantic translation
  Output
  Type of device
  Input
  Temporal control paradigm
  Feedback: same classes as those listed under Demo Device, Feedback

Subject pool
  Chosen subjects: description of the subjects used in the study
  Selection criteria: description of criteria used to select the subject pool
  Method of recruitment: description of how subjects were attracted to the study, including methods of compensation
  Screening methods: description of how the selection criteria were measured and applied

Cuing mechanisms: description of the Cuing Mechanisms used to guide user responses. One of:
  - written instructions
  - verbal instructions
  - displayed instructions
  - audio cues
  - displayed objects or icons

Monitors: description of the State Monitors used, and what data they report, e.g., eye-blink sensors, EMG sensors, video cameras, or microphones

External controllers: description of all external (non-integrated) State Controllers used to dynamically configure or adjust the real-world BI system during an evaluation. (Many controllers are integrated into the components as external control interfaces and are difficult to define. Those listed here are external devices that manipulate the subject or objects during the experiment.)

Assistive device: one of
  - limb control: a) neuroprosthetic (FES system), b) exoskeleton control
  - body function control FES system
  - appliance interface/remote control (wireless, IR, X10 to external devices)
  - speech synthesizer
  - visual display (for text or drawings)
  - cursor or pointer for referencing
  - wheelchair
  - other assistive device
  followed by specific details of the device
Table 1. Continued.

Experimental protocol

Structure
  Subject grouping: one of
    - single-group design
    - within-group (crossover) design
    - between-group (parallel) design
    - case study (N-of-1 design)
    - other: specify
    followed by a description of grouping methods and controls
  Recording periods: definition of the number and timing of sessions and trials in the study

Preparation
  Introductory protocol: description of the methods used to orient the subjects to the equipment, tasks, and environment, including completing consent forms and questionnaires
  Usage history: description of the subjects' usage history (using terminology defined in Section . . .)
  Training methods: description of the training and/or practice methods
  Special conditioning/preparation: description of special conditioning (includes medical procedures or other preparatory conditioning, rather than training)
  Equipment customization: description of methods used to customize the BI technology to the user

Test protocol
  Relation to target population
  Subject task(s):
    For non-interactive tasks (where feedback is not required), one of:
      - visual attention
      - attempted movement
      - language tasks (spelling)
      - mathematics tasks (counting, subtraction)
      - mental rotation
      followed by a description of the mental task(s) performed (e.g., visualizations, mental activities such as arithmetic, or motor imagery)
    For basic interaction tasks, one of:
      - guided:
        - single item selection (one selectable target per trial)
        - item selection (multiple selectable targets per trial)
        - positioning (i.e., move a pointer/object to a point or orientation in space specified by a fixed target or target trajectory)
        - path following/tracing (i.e., move a pointer/object to follow a moving target or a visible path in space)
      - self-guided
Table 1. Continued.

Test protocol (continued)
  Temporal sequencing
  Performance feedback
  Information recorded
  Relation to target activity
  Subject task pacing

Analysis
  Quantitative methods: description of the quantitative methods used to analyze data, such as metrics and statistical methods
    Results and interpretations
    Theoretical modeling
  Qualitative methods: description of the qualitative methods used to collect and analyze data, such as surveys or interviews
    Results and interpretations
    Theoretical modeling
  Debriefing protocol: the activities performed after the testing phase, such as exit interviews

Note. Proposed attribute set elements are marked in bold type on the right-hand side.
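In machine-readable form, the template above can be sketched as a nested structure. The following is only an illustrative sketch: the keys paraphrase a subset of the table's labels and are not an official schema.

```python
# Minimal sketch of the classification template as nested Python dicts.
# Keys paraphrase a subset of Table 1's labels; this is illustrative only.

TEMPLATE = {
    "Synopsis": {
        "Study description": [
            "Study Class", "Objective(s)", "Online/Offline", "Subject class",
            "Independent variable(s)", "Dependent variable(s)",
        ],
        "Assistive Technology": ["BI technology design model"],
    },
    "Data collection": {
        "Subject pool": [
            "Chosen subjects", "Selection criteria",
            "Method of recruitment", "Screening methods",
        ],
        "Experimental protocol": ["Structure", "Preparation", "Test protocol"],
    },
    "Analysis": {
        "Methods": ["Quantitative methods", "Qualitative methods",
                    "Debriefing protocol"],
    },
}

def all_attributes(template: dict) -> list:
    """Flatten the nested template into a single list of attribute names."""
    return [attr for section in template.values()
            for attrs in section.values()
            for attr in attrs]

print(len(all_attributes(TEMPLATE)))  # → 17
```

A structure like this makes it straightforward to check a reviewed study against the template attribute by attribute.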
Application

The following method was employed to interpret and classify the reported studies. Each journal paper in the test set was divided into studies, where a study was defined to be an evaluation with a unique objective which, if not clearly stated, was characterized by a unique set of dependent and independent variables, subject pool, test protocols, and analysis methods. For example, a paper describing a mu-based BI technology that tested the same subject . . .
Table 2. Classification of Study 1.

Synopsis: Study description (Study Class, Objective(s), Online/Offline, Subject class, Experimental variable(s): independent and dependent; BI technology design model); Assistive Technology (Location, Target application, Target population, Target activity, Target environment, Type).

Apparatus: Physical environment; Control interface (Control type, Dimensionality, Output, Temporal control paradigm, Idle Support); Demo Device (Type of device, Input, Output, Feedback); BI Transducer (Neurological phenomenon, Biorecording technology, Inputs, Artifact processing, Feature extraction/translation algorithms, Stimulator).

Data collection: Experimental protocol (Structure, Preparation, Introductory protocol, Test protocol, Subject task(s), Performance feedback, Information recorded); Subject pool (Selection); Assistive Device; External controllers; Monitors; Cuing mechanisms.

Analysis: Quantitative methods (Results and interpretations, Theoretical modeling); Qualitative methods (Results and interpretations, Theoretical modeling); Debriefing protocol.

Note. Attribute values depicted in bold type on the right represent attribute set elements defined in the study review template.17
Comparing BI Studies

Characterizing the representative set of BI studies using a common taxonomic framework resulted in several interesting findings. First, the results of many of the reviewed studies could not be directly compared, because factors in the experimental methods (whether subject related or protocol related) differed significantly between studies. This observation emphasizes an important point: even though two studies may use a common outcome measure, such as information rate, that alone is not sufficient to draw conclusions; common methods are also required for meaningful comparison. Generally, there was little evidence of common methods or controls used across groups, which suggests a lack of synergy and communication between research groups. Many studies reported performance using the standard measure of information rate,27 but did not fully report all of the other factors (such as selection space size) that affect objective comparison. Studies often report error rates, but again objective comparison can only be performed in the absence of bias inherent in control interface designs. Another interesting finding was that none of the studies tested the same technology, although many of them used technology based on similar neurological phenomena or produced the same type of output (such as a Transducer with 2-state discrete output). Given the diversity of methods and metrics, it is difficult to make specific comparisons.
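To illustrate why factors such as selection space size matter, consider the widely used bits-per-selection formulation of information rate attributed to Wolpaw and colleagues (assumed here as a representative definition; the cited reference 27 may define the measure differently). The same selection accuracy yields very different information rates depending on the number of selectable targets N, so a bit rate reported without N is not comparable across studies.

```python
import math

def bits_per_selection(n_targets: int, accuracy: float) -> float:
    """Information per selection (bits), assuming a Wolpaw-style formula
    in which errors are spread evenly over the remaining N-1 targets."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        return math.log2(n)
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

# Same 90% accuracy, different selection spaces: different information rates.
print(round(bits_per_selection(2, 0.9), 2))  # → 0.53
print(round(bits_per_selection(4, 0.9), 2))  # → 1.37
```

The example shows that a study reporting "90% accuracy" conveys more than twice the information per selection with four targets than with two, which is why omitting the selection space size precludes objective comparison.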
Synopsis Attributes

The Synopsis attributes represent the salient characteristics of a BI study, and consequently analysis of these attributes can reveal high-level trends in the field. In the following paragraphs, we report results for the significant Synopsis attributes.

Study Class: As shown in Fig. 1, all but three studies were technical development studies (28.6%) and usability studies (47.6%). Most of these studies (89.5%) were not controlled; that is, the results were not compared to an established technique. The other three studies were a safety and tolerance study, an exploratory study, and an efficacy study. The focus on technology development evaluation without established norms for comparison is an indicator of the youth of the BI field.
Online/Offline: Our results showed that 28.6% of the test set studies employed offline (pre-recorded) data, versus 71.4% that employed online methods for data recording. Of the studies that focused solely on transducer design, the great majority (six out of seven) performed offline testing. Only one Demonstration System was tested offline; the majority were tested online. All full AT systems were tested online.
[Figure: distribution of tested BI AT design models (Transducer, Transducer subcomponents, Demo System 2-component, full AT 2-component and 3-component).]

[Figure: distribution of target populations (fully paralyzed, partially paralyzed, other severe motor disabilities).]
Subject Class: Eighty-one percent of our test set studied human subjects, with the remaining 19% reporting animal subjects. This bias towards human subjects may reflect the fact that current BI approaches mostly utilize non-implanted electrodes and non-invasive recording techniques, or it may be an artifact of the modest test set size used in this demonstration. As recorded in the Special Conditioning/Preparation attribute (under Data Collection, Subject Pool, Preparation), two of the studies reported invasive (requiring surgery) recording techniques in humans. All of the animal studies reported invasive recording techniques.

BI AT Design Model: Two-component Demonstration Systems were most commonly tested (42.0%). Transducers (i.e., the components that translate measured brain activity into logical control signals) were the second most commonly tested components (33.3%), followed by full AT systems (two-component 5% and three-component 10%). Two studies focused on Transducer subcomponents (electrode designs and elemental signal processing techniques) (Fig. 2).
This result gives us insight into the real-world contribution of our field. Only one-sixth of these studies are for . . .

Target Activity

[Figure: distribution of target activities (communication with people, appliance control, control of virtual objects, control of paralyzed limbs, control of body functions, personal mobility, not reported).]

. . . personal mobility devices in one study (4.8%). Over one-quarter (28.6%) of the studies did not state their target activity.
Target Environment: The great majority (87.5%) of the studies did not report their target environment. Of those that did, 9.5% reported a restricted living environment (such as a hospital room) and 4.8% reported a general living environment (such as a home). No studies addressed general target environments, such as the outdoors.
Data Collection Attributes

The Data Collection category consists of attributes that describe the details of the BI technology being tested and the experimental methods and protocols used to collect the data for analysis. This section presents the distribution of attribute values in this category.
BI Transducer, Neurological Phenomenon: Movement-related potentials (MRP), both movement attempts (MA) and imagined movements (IM), represented the largest number of studies (combined 33.3%), with neural firing rate (23.8%) second. Mu rhythm power and P300 were other popular approaches (14% each). Only single studies (5%) focused on SSVEP, SCPs, cognitive task differences, and other phenomena. The focus on MRPs may indicate that motor control is better understood in neurology than other brain signals. Studies examining neural firing rates are currently limited to implanted electrodes and are therefore correlated with invasive-technique studies (Fig. 5).
BI Transducer, Output: Figure 6 illustrates the wide variety of transducers that have been tested in BI studies. The most ubiquitous are discrete transducers, with two-state and N-state types together making up 45% of the transducers tested. Continuous transducers make up 31% . . .
FIGURE 5. Distribution of neural mechanism. MRP-MA: movement-related potentials, movement attempt; MRP-IM: movement-related potentials, imagined movement; SCPs: slow cortical potentials; cog task: cognitive tasks; SSVEP: steady-state visual evoked potentials.
[Figure 6. Transducer types: discrete 2-state (all IC) 21%, discrete N-state (all IC) 21%, continuous 31%, with 1D, relative continuous 2D, 3D, and approximation variants.]

[Figure: demonstration device types: cursor 30%, menu system 30%, virtual world 20%, virtual vehicle 10%, robotic arm 10%, not reported 18%.]
[Figure: subject tasks. Basic cognitive tasks: visual attention 17%, mental tasks 9%, auditory imagery 5%, imagined movement 14%, attempted movement 14%, complex task 5%. Interaction tasks: item selection 5%, positioning 14%, path following 0%.]
Reporting Practices

The review process also provided a sample of the reporting practices in the field. Figure 10 summarizes the study-completeness metric for all reviewed studies, taking into consideration only required attributes, as described in Section 3.2. Study completeness ranged from 72 to 93%. The fields reported less than 50% of the time most often related to stating target populations, target activities, and target environments. The lack of target reporting precludes reviewers from determining how well results generalize.

[Figure 10. Study-completeness percentages for each reviewed study, by first author: Allison, Bayliss, Birch, Blankertz, Cincotti, Curran, DeLorme, Donchin, Gao, Garrett, Kennedy, Kipke, Levine, McFarland, Millan, Neumann, Obermaier, Serruya, Taylor, Trejo, and Wessberg.]

The least-reported apparatus attributes were On Mechanism (0%) and Idle Support (10.5%), but these omissions are understandable: most researchers assume a manual on mechanism, given the level of the technology, and Idle Support is a relatively new concept14 derived from the concept of User Control.16 Other omissions centered on subject reporting: selection criteria, recruitment and screening methods, usage history, and debriefing were all reported less than 50% of the time, with Method of Recruitment the lowest (5.9%). Other attributes, such as Objective(s) (under Synopsis, Study Description), were not omitted, but neither were they explicitly stated; they had to be discerned or implied from the text. Improving consistency in reporting these areas will aid interpretation, useful comparison, and study repeatability.

Analysis Elements

Every one of the 21 studies in our test set included a quantitative methods analysis. Only four of the papers (12.9%) included a qualitative analysis as well. Out of all of the studies, only one included any theoretical modeling. These observations further support the claim that BI research currently exists in a technological development phase, where the focus is on performance and accuracy. As the technology matures, the focus on subjective . . .
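The study-completeness metric described above reduces to a simple fraction: required attributes explicitly reported, divided by required attributes defined in the template. A minimal sketch in Python, using hypothetical attribute names rather than the paper's exact required set:

```python
# Sketch of a study-completeness score: the percentage of required
# template attributes a paper explicitly reports.
# The attribute names below are illustrative, not the paper's exact set.

REQUIRED_ATTRIBUTES = [
    "study_class", "objectives", "online_offline", "subject_class",
    "independent_variables", "dependent_variables", "design_model",
    "target_population", "target_activity", "target_environment",
]

def completeness(reported):
    """Return the percentage of required attributes present in `reported`."""
    hits = sum(1 for attr in REQUIRED_ATTRIBUTES if attr in reported)
    return 100.0 * hits / len(REQUIRED_ATTRIBUTES)

# Example: a paper reporting 7 of the 10 required attributes.
example_study = {
    "study_class", "objectives", "online_offline", "subject_class",
    "independent_variables", "dependent_variables", "design_model",
}
print(f"{completeness(example_study):.1f}%")  # → 70.0%
```

Scoring each reviewed paper this way yields the per-study percentages summarized in Figure 10.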
CONCLUSIONS

We have introduced a new method for comparing studies of BI technology based on the theoretical models and taxonomy proposed by Mason, Moore, and Birch.15 The proposed method was shown to be an effective approach for interpreting and comparing the 21 BI studies in our test set. It allowed us to 1) identify the salient characteristics of each study, 2) identify what was reported and what was omitted, 3) facilitate a complete and objective comparison with other studies, and 4) identify trends, areas of inactivity, and reporting practices. Future studies will be required to confirm these findings on larger test sets.

Our demonstration presented samples of the types of analyses that might be performed using this method. Even though the test set was an approximation to the set of all BI studies, some interesting comments can be made about observed trends, areas of inactivity, and reporting practices. For example, the analysis revealed a profound lack of common experimental methods and measures. This situation precludes direct comparison of research findings. The diversity of methods and measures also inhibits study replication and external validation of results. Many of the results confirm that BI technology is nascent and in an early technological developmental stage. Target populations, environments, and activities are most often approximated rather than explicitly stated and tested. This is attributed partly to the youth of the field, but also to the difficulty of working with subjects with severe disabilities. Research teams often must transport large amounts of sensitive equipment, and incur the time and expense of traveling to subjects' homes or hospital rooms. This issue is inherent in the BI field, but may be eased as BI recording devices become smaller, more robust, and more portable.
As a precursor to this work, we reviewed the literature and proposed attribute values for specific attributes. These attribute values themselves represent a major contribution to the field, providing labels with which to assign and compare study attributes. As general classifications of attribute descriptions, these values effectively extend Mason, Moore, and Birch's BI Study taxonomy with more detail. Of course, the proposed values are only an initial set that will hopefully evolve with use and community input.

As the first application of Mason, Moore, and Birch's formalisms, the successful application of the method is a . . .
APPENDIX

The representative test set used for the method demonstration is summarized in Table A1. For space efficiency, we use the following abbreviations: TNSRE for IEEE Transactions on Neural Systems and Rehabilitation Engineering, and TRE for IEEE Transactions on Rehabilitation Engineering.
TABLE A1. Representative test set.

Paper        Ref.
Allison      (1)
Bayliss      (2)
Birch        (3)
Blankertz    (4)
Cincotti     (5)
Curran       (6)
DeLorme      (7)
Donchin      (8)
Garrett      (10)
Gao, X       (9)
Kennedy      (11)
Kipke        (12)
Levine       (13)
McFarland    (17)
Millan       (18)
Neumann      (19)
Obermaier    (20)
Serruya      (21)
Taylor       (22)
Trejo        (23)
Wessberg     (25)
ACKNOWLEDGMENTS

This work was supported by the National Science Foundation's Universal Access program under NSF Project number 0118917, the Canadian Institutes of Health Research (grant MOP-62711), and the Natural Sciences and Engineering Research Council of Canada (grant 90278-02). We would like to thank Gordon Handford, Adriane Davis, Brendan Allison, Jaimie Borisoff, and Regi Bohringer for their insightful feedback during the preparation of the manuscript.
REFERENCES