
Electroencephalography and clinical Neurophysiology 106 (1998) 118-126

Computer-based electroencephalography: technical basics, basis for new applications, and potential pitfalls
David E. Blum*
Barrow Neurological Institute, Phoenix, AZ 85013, USA
Accepted for publication: 3 October 1997

Abstract

EEG has been recorded on paper-based analog systems for over 50 years. In the past 5 years, computer-based digital systems have become more widely used. Digital systems eliminate some artifacts that plagued analog recordings but introduce subtle new problems, including aliasing and limited dynamic range. Digital systems allow reformatting of the same EEG segment using different gain, filter and montage settings. The digital signal allows for measurement and computations on the EEG, leading to applications such as power spectra, topographic mapping, and spike or seizure detection. © 1998 Elsevier Science Ireland Ltd.

Keywords: Computer; Electroencephalography; Epilepsy; Technology

1. Introduction

Data management in electroencephalography has changed from analog to digital. The original EEGs in the latter part of the 19th century were displayed by changing the position of a hanging mirror in direct response to a recorded voltage (Brazier, 1986). The first permanent records were made on a revolving smoked glass drum, scratched by a wire deflected by a galvanometer. Until recently, all EEGs were recorded by fundamentally similar analog techniques: an input voltage is filtered and amplified, then passed into a galvanometer that deflects a mechanical device which in turn writes a paper record. Digital methods have entirely changed the internal design of EEG machines. This change eliminates many major drawbacks of old-style EEG recordings, provides advantages in flexibility and cost, and opens up new vistas in data analysis. At the same time, it introduces numerous new types of pitfalls and errors unknown in the world of analog EEG.

2. Drawbacks of paper EEG

Analog EEG presents a number of obvious drawbacks. Some of these result directly from the recording technology, such as problems with paper and ink. Other problems arise indirectly, such as difficulties in handling, storing, and transporting bulky records. Some problems were not recognized until comparison with the greater capabilities of digital equipment. In addition to the disadvantages presented in Table 1 of Swartz (1998), we note the following.

2.1. Paper-and-ink recording system

The physical recording material, paper, is prone to tears and folds. Fanfold paper tends to rip at the fold lines, and a thick pile of folded paper, dropped to the floor, becomes unmanageable. If the paper traction device in an EEG machine is not well maintained, the paper may feed at an angle, ultimately catching an edge and crumpling. The second weak link in a paper-and-ink recording system is ink. Ink tends to jam in the fine pens of an EEG machine, causing the trace to disappear at inopportune times. Inkblots are unpleasant for the people handling the EEG record. Ink failures occur roughly once in every 5 paper recordings (unpublished data). Both blank lines and inkblots are best viewed as lost data.


No digital EEG system would be acceptable if it randomly lost 1 s worth of data out of every 100 000 s.
The pens used in analog EEG produce another set of problems. Pen arc is a distortion of the recording introduced by the circular motion of the pen tip around the galvanometer post. Long pen arms alleviate pen arc except for the highest amplitude signals. Unfortunately, it is in the analysis of the highest amplitude signals that accurate recognition of phase reversals is essential. At the extremes of pen arc, pens block against each other with very high input voltages. This prevents appreciation of the actual input voltage, allowing only statements such as 'spikes of at least 300 µV'. The mechanical devices must be maintained in accurate alignment. Errors in horizontal and vertical positioning of the galvanometer post give rise to errors in mechanical baseline and time alignment, respectively. Non-zero deflections of the galvanometer when the input voltage is zero produce errors in the electrical baseline. Wear and tear on the bearings and accumulation of old ink and grime lead to loss of responsiveness of the pen system, effectively blunting the high frequency response. All of these flaws can be avoided in a properly designed digital EEG system.
The recording parameters set in analog EEG are essentially permanent. Subtle slowing may be missed if it occurs only when the high-pass filter is set too high or the gain set too low. High amplitude activity may cause pen blockage if recorded at high gain. Focal transients may be missed if the montage does not include all appropriate channels. Even a recording collected using multiple montages and filter/gain settings may not offer sufficient protection for recording rare transients.

3. Data storage

Storage requirements differ between digital and analog EEG. Paper is a bulky recording material: a half-hour EEG weighs almost 2 lb. and occupies approximately 1 cubic foot. Retention requirements vary depending on the age of the patient; neonatal EEG may need to be stored in excess of 21 years. Few laboratories sort stored EEG based on expiration date, since doing so requires technologist time and invites administrative error. Most centers store all EEG for the maximum duration necessary, imposing rapidly mounting storage costs. Microfilm can reduce storage costs but imposes its own burdens, both in the expense of producing the microfilm and the inconvenience of reviewing old records on a microfilm reader.
Digital EEG can be stored on any digital medium, which is both compact and cheap. Moreover, as the amount of data that can be kept online increases every year, it is easy to keep both recent and not-so-recent EEG readily available at all times. For example, a modern CD-ROM holds 650 megabytes, enough to easily hold 30 standard EEGs on a single disk. A 20 disk jukebox changer is available for under $1000, so several months of EEG can be maintained online relatively easily.

Even when not stored online, the compactness of digital media allows keeping years' worth of studies in the laboratory rather than in off-site storage.
There are also drawbacks to digital storage of EEG. Digital media may not be physically permanent. Video tape, often used to record both EEG and video in epilepsy monitoring units, decays after several years in storage and decays when replayed often. Read-write magneto-optical disks, advertised as nearly permanent by equipment manufacturers, fail at unacceptable rates. CD-ROM and other purely optical encoding technologies may provide better permanence but are not yet in standard use. Even if the medium is physically persistent, the data are stored in a specific format which might not be legible in the future. Threats to the accessibility of archived data come in many forms. The company producing the software may go out of business. The software used to produce a file may be strongly dependent on a particular type of hardware. Future versions of the same company's software may be unable to recognize old file formats. Changes in computer operating systems may make old directory structures unusable (even if the program is able to handle the data). A given laboratory may change equipment vendors and choose not to maintain old digital equipment, rendering prior files inaccessible.
Some of these storage issues are addressed by the ASTM (1992) standard, which lays out a proposed standard for conversion of EEG data from the native format to a standard ASCII format. The ASTM standard itself has a number of drawbacks. First, it is by design an inefficient standard, as data are encoded in ASCII (8-12 bytes per data point rather than 2). Mechanical write time is proportional to the volume of data, and most computer disk systems could not keep up with the demands of recording EEG in ASTM format. Furthermore, the ASTM method of encoding comments in the middle of the EEG file, interspersed with data, makes it impossible to enter comments in real time as a file is recorded in ASTM format. Therefore, ASTM can be used strictly as a converted format: files are recorded in one form (the native binary format determined by the manufacturer) and later converted for storage into ASTM. To date, very few EEG machines support either encoding EEG into ASTM format or reading back records stored in ASTM format. An additional inconvenience is that, under ASTM, equal time epochs do not necessarily equate to equal numbers of stored bytes. This eliminates the ability to jump randomly to a desired time in a file; ASTM records must be read sequentially, and only in the forward direction. ASTM does not help resolve storage issues resulting from physical decay of recording media or from changes in computer operating systems.
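The storage arithmetic behind these statements is easy to reproduce. The following back-of-envelope sketch (Python is used for this and the later examples) assumes a typical 21-channel, 200 samples/s, half-hour recording; the channel count and rate are illustrative assumptions, not figures from this paper.

```python
# Rough storage estimate for one routine EEG, comparing native 2-byte binary
# samples with the ASCII-based ASTM encoding (roughly 8-12 bytes per sample).
channels = 21          # assumed 10-20 montage
sample_rate = 200      # samples/s per channel (assumed)
duration_s = 30 * 60   # a half-hour routine recording

samples = channels * sample_rate * duration_s
binary_mb = samples * 2 / 1e6        # native binary, 2 bytes per sample
astm_low_mb = samples * 8 / 1e6      # ASCII, ~8 bytes per sample
astm_high_mb = samples * 12 / 1e6    # ASCII, ~12 bytes per sample

print(f"binary: {binary_mb:.0f} MB, ASTM ASCII: {astm_low_mb:.0f}-{astm_high_mb:.0f} MB")
# Roughly 15 MB per study in binary under these assumptions, so a 650 MB
# CD-ROM comfortably holds a few dozen such records.
```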

4. Portability

Related to storage issues is the issue of portability.



This is one of the few areas where digital EEG is actually less convenient than paper recordings. While it may be difficult to mail out a paper EEG for a second opinion, it is at least possible. Digital EEG cannot be ported meaningfully if the receiving laboratory does not have compatible EEG equipment. Lack of standardization between manufacturers, and the slow acceptance of the ASTM standard, are a great hindrance to the exchange of EEG data. When second opinions are needed, most laboratories today print out sections of the EEG and mail those instead of the whole record. Despite this major drawback, which ASTM or some other standard may resolve, there are portability advantages to digital EEG. When a paper EEG is sent out of the laboratory for a remote reading, it is unavailable to the original laboratory. It may get lost. Shipping bulky, heavy paper recordings is expensive. Digital EEG allows easy shipping: the file(s) can be copied to disk, which is cheap to mail. Transfer of EEG electronically is even easier. Computer-to-computer transmission via modem is secure, but transmission over open networks such as Internet e-mail systems creates problems of data security and confidentiality that have not yet been addressed.

5. Technical pitfalls for digital EEG

All current digital EEG systems represent numbers internally as binary integers, which step discretely through a range of values rather than varying continuously. Most voltages are not represented exactly but are rounded off to the nearest discrete value. The fundamental building block of a digital EEG system is analog-to-digital (A-D) conversion.

A-D conversion takes as input a continuously varying (analog) signal and reduces it to a series of discrete numbers, evenly spaced in time. A-D converters have several important physical properties. The first is dynamic range, or gain. This refers to the ratio between the lowest recordable positive voltage (represented in the digital system as +1) and the highest recordable voltage. Most current systems use 12 bit A-D conversion. One bit is used to encode sign (positive/negative), leaving 11 useful bits of data. If the lowest recordable voltage is 1 µV, then the highest recordable voltage would be 4095 µV. This is clearly enough to encode the clinically useful range of EEG signals. Some older digital EEG instruments used 8 bit technology to realize gains in data throughput and storage volume (computers are organized in 8 bit bytes, so a 12 bit value occupies twice the storage and twice the transfer time of an 8 bit value). With one bit reserved for sign, an 8 bit system with a minimum sensitivity of 1 µV can only record up to 255 µV. To produce a wider range of recordable voltages, early 8 bit systems either sacrificed accuracy (leading to blocky-looking EEG) or provided a technologist-selectable recording gain, sacrificing the flexibility of after-the-fact reformatting.
The second important property of the A-D converter is its sampling rate. An analog EEG system continuously alters the pen position to reflect the input voltage. A digital system, however, can only acquire voltage data during a narrow time window. Digital systems must, by design, ignore any voltages that occur between samples. If the sampling is fast relative to the rate of change of the data, this is adequate. For an epileptic spike or other rapidly changing signal, there is a chance that the sample points will be too widely spaced to provide an accurate picture of the waveform.
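The consequences of finite amplitude resolution can be illustrated with a short sketch. The 1 µV step size and 4095 µV full scale follow the 12 bit example above; treat this as an illustration of rounding and clipping, not as any vendor's actual converter design.

```python
import numpy as np

def quantize(volts_uv, lsb_uv=1.0, full_scale_uv=4095.0):
    """Round a continuous voltage (in microvolts) to the nearest A-D step
    and clip anything beyond the converter's recordable range."""
    codes = np.round(volts_uv / lsb_uv)
    codes = np.clip(codes, -full_scale_uv / lsb_uv, full_scale_uv / lsb_uv)
    return codes * lsb_uv

# A 0.4 uV ripple falls below one quantization step and vanishes entirely,
# while a 7000 uV artifact is clipped at full scale.
t = np.arange(0, 1, 1 / 200.0)              # 1 s at an assumed 200 samples/s
ripple = 0.4 * np.sin(2 * np.pi * 10 * t)
print(quantize(ripple).max())                # -> 0.0
print(quantize(np.array([7000.0])))          # -> [4095.]
```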

Fig. 1. Chirp signal. (Top) This signal was recorded with a commercial digital EEG machine, using a signal generator to smoothly vary the frequency from 10 Hz to above 500 Hz over the 10 s shown. The amplitude of the input signal remained constant. Aliasing causes the appearance of bizarre waveforms as the frequency exceeds the Nyquist frequency. (Bottom) The same signal, sampled simultaneously, with the input filtered by a simple two-component R-C filter. Note the attenuation of frequencies above the Nyquist frequency. Analog input filters are necessary to remove signals that would otherwise generate nonsense data.

The peak amplitude of the spike will only be approximated by the amplitude at the nearest sampled time point. There are reports that the exact morphology of a spike may provide clinically significant data (Frost et al., 1986; Blum, 1993). The shape of a spike will not be adequately defined with fewer than about 5-10 points sampled throughout its brief duration. For a 20 ms spike, this implies a sample rate of at least 250 samples/s. Very fast activity, such as muscle contamination, will show up only as random noise. A greater danger is presented by regularly repeating signals such as 60 cycle artifact. When the sample rate is less than twice the frequency of a signal, the signal will not be recognizable; half the sampling rate is called the Nyquist frequency. Sample rates only modestly above this limit may demonstrate the signal but with peculiar distortions. Aliasing is the process by which input components above the Nyquist frequency produce the appearance of false signals at lower frequencies. This is shown in Fig. 1.
The solution to aliasing is to apply an analog filter to the input EEG signal prior to digitization. Appropriate selection of filter components involves trade-offs between other features of the A-D converter and the environment in which the EEG is recorded. Simply requiring a sampling rate of two or three times the shoulder frequency of the low-pass filter may not be adequate if the environment contains very high amplitude noise at higher frequencies (American Electroencephalographic Society, 1994). Filter roll-off must be adequate to attenuate any environmental noise above the Nyquist frequency to a level below the minimal amplitude resolution of the A-D converter. A low filter roll-off rate in a noisy environment forces a faster digitization rate. Very high roll-off rates are expensive to implement and may not be necessary if no appreciable high frequency noise is present in the recording environment. In addition, very fast filter roll-off rates may induce phase distortions of the lower frequency components.
One final concern about aliasing is that there are several steps involved in displaying digital data. Not only must the continuously varying voltages be sampled rapidly in the data stored to disk, but the data must also be displayed at adequate resolution. Most current computer screens are limited to 1024 display points in the horizontal direction, so the display of 10 s of data is limited. Allowing for montage labels (typically written to the left of the EEG data), this leaves approximately 100 data points displayed per second. Either the painstakingly acquired data must be subjected to a moving average of adjacent data points, or aliasing will occur. One manufacturer takes the approach of displaying the most extreme voltage in a set of adjacent points, which may at times produce an artificially spiky appearance in the EEG. EEG display systems should be able to display data at widened time bases, such as 1 or 2 s per page, to allow diagnosis of these problems when the suspicion arises. EEG reports mentioning 'sharply contoured theta' may need to be taken with a grain of salt.
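A minimal sketch of aliasing, under assumed numbers (200 samples/s, a 170 Hz interference tone): without an analog anti-aliasing filter, the sampled points of the high-frequency tone are indistinguishable from a plausible-looking low-frequency wave.

```python
import numpy as np

fs = 200.0        # assumed digitizer rate, samples/s (Nyquist frequency = 100 Hz)
f_noise = 170.0   # interference tone above the Nyquist frequency
t = np.arange(0, 1, 1 / fs)               # the instants actually sampled
samples = np.sin(2 * np.pi * f_noise * t)

# 170 Hz folds around the 100 Hz Nyquist frequency to |200 - 170| = 30 Hz,
# so the stored samples match a 30 Hz wave exactly (with inverted phase).
alias = np.sin(2 * np.pi * (fs - f_noise) * t)
print(np.allclose(samples, -alias, atol=1e-9))   # True
```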

A peculiar distortion may arise when the number of horizontal display pixels is not an exact divisor of the number of data points acquired in the displayed interval. Then, as data are scrolled across the screen, waveforms will appear to change shape mysteriously in subtle ways as the display samples different points from the acquired dataset. Moving average filters solve this minor problem, but are not always used. For research purposes it may be desirable to record very high frequencies, for example to detect kilohertz activity at the onset of seizures (Allen et al., 1992; Fisher et al., 1992).
In multichannel systems, the A-D converter may introduce slew errors. Many older digital EEG machines were prone to this flaw. A digital EEG machine samples the voltage in channel one, then channel two, and so on. After reaching the last channel, scanning restarts with the first channel. In a 16 channel system sampling at 200 points/s, the scan of the last channel follows the scan of the first channel by a consistent 4.7 ms. This time axis distortion is called slew. It would not be significant routinely, but it might mislead when applied to phase-coherence analysis for resolving small time differences between EEG signals (Gotman, 1983). Dipole source analysis may also be systematically erroneous if all samples contralateral to the source are taken a small time interval after samples taken ipsilateral to the source. This error is made possible by the practice of alternating left and right pins in the input jackbox, and would be even worse if all left-sided pins were digitally sampled first and right-sided pins sampled second. One way around this problem is burst mode sampling: the A-D unit is set to scan very rapidly through the channels, then pause until the per-channel sampling interval has elapsed. Sample-and-hold technology is superior. A sample-and-hold device allows simultaneous sampling of all channels, replicating exactly the analog voltage in each input channel, and holds this sampled voltage until the A-D converter completes its sequential conversions on all channels. Many, but not all, commercial digital EEG systems now feature sample-and-hold. Slew errors are probably not significant in routine clinical practice.
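The per-channel timing offsets behind slew are easy to make concrete. This small sketch assumes the 16-channel, 200 samples/s example above and simply tabulates when each channel is actually sampled within one sequential scan.

```python
import numpy as np

n_channels = 16
fs = 200.0                    # per-channel sampling rate, samples/s
frame = 1.0 / fs              # 5 ms to scan all channels once

# Sequential scanning: channel k is sampled k/n_channels of the way through
# the frame, so the last channel lags the first by a consistent 4.7 ms.
offsets_ms = 1000 * frame * np.arange(n_channels) / n_channels
print(f"channel 16 lags channel 1 by {offsets_ms[-1]:.1f} ms")

# Burst-mode scanning or sample-and-hold acquisition collapses these offsets
# to (nearly) zero, which matters for phase-coherence and dipole work.
```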

Calibration is another area in which digital EEG differs from analog EEG. In paper-based systems, a mechanical calibration page displays the response of the system to a fixed DC step in voltage (typically 50 µV). This input step signal contains components of all frequencies (see the discussion of Fourier transforms, below). The step signal appears to decay from a peak by virtue of the action of the high-pass filter, and appears minimally rounded at its peak due to the action of the low-pass filter. In digital EEG, when filters are applied after data acquisition, this input signal cannot be adequately sampled and displayed on a computer screen. The A-D converter may miss the input peak, leading to a ragged appearance and measured calibration voltages that vary from their expected value. To use the familiar step input with accuracy, the step signal should be time-locked to the digitizer clock. Another approach is to calibrate gain using a smoothly varying input signal of a single frequency, such as 10 Hz. This is valid only if the amplifiers have identical gains throughout their input frequency range, an aspect of calibration that is ignored in every commercially available digital system. Calibration using a chirp signal (see Fig. 1) may be preferable.
If a digital EEG is recorded and stored in a referential montage, then the reviewer has the flexibility to reformat the displayed EEG into any appropriate montage. By subtracting one channel from another, common activity in an active reference electrode cancels out. More sophisticated arithmetic manipulations allow the calculation of average reference montages using customized aggregates of channels contributing to the reference (Jayakar et al., 1991), or a Laplacian montage which computes derivatives of the source voltage in multiple directions (Klein, 1993; Lagerlund et al., 1995).
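Digital montage reformatting is simply channel arithmetic. The sketch below, with made-up channel names and a toy data array, shows the idea for one bipolar derivation and a simple average reference; commercial systems differ in detail.

```python
import numpy as np

# Toy referential recording: rows are channels recorded against a common
# reference electrode, columns are samples. Values are illustrative only.
labels = ["Fp1", "F3", "C3", "P3", "O1"]
ref_data = np.random.randn(len(labels), 2000)   # 10 s at an assumed 200 samples/s
idx = {name: i for i, name in enumerate(labels)}

# Bipolar derivation: subtracting two referential channels cancels whatever
# the shared reference electrode contributed.
fp1_f3 = ref_data[idx["Fp1"]] - ref_data[idx["F3"]]

# Average reference: subtract the mean of a chosen aggregate of channels.
avg_ref = ref_data - ref_data.mean(axis=0, keepdims=True)
```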

Reviewer-selected montages and customizable montages are features of essentially all digital EEG machines.
Filters can also be computed on the fly during the review of an EEG. The easiest filter to implement is a moving average filter: the more points added together, the lower the filter's effective cut-off frequency. The main advantage of this filter is that it is computationally efficient and does not slow down the review speed of the EEG system. However, simple moving average filters introduce distortion of waveforms (Press et al., 1992; Reid and Passin, 1992). The general idea of a filter that multiplies a series of sequential voltage measurements by predetermined coefficients and adds up the results is a finite impulse response (FIR) filter.
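A moving average is the simplest FIR filter: each output sample is a weighted sum of recent input samples. The sketch below uses equal coefficients; the 5-sample window length is an arbitrary choice for illustration.

```python
import numpy as np

def moving_average(x, n_points=5):
    """FIR low-pass: convolve the signal with n_points equal coefficients.
    A longer window lowers the effective cut-off frequency."""
    coeffs = np.ones(n_points) / n_points
    return np.convolve(x, coeffs, mode="same")

# Example: smooth 1 s of noisy 10 Hz activity sampled at an assumed 200 samples/s.
t = np.arange(0, 1, 1 / 200.0)
noisy = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
smoothed = moving_average(noisy)
```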

Fig. 2. (a) Time domain graph of various windows used in Fourier analysis. (b) Effect of differing windows on EEG demonstrated in time (top row) and frequency (bottom row) domain. In the top row the 3 boxes show the same data with different windowing methods (not all to same scale). The bottom row shows the impact of windowing method on the resulting power spectra.

A filter that adds a raw data point to the previous output of the filter is an infinite impulse response (IIR) filter. FIR filters are ideal as low-pass filters. IIR filters act as efficient high-pass (detrending) filters, but if improperly implemented they can introduce oscillations and instability in the resulting waveforms (Reid and Passin, 1992). Non-distorting FIR filters include Savitzky-Golay filters (Press et al., 1992). Another optimization that can be obtained with FIR filters is an increased rate of fall-off, which requires an increased number of filter coefficients. A 6 pole digital filter appropriate for removing muscle artifact has been described (Ives and Schomer, 1988). The ideal digital EEG program allows the user to construct customizable FIR filter coefficients. A special form of filter uses coefficients that change over time, in response to the characteristics of the input waves. Adaptive filters may be useful in EEG, but only a few studies have been done (Penczek et al., 1987; Chui and Chen, 1991).
Another form of data processing that can be performed in the time domain is averaging. The zero time point can be recognized automatically, potentially by a spike recognition system, or manually, by placing a cursor. Once multiple events have been identified, the computer can average together the EEG signal, as is done in evoked potentials. In the case of suspected cortical myoclonus, event averaging that includes times preceding the event can reveal cortical potentials (Shibasaki et al., 1986; Guerit et al., 1994). This technique can improve the signal-to-noise ratio of low amplitude spikes (Thickbroom et al., 1986). Improvements in signal-to-noise ratio are important in applications such as dipole source localization (Tseng et al., 1995). Placement of multiple cursors and averaging of signals based on cursor position should be a standard feature of EEG systems.
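Event (back-)averaging reduces to aligning short windows around each marked zero time point and averaging them; uncorrelated background shrinks roughly with the square root of the number of events. A hedged sketch with invented marker positions and synthetic data:

```python
import numpy as np

def event_average(eeg, event_samples, pre=50, post=100):
    """Average windows of (pre + post) samples centred on each event marker.
    eeg is a 1-D array; event_samples are cursor positions in samples."""
    windows = [eeg[s - pre:s + post] for s in event_samples
               if s - pre >= 0 and s + post <= eeg.size]
    return np.mean(windows, axis=0)

# 60 s of noise at an assumed 200 samples/s with a small repeated transient.
fs, eeg = 200, np.random.randn(12000)
events = np.arange(400, 11600, 400)       # invented marker positions
eeg[events] += 5.0                        # low-amplitude repeated transient
avg = event_average(eeg, events)          # the transient emerges at sample 50
```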

6. Spectral analysis

The ability to compute power spectra is an important advantage of digital over analog EEG. In analog EEG systems, spectral analysis is done by visual inspection. This process can be made exact in computer-based systems. An epoch of EEG of a given length is selected. Which epoch is selected is based on the judgment of the electroencephalographer. Epoch selection is the weakest link in spectral analysis (Nuwer, 1988). Errors include selection of drowsy rather than waking sections, and of sections containing artifacts, spikes (Hughes et al., 1991) or normal variants. Typical epoch lengths are 1-4 s. Let L be the length in seconds of the epoch, and let N be the number of data points in the epoch, determined by L and the sampling rate. The EEG in that epoch can be considered as a one-dimensional array, or vector, of voltage values. Let V_i be the voltage at the ith data point. A linear transformation of V_i can be defined as:

V_i = \sum_{n=0}^{N-1} A_n \sin\left( \frac{2\pi i n}{N} + \Phi_n \right)   (1)
This is a form of the Fourier transform. For each value of n in the sum, A_n is the contribution of waves having frequency n/L; the power contributed by waves of that frequency is the square of the amplitude, and \Phi_n is the phase of that component. Typically, Fourier transforms (and the resultant power spectra) are computed using an algorithm called the fast Fourier transform (FFT; Press et al., 1992). FFT techniques require that the number of points, N, be a power of two, introducing an odd quirk into power spectrum analysis. If the digitizing rate is a round number, say 200 points/s, then the epoch lengths will be 2.56 s or some similar uneven length, and the spectral components will relate to uneven frequencies. It will not be possible, for example, to compute power in the alpha band exactly.
An important artifact arises from the FFT calculation. When an epoch is selected, the preceding and following EEG are simply dropped from consideration. Mathematically, this view of the EEG is through a rectangular window. Rectangular windows introduce frequency leakage artifacts. Side-band leakage produces the appearance in a power spectrum of components that are not truly present in the signal but arise from the windowing process. A number of elegant window designs (Hann, Welch, and Bartlett are common examples; Press et al., 1992) are available that partly alleviate these problems. Quantitative frequency analysis may produce different results when different windows are used (Fig. 2).
An application of FFT techniques that has not found widespread use is the construction of custom filters. Almost all filters in clinical use are based on time-domain calculations. These filters, as discussed above, invariably involve trade-offs between phase distortion and roll-off rate. Exact filtering based on frequency-domain calculation can be done via the FFT: an epoch is selected, the FFT applied, and undesirable frequency components discarded. Then the inverse FFT calculation reconstructs the remaining signal. There is no phase distortion or roll-off.
The Fourier transform is a specific example of a group of linear transformations that convert N time domain data points into N components in another domain. Other linear transformations may be useful. Principal component transformations (also called singular value decompositions) can separate out individual events from a section of data, and may be useful for removing eyeblinks (often the largest principal component of an EEG) or identifying spikes with multiple components (Freeman, 1987; Koles et al., 1995). Another linear transformation with untapped potential is the wavelet transform (Chui, 1992). Wavelets are time-limited brief transients which can form a complete basis set in a fashion similar to the sine waves used in Fourier transformation. Wavelet transforms encode not only frequency but also time information. Applications to EEG have just begun to appear (Senhadji et al., 1995).
The phase components of the Fourier transform contain information that is thrown away in power spectrum analysis.
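A hedged sketch of the windowed power spectrum described above, using a Hann window on an assumed 2.56 s epoch (512 points at 200 samples/s); the 1/L frequency spacing of roughly 0.39 Hz illustrates why band powers fall on "uneven" frequencies.

```python
import numpy as np

fs, n = 200.0, 512                 # assumed digitizer rate and power-of-two epoch
t = np.arange(n) / fs              # a 2.56 s epoch
epoch = 30 * np.sin(2 * np.pi * 10 * t) + 5 * np.random.randn(n)  # alpha + noise

window = np.hanning(n)             # Hann window reduces side-band leakage
spectrum = np.fft.rfft(epoch * window)
power = np.abs(spectrum) ** 2      # power = squared amplitude of each component
freqs = np.fft.rfftfreq(n, d=1 / fs)   # bins spaced 1/L = 0.39 Hz apart

# "Alpha power" can only be approximated by summing the bins nearest 8-13 Hz.
alpha = power[(freqs >= 8) & (freqs <= 13)].sum()
```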

If two channels of EEG contain components that are consistently separated by the same phase offset, then there is a relationship between them. The magnitude of the phase offset indicates a time delay between the generators for the two locations. This method has been used to measure small time differences between EEG signals and may be useful in secondary bilateral synchrony (Gotman, 1983; Harris et al., 1994), or in analyzing patterns of sequential brain activation related to specific tasks (Gevins, 1987). As a caution, if some of the EEG signal recorded at widely separated electrode sites is generated by the field of a single deep dipole source, then coherence analysis will generate a misleading false positive result.
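The conversion from phase offset to time delay is simply delay = phase / (2πf). The sketch below estimates it from the cross-spectrum of two synthetic channels in which one lags the other by an assumed 10 ms; real EEG is of course far less cooperative.

```python
import numpy as np

fs, n, f0 = 200.0, 2048, 10.0            # assumed rate, epoch length, dominant frequency
t = np.arange(n) / fs
delay_true = 0.010                        # channel B lags channel A by 10 ms
a = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(n)
b = np.sin(2 * np.pi * f0 * (t - delay_true)) + 0.1 * np.random.randn(n)

# Phase of the cross-spectrum at f0: a consistent offset implies a time delay.
window = np.hanning(n)
fa, fb = np.fft.rfft(a * window), np.fft.rfft(b * window)
freqs = np.fft.rfftfreq(n, d=1 / fs)
k = np.argmin(np.abs(freqs - f0))         # FFT bin nearest 10 Hz
phase = np.angle(fa[k] * np.conj(fb[k]))  # phase of A relative to B
print(phase / (2 * np.pi * f0))           # approximately 0.010 s
```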

7. Spike and seizure detection

An important form of computer analysis of EEG is automatic recognition of epileptic events, both interictal and ictal (Ktonas, 1987). Detection methods can function in the time domain, by extracting features such as amplitude, slope, sharpness (second derivative), and peak-to-peak wave duration. To detect an event, comparisons can be made against fixed thresholds or against the previously observed background patterns. Spikes can be detected, for example, when the sharpness and amplitude both exceed 5 SD above the average for those parameters over the past 5-10 s (Frost, 1985; Koffler and Gotman, 1985). This method has a high sensitivity but a low specificity, as it triggers on vertex waves as well as some muscle and movement artifacts. Improved performance results from adjusting thresholds based on sleep state (Gotman and Wang, 1991). Seizures can be recognized by their rhythmicity. Analysis of rhythmicity of the EEG can be made by taking the average and standard deviation of each of two key parameters: wave duration and amplitude. Rhythmic events are recognizable by low standard deviations. Unfortunately, arousals with rhythmic theta or alpha, as well as many normal background patterns, also demonstrate rhythmicity. Thresholds can be set dynamically, with respect to the ongoing background (Gotman, 1985; Koffler and Gotman, 1985; Murro et al., 1991). Neural network approaches, which are not necessarily based on feature extraction, have also been used (Gabor and Seyal, 1992).
The complexity of the EEG has led many to study the EEG as the output of complex systems (Basar, 1990). Chaos analysis may reveal differences between wake and sleep states (Roschke et al., 1993). An often used measure is the dimension of the EEG, which is roughly a measure of complexity (Pritchard and Duke, 1995). Changes in the interchannel behavior of the Lyapunov exponent (Iasemidis et al., 1994) may predict the onset of seizures. Non-linear measures of average amounts of mutual information may supplement phase-coherence analysis as a measure of time delays between EEG channels (Pijn, 1990). Overall, though, chaos analysis has led mostly to chaotic results and there have been few reproducible studies in this area.
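A toy version of the background-relative threshold rule just described: flag a sample as a candidate spike when both its amplitude and its sharpness (second difference) exceed 5 SD of those parameters over the preceding background. The window lengths and synthetic data are assumptions for illustration, not a published detector.

```python
import numpy as np

def detect_spikes(x, fs=200, background_s=10, k=5.0):
    """Flag samples whose amplitude AND sharpness exceed k standard
    deviations above the mean of the preceding background window."""
    sharp = np.abs(np.diff(x, n=2, prepend=x[:2]))   # crude second derivative
    amp = np.abs(x)
    n_bg = int(background_s * fs)
    hits = []
    for i in range(n_bg, x.size):
        bg_a, bg_s = amp[i - n_bg:i], sharp[i - n_bg:i]
        if (amp[i] > bg_a.mean() + k * bg_a.std() and
                sharp[i] > bg_s.mean() + k * bg_s.std()):
            hits.append(i)
    return hits

# Synthetic test: background noise with one sharp, high-amplitude transient.
eeg = 10 * np.random.randn(4000)
eeg[3000] += 200                          # an artificial "spike"
print(detect_spikes(eeg))                 # contains 3000 (and little else)
```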

8. Mapping and topographic display

The phrase 'brain mapping' is used in 3 separate contexts in neurology. Most recently, it refers to localization of brain functions by non-invasive neuroimaging techniques. An earlier, still prevalent, use refers to localization of brain function by invasive electrophysiologic testing, in which an electric current is passed through subdural electrodes into the brain at specific locations during testing of specific functions. In the context of digital EEG, brain mapping refers to a less specific application. Broadly defined, brain mapping in this context is the reduction of data from multiple one-dimensional sets of values plotted on an x-y polygraph (one line per channel, each line with numerous data points) to a two-dimensional map containing only one point per channel. A neglected aspect of this process is the selection of an appropriate set of x-y coordinates for the map. Simply by picking a round versus an oblong head map, one can change the apparent field extent of the mapped activity. Brain mapping may or may not involve interpolation, which is the process of filling in map points in between the x-y coordinates of actual electrodes with interpolated (or, near the edges of the map, extrapolated) values. The appearance of a brain map can be markedly altered by the choice of interpolation methods and by the choice of reference (vertex, ears or average reference) (Nuwer, 1988).
Brain mapping can be used with any type of numerical value. Raw data, without processing, result in an instantaneous voltage map (subject to digitizer slew, and suffering a lower signal-to-noise ratio than processed data). Intermediate data reduction can be performed, and many brain maps plot power in a particular frequency band, coherence, or even more abstract quantities such as the number of spikes recorded at each position. Some systems forego plotting actual data and instead plot the results of statistical analysis, in the form of Z-scores, comparing the reduced data to a normative database. Few statisticians would accept the concept of interpolated Z-scores. Since most cerebral activity has a field, it is likely that any given feature may involve adjacent electrodes to some degree; therefore, the assumptions of independence that underlie corrections for multiple statistical tests are not valid. This mistake is likely to lead to an excess of false positive findings, which unfortunately are used only rarely to treat patients but more often to support otherwise questionable claims of brain damage or disability in court (Epstein, 1994).
Far and away the greatest problem with brain mapping is the loss of detail involved in the data reduction process. The greater the degree of data reduction, the harder it is to recognize a meaningless result (Hughes et al., 1991). Just as inappropriate epoch selection can ruin the validity of a power spectrum, any manner of imperfection in technique can produce a meaningless brain map. Brain mapping is a useful technique for visually communicating the complex content of an EEG, but it does not by itself add anything to the value of the data. The points illustrated in a brain map are valid only to the extent that the underlying

EEG is properly selected and the data reduction technique (spectrum, principal component, or other transformation) is appropriate to the problem at hand. Improperly applied, brain mapping can provide convincing demonstrations of erroneous analysis (Binnie and MacGillivray, 1992). A brain map by itself should never be relied upon in decision-making and should always be accompanied by the raw data as well as the full results of the intermediate data reduction calculations (Nuwer, 1997).
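To make the interpolation point concrete, here is a minimal inverse-distance interpolation over invented 2-D electrode coordinates; changing the exponent (or the head outline onto which the values are painted) visibly changes the apparent field, which is exactly the caution raised above.

```python
import numpy as np

# Invented 2-D electrode coordinates (arbitrary units) and one value per
# channel, e.g. alpha-band power after some intermediate data reduction.
electrodes = np.array([[0.0, 0.8], [-0.6, 0.2], [0.6, 0.2], [-0.4, -0.6], [0.4, -0.6]])
values = np.array([12.0, 30.0, 8.0, 25.0, 10.0])

def interpolate(point, power=2.0):
    """Inverse-distance weighting: nearby electrodes dominate the map value."""
    d = np.linalg.norm(electrodes - point, axis=1)
    if np.any(d < 1e-9):
        return values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * values) / np.sum(w)

print(interpolate(np.array([0.0, 0.0])))            # a map pixel between electrodes
print(interpolate(np.array([0.0, 0.0]), power=4))   # same pixel, different method
```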

9. Source localization

All electroencephalographers are occasionally given to fanciful hypothesizing about the sources of brain activity. Source localization calculations attempt to quantify such inquiries (Fender, 1987; Ebersole and Wade, 1990; Baumgartner, 1994). The inverse problem of EEG is ultimately unsolvable, but, given some simplifying assumptions and constraints on the form of the solution, it is possible to find a source model that best explains the data (see Koles, 1998). Underlying all source localization methods must be a model of how the brain generates electrical fields and how these fields are propagated to the scalp (Scherg and Ebersole, 1993). The most commonly used models assume that all electrical activity emanates from a single point in the brain, with a field modeled by a dipole located in a multilayered spherical head (Zhou and van Oosterom, 1992; de Munck and Peters, 1993). Anisotropic models account for varying conduction of brain tissue in different directions (Zhang, 1995). Most models ignore the breach effects of holes in the skull located anterior to the temporal lobe (the eyes), inferior to the temporal lobe (the foramen ovale) and lateral to the temporal lobe (the auditory canal). Once a theoretical model is assumed and its parameters defined (the orientation and magnitude of the dipole), it is possible to predict the resultant EEG. The predicted values are subtracted from the observed values, and the differences are used to change the model parameters. The process of subtracting predicted from observed and adjusting the model is repeated until sufficient accuracy is obtained. How the model is changed with each step is determined by the choice of numerical methods used. In a non-unique system, the choice of inversion methods affects which of the many possible solutions may be chosen (Gill et al., 1981). Without reverting to iteration, it is possible to use linear inversion methods to model an EEG as composed of combinations of multiple preselected sources of varying amplitudes (Scherg and Ebersole, 1994).
As a caution, just as a single deep dipole source can throw off coherence analysis, two remote but linked sources can also throw off source localization analysis. For example, activity generated by simultaneous discharge of both temporal lobes may conceivably be modeled as arising from a single source near the midline. Deep sources in the mesial temporal lobe are probably of insufficient amplitude to be recordable on the scalp, and conversely scalp-recorded discharges of high amplitude, localized by dipole methods to deep structures, may occur when depth electrodes reveal the deep structures to be silent (Alarcon et al., 1994). Some studies have validated source localization against concurrent electrocorticography (Nakasato et al., 1994).
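The iterative fit described above can be sketched as a simple optimization loop. The forward model below (a dipole in an infinite homogeneous medium rather than a multilayer sphere, with conductivity folded into the moment) and the random electrode geometry are deliberate simplifications, assumed only to show the predict-subtract-adjust structure.

```python
import numpy as np
from scipy.optimize import least_squares

electrodes = np.random.randn(21, 3)            # invented 3-D electrode positions
electrodes /= np.linalg.norm(electrodes, axis=1, keepdims=True)   # on a unit "scalp"

def forward(params):
    """Predicted scalp potentials for a dipole at position r with moment q,
    in an infinite homogeneous medium: V = q . (e - r) / (4 pi |e - r|^3)."""
    r, q = params[:3], params[3:]
    d = electrodes - r
    return np.sum(q * d, axis=1) / (4 * np.pi * np.linalg.norm(d, axis=1) ** 3)

true_params = np.array([0.2, 0.0, 0.5, 0.0, 0.0, 1.0])   # hidden "source"
observed = forward(true_params) + 1e-4 * np.random.randn(21)

# Iteratively adjust the model so the predicted EEG matches the observed EEG.
fit = least_squares(lambda p: forward(p) - observed,
                    x0=np.zeros(6) + 0.1)      # starting guess
print(fit.x[:3])                               # estimated dipole position
```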

10. Summary

Computer-based EEG has emerged from the status of an impractical research tool to become a cheap, powerful replacement for older analog systems. While solving some problems associated with analog systems, digital EEG creates new technical pitfalls. Changing the nature of EEG from visual interpretation of a polygraph page to visual interpretation supplemented by numerical analysis expands the potential diagnostic power of the technique. Many forms of numerical analysis are possible, and the results can be presented in new ways made possible by computer displays. The clinical utility of most of these methods remains to be proven. Proper interpretation of emerging studies of EEG requires familiarity with the theoretical basis of signal analysis.


Acknowledgements

Dr. J. Drazkowski read the manuscript and provided editorial suggestions. The author receives royalties from Nicolet Biomedical Corp. derived from sales of the Bridger computer EEG program, and is the recipient of a grant from Medtronics Corp. for development of computer EEG technology.

References
Alarcon, G., Guy, C.N., Binnie, C.D., Walker, S.R., Elwes, R.D. and Polkey, C.E. Intracerebral propagation of interictal activity in partial epilepsy: implications for source localisation. J. Neurol. Neurosurg. Psychiatry, 1994, 57: 435-449.
Allen, P.J., Fish, D.R. and Smith, S.J. Very high-frequency rhythmic activity during SEEG suppression in frontal lobe epilepsy. Electroenceph. clin. Neurophysiol., 1992, 82: 155-159.
American Electroencephalographic Society. Guideline fourteen: guidelines for recording clinical EEG on digital media. J. Clin. Neurophysiol., 1994, 11: 114-115.
ASTM. Standard specification for transferring digital neurophysiological data between independent computer systems. ASTM E1467-92. ASTM, Philadelphia, PA, 1992.
Basar, E. Chaos in Brain Function. Springer-Verlag, Berlin, 1990.
Baumgartner, C. EEG dipole localization: discussion. Acta Neurol. Scand., 1994, 152 (Suppl.): 31-32.
Binnie, C.D. and MacGillivray, B.B. Brain mapping - a useful tool or a dangerous toy? J. Neurol. Neurosurg. Psychiatry, 1992, 55: 527-529.
Blum, D. Effect of carbamazepine and side of seizure onset on the morphology of interictal spikes. Proc. Am. Electroenceph. Soc., 1993: 141.
Brazier, M.A.B. The emergence of electrophysiology as an aid to neurology. In: M.J. Aminoff (Ed.), Electrodiagnosis in Clinical Neurology, 2nd edn. Churchill Livingstone, New York, 1986, pp. 1-19.
Chui, C.K. An Introduction to Wavelets. Academic Press, Boston, 1992.
Chui, C.K. and Chen, G. Kalman Filtering with Real-Time Applications, 2nd edn. Springer-Verlag, Berlin, 1991.
de Munck, J.C. and Peters, M.J. A fast method to compute the potential in the multisphere model. IEEE Trans. Biomed. Eng., 1993, 40: 1166-1174.
Ebersole, J.S. and Wade, P.B. Spike voltage topography and equivalent dipole localization in complex partial epilepsy. Brain Topogr., 1990, 3: 21-34.
Epstein, C.M. Computerized EEG in the courtroom. Neurology, 1994, 44: 1566-1569.
Fender, D.H. Source localization of brain electrical activity. In: A.S. Gevins and A. Remond (Eds.), Methods of Analysis of Brain Electrical and Magnetic Signals [EEG Handbook, Revised Series, Vol. 1]. Elsevier, Amsterdam, 1987, pp. 355-405.
Fisher, R.S., Webber, W.R., Lesser, R.P., Arroyo, S. and Uematsu, S. High-frequency EEG activity at the start of seizures. J. Clin. Neurophysiol., 1992, 9: 441-448.
Freeman, W.J. Analytic techniques used in the search for the physiological basis of the EEG. In: A.S. Gevins and A. Remond (Eds.), Methods of Analysis of Brain Electrical and Magnetic Signals [EEG Handbook, Revised Series, Vol. 1]. Elsevier, Amsterdam, 1987, pp. 583-664.
Frost, J.D. Jr. Automatic recognition and characterization of epileptiform discharges in the human EEG. J. Clin. Neurophysiol., 1985, 2: 231-249.
Frost, J.D. Jr., Kellaway, P., Hrachovy, R.A., Glaze, D.G. and Mizrahi, E.M. Changes in epileptic spike configuration associated with attainment of seizure control. Ann. Neurol., 1986, 20: 723-726.
Gabor, A.J. and Seyal, M. Automated interictal EEG spike detection using artificial neural networks. Electroenceph. clin. Neurophysiol., 1992, 83: 271-280.
Gevins, A.S. Correlation analysis. In: A.S. Gevins and A. Remond (Eds.), Methods of Analysis of Brain Electrical and Magnetic Signals [EEG Handbook, Revised Series, Vol. 1]. Elsevier, Amsterdam, 1987, pp. 171-194.
Gill, P.E., Murray, W. and Wright, M.H. Practical Optimization. Academic Press, London, 1981.
Gotman, J. Measurement of small time differences between EEG channels: method and application to epileptic seizure propagation. Electroenceph. clin. Neurophysiol., 1983, 56: 501-514.
Gotman, J. Seizure recognition and analysis. Electroenceph. clin. Neurophysiol. Suppl., 1985, 37: 133-145.
Gotman, J. and Wang, L.Y. State-dependent spike detection: concepts and preliminary results. Electroenceph. clin. Neurophysiol., 1991, 79: 11-19.
Guerit, J.M., van den Bergh, P., Gobiet, Y. and Laterre, E.C. Somatosensory evoked potentials and jerk-locked EEG back-averaging in myoclonic epilepsy. Eur. Neurol., 1994, 34 (Suppl. 1): 49-54.
Harris, B., Gath, I., Rondouin, G. and Feuerstein, C. On time delay estimation of epileptic EEG. IEEE Trans. Biomed. Eng., 1994, 41: 820-829.
Hughes, J.R., Taber, J.E. and Fino, J.J. The effect of spikes and spike-free epochs on topographic brain maps. Clin. Electroenceph., 1991, 22: 150-160.
Iasemidis, L.D., Olson, L.D., Savit, R.S. and Sackellares, J.C. Time dependencies in the occurrences of epileptic seizures. Epilepsy Res., 1994, 17: 81-94.
Ives, J.R. and Schomer, D.L. A 6-pole filter for improving the readability of muscle contaminated EEGs. Electroenceph. clin. Neurophysiol., 1988, 69: 486-490.
Jayakar, P., Duchowny, M.S., Resnick, T.J. and Alvarez, L.A. Localization of epileptogenic foci using a simple reference-subtraction montage to document small interchannel time differences. J. Clin. Neurophysiol., 1991, 8: 212-215.
Klein, S.A. Inverting a Laplacian topography map. Brain Topogr., 1993, 6: 79-82.
Koffler, D.J. and Gotman, J. Automatic detection of spike-and-wave bursts in ambulatory EEG recordings. Electroenceph. clin. Neurophysiol., 1985, 61: 165-180.
Koles, J.K. Trends in EEG source localization. Electroenceph. clin. Neurophysiol., 1998, 106: 127-137.
Koles, Z.J., Lind, J.C. and Soong, A.C. Spatio-temporal decomposition of the EEG: a general approach to the isolation and localization of sources. Electroenceph. clin. Neurophysiol., 1995, 95: 219-230.
Ktonas, P.Y. Automated spike and sharp wave detection. In: A.S. Gevins and A. Remond (Eds.), Methods of Analysis of Brain Electrical and Magnetic Signals [EEG Handbook, Revised Series, Vol. 1]. Elsevier, Amsterdam, 1987, pp. 211-242.
Lagerlund, T.D., Sharbrough, F.W., Busacker, N.E. and Cicora, K.M. Interelectrode coherences from nearest-neighbor and spherical harmonic expansion computation of Laplacian of scalp potential. Electroenceph. clin. Neurophysiol., 1995, 95: 178-188.
Murro, A.M., King, D.W., Smith, J.R., Gallagher, B.B., Flanigin, H.F. and Meador, K. Computerized seizure detection of complex partial seizures. Electroenceph. clin. Neurophysiol., 1991, 79: 330-333.
Nakasato, N., Levesque, M.F., Barth, D.S., Baumgartner, C., Rogers, R.L. and Sutherling, W.W. Comparisons of MEG, EEG, and ECoG source localization in neocortical partial epilepsy in humans. Electroenceph. clin. Neurophysiol., 1994, 91: 171-178.
Nuwer, M.R. Quantitative EEG: I. Techniques and problems of frequency analysis and topographic mapping. J. Clin. Neurophysiol., 1988, 5: 1-43.
Nuwer, M. Assessment of digital EEG, quantitative EEG, and EEG brain mapping: report of the American Academy of Neurology and the American Clinical Neurophysiology Society. Neurology, 1997, 49: 277-292.
Penczek, P., Grochulski, W., Grzyb, J. and Kowalczyk, M. The use of a multi-channel Kalman filter algorithm in structural analysis of the epileptic EEG. Int. J. Bio-Med. Comput., 1987, 20: 135-151.
Pijn, J.P.M. Quantitative Evaluation of EEG Signals in Epilepsy. University of Amsterdam, Amsterdam, 1990.
Press, W.H., Teukolsky, S.A., Vetterling, W.T. and Flannery, B.P. Numerical Recipes: The Art of Scientific Programming, 2nd edn. Cambridge University Press, Cambridge, 1992.
Pritchard, W.S. and Duke, D.W. Measuring chaos in the brain: a tutorial review of EEG dimension estimation. Brain Cognit., 1995, 27: 353-397.
Reid, C.E. and Passin, T.B. Signal Processing in C. Wiley, New York, 1992.
Roschke, J., Fell, J. and Beckmann, P. The calculation of the first positive Lyapunov exponent in sleep EEG data. Electroenceph. clin. Neurophysiol., 1993, 86: 348-352.
Scherg, M. and Ebersole, J.S. Models of brain sources. Brain Topogr., 1993, 5: 419-423.
Scherg, M. and Ebersole, J.S. Brain source imaging of focal and multifocal epileptiform EEG activity. Neurophysiol. Clin., 1994, 24: 51-60.
Senhadji, L., Dillenseger, J.L., Wendling, F., Rocha, C. and Kinie, A. Wavelet analysis of EEG for three-dimensional mapping of epileptic events. Ann. Biomed. Eng., 1995, 23: 543-552.
Shibasaki, H., Yamashita, Y., Tobimatsu, S. and Neshige, R. Electroencephalographic correlates of myoclonus. Adv. Neurol., 1986, 43: 357-372.
Swartz, B.E. The advantages of digital over analog recording techniques. Electroenceph. clin. Neurophysiol., 1998, 106: 113-117.
Thickbroom, G.W., Davies, H.D., Carroll, W.M. and Mastaglia, F.L. Averaging, spatio-temporal mapping and dipole modelling of focal epileptic spikes. Electroenceph. clin. Neurophysiol., 1986, 64: 274-277.
Tseng, S.Y., Chong, F.C., Chen, R.C. and Kuo, T.S. Source localization of averaged and single EEG spikes using the electric dipole model. Med. Eng. Phys., 1995, 17: 64-70.
Zhang, Z. A fast method to compute surface potentials generated by dipoles within multilayer anisotropic spheres. Phys. Med. Biol., 1995, 40: 335-349.
Zhou, H. and van Oosterom, A. Computation of the potential distribution in a four-layer anisotropic concentric spherical volume conductor. IEEE Trans. Biomed. Eng., 1992, 9: 154-158.
