Download Automated EEG-Based Diagnosis of Neurological Disorders - Inventing the Future of Neurology - H. Adeli, et. al., (CRC, 2010) WW PDF

Title: Automated EEG-Based Diagnosis of Neurological Disorders - Inventing the Future of Neurology - H. Adeli, et. al., (CRC, 2010) WW
Tags: Medical
Language: English
File Size: 7.2 MB
Total Pages: 419
Table of Contents
Cover
Title Page
ISBN 9781439815311
Preface
Acknowledgments
About the Authors
List of Figures
List of Tables
Contents
I. Basic Concepts
	1. Introduction
	2. Time-Frequency Analysis: Wavelet Transforms
		2.1 Signal Digitization and Sampling Rate
		2.2 Time and Frequency Domain Analyses
		2.3 Time-Frequency Analysis
			2.3.1 Short Time Fourier Transform (STFT)
			2.3.2 Wavelet Transform
		2.4 Types of Wavelets
		2.5 Advantages of the Wavelet Transform
	3. Chaos Theory
		3.1 Introduction
		3.2 Attractors in Chaotic Systems
		3.3 Chaos Analysis
			3.3.1 Measures of Chaos
			3.3.2 Preliminary Chaos Analysis - Lagged Phase Space
			3.3.3 Final Chaos Analysis
	4. Classifier Designs
		4.1 Data Classification
		4.2 Cluster Analysis
		4.3 k-Means Clustering
		4.4 Discriminant Analysis
		4.5 Principal Component Analysis
		4.6 Artificial Neural Networks
			4.6.1 Feedforward Neural Network and Error Backpropagation
			4.6.2 Radial Basis Function Neural Network
II. Automated EEG-Based Diagnosis of Epilepsy
	5. Electroencephalograms and Epilepsy
		5.1 Spatio-Temporal Activity in the Human Brain
		5.2 EEG: A Spatio-Temporal Data Mine
		5.3 Data Mining Techniques
		5.4 Multi-Paradigm Data Mining Strategy for EEGs
			5.4.1 Feature Space Identification and Feature Enhancement Using Wavelet-Chaos Methodology
			5.4.2 Development of Accurate and Robust Classifiers
		5.5 Epilepsy and Epileptic Seizures
	6. Analysis of EEGs in an Epileptic Patient Using Wavelet Transform
		6.1 Introduction
		6.2 Wavelet Analysis of a Normal EEG
		6.3 Characterization of the 3-Hz Spike and Slow Wave Complex in Absence Seizures Using Wavelet Transforms
			6.3.1 Daubechies Wavelets
			6.3.2 Harmonic Wavelet
			6.3.3 Characterization
		6.4 Concluding Remarks
	7. Wavelet-Chaos Methodology for Analysis of EEGs and EEG Sub-Bands
		7.1 Introduction
		7.2 Wavelet-Chaos Analysis of EEG Signals
		7.3 Application and Results
			7.3.1 Description of the EEG Data Used in the Research
			7.3.2 Data Preprocessing and Wavelet Decomposition of EEG into Sub-Bands
			7.3.3 Results of Chaos Analysis for a Sample Set of Unfiltered EEGs
			7.3.4 Statistical Analysis of Results for All EEGs
		7.4 Concluding Remarks
	8. Mixed-Band Wavelet-Chaos Neural Network Methodology
		8.1 Introduction
		8.2 Wavelet-Chaos Analysis: EEG Sub-Bands and Feature Space Design
		8.3 Data Analysis
		8.4 Band-Specific Analysis: Selecting Classifiers and Feature Spaces
			8.4.1 k-Means Clustering
			8.4.2 Discriminant Analysis
			8.4.3 RBFNN
			8.4.4 LMBPNN
		8.5 Mixed-Band Analysis: Wavelet-Chaos-Neural Network
		8.6 Concluding Remarks
	9. Principal Component Analysis-Enhanced Cosine Radial Basis Function Neural Network
		9.1 Introduction
		9.2 Principal Component Analysis for Feature Enhancement
		9.3 Cosine Radial Basis Function Neural Network: EEG Classification
		9.4 Applications and Results
			9.4.1 Neural Network Training
			9.4.2 Output Encoding Scheme
			9.4.3 Comparison of Classifiers
			9.4.4 Sensitivity to Number of Eigenvectors
			9.4.5 Sensitivity to Training Size
			9.4.6 Sensitivity to Spread
		9.5 Concluding Remarks and Clinical Significance
III. Automated EEG-Based Diagnosis of Alzheimer's Disease
	10. Alzheimer's Disease and Models of Computation: Imaging, Classification, and Neural Models
		10.1 Introduction
		10.2 Neurological Markers of Alzheimer's Disease
		10.3 Imaging Studies
			10.3.1 Anatomical Imaging versus Functional Imaging
			10.3.2 Identification of Region of Interest (ROI)
			10.3.3 Image Registration Techniques
			10.3.4 Linear and Area Measures
			10.3.5 Volumetric Measures
		10.4 Classification Models
		10.5 Neural Models of Memory and Alzheimer's Disease
			10.5.1 Approaches to Neural Modeling
			10.5.2 Hippocampal Models of Associative Memory
			10.5.3 Neural Models of Progression of AD
	11. Alzheimer's Disease: Models of Computation and Analysis of EEGs
		11.1 EEGs for Diagnosis and Detection of Alzheimer's Disease
		11.2 Time-Frequency Analysis
		11.3 Wavelet Analysis
		11.4 Chaos Analysis
		11.5 Concluding Remarks
	12. A Spatio-Temporal Wavelet-Chaos Methodology for EEG-Based Diagnosis of Alzheimer's Disease
		12.1 Introduction
		12.2 Methodology
			12.2.1 Description of the EEG Data
			12.2.2 Wavelet Decomposition of EEG into Sub-Bands
			12.2.3 Chaos Analysis and ANOVA Design
		12.3 Results
			12.3.1 Complexity and Chaoticity of the EEG: Results of the Three-Way Factorial ANOVA
			12.3.2 Global Complexity and Chaoticity
			12.3.3 Local Complexity and Chaoticity
		12.4 Discussion
			12.4.1 Chaoticity versus Complexity
			12.4.2 Eyes Open versus Eyes Closed
		12.5 Concluding Remarks
IV. Third Generation Neural Networks: Spiking Neural Networks
	13. Spiking Neural Networks: Spiking Neurons and Learning Algorithms
		13.1 Introduction
		13.2 Information Encoding and Evolution of Spiking Neurons
		13.3 Mechanism of Spike Generation in Biological Neurons
		13.4 Models of Spiking Neurons
		13.5 Spiking Neural Networks (SNNs)
		13.6 Unsupervised Learning
		13.7 Supervised Learning
			13.7.1 Feedforward Stage: Computation of Spike Times and Network Error
			13.7.2 Backpropagation Stage: Learning Algorithms
	14. Improved Spiking Neural Networks with Application to EEG Classification and Epilepsy and Seizure Detection
		14.1 Network Architecture and Training
			14.1.1 Number of Neurons in Each Layer
			14.1.2 Number of Synapses
			14.1.3 Initialization of Weights
			14.1.4 Heuristic Rules for SNN Learning Algorithms
		14.2 XOR Classification Problem
			14.2.1 Input and Output Encoding
			14.2.2 SNN Architecture
			14.2.3 Convergence Criteria
			14.2.4 Type of Neuron (Excitatory or Inhibitory)
			14.2.5 Convergence Results for a Simulation Time of 50 ms
			14.2.6 Convergence Results for a Simulation Time of 25 ms
			14.2.7 Summary
		14.3 Fisher Iris Classification Problem
			14.3.1 Input Encoding
			14.3.2 Output Encoding
			14.3.3 SNN Architecture
			14.3.4 Convergence Criteria: MSE and Training Accuracy
			14.3.5 Heuristic Rules for Adaptive Simulation Time and SpikeProp Learning Rate
			14.3.6 Classification Accuracy and Computational Efficiency versus Training Size
			14.3.7 Summary
		14.4 EEG Classification Problem
			14.4.1 Input and Output Encoding
			14.4.2 SNN Architecture and Training Parameters
			14.4.3 Convergence Criteria: MSE and Training Accuracy
			14.4.4 Classification Accuracy versus Training Size and Number of Input Neurons
			14.4.5 Classification Accuracy versus Desired Training Accuracy
			14.4.6 Summary
		14.5 Concluding Remarks
	15. A New Supervised Learning Algorithm for Multiple Spiking Neural Networks
		15.1 Introduction
		15.2 Multi-Spiking Neural Network (MuSpiNN) and Neuron Model
			15.2.1 MuSpiNN Architecture
			15.2.2 Multi-Spiking Neuron and the Spike Response Model
		15.3 Multi-SpikeProp: Backpropagation Learning Algorithm for MuSpiNN
			15.3.1 MuSpiNN Error Function
			15.3.2 Error Backpropagation for Adjusting Synaptic Weights
			15.3.3 Gradient Computation for Synapses Between a Neuron in the Last Hidden Layer and a Neuron in the Output Layer
			15.3.4 Gradient Computation for Synapses Between a Neuron in the Input or Hidden Layer and a Neuron in the Hidden Layer
	16. Applications of Multiple Spiking Neural Networks: EEG Classification and Epilepsy and Seizure Detection
		16.1 Parameter Selection and Weight Initialization
		16.2 Heuristic Rules for Multi-SpikeProp
		16.3 XOR Problem
		16.4 Fisher Iris Classification Problem
		16.5 EEG Classification Problem
		16.6 Discussion and Concluding Remarks
	17. The Future
Bibliography
Index
Back Page
Document Text Contents
Page 2

AUTOMATED EEG-BASED DIAGNOSIS OF NEUROLOGICAL DISORDERS

Inventing the Future of Neurology


Page 209

FIGURE 9.6
Variations in classification accuracy of PCA-enhanced cosine RBFNN with training size
[Figure: classification accuracy (%) plotted against training size; x-axis 60 to 240 EEGs, y-axis 80 to 98%]

96.7%. This value is not reported in Table 9.1 because using the large training size k = 240 EEGs results in a small testing dataset of only 60 EEGs. A training size of k = 150 (50 EEGs from each group) is deemed sufficient to model this problem. At the same time, the plateau in classification accuracy for larger training sizes indicates that the model stabilizes beyond a certain training size.

9.4.6 Sensitivity to Spread

The spread p of the RBF usually plays an important role in determining the classification accuracy of the RBFNN (Ghosh-Dastidar et al., 2007). In such networks, a specific input is supposed to excite only a limited number of nodes in the hidden layer. When the spread is too large, all hidden nodes respond to a given input, which results in loss of classification accuracy. On the other hand, when the spread is too small, each node responds only to a very specific input and therefore is unable to classify any new input accurately.
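As a quick numerical illustration of this behavior, the following sketch (Python with NumPy; purely illustrative, since the book provides no code) evaluates the activations of a toy RBF hidden layer for a single input at three spreads. The Gaussian basis exp(-||x - c||^2 / (2p^2)) and the random centers are assumptions made for the sketch, not the authors' exact formulation.

    import numpy as np

    def rbf_hidden_responses(x, centers, spread):
        # Gaussian RBF activation of every hidden node for one input x:
        # exp(-||x - c_j||^2 / (2 * spread^2)); the spread sets how far
        # from its center a node still responds.
        d2 = np.sum((centers - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * spread ** 2))

    rng = np.random.default_rng(0)
    centers = rng.normal(size=(20, 4))   # 20 hidden nodes in a 4-D feature space
    x = rng.normal(size=4)               # one arbitrary input

    for spread in (0.1, 2.0, 100.0):
        act = rbf_hidden_responses(x, centers, spread)
        print(f"spread={spread:>5}: mean activation {act.mean():.3f}, "
              f"{np.count_nonzero(act > 0.5)} of 20 nodes above 0.5")

With a tiny spread essentially no node responds to the input, and with a very large spread essentially every node does; both extremes destroy the selectivity the hidden layer is supposed to provide.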

Page 210

FIGURE 9.7
Classification accuracy versus spread for RBFNN
[Figure: classification accuracy (%) plotted against spread; x-axis 0 to 100, y-axis 60 to 100%]

The effect of varying the spread from 10 to 100 in increments of 5 on the classical RBFNN is illustrated in Fig. 9.7. It is observed that the classification accuracy is very low for p = 10 and increases rapidly until p = 25. The maximum classification accuracy (95.1%) is obtained in the range p ∈ (25, 60); the accuracy then slowly decreases to 93.0% for p = 100, and it continues to decrease for values of p larger than 100 (not shown in Fig. 9.7).
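A sweep of this kind can be mimicked end to end on synthetic data. The sketch below (again illustrative: a plain Gaussian RBF network with randomly chosen centers and a least-squares output layer, not the chapter's PCA-enhanced cosine RBFNN or its EEG data) trains one network per spread value and reports test accuracy, which is the shape of the experiment behind Fig. 9.7.

    import numpy as np

    def hidden_layer(X, centers, spread):
        # Gaussian RBF activations: one row per sample, one column per center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * spread ** 2))

    rng = np.random.default_rng(1)
    # Toy 3-class problem standing in for the three EEG groups (300 samples).
    X = rng.normal(size=(300, 4)) + np.repeat(np.eye(3, 4) * 3.0, 100, axis=0)
    labels = np.repeat(np.arange(3), 100)
    perm = rng.permutation(300)
    train, test = perm[:150], perm[150:]           # k = 150 training samples
    y = np.eye(3)[labels[train]]                   # one-hot training targets
    centers = X[train][rng.choice(150, 20, replace=False)]  # 20 hidden nodes

    for spread in range(1, 20, 2):                 # sweep the spread
        H = hidden_layer(X[train], centers, spread)
        W, *_ = np.linalg.lstsq(H, y, rcond=None)  # linear output layer
        pred = (hidden_layer(X[test], centers, spread) @ W).argmax(axis=1)
        acc = (pred == labels[test]).mean()
        print(f"spread={spread:2d}: test accuracy = {acc:.2%}")

How strongly the accuracy varies across the sweep depends entirely on the synthetic data; the point of the sketch is the procedure, not the numbers.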

Figure 9.7 clearly shows that the classification accuracy depends significantly on the choice of the spread. Two different approaches can be used to attack this problem: (1) developing an algorithm to compute the optimum value of p automatically for a given training input set, and (2) designing a classifier that is less sensitive to the spread. An inherent disadvantage of the first approach is that the spread computation depends on the training data, which makes it sensitive to data outliers. Moreover, this approach relies on retrospective data. In a prospective clinical application, the testing input is unknown and

Page 418



Index

convergence, 278, 279, 285, 292, 299, 332, 337
error function, 265, 317
feedforward, 264
multiple synapses, 259
performance, 274
radial basis function, 257
supervised learning, 258
type of neuron, 278
unsupervised learning, 256
weight initialization, 272, 330

Spiking neuron, 201, 246, 252
	internal state, 264, 312, 315, 320
	multiple spike, 306, 340
	single spike, 257, 305
	time to first spike, 305
Sub-band analysis, 83, 120, 128, 140, 144, 223, 226
Supervised Hebbian learning, 344
Supervised learning, 164, 245
	training size, 164
Surrogate data, 218
Synapse, 58
Synaptic weight, 58
Synchronization likelihoods, 347

Takens estimator, 46
Theta neuron, 261
Time series, 5, 29, 41, 145
Time step, 262, 282
Time-domain analysis, 7
Time-frequency analysis, 11, 208
Time-frequency markers, 208
Training size, 177
Trajectory divergence, 46
Translation parameter, 15
Triangular basis function, 67, 170, 171
True neighbor, 44

Vanishing moments, 24

Wavelet
	center frequency, 20
	Coiflet, 24
	Daubechies, 20, 24, 93, 96
	Haar, 20, 24
	harmonic wavelet, 24, 97
Wavelet analysis, 93, 96, 120, 212
	decomposition level, 19, 96, 97
Wavelet coefficients, 18
Wavelet filter, 18
Wavelet function, 14, 17, 18, 24
Wavelet transform, 12, 89
	continuous wavelet transform, 14
	discrete wavelet transform, 14, 16, 17, 93
	inverse, 15
	residual error, 18
Wavelet-chaos methodology, 79, 84, 122, 144, 223

XOR problem, 275, 332
