EEG Signals Classification Using a Committee Neural Network

Index Terms- Autoregressive Coefficients, Artificial Neural Network, Discrete Wavelet Transform.

Introduction

EEG is the recording of the self-generated electrical activity of the brain over a short period of time. It is believed that EEG signals represent not only brain activity but also the state of the whole body, which provides a prime motivation for applying advanced digital signal processing techniques to EEG signals. The use of EEG has several advantages: it has high temporal resolution, it measures electrical activity directly, and it follows a non-invasive procedure. EEG signals are used in various applications such as the diagnosis of neurological diseases, the characterization of seizures for treatment, and monitoring the depth of anesthesia. Many researchers have turned their attention to this topic because of the diagnostic applications of EEG, and this paper is likewise based on the diagnostic application of EEG signals. In this paper we present a new classification method using a committee neural network that achieves better classification accuracy.


Fig. 1 [1] shows the basic block diagram of EEG signal processing. For the classification task we have used the raw EEG signals available at [2].

From the five data sets available we selected three sets (set A, set D and set E). In set A the EEG signals were recorded from healthy volunteers. In set D the recordings were taken from within the epileptogenic zone but during a seizure-free interval, while set E contained only seizure activity [ ]. For each of these data sets we extracted features using the discrete wavelet transform and the autoregressive (AR) coefficients method. After feature extraction, the different data sets were classified using a committee neural network trained with the back-propagation algorithm.

To reduce the dimensionality of the features we have used a Fisher's ratio (F-ratio) based optimization technique.

In Section 2 the feature extraction methods are discussed; in Section 3 we present our proposed classification technique and the F-ratio based technique used to reduce the computational complexity. In Section 4 the experimental results are shown. Finally, in the last section we conclude our work.

Fig. 1. Block diagram of EEG signal processing

Theoretical Background

2.1 Feature Extraction Using the Wavelet Transform

The wavelet transform gives a multi-resolution description of a signal. It addresses the problem of non-stationary signals and is therefore particularly suited to feature extraction from EEG signals. It provides good time resolution at high frequencies and better frequency resolution at low frequencies, because the transform is computed using a mother wavelet and basis functions generated from the mother wavelet through scaling and translation operations. Hence it has a varying window size that is wide at low frequencies and narrow at high frequencies, thus providing optimal resolution at all frequencies.

The continuous wavelet transform (CWT) is defined as

CWT(τ, s) = (1/√|s|) ∫ x(t) ψ*((t - τ)/s) dt

where x(t) is the signal to be analyzed, ψ(t) is the mother wavelet or basis function, τ is the translation parameter and s is the scale parameter.

The computation of the CWT consumes a lot of time and resources and produces a large amount of data, so the discrete wavelet transform (DWT), which is based on sub-band coding, is used instead, as it gives a fast computation of the wavelet transform. In the DWT, the time-scale representation of the signal is obtained using digital filtering techniques. The approach for the multi-resolution decomposition of a signal x[n] is as follows. The DWT is computed by successive low-pass and high-pass filtering of the signal x[n]. Each step consists of two digital filters and two downsamplers by 2. The high-pass filter g[.] is the discrete mother wavelet and the low-pass filter h[.] is its mirror version. At each level, the downsampled outputs of the high-pass filter produce the detail coefficients, and those of the low-pass filter give the approximation coefficients. The approximation coefficients are further decomposed, and the procedure is continued in the same way.
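As an illustration, this sub-band decomposition can be sketched in Python with the PyWavelets package (the present work used MATLAB); the random test signal, wavelet and decomposition depth below are assumptions chosen only for demonstration.

```python
import numpy as np
import pywt

# illustrative stand-in for one recorded EEG segment (not real data)
x = np.random.randn(256)

# wavedec performs the successive high-pass/low-pass filtering and
# downsampling by 2 described above; for level=4 it returns [A4, D4, D3, D2, D1]
coeffs = pywt.wavedec(x, wavelet='db2', level=4)
approx, details = coeffs[0], coeffs[1:]

for level, d in enumerate(reversed(details), start=1):
    print(f"detail D{level}: {d.size} coefficients")
print(f"approximation A4: {approx.size} coefficients")
```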

Spectral analysis of EEG signals using the WT can compress a large number of data points into a few features that characterize the signal's behaviour. This is important for recognition and diagnostic purposes.

2.2 Feature Extraction Using AR Coefficients

We computed the autoregressive (AR) power spectral density estimates of the EEG signals of the selected sets. The power spectral density (PSD) describes the distribution of power with respect to frequency. The PSD of a random stationary signal can be expressed through polynomials A(z) and B(z) whose roots fall inside the unit circle in the z-plane [ub]. The autoregressive coefficients are very useful features because they compactly represent the PSD of the signal. The AR coefficients can be obtained by solving the linear equations of the system [df]. The AR model is a model of a stationary stochastic process. The AR model of a signal {x(n)} can be written as

x(n) = a1 x(n-1) + a2 x(n-2) + ... + aL x(n-L) + e(n)

where e(n) is a stationary white-noise process with zero mean and variance σ². The coefficients ai are the autoregressive parameters of the model, and L is the model order. Since the method characterizes the input data using an all-pole model, the correct choice of the model order is important: a value that is too large or too small gives a poor estimate of the PSD. Any stationary stochastic process can be modelled using an AR model, and the spectrum of the process can be given as

P(f) = σ² / |1 - Σ_{k=1..L} ak e^(-j2πfk)|²
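As a small illustration, the spectrum implied by a given set of AR coefficients can be evaluated numerically as sketched below; the order-2 coefficients and noise variance are made-up values, and the sign convention matches the model equation above.

```python
import numpy as np

def ar_spectrum(a, sigma2, n_freq=256):
    """Evaluate P(f) = sigma2 / |1 - sum_k a_k exp(-j 2 pi f k)|^2 on [0, 0.5)."""
    freqs = np.arange(n_freq) / (2.0 * n_freq)   # normalized frequency axis
    k = np.arange(1, len(a) + 1)                 # lags 1..L
    denom = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ np.asarray(a, dtype=float)
    return freqs, sigma2 / np.abs(denom) ** 2

# made-up order-2 AR model, for illustration only
freqs, psd = ar_spectrum(a=[0.75, -0.5], sigma2=1.0)
```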

Various methods are available for parametric spectral modelling, such as the moving-average (MA) model, the autoregressive moving-average (ARMA) model and Burg's algorithm [7]. The ARMA model is normally used to obtain good accuracy. Burg's algorithm estimates the reflection coefficients; it can be used to fit a pth-order autoregressive (AR) model to the input signal x by minimizing, in the least-squares sense, the forward and backward prediction errors while constraining the AR parameters to satisfy the Levinson-Durbin recursion [8].

The Burg method is a recursive procedure.

In this paper we have followed Burg's method to find the AR coefficients, and the model order is taken to be equal to 10.
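A minimal NumPy sketch of the Burg recursion is given below (the coefficients in this work were actually computed with a MATLAB toolbox); it follows the standard forward/backward prediction-error formulation and is intended only as an illustration.

```python
import numpy as np

def burg_ar(x, order):
    """Estimate AR coefficients of x with Burg's recursion.

    Returns a vector a of length `order`, the tail of the prediction-error
    filter [1, a1, ..., ap], so the all-pole model reads
    x(n) ~ -a1*x(n-1) - ... - ap*x(n-p) + e(n).
    """
    x = np.asarray(x, dtype=float)
    a = np.zeros(0)
    f, b = x[1:].copy(), x[:-1].copy()   # forward / backward prediction errors
    for _ in range(order):
        # reflection coefficient minimizing the forward + backward error power
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        # Levinson-Durbin update of the AR coefficients
        a = np.concatenate((a + k * a[::-1], [k]))
        # update the error sequences and drop the samples that leave the window
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a

# order-10 model of a synthetic segment, mirroring the order used in this paper
segment = np.random.randn(256)
ar_coeffs = burg_ar(segment, order=10)
```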

2.3 Neural Network

Artificial neural networks have been used successfully for pattern recognition in various disciplines. The network first undergoes a training session in which training patterns, each represented as a feature vector, are repeatedly fed into the network together with the class to which each pattern belongs. The network learns from the training vectors and generalizes by classifying patterns not encountered during the training phase. The multilayer perceptron has been the most popular neural structure for classification purposes, since it can separate classes that are not linearly separable when trained in a supervised manner with the well-known error back-propagation algorithm, which is based on the error-correction rule. The basic structure of a multilayer perceptron is shown in Fig. . It consists of (1) an input layer whose source nodes receive the activation patterns, (2) one or more hidden layers whose hidden neurons extract higher-order statistics, and (3) an output layer that provides the overall response of the network to the input vectors. The outputs of each layer are fed to the next layer.

The back-propagation algorithm involves two steps. First, the input pattern is applied to the network, the signal is propagated through the layers, and the output of each layer is computed. The resulting output at the output layer is compared with the target value, producing an error signal at each output unit,

ej(n) = dj(n) - yj(n), where dj(n) is the desired output at the jth output node and yj(n) is the output of the network. The cost function is given as

E(n) = (1/2) Σj ej²(n)

The objective is to adjust the free parameters of the network so as to minimize E(n). During this forward pass the synaptic weights remain unchanged. In the backward pass the error signal is propagated back through the network layer by layer, and the local gradient is computed at each layer. This recursive process allows the synaptic weights of the network to be adjusted in accordance with the delta rule. Both passes are repeated iteratively until the performance of the network reaches the required goal.
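The two passes can be summarized in the following NumPy sketch of a single training step for a one-hidden-layer perceptron with sigmoid units; the activation choice, learning rate and layer sizes are illustrative assumptions rather than the exact configuration used in this work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, d, W1, b1, W2, b2, eta=0.1):
    """One forward pass and one backward pass for a single-hidden-layer MLP."""
    # forward pass: compute the output of every layer (weights unchanged)
    h = sigmoid(W1 @ x + b1)
    y = sigmoid(W2 @ h + b2)
    e = d - y                      # error signal e_j(n) = d_j(n) - y_j(n)
    cost = 0.5 * np.sum(e ** 2)    # cost function E(n)
    # backward pass: local gradients, computed layer by layer
    delta_out = e * y * (1.0 - y)
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # delta-rule adjustment of the synaptic weights (gradient descent)
    W2 += eta * np.outer(delta_out, h)
    b2 += eta * delta_out
    W1 += eta * np.outer(delta_hid, x)
    b1 += eta * delta_hid
    return cost
```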

Proposed Technique

3.1 Committee Neural Network

A committee neural network is an approach that reaps the benefits of its individual members. It has a parallel structure that produces a final output [ ] by combining the results of its member neural networks. The proposed technique consists of three steps: (1) selection of appropriate inputs for the individual members of the committee, (2) training of each member, and (3) decision making based on the majority opinion.

The available data are divided into training and testing data. Features were extracted from the training data using the wavelet transform and the AR coefficients. The input feature set is divided equally among all the neural networks for training, and the different networks have different numbers of neurons and different initial weights. After the training phase is completed, the networks are tested with the testing data. All the neural networks were trained with the gradient-descent back-propagation algorithm using the MATLAB software package. Out of the different networks employed in the initial training stage, the three best-performing networks were selected to form the committee. For classification, the majority decision of the committee forms the final output. Fig. shows the block diagram of the committee neural network.
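A sketch of this procedure in Python with scikit-learn is given below (the networks in this work were trained in MATLAB); the size of the candidate pool, the hidden-layer sizes and the way the feature columns are split among the members are assumptions made only for illustration, and integer-coded class labels are assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_committee(X_train, y_train, X_test, y_test, feature_splits, seeds=range(6)):
    """Train one candidate MLP per seed on its share of the feature columns and
    recruit the three best performers on the test data into the committee."""
    candidates = []
    for i in seeds:
        cols = feature_splits[i % len(feature_splits)]   # this member's feature subset
        net = MLPClassifier(hidden_layer_sizes=(3 * len(cols),),
                            solver='sgd', max_iter=2000, random_state=i)
        net.fit(X_train[:, cols], y_train)
        acc = net.score(X_test[:, cols], y_test)
        candidates.append((acc, cols, net))
    best = sorted(candidates, key=lambda c: c[0], reverse=True)[:3]
    return [(cols, net) for _, cols, net in best]

def committee_predict(committee, X):
    """Fuse the three member decisions by a majority vote over the class labels."""
    votes = np.stack([net.predict(X[:, cols]) for cols, net in committee])
    return np.array([np.bincount(v.astype(int)).argmax() for v in votes.T])
```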

3.2 F-Ratio Based Optimization Technique

The F-ratio is a statistical measure used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled [wiki]. Consider multi-cluster data as shown in Fig. 2. The F-ratio can be formulated as

F-ratio = variance of the means between the clusters / average variance within the clusters

Suppose there are k clusters, each having n data points. If xij is the ith element of the jth class, then the mean µj of the jth class can be expressed as

µj = (1/n) Σ_{i=1..n} xij

The mean of all the µj is called the global mean of the data and is denoted µ0:

µ0 = (1/k) Σ_{j=1..k} µj

The F-ratio can then be expressed as

F-ratio = [ (1/k) Σ_{j=1..k} (µj - µ0)² ] / [ (1/k) Σ_{j=1..k} (1/n) Σ_{i=1..n} (xij - µj)² ]

If the F-ratio increases, the clusters move away from each other or the cluster size shrinks. We can apply this F-ratio based technique to EEG signals in order to reduce the dimensionality of the feature vector.

Fig. 2. Diagram of multi-cluster data
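The formulation above translates directly into a short Python sketch; per-class statistics are computed with NumPy (so equally sized clusters are not strictly required), integer class labels are assumed, and the snippet is illustrative only.

```python
import numpy as np

def f_ratio(feature, labels):
    """F-ratio of one feature column: variance of the class means divided by
    the average variance within the classes."""
    classes = np.unique(labels)
    class_means = np.array([feature[labels == c].mean() for c in classes])
    within_var = np.array([feature[labels == c].var() for c in classes])
    return class_means.var() / within_var.mean()

def rank_features(X, labels):
    """Rank the feature columns of X (samples x features) by decreasing F-ratio."""
    scores = np.array([f_ratio(X[:, j], labels) for j in range(X.shape[1])])
    order = np.argsort(scores)[::-1]
    return order, scores
```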

Experimental Results

4.1 Feature Extraction

Table 2. F-ratio of each feature coefficient, sorted in decreasing order

Serial No.  Coefficient No.  F-ratio     Serial No.  Coefficient No.  F-ratio
1           21               1.082       17          1                0.226
2           22               0.9316      18          13               0.1972
3           12               0.5243      19          14               0.1958
4           8                0.4748      20          31               0.1634
5           24               0.4714      21          28               0.1555
6           9                0.4464      22          27               0.0459
7           6                0.4073      23          20               0.0444
8           10               0.401       24          26               0.0407
9           23               0.3981      25          17               0.0036
10          4                0.3915      26          18               0.0023
11          25               0.3708      27          19               0.0011
12          5                0.3699      28          11               0.0004
13          30               0.365       29          15               0.0003
14          16               0.301       30          3                0.0002
15          29               0.2875      31          7                0.0001
16          2                0.2766

From the available data [ ], a rectangular window of 256 discrete samples was selected to form a single EEG segment. For the analysis of signals using the WT, selection of the appropriate wavelet and of the number of decomposition levels is very important. The wavelet coefficients were computed using the Daubechies wavelet of order 2 (db2), because its smoothing features are well suited to detecting changes in the EEG signal. In the present study the EEG signals were decomposed into the details D1-D4 and one approximation A4. The four detail sub-bands give 247 coefficients (129+66+34+18) and the approximation gives 18, so a total of 265 coefficients were obtained for each segment.

To reduce the number of features, the following statistics were used:

1. Maximum of the wavelet coefficients in each sub-band.

2. Minimum of the wavelet coefficients in each sub-band.

3. Mean of the wavelet coefficients in each sub-band.

4. Standard deviation of the wavelet coefficients in each sub-band.

Thus the dimension of the DWT feature set is 20 (four statistics for each of the five sub-bands). The AR coefficients were obtained using a MATLAB toolbox. Since the model order is 10, we have 11 AR coefficients, which are appended to the DWT features, so the total feature dimension is 31 (20 + 11).
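Putting the two extractors together, a sketch of the 31-dimensional feature vector for one segment is given below; it reuses the burg_ar sketch from Section 2.2 and PyWavelets in place of the MATLAB toolbox, and keeping the leading coefficient of the prediction-error filter so that an order-10 model yields 11 values is an assumption made for illustration.

```python
import numpy as np
import pywt

def eeg_features(segment, ar_order=10):
    """31-dimensional feature vector for a 256-sample EEG segment:
    4 statistics x 5 sub-bands (A4, D4-D1) = 20 DWT features, plus 11 AR values."""
    subbands = pywt.wavedec(segment, wavelet='db2', level=4)   # [A4, D4, D3, D2, D1]
    dwt_feats = []
    for band in subbands:
        dwt_feats += [band.max(), band.min(), band.mean(), band.std()]
    # burg_ar() is the Burg sketch from Section 2.2; prepending the leading 1 of
    # the prediction-error filter gives 11 values for an order-10 model
    ar_feats = np.concatenate(([1.0], burg_ar(segment, ar_order)))
    return np.concatenate((dwt_feats, ar_feats))               # 20 + 11 = 31
```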

4.2 Application of the Committee Neural Network to EEG Signals

The committee neural network was formed by three independent members, each trained with a different feature set. Prior to recruitment into the committee, many networks with different numbers of hidden neurons and different initial weights were trained, and the three best-performing ones were selected. The decision fusion was obtained by majority voting; an odd number of networks is required in order to reach a firm majority decision. The resulting accuracies of the individual networks and of the committee are shown in Table 1.

Table 1. Accuracy using the committee neural network

ANN                           Accuracy (%)
NN1                           93.28
NN2                           94.52
NN3                           92.76
CNN                           95
CNN (after target addition)   95.36

4.3 Reduction of the Feature Dimension Using the F-Ratio

The F-ratio corresponding to each feature was computed as shown in Table 2.


Now, in order to reduce the dimension of the feature vector, we deleted the features having the lowest F-ratios. Table 3 shows the accuracy after the deletion of features.

Table 3. Classification accuracy after successive deletion of low F-ratio features

Serial No.  No. of coefficients taken  Network structure  Accuracy (%)
1           31                         31-93-3            95.31
2           30                         30-90-3            94.91
3           29                         29-87-3            95.00
4           28                         28-84-3            94.71
5           27                         27-81-3            94.79
6           26                         26-78-3            95.03
7           25                         25-75-3            95.32
8           24                         24-72-3            95.02
9           23                         23-69-3            95.12
10          22                         22-66-3            95.37
11          21                         21-63-3            94.95
12          20                         20-60-3            95.16
13          19                         19-57-3            95.45
14          18                         18-54-3            95.29
15          17                         17-51-3            95.70
16          16                         16-48-3            95.83
17          15                         15-45-3            95.15
18          14                         14-42-3            95.04
19          13                         13-39-3            94.35

Hence, on the basis of the F-ratio based optimization technique, we were able to delete 18 features, reducing the computational complexity without significantly affecting the accuracy.

Conclusion