1. TRINITY CENTRE FOR BIOENGINEERING
Aims
The aim of this study is to develop a method that uses EEG recordings
and features of audio stimuli to monitor, in real time, listeners'
engagement with speech.
Introduction
Many work environments and daily activities require our full attention
for long periods of time. Such tasks are usually characterized by long,
monotonous observation, which leads to a progressive decline in
attention. Systems that monitor sustained attention in real time could
address the problems caused by this so-called vigilance decrement.
Attention decoding
Current methods rely on event-related potentials and on the presence of
different brain waves [1][2]. However, these approaches do not allow
real-time monitoring.
Solution: Recently, a method was used successfully to decode attention
from a single, unaveraged 60 s EEG recording [3]. It builds on the
earlier finding that speech-envelope frequencies between 2 and 8 Hz are
linearly related to the EEG.
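A minimal sketch of this stimulus-reconstruction ("backward model") approach, in the spirit of [3]: a ridge-regression decoder maps time-lagged EEG channels onto the 2-8 Hz speech envelope, and the correlation between the reconstructed and actual envelopes serves as a per-trial attention index. The array shapes, lag window, and ridge parameter below are illustrative assumptions, not the study's actual settings.

```python
# Sketch of backward-model (stimulus reconstruction) decoding, in the
# spirit of O'Sullivan et al. [3]. Shapes, lag window, and ridge
# parameter are illustrative assumptions only.
import numpy as np

def lag_matrix(eeg, n_lags):
    """Stack time-lagged copies of the EEG (n_samples x n_channels)
    into a design matrix of shape (n_samples, n_channels * n_lags)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_channels:(lag + 1) * n_channels] = eeg[:n_samples - lag]
    return X

def train_decoder(eeg, envelope, n_lags=16, ridge=1.0):
    """Ridge regression from time-lagged EEG to the speech envelope."""
    X = lag_matrix(eeg, n_lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruction_accuracy(eeg, envelope, weights, n_lags=16):
    """Pearson correlation between the reconstructed and actual
    envelope, used as the per-trial attention/engagement index."""
    recon = lag_matrix(eeg, n_lags) @ weights
    return np.corrcoef(recon, envelope)[0, 1]
```

In practice the decoder would be trained on held-out data and applied to each 60 s trial; here only the core linear algebra is shown.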
Results to date
Behavioural analysis shows a significant difference in performance
between the 20 most successful trials and the 20 least successful
ones.
It is expected that the accuracy with which the speech can be
reconstructed will index a subject's engagement with the audio
stimuli.
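The top-20 versus bottom-20 comparison could be tested, for example, with a two-sample permutation test on the difference of means; the poster does not state which statistical test was actually used, and the scores below are purely illustrative.

```python
# Sketch of the top-20 vs. bottom-20 trial comparison. The test used
# in the study is not stated on the poster; a generic two-sample
# permutation test on the difference of means is shown here.
import numpy as np

def split_top_bottom(scores, k=20):
    """Return the k best and k worst trial scores."""
    ordered = np.sort(np.asarray(scores, float))[::-1]  # best first
    return ordered[:k], ordered[-k:]

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test on mean(a) - mean(b).
    Returns (observed difference, p-value)."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:len(a)].mean() - pooled[len(a):].mean()
        if abs(diff) >= abs(observed):
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)
```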
Plan for June, July, August
The next step is to analyse the temporal and spatial properties of the
brain processes involved in selective attention and to compare and
contrast them with those reported in other studies.
The same analysis could be done for other speech features, such as
spectrograms and phonemes.
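The amplitude-envelope feature used so far (cf. Figure 4) can be computed, for example, via an FFT-based analytic signal followed by a 2-8 Hz band-pass; the study's exact preprocessing pipeline is not specified on the poster, so this is only one plausible sketch.

```python
# One plausible way to compute the amplitude-envelope speech feature
# (FFT-based analytic signal, then a 2-8 Hz band-pass). The poster does
# not specify the study's actual preprocessing; treat this as a sketch.
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same construction as scipy's hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def bandpass(x, fs, lo=2.0, hi=8.0):
    """Crude FFT-mask band-pass keeping only |f| in [lo, hi] Hz."""
    freqs = np.abs(np.fft.fftfreq(len(x), d=1.0 / fs))
    spec = np.fft.fft(x)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.ifft(spec).real

def speech_envelope(audio, fs):
    """Broadband amplitude envelope, band-passed to the 2-8 Hz range
    that relates linearly to the EEG [3]."""
    return bandpass(np.abs(analytic_signal(audio)), fs)
```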
References
1. Martel, A., Dähne, S. & Blankertz, B., 2014. EEG predictors of covert
vigilant attention. Journal of Neural Engineering, 11(3).
2. Eichele, H., Juvodden, H. T., Ullsperger, M. & Eichele, T., 2010.
Mal-adaptation of event-related EEG responses preceding performance
errors. Frontiers in Human Neuroscience, vol. 4.
3. O'Sullivan, J. A. et al., 2014. Attentional Selection in a Cocktail Party
Environment Can Be Decoded from Single-Trial EEG. Cerebral Cortex,
25(7), pp. 1697-1706.
4. Power, A. J. et al., 2012. At what time is the cocktail party? A late locus
of selective attention to natural speech. European Journal of
Neuroscience, vol. 35, pp. 1497-1503.
Methodology
In the experiment, a visual stimulus and an audio stimulus are
presented simultaneously. The visual task (the Mackworth clock) has
two levels: easy and hard. Since the subject is instructed to attend
to both tasks simultaneously, the two levels of the visual task are
expected to disengage the subject's attention from the audio task to
different degrees.
Real-time monitoring of a listener's engagement with speech
Figure 3. Average performance of all subjects for the audio task;
only the 20 trials with the best performance (78.20%) versus the 20
trials with the worst performance (37.13%) were included.
Figure 2. Illustration of decoding strategy
Stefan Dukic and Edmund C. Lalor
Figure 5. The effect of attention. Spatial distributions show the
activation for (A) the left-attended and (B) the right-attended
stories, in the interval 195–230 ms [4].
Figure 4. Reconstruction accuracy of the audio stimuli using the
envelope as a speech feature. As expected, reconstruction accuracy
drops with audio-task performance. On the x-axis, the trials are
ordered from the trial with the best audio performance to the trial
with the worst.
Figure 1. Example of the vigilance decrement