Parameter Tuning for Enhancing Inter-Subject Emotion Classification in Four Classes for VR-EEG Predictive Analytics

  • Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo

Abstract

This research examines the potential of classifying emotions using a wearable EEG headset paired with a virtual environment to stimulate user responses. Prior work on emotion classification has generally relied on a clinical-grade EEG headset combined with a 2D monitor for stimulus presentation, which may introduce additional artifacts or inaccurate readings into the dataset: users are often unable to give their full attention to the stimuli, even when the stimuli presented should have been effective in provoking emotional reactions. Furthermore, a clinical-grade EEG headset requires a lengthy setup, with issues such as hair preventing the electrodes from collecting brainwave signals or electrodes coming loose, requiring additional time to resolve. During such a lengthy setup, the user may experience fatigue and become incapable of responding naturally to the emotion presented by the stimuli. Therefore, this research introduces the use of a wearable low-cost EEG headset with dry electrodes, which requires only a trivial amount of setup time, together with a Virtual Reality (VR) headset that presents the emotional stimuli in an immersive VR environment, paired with earphones to provide the full immersive experience needed to evoke emotion. The 360° video stimuli are designed and stitched together according to the arousal-valence space (AVS) model, with each quadrant having an 80-second stimulus presentation period followed by a 10-second rest period between quadrants. The EEG dataset is collected through the wearable low-cost EEG headset using four channels located at TP9, TP10, AF7, and AF8. The collected dataset is then fed into machine learning algorithms, namely KNN, SVM, and Deep Learning, with an inter-subject test approach using 10-fold cross-validation.
The results show that an SVM using Radial Basis Function Kernel 1 achieved the highest accuracy at 85.01%. This suggests that a wearable low-cost EEG headset, despite using only a very limited number of electrodes and producing a significantly lower-resolution signal than clinical-grade equipment, is highly promising as an emotion classification BCI tool, and may thus open up myriad practical and affordable solutions in the medical, education, military, and entertainment domains.
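The classification setup described above (an RBF-kernel SVM evaluated with 10-fold cross-validation on features from the four EEG channels) can be sketched as follows. This is a minimal illustration only: the feature matrix here is synthetic placeholder data, and the `C` and `gamma` values are illustrative assumptions, not the tuned parameters from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per EEG window, with features
# assumed to be derived from the four channels (TP9, TP10, AF7, AF8).
n_windows, n_features = 400, 16
X = rng.normal(size=(n_windows, n_features))

# Four emotion classes, one per arousal-valence quadrant.
y = rng.integers(0, 4, size=n_windows)

# RBF-kernel SVM; C and gamma are illustrative, not the paper's values.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale"))

# 10-fold cross-validation, as used in the study.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"Mean 10-fold accuracy: {scores.mean():.3f}")
```

With real EEG features in place of the random data, the same pipeline would report the per-fold and mean accuracies used to compare classifiers.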

Published
2020-04-14
How to Cite
Nazmi Sofian Suhaimi, James Mountstephens, Jason Teo. (2020). Parameter Tuning for Enhancing Inter-Subject Emotion Classification in Four Classes for VR-EEG Predictive Analytics. International Journal of Advanced Science and Technology, 29(6s), 1483-1491. Retrieved from http://sersc.org/journals/index.php/IJAST/article/view/9288