Paper Title


A Two-Stage Efficient 3-D CNN Framework for EEG Based Emotion Recognition

Authors

Ye Qiao, Mohammed Alnemari, Nader Bagherzadeh

Abstract


This paper proposes a novel two-stage framework for emotion recognition using EEG data that outperforms state-of-the-art models while keeping the model size small and computationally efficient. The framework consists of two stages: the first stage involves constructing efficient models, named EEGNet, which are inspired by state-of-the-art efficient architectures and employ inverted residual blocks containing depthwise separable convolutional layers. On both the valence and arousal labels, the EEGNet models achieve average classification accuracies of 90%, 96.6%, and 99.5% with only 6.4k, 14k, and 25k parameters, respectively. In terms of accuracy and storage cost, these models outperform the previous state-of-the-art results by up to 9%. In the second stage, we binarize these models to compress them further and deploy them easily on edge devices. Binary neural networks (BNNs) typically degrade model accuracy; in this paper, we improve the binarized EEGNet models by introducing three novel methods, achieving a 20% improvement over the baseline binary models. The proposed binarized EEGNet models achieve accuracies of 81%, 95%, and 99% with storage costs of 0.11 Mbits, 0.28 Mbits, and 0.46 Mbits, respectively. These models enable deploying accurate human emotion recognition systems in edge environments.
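The small parameter counts and storage costs quoted in the abstract follow from two standard efficiency techniques: depthwise separable convolutions (as used in the inverted residual blocks) and 1-bit weight binarization. The sketch below illustrates the arithmetic with hypothetical layer sizes; it is not the paper's actual EEGNet architecture, and the exact layer configuration is an assumption for illustration only.

```python
# Illustrative sketch (hypothetical layer sizes, not from the paper):
# why depthwise separable convolutions shrink parameter counts, and
# how binarizing weights to 1 bit shrinks storage versus float32.

def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k 2-D convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Example layer: 32 input channels, 64 output channels, 3x3 kernel.
std = standard_conv_params(32, 64, 3)         # 18432 parameters
dws = depthwise_separable_params(32, 64, 3)   # 2336 parameters
print(f"standard: {std}, separable: {dws}, ratio: {std / dws:.1f}x")

# Storage: 1-bit binary weights are ~32x smaller than float32 weights.
# Using the largest EEGNet variant's parameter count from the abstract:
params = 25_000
print(f"float32: {params * 32 / 1e6:.2f} Mbit, binary: {params / 1e6:.3f} Mbit")
```

The reported binarized storage costs (e.g., 0.46 Mbits for the 25k-parameter model) sit between the pure-float32 and pure-1-bit figures above, consistent with common BNN practice of keeping some layers at higher precision.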
