Paper Title
Siamese Sleep Transformer For Robust Sleep Stage Scoring With Self-knowledge Distillation and Selective Batch Sampling
Paper Authors
Paper Abstract
In this paper, we propose a Siamese sleep transformer (SST) that effectively extracts features from single-channel raw electroencephalogram signals for robust sleep stage scoring. Despite significant advances in sleep stage scoring over the last few years, most studies have focused mainly on improving model performance. However, other problems remain: label bias in datasets and instability of model performance across repeated training runs. To alleviate these problems, we propose the SST, a novel sleep stage scoring model with a selective batch sampling strategy and self-knowledge distillation. To evaluate the model's robustness to label bias, we used different datasets for training and testing: the Sleep Heart Health Study and the Sleep-EDF datasets. Under this condition, the SST showed competitive performance in sleep stage scoring. In addition, we demonstrated the effectiveness of the selective batch sampling strategy, which reduced the standard deviation of performance across repeated training runs. These results suggest that the SST extracts learned features that are robust to label bias in datasets, and that the selective batch sampling strategy improves model robustness during training.
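The abstract pairs a Siamese architecture with self-knowledge distillation but does not spell out the mechanism. The PyTorch sketch below shows one plausible reading, assuming the two shared-weight branches teach each other via a temperature-softened KL term added to the usual cross-entropy. The encoder `SSTEncoder`, the noise augmentations, and the hyperparameters `T` and `alpha` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: self-knowledge distillation between the two branches of a
# Siamese model on single-channel raw EEG. Hypothetical names throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SSTEncoder(nn.Module):
    """Stand-in encoder: maps a single-channel raw EEG epoch to class logits."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def self_distillation_loss(logits_a, logits_b, labels, T=2.0, alpha=0.5):
    """Cross-entropy on one branch plus a symmetric, temperature-softened KL
    term so the shared-weight branches distill knowledge from each other."""
    ce = F.cross_entropy(logits_a, labels)
    kl_ab = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_b / T, dim=1), reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                     F.softmax(logits_a / T, dim=1), reduction="batchmean")
    return ce + alpha * (T ** 2) * 0.5 * (kl_ab + kl_ba)

# One training step on random data: two augmented "views" of the same batch
# pass through the same encoder (Siamese weight sharing).
encoder = SSTEncoder()
x = torch.randn(8, 1, 3000)              # 8 epochs of 30 s EEG at 100 Hz
labels = torch.randint(0, 5, (8,))       # 5 stages (W, N1, N2, N3, REM)
view_a = x + 0.01 * torch.randn_like(x)  # placeholder augmentation
view_b = x + 0.01 * torch.randn_like(x)
loss = self_distillation_loss(encoder(view_a), encoder(view_b), labels)
loss.backward()
```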
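Selective batch sampling is likewise described only at a high level. One common way to realize such a strategy against label bias, sketched below under the assumption that "selective" means class-aware, is to draw batches with inverse-frequency probabilities so that rare stages such as N1 are not swamped by the dominant N2 class; the paper's actual selection criterion may differ.

```python
# Minimal sketch: class-balanced batch sampling via inverse-frequency weights.
# The imbalance ratios below mimic typical sleep-stage distributions and are
# illustrative only.
import torch
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

x = torch.randn(1000, 1, 3000)
stage_probs = torch.tensor([0.10, 0.05, 0.50, 0.20, 0.15])  # W, N1, N2, N3, REM
labels = torch.multinomial(stage_probs, 1000, replacement=True)

class_counts = torch.bincount(labels, minlength=5).float()
sample_weights = (1.0 / class_counts)[labels]  # inverse-frequency per sample

sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels),
                                replacement=True)
loader = DataLoader(TensorDataset(x, labels), batch_size=32, sampler=sampler)

batch_x, batch_y = next(iter(loader))
print(torch.bincount(batch_y, minlength=5))  # roughly uniform stage counts
```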