Paper Title
RF-Based Low-SNR Classification of UAVs Using Convolutional Neural Networks
Paper Authors
Paper Abstract
This paper investigates the problem of classifying unmanned aerial vehicles (UAVs) from their radio frequency (RF) fingerprints in the low signal-to-noise ratio (SNR) regime. We use convolutional neural networks (CNNs) trained on both RF time-series images and spectrograms of the RF signals of 15 different off-the-shelf drone controllers. When using time-series signal images, the CNN extracts features from the signal transient and envelope. As the SNR decreases, this approach fails dramatically because the information in the transient is lost in the noise and the envelope is heavily distorted. In contrast to the time-series representation of the RF signals, spectrograms make it possible to focus only on the desired frequency interval, i.e., the 2.4 GHz ISM band, and to filter out any signal components outside this band. These advantages yield a notable performance improvement over methods based on time-series signal images. To further increase the classification accuracy of the spectrogram-based CNN, we denoise the spectrogram images by truncating them to a limited spectral density interval. By creating a single model from spectrogram images of noisy signals and tuning the CNN model parameters, we achieve a classification accuracy ranging from 92% to 100% over an SNR range of -10 dB to 30 dB, which, to the best of our knowledge, significantly outperforms existing approaches.
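The spectrogram pipeline described in the abstract can be outlined in code. The sketch below (Python) is not the authors' implementation: the sampling rate FS, the STFT parameters (nperseg, noverlap), the dB truncation interval, and the SpectrogramCNN architecture are illustrative assumptions; only the number of classes (15 drone controllers) comes from the paper. It shows the two steps the abstract highlights: clipping the power spectral density to a limited dB interval as a denoising step, and feeding the resulting image to a small CNN classifier.

```python
# Minimal sketch (not the paper's code) of the spectrogram-based pipeline:
# compute a power spectrogram of a noisy RF capture, truncate ("denoise") it
# to a limited spectral-density interval in dB, and classify with a small CNN.
# FS, the STFT parameters, the dB interval, and the network layout are
# illustrative assumptions; only the 15 controller classes come from the paper.

import numpy as np
import torch
import torch.nn as nn
from scipy import signal

FS = 20e6          # assumed sampling rate of the RF capture (Hz)
N_CLASSES = 15     # 15 off-the-shelf drone controllers

def spectrogram_image(iq, db_floor=-80.0, db_ceil=0.0):
    """Power spectrogram in dB, truncated to [db_floor, db_ceil] and scaled to [0, 1]."""
    _, _, sxx = signal.spectrogram(iq, fs=FS, nperseg=1024, noverlap=512,
                                   return_onesided=False)
    sxx_db = 10.0 * np.log10(sxx + 1e-12)        # spectral density in dB
    sxx_db = np.clip(sxx_db, db_floor, db_ceil)  # truncation acts as denoising
    return (sxx_db - db_floor) / (db_ceil - db_floor)

class SpectrogramCNN(nn.Module):
    """Small CNN over single-channel spectrogram images (architecture is assumed)."""
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    # Stand-in for one recorded controller burst: complex white noise.
    rng = np.random.default_rng(0)
    iq = rng.standard_normal(2**18) + 1j * rng.standard_normal(2**18)
    img = spectrogram_image(iq)
    x = torch.from_numpy(img).float()[None, None]   # (batch, channel, freq, time)
    logits = SpectrogramCNN()(x)
    print(logits.shape)                             # torch.Size([1, 15])
```

In the paper's setting, the input would be a capture restricted to the 2.4 GHz ISM band, and a single model would be trained on spectrograms of noisy signals spanning the SNR range of interest.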