Paper Title
A Feature Extraction and Recognition Method for Underwater Acoustic Targets Based on ATCNN
Authors
Abstract
Facing a complex marine environment, underwater acoustic target recognition (UATR) from ship-radiated noise is extremely challenging. Inspired by the neural mechanisms of auditory perception, this paper presents a new deep neural network, trained on raw underwater acoustic signals, that combines depthwise separable (DWS) convolution with time-dilated convolution; it is named the auditory-perception-inspired time-dilated convolutional neural network (ATCNN) and is applied to the detection and classification of underwater acoustic signals. The proposed ATCNN model consists of a learnable feature extractor and an integration layer inspired by auditory perception, together with time-dilated convolutions inspired by language models. The network first decomposes the raw time-domain ship-radiated noise into different frequency components with depthwise separable convolution filters, and then extracts signal features based on auditory perception. The deep features are fused in the integration layer, and time-dilated convolution is used for long-term contextual modeling. As a result, as in a language model, intra-class and inter-class information can be fully exploited for UATR. On the UATR task, the classification accuracy reaches 90.9%, the highest among the compared methods. The experimental results show that ATCNN has great potential to improve UATR classification performance.
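The two building blocks named in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation; the filter shapes, kernel sizes, and function names below are illustrative assumptions. The first function shows a depthwise separable 1-D convolution (a per-channel "depthwise" filter followed by a 1x1 "pointwise" channel mix), and the second shows a causal dilated 1-D convolution of the kind used for long-term contextual modeling:

```python
import numpy as np

def depthwise_separable_conv1d(x, depthwise, pointwise):
    """Depthwise separable 1-D convolution (DWS), shapes are illustrative.

    x:         (channels, time) input signal
    depthwise: (channels, k) one filter per input channel
    pointwise: (out_channels, channels) 1x1 channel-mixing weights
    """
    c, _ = x.shape
    # Depthwise stage: each channel is convolved with its own filter
    # (cross-correlation, hence the reversed kernel for np.convolve).
    dw = np.stack([np.convolve(x[i], depthwise[i][::-1], mode="valid")
                   for i in range(c)])
    # Pointwise stage: a 1x1 convolution mixes the channels.
    return pointwise @ dw

def dilated_conv1d(x, w, dilation):
    """Causal dilated 1-D convolution over a single channel.

    Left-pads so that output[i] depends only on x[<= i]; larger dilation
    widens the receptive field without adding parameters.
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[i + j * dilation] for j in range(k))
                     for i in range(len(x))])

# Illustrative usage: 4 input channels, 100 time steps.
rng = np.random.default_rng(0)
signal = rng.standard_normal((4, 100))
features = depthwise_separable_conv1d(signal,
                                      rng.standard_normal((4, 7)),   # per-channel filters
                                      rng.standard_normal((8, 4)))   # channel mix -> 8 maps
context = dilated_conv1d(features[0], rng.standard_normal(3), dilation=4)
```

With a kernel of length 3 and dilation 4, each output sample sees 9 time steps of context; stacking such layers with growing dilation is what gives the "language-model-like" long-range modeling the abstract describes.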