Paper Title
Stroke Constrained Attention Network for Online Handwritten Mathematical Expression Recognition
Paper Authors
Paper Abstract
In this paper, we propose a novel stroke constrained attention network (SCAN) which treats the stroke as the basic unit for encoder-decoder based online handwritten mathematical expression recognition (HMER). Unlike previous methods which use trace points or image pixels as basic units, SCAN makes full use of stroke-level information for better alignment and representation. The proposed SCAN can be adopted in both single-modal (online or offline) and multi-modal HMER. For single-modal HMER, SCAN first employs a CNN-GRU encoder to extract point-level features from input traces in online mode, and a CNN encoder to extract pixel-level features from input images in offline mode; it then uses stroke constrained information to convert them into online and offline stroke-level features. Stroke-level features explicitly group the points or pixels belonging to the same stroke, thereby reducing the difficulty of symbol segmentation and recognition for the attention-based decoder. For multi-modal HMER, in addition to fusing multi-modal information in the decoder, SCAN can also fuse multi-modal information in the encoder by utilizing the stroke-based alignments between the online and offline modalities. Encoder fusion is a better way to combine multi-modal information, as it implements the information interaction one step before decoder fusion, so that the advantages of multiple modalities can be exploited earlier and more adequately when training the encoder-decoder model. Evaluated on the benchmarks published by the CROHME competition, the proposed SCAN achieves state-of-the-art performance.
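The core idea of converting point-level features into stroke-level features can be illustrated with a minimal sketch: given per-point encoder outputs and a known assignment of points to strokes (available in the online modality), pool the features of each stroke's points into a single vector. The function name, mean pooling, and NumPy representation here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pool_point_features_to_strokes(point_feats, stroke_ids):
    """Group point-level features into stroke-level features.

    point_feats: (T, D) array of per-point encoder features.
    stroke_ids:  length-T array assigning each point to a stroke.
    Returns an (S, D) array with one feature vector per stroke.
    Mean pooling is an assumption for illustration; the actual
    aggregation in SCAN may differ.
    """
    unique_strokes = np.unique(stroke_ids)
    return np.stack([point_feats[stroke_ids == s].mean(axis=0)
                     for s in unique_strokes])

# Toy example: 5 trace points with 3-dim features, forming 2 strokes.
feats = np.arange(15, dtype=float).reshape(5, 3)
ids = np.array([0, 0, 0, 1, 1])
stroke_feats = pool_point_features_to_strokes(feats, ids)
print(stroke_feats.shape)  # (2, 3)
```

Because every point (or pixel) of a stroke is collapsed into one vector, the decoder's attention only needs to select among strokes rather than among all points, which is what eases symbol segmentation.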