Paper Title


Neuromorphologicaly-preserving Volumetric data encoding using VQ-VAE

Authors

Petru-Daniel Tudosiu, Thomas Varsavsky, Richard Shaw, Mark Graham, Parashkev Nachev, Sebastien Ourselin, Carole H. Sudre, M. Jorge Cardoso

Abstract


The increasing efficiency and compactness of deep learning architectures, together with hardware improvements, have enabled the complex and high-dimensional modelling of medical volumetric data at higher resolutions. Recently, Vector-Quantised Variational Autoencoders (VQ-VAE) have been proposed as an efficient generative unsupervised learning approach that can encode images to a small percentage of their initial size, while preserving their decoded fidelity. Here, we show a VQ-VAE inspired network can efficiently encode a full-resolution 3D brain volume, compressing the data to $0.825\%$ of the original size while maintaining image fidelity, and significantly outperforming the previous state-of-the-art. We then demonstrate that VQ-VAE decoded images preserve the morphological characteristics of the original data through voxel-based morphology and segmentation experiments. Lastly, we show that such models can be pre-trained and then fine-tuned on different datasets without the introduction of bias.
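The vector-quantisation bottleneck the abstract refers to can be illustrated with a minimal NumPy sketch: encoder outputs are mapped to their nearest entry in a learned codebook, and only the resulting discrete indices need to be stored. The codebook size (K=256) and embedding dimension (D=8) below are illustrative assumptions, not the paper's actual configuration, and the paper's network is a 3D convolutional model rather than this toy lookup.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 8))  # K x D codebook of learned embedding vectors

def quantise(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry."""
    # Squared Euclidean distance from every latent to every codebook vector,
    # computed via broadcasting: (N, 1, D) - (1, K, D) -> (N, K, D) -> (N, K).
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d.argmin(axis=1)            # discrete codes: the compressed representation
    return indices, codebook[indices]     # codes plus the quantised (decodable) latents

latents = rng.normal(size=(10, 8))        # e.g. flattened encoder outputs
codes, quantised = quantise(latents, codebook)
print(codes.shape, quantised.shape)       # (10,) (10, 8)
```

Storing one small integer index per latent vector instead of the vector itself is what enables the large compression ratios reported in the abstract; the decoder then reconstructs the volume from the looked-up codebook entries.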
