Paper Title

Branching Quantum Convolutional Neural Networks

Authors

Ian MacCormack, Conor Delaney, Alexey Galda, Nidhi Aggarwal, Prineha Narang

Abstract

Neural network-based algorithms have garnered considerable attention in condensed matter physics for their ability to learn complex patterns from very high dimensional data sets, in order to classify complex long-range patterns of entanglement and correlations in many-body quantum systems. Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets. A particularly interesting class of algorithms, the quantum convolutional neural network (QCNN), can learn features of a quantum data set by performing a binary classification task on a nontrivial phase of quantum matter. Inspired by this promise, we present a generalization of the QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility. A key feature of the bQCNN is that it leverages mid-circuit (intermediate) measurement results, realizable on current trapped-ion systems, obtained in pooling layers to determine which sets of parameters will be used in the subsequent convolutional layers of the circuit. This results in a branching structure, which allows for a greater number of trainable variational parameters at a given circuit depth. This is of particular use on current-day NISQ devices, where circuit depth is limited by gate noise. We present an overview of the ansatz structure and scaling, and provide evidence of its enhanced expressibility compared to the QCNN. Using artificially constructed large data sets of training states as a proof of concept, we demonstrate the existence of training tasks in which the bQCNN far outperforms an ordinary QCNN. Finally, we present future directions in which the classical branching structure and increased density of trainable parameters in the bQCNN would be particularly valuable.
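To make the branching mechanism concrete, here is a minimal illustrative sketch in Python using Qiskit's mid-circuit measurement and classically conditioned blocks. This is not the authors' ansatz: the two-qubit block `conv_layer`, the qubit layout, and the parameter sets `theta`, `phi0`, and `phi1` are hypothetical stand-ins. The sketch only shows the core idea that a pooling-layer measurement outcome selects which parameter set drives the next convolutional layer, doubling the trainable parameters available at that depth relative to an unconditioned layer.

```python
# Illustrative sketch of the bQCNN branching idea (assumptions: toy gate set,
# 4 qubits, one pooling measurement). Requires Qiskit >= 0.22 for if_test.
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister

def conv_layer(qc, qubits, params):
    """Toy two-qubit convolutional block: single-qubit rotations + entangler."""
    a, b = qubits
    qc.ry(params[0], a)
    qc.ry(params[1], b)
    qc.cx(a, b)

qr = QuantumRegister(4, "q")
cr = ClassicalRegister(1, "c")
qc = QuantumCircuit(qr, cr)

# First convolutional layer over neighboring pairs (one trainable set, theta).
theta = [0.1, 0.2]
conv_layer(qc, (qr[0], qr[1]), theta)
conv_layer(qc, (qr[2], qr[3]), theta)

# Pooling: measure one qubit mid-circuit; its outcome picks the branch.
qc.measure(qr[1], cr[0])

# Branching: two distinct parameter sets for the next convolutional layer.
# The classically conditioned blocks apply phi0's layer only if the
# measurement returned 0, and phi1's layer only if it returned 1.
phi0, phi1 = [0.3, 0.4], [0.5, 0.6]
with qc.if_test((cr, 0)):
    conv_layer(qc, (qr[0], qr[2]), phi0)
with qc.if_test((cr, 1)):
    conv_layer(qc, (qr[0], qr[2]), phi1)
```

In an ordinary QCNN, the post-pooling layer would use a single parameter set regardless of the measurement outcome; the conditional blocks above are what give the branching structure its extra trainable parameters without additional circuit depth on hardware that supports mid-circuit measurement, such as the trapped-ion systems mentioned in the abstract.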
