Paper Title
Supervised Learning Using a Dressed Quantum Network with "Super Compressed Encoding": Algorithm and Quantum-Hardware-Based Implementation
Paper Authors
Paper Abstract
Implementation of variational Quantum Machine Learning (QML) algorithms on Noisy Intermediate-Scale Quantum (NISQ) devices is known to suffer from the high number of qubits required and the noise associated with multi-qubit gates. In this paper, we propose a variational QML algorithm using a dressed quantum network to address these issues. Under the "super compressed encoding" scheme that we follow here, the classical encoding layer in our dressed network drastically scales down the input dimension before feeding the input to the variational quantum circuit. Hence, the number of qubits needed in our quantum circuit drops drastically. Also, unlike most other existing QML algorithms, our quantum circuit consists only of single-qubit gates, making it robust against noise. These factors make our algorithm suitable for implementation on NISQ hardware. To support our argument, we implement our algorithm on real NISQ hardware and thereby show accurate classification on popular machine learning datasets such as Fisher's Iris, Wisconsin Breast Cancer (WBC), and Abalone. Then, to provide an intuitive explanation of how our algorithm works, we demonstrate the clustering on the Bloch sphere of the quantum states corresponding to input samples of different output classes (using the WBC and MNIST datasets). This clustering arises as a result of the training process followed in our algorithm. Through this Bloch-sphere-based representation, we also show the distinct roles played in training by the adjustable parameters of the classical encoding layer and the adjustable parameters of the variational quantum circuit. These parameters are adjusted iteratively during training through loss minimization.
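The abstract's pipeline (classical layer compresses a high-dimensional input into a few rotation angles, a single-qubit variational circuit processes them, and the Bloch-sphere position of the resulting state determines the class) can be sketched in plain numpy. This is a minimal illustrative sketch, not the paper's implementation: the layer shapes, the choice of Ry/Rz rotations, the trainable offsets `alpha`/`beta`, and the sign-of-⟨Z⟩ decision rule are all assumptions made here for concreteness.

```python
import numpy as np

def encode(x, W, b):
    # Hypothetical "super compressed encoding": a classical linear layer
    # maps a d-dimensional feature vector to just TWO rotation angles,
    # so the quantum circuit needs only a single qubit.
    return W @ x + b  # shape (2,): (theta, phi)

def single_qubit_circuit(angles, alpha, beta):
    # Variational circuit built ONLY from single-qubit gates:
    # Ry(theta + alpha) followed by Rz(phi + beta) acting on |0>.
    # alpha, beta play the role of the circuit's trainable parameters.
    theta, phi = angles
    t = theta + alpha
    state = np.array([np.cos(t / 2), np.sin(t / 2)], dtype=complex)  # Ry|0>
    p = phi + beta
    state = state * np.array([np.exp(-1j * p / 2), np.exp(1j * p / 2)])  # Rz
    return state

def expectation_z(state):
    # <Z> locates the state along the Bloch sphere's z-axis.
    return float(np.abs(state[0]) ** 2 - np.abs(state[1]) ** 2)

# Toy usage: classify one 4-feature sample (e.g. an Iris-like input)
rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # input sample
W, b = rng.normal(size=(2, 4)), np.zeros(2)  # classical encoding parameters
state = single_qubit_circuit(encode(x, W, b), alpha=0.1, beta=0.2)
label = 0 if expectation_z(state) > 0 else 1
```

During training, both parameter sets (`W`, `b` of the classical layer and `alpha`, `beta` of the circuit) would be adjusted by loss minimization, which is what drives same-class states to cluster together on the Bloch sphere.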