Paper Title
CrossBeam: Learning to Search in Bottom-Up Program Synthesis
Paper Authors
Paper Abstract
Many approaches to program synthesis perform a search within an enormous space of programs to find one that satisfies a given specification. Prior works have used neural models to guide combinatorial search algorithms, but such approaches still explore a huge portion of the search space and quickly become intractable as the size of the desired program increases. To tame the search space blowup, we propose training a neural model to learn a hands-on search policy for bottom-up synthesis, instead of relying on a combinatorial search algorithm. Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs, taking into account the search history and partial program executions. Motivated by work in structured prediction on learning to search, CrossBeam is trained on-policy using data extracted from its own bottom-up searches on training tasks. We evaluate CrossBeam in two very different domains, string manipulation and logic programming. We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
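The abstract describes a search loop in which a learned model, rather than exhaustive enumeration, decides which previously explored programs to combine next. Below is a minimal, self-contained sketch of such a neurally guided bottom-up loop in Python; all names (`Program`, `bottom_up_search`, `score_fn`, `concat_op`, and the toy heuristic standing in for the neural model) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a neurally guided bottom-up search loop in the spirit of
# the abstract. The neural policy is replaced by a toy scoring heuristic;
# all names here are hypothetical, not the paper's actual code.
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple
import itertools


@dataclass(frozen=True)
class Program:
    expr: str      # textual form of the program
    value: object  # result of executing it on the example inputs


def bottom_up_search(
    inputs: List[Program],
    operations: List[Callable[[Program, Program], Program]],
    score_fn: Callable[[List[Program], Tuple[Program, Program]], float],
    target_value: object,
    beam_size: int = 3,
    max_iterations: int = 10,
) -> Optional[Program]:
    explored: List[Program] = list(inputs)
    for _ in range(max_iterations):
        # Instead of enumerating every argument combination (combinatorial
        # search), a learned policy would score pairs of previously explored
        # programs given the search history; only the top-scoring pairs are
        # expanded, which is what keeps the explored space small.
        candidates = list(itertools.product(explored, explored))
        candidates.sort(key=lambda pair: score_fn(explored, pair), reverse=True)
        for op in operations:
            for left, right in candidates[:beam_size]:
                new_prog = op(left, right)
                if new_prog.value == target_value:
                    return new_prog
                if all(p.expr != new_prog.expr for p in explored):
                    explored.append(new_prog)
    return None


# Toy string-manipulation domain with a single concatenation operation.
def concat_op(a: Program, b: Program) -> Program:
    return Program(f"concat({a.expr}, {b.expr})", a.value + b.value)


# Stand-in for the neural model: a heuristic preferring shorter outputs.
def toy_score(explored: List[Program], pair: Tuple[Program, Program]) -> float:
    a, b = pair
    return -len(a.value + b.value)


if __name__ == "__main__":
    terminals = [Program('"foo"', "foo"), Program('"bar"', "bar")]
    result = bottom_up_search(terminals, [concat_op], toy_score, "foobar")
    print(result.expr if result else "no program found")
```

In the paper's setting, the scoring function would be a trained neural model conditioned on the specification, the search history, and the execution results of candidate programs; the sketch above only illustrates where such a model plugs into a bottom-up search.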