Paper Title
Accelerating Part-Scale Simulation in Liquid Metal Jet Additive Manufacturing via Operator Learning
Paper Authors
Paper Abstract
Predicting part quality for additive manufacturing (AM) processes requires high-fidelity numerical simulation of partial differential equations (PDEs) governing process multiphysics on a scale of minimum manufacturable features. This makes part-scale predictions computationally demanding, especially when they require many small-scale simulations. We consider drop-on-demand liquid metal jetting (LMJ) as an illustrative example of such computational complexity. A model describing droplet coalescence for LMJ may include coupled incompressible fluid flow, heat transfer, and phase change equations. Numerically solving these equations becomes prohibitively expensive when simulating the build process for a full part consisting of thousands to millions of droplets. Reduced-order models (ROMs) based on neural networks (NN) or k-nearest neighbor (kNN) algorithms have been built to replace the original physics-based solver and are computationally tractable for part-level simulations. However, their quick inference capabilities often come at the expense of accuracy, robustness, and generalizability. We apply an operator learning (OL) approach to learn a mapping between initial and final states of the droplet coalescence process for enabling rapid and accurate part-scale build simulation. Preliminary results suggest that OL requires order-of-magnitude fewer data points than a kNN approach and is generalizable beyond the training set while achieving similar prediction error.
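The abstract's core idea, learning an operator that maps the initial state of a droplet-coalescence event directly to its final state, can be illustrated with a minimal sketch. The snippet below is a toy, untrained DeepONet-style forward pass in NumPy; it is an assumption for illustration only (the paper does not specify its architecture), and all names (`surrogate`, sensor count `m`, latent width `p`) are hypothetical. A branch net encodes samples of the initial state, a trunk net encodes query coordinates, and their dot product gives the predicted final-state value at each query point.

```python
import numpy as np

# Hypothetical DeepONet-style operator surrogate (untrained, random weights).
# Maps m samples of an initial droplet state u0, plus query coordinates y,
# to predicted final-state values G(u0)(y). Illustrative only.

rng = np.random.default_rng(0)

m, p = 32, 16  # number of input-state sensor samples, latent feature width

# Branch net weights: encode the sampled input function (initial state).
Wb = rng.normal(size=(m, p)) / np.sqrt(m)
# Trunk net weights: encode 2D query locations where the output is evaluated.
Wt = rng.normal(size=(2, p))

def surrogate(u0, y):
    """Predict final-state values at coordinates y from initial samples u0."""
    b = np.tanh(u0 @ Wb)   # (p,)   branch features
    t = np.tanh(y @ Wt)    # (n, p) trunk features, one row per query point
    return t @ b           # (n,)   dot-product combination of the two nets

u0 = rng.normal(size=m)        # toy initial droplet state samples
y = rng.uniform(size=(5, 2))   # 5 query points in 2D
print(surrogate(u0, y).shape)  # (5,)
```

In a trained version, `Wb` and `Wt` would be deep networks fit on solver-generated initial/final state pairs; the appeal over a kNN ROM is that one evaluation of this learned operator replaces both the neighbor search and the full physics solve.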