Paper Title
Randomization for the Efficient Computation of Parametric Reduced Order Models for Inversion
Paper Authors
Paper Abstract
Nonlinear parametric inverse problems appear in many applications. Here, we focus on diffuse optical tomography (DOT) in medical imaging, where the goal is to recover unknown images of interest, such as cancerous tissue in a given medium, using a mathematical (forward) model. The forward model in DOT is a diffusion-absorption model for the photon flux. The main bottleneck in these problems is the repeated evaluation of the large-scale forward model. For DOT, this corresponds to solving large linear systems for each source and frequency at every optimization step. Moreover, Newton-type methods, often the methods of choice, require additional linear solves with the adjoint to compute derivative information. Emerging technology allows for large numbers of sources and detectors, making these problems prohibitively expensive. Reduced order models (ROMs) have been used to drastically reduce the system size in each optimization step while still solving the inverse problem accurately. However, for large numbers of sources and detectors, just constructing the candidate basis for the ROM projection space incurs a substantial cost, since matching the full parameter gradient matrix in interpolatory model reduction requires large linear solves for all sources and all detectors, at every frequency, for each parameter interpolation point. Because this candidate basis is numerically of low rank, its construction is followed by a rank-revealing factorization that typically reduces the number of basis vectors substantially. We propose to use randomization to approximate this basis with a drastically reduced number of large linear solves. We also provide a detailed analysis of the low-rank structure of the candidate basis for our problem of interest. Even though we focus on the DOT problem, the ideas presented are relevant to many other large-scale inverse and optimization problems.
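The computational idea described in the abstract, reduced to its essentials, is that the candidate basis is assembled from solutions of large linear systems and is numerically low rank, so a random sketch of the right-hand sides captures approximately the same subspace with far fewer solves. The Python/SciPy sketch below illustrates this randomized range-finding step under simplifying assumptions; the names (randomized_candidate_basis, A_list, B, sketch_size) are illustrative and not taken from the paper, and the paper's actual construction, which also involves detector (adjoint) solves and gradient matching, is omitted here.

```python
import numpy as np
from scipy.sparse.linalg import splu


def randomized_candidate_basis(A_list, B, sketch_size, rng=None):
    """Randomized sketch of a ROM candidate basis (illustrative only).

    A_list : list of sparse system matrices A(p_j), one per parameter
             interpolation point (and frequency).
    B      : dense n-by-n_src array whose columns are the source vectors.

    The exact candidate basis would require solving A(p_j) x = b_s for
    every source column b_s at every interpolation point.  Because the
    solve is linear in the right-hand side,
        A(p_j)^{-1} (B @ Omega) = (A(p_j)^{-1} B) @ Omega,
    so solving only for a few random combinations of the sources yields
    a sketch of the same (numerically low-rank) subspace with far fewer
    large linear solves.
    """
    rng = np.random.default_rng(rng)
    n, n_src = B.shape
    blocks = []
    for A in A_list:                        # one system per interpolation point/frequency
        lu = splu(A.tocsc())                # factor once, reuse for all right-hand sides
        Omega = rng.standard_normal((n_src, sketch_size))
        blocks.append(lu.solve(B @ Omega))  # sketch_size solves instead of n_src
    # Orthonormalize the sketched columns; a truncated SVD of this much
    # smaller matrix would play the role of the rank-revealing
    # factorization mentioned in the abstract.
    Q, _ = np.linalg.qr(np.hstack(blocks))
    return Q
```

In the setting of the abstract, one would presumably apply the same sketching idea to the detector (adjoint) systems as well, so the rank-revealing step acts on a small sketched matrix rather than on the full candidate basis.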