Paper Title
Rethinking data-driven point spread function modeling with a differentiable optical model
Paper Authors
Paper Abstract
In astronomy, upcoming space telescopes with wide-field optical instruments have a spatially varying point spread function (PSF). Specific scientific goals require a high-fidelity estimation of the PSF at target positions where no direct measurement of the PSF is provided. Even though observations of the PSF are available at some positions of the field of view (FOV), they are undersampled, noisy, and integrated in wavelength over the instrument's passband. PSF modeling represents a challenging ill-posed problem, as it requires building a model from degraded observations that can infer a super-resolved PSF at any wavelength and position in the FOV. Our model, coined WaveDiff, proposes a paradigm shift in the data-driven modeling of the point spread function field of telescopes. We change the data-driven modeling space from the pixels to the wavefront by adding a differentiable optical forward model into the modeling framework. This change allows the complexity to be transferred from the instrumental response into the forward model. The proposed model relies on stochastic gradient descent to estimate its parameters. Our framework paves the way to building powerful, physically motivated models that do not require special calibration data. This paper demonstrates the WaveDiff model in a simplified setting of a space telescope. The proposed framework represents a performance breakthrough with respect to the existing state-of-the-art data-driven approach: pixel reconstruction errors decrease 6-fold at observation resolution and 44-fold at 3× super-resolution, ellipticity errors are reduced at least 20-fold, and the size error is reduced more than 250-fold. Using only noisy broad-band in-focus observations, we successfully capture the chromatic variations of the PSF due to diffraction. The code is available at https://github.com/tobias-liaudat/wf-psf.
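To make the core idea concrete, below is a minimal, hypothetical sketch of a differentiable optical forward model fitted by gradient descent. It is written in JAX and is not the authors' TensorFlow-based wf-psf implementation: a few low-order aberration modes stand in for a full Zernike basis, the wavefront is propagated to a PSF via an FFT, monochromatic PSFs are averaged over an assumed flat passband, and the wavefront coefficients are recovered from a single noisy broadband observation. The grid size, wavelengths, mode choices, and step size are all illustrative assumptions.

```python
# A minimal sketch (NOT the authors' wf-psf code) of a differentiable
# optical forward model: aberration coefficients -> wavefront -> PSF,
# fitted to a noisy broadband observation by gradient descent.
import jax
import jax.numpy as jnp

N = 64  # pupil grid size (illustrative)

# Pupil-plane coordinates on [-1, 1] and a circular aperture mask.
x = jnp.linspace(-1.0, 1.0, N)
X, Y = jnp.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(jnp.float32)

# Three low-order aberration modes (tilt-x, tilt-y, defocus),
# standing in for a full Zernike basis.
modes = jnp.stack([X, Y, 2.0 * R2 - 1.0])  # shape (3, N, N)

def psf_model(coeffs, wavelength):
    """Differentiable forward model: wavefront error -> monochromatic PSF."""
    wfe = jnp.tensordot(coeffs, modes, axes=1)   # wavefront error map (N, N)
    phase = 2.0 * jnp.pi * wfe / wavelength      # phase in radians
    field = pupil * jnp.exp(1j * phase)          # complex pupil function
    psf = jnp.abs(jnp.fft.fftshift(jnp.fft.fft2(field))) ** 2
    return psf / psf.sum()                       # normalized intensity

def broadband_psf(coeffs, wavelengths):
    """Average monochromatic PSFs over the passband (flat SED assumed)."""
    mono = jax.vmap(lambda w: psf_model(coeffs, w))(wavelengths)
    return jnp.mean(mono, axis=0)

wavelengths = jnp.linspace(0.55, 0.9, 5)  # microns, illustrative passband

# Simulate a noisy broadband observation from "true" coefficients.
key = jax.random.PRNGKey(0)
true_coeffs = jnp.array([0.05, -0.03, 0.08])  # microns of WFE, illustrative
obs = broadband_psf(true_coeffs, wavelengths)
obs = obs + 1e-5 * jax.random.normal(key, obs.shape)

def loss(coeffs):
    return jnp.mean((broadband_psf(coeffs, wavelengths) - obs) ** 2)

# Plain gradient descent on the wavefront coefficients; the gradient
# flows through the FFT-based optical propagation.
grad_fn = jax.jit(jax.grad(loss))
coeffs = jnp.zeros(3)
lr = 1e3  # step size tuned by hand for this toy problem
for step in range(200):
    coeffs = coeffs - lr * grad_fn(coeffs)

print("recovered coefficients:", coeffs)
print("final loss:", loss(coeffs))
```

The sketch illustrates the paradigm shift described in the abstract: the free parameters live in the wavefront (the aberration coefficients), not in the pixels, so a single set of coefficients simultaneously constrains the PSF at every wavelength, and chromatic variations due to diffraction come out of the forward model for free. The actual WaveDiff model additionally handles spatial variation across the FOV, undersampling, and a full Zernike parameterization.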