Paper Title

Dynamic Depth-Supervised NeRF for Multi-View RGB-D Operating Room Images

Paper Authors

Gerats, Beerend G. A., Wolterink, Jelmer M., Broeders, Ivo A. M. J.

Paper Abstract

The operating room (OR) is an environment of interest for the development of sensing systems, enabling the detection of people, objects, and their semantic relations. Due to frequent occlusions in the OR, these systems often rely on input from multiple cameras. While increasing the number of cameras generally increases algorithm performance, there are hard limitations to the number and locations of cameras in the OR. Neural Radiance Fields (NeRF) can be used to render synthetic views from arbitrary camera positions, virtually enlarging the number of cameras in the dataset. In this work, we explore the use of NeRF for view synthesis of dynamic scenes in the OR, and we show that regularisation with depth supervision from RGB-D sensor data results in higher image quality. We optimise a dynamic depth-supervised NeRF with up to six synchronised cameras that capture the surgical field in five distinct phases before and during a knee replacement surgery. We qualitatively inspect views rendered by a virtual camera that moves 180 degrees around the surgical field at differing time values. Quantitatively, we evaluate view synthesis from an unseen camera position in terms of PSNR, SSIM and LPIPS for the colour channels and in MAE and error percentage for the estimated depth. We find that NeRFs can be used to generate geometrically consistent views, also from interpolated camera positions and at interpolated time intervals. Views are generated from an unseen camera pose with an average PSNR of 18.2 and a depth estimation error of 2.0%. Our results show the potential of a dynamic NeRF for view synthesis in the OR and stress the relevance of depth supervision in a clinical setting.
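To make the depth-supervision idea in the abstract concrete, the sketch below shows how a depth term from RGB-D sensor data can be added to the standard NeRF photometric loss. This is a minimal, hypothetical PyTorch example, not the authors' implementation; the function names `render_ray` and `depth_supervised_loss` and the weight `lambda_depth` are illustrative assumptions.

```python
# Minimal sketch (assumed, not from the paper) of depth-supervised NeRF training terms.
import torch

def render_ray(rgb, sigma, t_vals):
    """Volume-render colour and expected depth for a batch of rays.

    rgb:    (N_rays, N_samples, 3) predicted colours per sample
    sigma:  (N_rays, N_samples)    predicted densities per sample
    t_vals: (N_rays, N_samples)    sample distances along each ray
    """
    deltas = t_vals[:, 1:] - t_vals[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=-1)
    alpha = 1.0 - torch.exp(-sigma * deltas)                      # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]                                                     # accumulated transmittance
    weights = alpha * trans
    colour = (weights[..., None] * rgb).sum(dim=1)                # (N_rays, 3)
    depth = (weights * t_vals).sum(dim=1)                         # expected ray termination depth
    return colour, depth

def depth_supervised_loss(pred_rgb, pred_depth, gt_rgb, gt_depth, lambda_depth=0.1):
    """Photometric MSE plus an L1 depth term on rays with valid sensor depth."""
    colour_loss = torch.mean((pred_rgb - gt_rgb) ** 2)
    valid = gt_depth > 0                                          # RGB-D sensors report 0 where depth is missing
    if valid.any():
        depth_loss = torch.abs(pred_depth[valid] - gt_depth[valid]).mean()
    else:
        depth_loss = torch.tensor(0.0, device=pred_rgb.device)
    return colour_loss + lambda_depth * depth_loss
```

In this sketch the depth term only regularises training; as in the abstract, evaluation would compare rendered colour against held-out views with PSNR, SSIM and LPIPS, and rendered depth against the unseen sensor depth with MAE and an error percentage.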
