Paper Title

User-Guided Domain Adaptation for Rapid Annotation from User Interactions: A Study on Pathological Liver Segmentation

Paper Authors

Ashwin Raju, Zhanghexuan Ji, Chi-Tung Cheng, Jinzheng Cai, Junzhou Huang, Jing Xiao, Le Lu, Chien-Hung Liao, Adam P. Harrison

Paper Abstract

Mask-based annotation of medical images, especially for 3D data, is a bottleneck in developing reliable machine learning models. Using minimal-labor user interactions (UIs) to guide the annotation is promising, but challenges remain in how best to harmonize the mask prediction with the UIs. To address this, we propose the user-guided domain adaptation (UGDA) framework, which uses prediction-based adversarial domain adaptation (PADA) to model the combined distribution of UIs and mask predictions. The UIs are then used as anchors to guide and align the mask prediction. Importantly, UGDA can both learn from unlabelled data and model the high-level semantic meaning behind different UIs. We test UGDA on annotating pathological livers using a clinically comprehensive dataset of 927 patient studies. Using only extreme-point UIs, we achieve a mean (worst-case) performance of 96.1% (94.9%), compared to 93.0% (87.0%) for deep extreme points (DEXTR). Furthermore, we show that UGDA retains this state-of-the-art performance even when seeing only a fraction of the available UIs, demonstrating robust and reliable UI-guided segmentation with extremely minimal labor demands.
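The abstract describes the core mechanism only at a high level: a discriminator is trained over the joint space of mask predictions and extreme-point UIs, so that predictions on UI-only (unlabelled) studies are adversarially pulled toward the distribution observed on fully annotated studies, with the UIs acting as fixed anchors. Below is a minimal PyTorch sketch of that PADA-style objective. It is an illustration of the general idea only, not the authors' implementation: the class `JointDiscriminator`, the function `pada_step`, the 3D patch-discriminator layout, and the two-channel input arrangement are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDiscriminator(nn.Module):
    """Hypothetical 3D patch discriminator over the combined
    (mask prediction, extreme-point heatmap) distribution."""
    def __init__(self, in_ch: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, 64, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(64, 128, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(128, 1, kernel_size=4, stride=2, padding=1),  # per-patch logits
        )

    def forward(self, mask_prob: torch.Tensor, ui_heatmap: torch.Tensor) -> torch.Tensor:
        # Judge the mask and UI channels jointly, so the discriminator can
        # penalize masks that are inconsistent with their extreme points.
        return self.net(torch.cat([mask_prob, ui_heatmap], dim=1))

def pada_step(disc, labelled, unlabelled):
    """One adversarial step: (mask, UI) pairs from fully annotated studies
    serve as the 'real' distribution; predictions on UI-only studies are
    trained to be indistinguishable from them."""
    mask_l, ui_l = labelled     # predictions on fully annotated data
    mask_u, ui_u = unlabelled   # mask predictions anchored by user-clicked extreme points

    # Discriminator loss (segmenter outputs detached).
    real = disc(mask_l.detach(), ui_l.detach())
    fake = disc(mask_u.detach(), ui_u.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))

    # Adversarial loss for the segmenter: fool the discriminator on UI-only data.
    adv = disc(mask_u, ui_u)
    g_loss = F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv))
    return d_loss, g_loss
```

In a full training loop, `d_loss` would update only the discriminator and `g_loss` only the segmentation network (with the discriminator's parameters frozen for that step), alongside the ordinary supervised mask loss on the labelled studies.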
