Paper Title
Does Audio Deepfake Detection Generalize?
Paper Authors
Paper Abstract
Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research. While researchers have presented various techniques for detecting audio spoofs, it is often unclear exactly why these architectures are successful: Preprocessing steps, hyperparameter settings, and the degree of fine-tuning are not consistent across related work. Which factors contribute to success, and which are accidental? In this work, we address this problem: We systematize audio spoofing detection by re-implementing and uniformly evaluating architectures from related work. We identify overarching features for successful audio deepfake detection, such as using cqtspec or logspec features instead of melspec features, which improves performance by 37% EER on average, all other factors constant. Additionally, we evaluate generalization capabilities: We collect and publish a new dataset consisting of 37.9 hours of found audio recordings of celebrities and politicians, of which 17.2 hours are deepfakes. We find that related work performs poorly on such real-world data (performance degradation of up to one thousand percent). This may suggest that the community has tailored its solutions too closely to the prevailing ASVSpoof benchmark and that deepfakes are much harder to detect outside the lab than previously thought.
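The abstract compares spectrogram front-ends (cqtspec, logspec, melspec) and reports results in EER. Below is a minimal sketch, not taken from the paper, of how such front-ends and the EER metric might be computed with librosa and scikit-learn; the parameter choices and function names here are illustrative assumptions and need not match the authors' exact preprocessing.

```python
# Sketch: three common spectrogram front-ends and an EER helper.
# Assumes librosa, numpy, and scikit-learn; parameters are illustrative only.
import numpy as np
import librosa
from sklearn.metrics import roc_curve


def extract_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)

    # cqtspec: log-magnitude constant-Q transform
    cqtspec = librosa.amplitude_to_db(np.abs(librosa.cqt(y, sr=sr)))

    # logspec: log-magnitude linear-frequency STFT spectrogram
    logspec = librosa.amplitude_to_db(np.abs(librosa.stft(y)))

    # melspec: log-scaled mel spectrogram (the weaker front-end on average,
    # according to the abstract)
    melspec = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))

    return cqtspec, logspec, melspec


def equal_error_rate(labels, scores):
    """EER: operating point where false-accept and false-reject rates are equal."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))
    return (fpr[idx] + fnr[idx]) / 2.0
```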