Paper Title


StorSeismic: A new paradigm in deep learning for seismic processing

Authors

Randy Harsuko, Tariq Alkhalifah

Abstract


Machine-learned tasks on seismic data are often trained sequentially and separately, even though they utilize the same features (i.e., geometrical) of the data. We present StorSeismic as a framework for seismic data processing that consists of neural network pre-training and fine-tuning procedures. Specifically, we utilize a neural network as a preprocessing model to store seismic data features of a particular dataset for any downstream tasks. After pre-training, the resulting model can be utilized later, through a fine-tuning procedure, to perform tasks using limited additional training. Used often in Natural Language Processing (NLP) and lately in vision tasks, BERT (Bidirectional Encoder Representations from Transformers), a form of Transformer model, provides an optimal platform for this framework. The attention mechanism of BERT, applied here to a sequence of traces within a shot gather, is able to capture and store key geometrical features of the seismic data. We pre-train StorSeismic on field data, along with synthetically generated data, in a self-supervised step. Then, we fine-tune the pre-trained network on labeled synthetic data in a supervised fashion to perform various seismic processing tasks, such as denoising, velocity estimation, first-arrival picking, and NMO correction. Finally, the fine-tuned model is used to obtain satisfactory inference results on the field data.
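The central idea of the abstract, treating each trace in a shot gather as a token and letting self-attention relate traces to one another, can be sketched in a few lines of NumPy. This is a hedged illustration only, not the authors' StorSeismic implementation: the gather size, embedding dimension, and the choice of which traces to mask are arbitrary assumptions made for the example.

```python
import numpy as np

def trace_self_attention(Q, K, V):
    """Scaled dot-product self-attention over a sequence of traces.

    Q, K, V: (n_traces, d) arrays; each row is one trace's embedding.
    Returns a (n_traces, d) array in which every output trace is a
    softmax-weighted mixture of all input traces.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # (n_traces, n_traces) trace-to-trace affinities
    # Numerically stable softmax over the trace axis
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# A toy "shot gather": 16 traces, each with 64 time samples,
# treated as a sequence of 16 tokens of dimension 64.
rng = np.random.default_rng(0)
gather = rng.standard_normal((16, 64))

# BERT-style self-supervised pre-training masks some tokens and asks the
# network to reconstruct them from context; here we zero out three traces
# so attention must fill them in from their neighbors.
masked = gather.copy()
masked[[3, 7, 11]] = 0.0

out = trace_self_attention(masked, masked, masked)
print(out.shape)  # (16, 64): one attended embedding per trace
```

In the full framework, learned projection matrices would produce Q, K, and V, and the pre-trained weights would then be fine-tuned with labeled synthetic data for each downstream processing task.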
