Paper Title

Deep-Learning-Based Computer Vision Approach For The Segmentation Of Ball Deliveries And Tracking In Cricket

Authors

Kumail Abbas, Muhammad Saeed, M. Imad Khan, Khandakar Ahmed, Hua Wang

Abstract

There has been a significant increase in the adoption of technology in cricket recently. This trend has created the problem of duplicate work being done across similar computer vision-based research efforts. Our research addresses one of these problems by segmenting ball deliveries in a cricket broadcast using deep learning models, MobileNet and YOLO, thus enabling researchers to use our work as a dataset for their own research. The output of our research can also be used by cricket coaches and players to analyze the ball deliveries played during a match. This paper presents an approach to segment and extract video shots in which only the ball is being delivered. A video shot is a series of continuous frames that make up a complete scene of the video. Object detection models are applied to achieve a high level of accuracy in correctly extracting these video shots. A proof of concept for building large datasets of ball-delivery video shots is proposed, which paves the way for further processing of those shots to extract semantics. Ball tracking in these video shots is also performed using a separate RetinaNet model as an example of the usefulness of the proposed dataset. The position on the cricket pitch where the ball lands is extracted by tracking the ball along the y-axis. Each video shot is then classified as a full-pitched, good-length, or short-pitched delivery.
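To make the final step of the pipeline concrete, the sketch below illustrates how a delivery could be labelled from per-frame ball positions such as those produced by the RetinaNet tracker described in the abstract: the bounce frame is taken as the lowest on-screen point of the trajectory, and that landing position is then mapped to a length class. This is only a minimal Python illustration of the idea, not the authors' implementation; the function names, the pitch-normalization step, and the 0.55/0.70 band boundaries are all hypothetical placeholders.

```python
# Illustrative sketch only (not the paper's code): estimate the bounce point of a
# delivery from per-frame ball detections and label its length. Assumes a list of
# image-row (y) coordinates of the ball centre per frame; all thresholds are
# placeholder values chosen for the example.

from typing import List, Optional

def find_bounce_index(ys: List[float]) -> Optional[int]:
    """Index of the bounce frame: image y grows downward, so the ball is lowest
    on screen (largest y) when it pitches, i.e. at a local maximum of the track."""
    for i in range(1, len(ys) - 1):
        if ys[i] >= ys[i - 1] and ys[i] > ys[i + 1]:
            return i
    return None

def classify_length(bounce_y: float, pitch_top: float, pitch_bottom: float) -> str:
    """Label the delivery from where it pitches between the bowler's end
    (pitch_top, smaller y) and the batter's end (pitch_bottom, larger y).
    The 0.55/0.70 band boundaries are hypothetical, not taken from the paper."""
    pos = (bounce_y - pitch_top) / (pitch_bottom - pitch_top)  # 0 = bowler's end
    if pos > 0.70:
        return "full-pitched"
    if pos < 0.55:
        return "short-pitched"
    return "good-length"

# Toy example: ball y per frame (pixels), with the pitch spanning rows 200-800.
ys = [220.0, 300.0, 410.0, 530.0, 610.0, 590.0, 560.0]
i = find_bounce_index(ys)
if i is not None:
    print(classify_length(ys[i], pitch_top=200.0, pitch_bottom=800.0))  # good-length
```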
