Summarization of Sports Videos

  • M. Vaidhehi, Shriya Kesarwani, Utkarsh Chopra


Summarization of sports videos has long been a matter of great interest, as it underlines the interesting moments, or highlights, of a game. Recognizing actions in videos remains a critical challenge due to dynamic and cluttered backgrounds. This paper introduces novel video summarization methods that use players' actions or the background audio as cues to identify the highlights of the original video. A convolutional neural network (CNN) based approach extracts body-joint features and holistic features from the video, which help to find the interesting frames for highlight generation. The audio-analysis approach applies a threshold to the background audio to find the interesting parts of the video: an interesting moment is usually accompanied by loud cheering, which helps to pick out the corresponding segments. The proposed methods are applied to sports whose games consist of a series of actions. A large number of videos generated nowadays are never viewed because of their length, so summarization offers a convenient way to share or review them. The performance of the two proposed techniques, along with several combinations of different features, is compared, and the experimental results show that they outperform previous summarization methods.
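The audio-analysis idea described above can be illustrated with a minimal sketch: compute the short-time RMS energy of the audio track, set a threshold relative to the overall energy statistics, and flag the windows that exceed it as candidate highlight segments. The function name, window length, and threshold factor `k` below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def highlight_segments(audio, rate, win_s=1.0, k=1.5):
    """Flag time windows whose RMS energy exceeds mean + k * std.

    This is a loud-cheer heuristic: unusually loud audio windows are
    treated as candidate highlight segments. `audio` is a 1-D sample
    array, `rate` the sample rate in Hz. Returns (start_s, end_s) pairs.
    The window length and threshold factor are illustrative choices.
    """
    win = int(rate * win_s)                      # samples per window
    n = len(audio) // win                        # number of whole windows
    frames = audio[: n * win].reshape(n, win)    # split into windows
    rms = np.sqrt((frames.astype(float) ** 2).mean(axis=1))
    threshold = rms.mean() + k * rms.std()       # adaptive loudness threshold
    return [(i * win_s, (i + 1) * win_s) for i in np.nonzero(rms > threshold)[0]]
```

For example, a ten-second clip that is quiet except for a loud burst in seconds 5 to 6 would yield the single segment `(5.0, 6.0)`. A real system would smooth the energy curve and merge adjacent windows before cutting the highlight reel.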

Keywords: Convolution Neural Network (CNN), Sound Analysis, User Generated Sports Videos (UGSV).

How to Cite
M. Vaidhehi, Shriya Kesarwani, Utkarsh Chopra. (2020). Summarization of Sports Videos. International Journal of Advanced Science and Technology, 29(06), 2820 - 2828. Retrieved from