TIIS (Korean Society for Internet Information)
Korean Title |
CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events |
English Title |
CNN-based Visual/Auditory Feature Fusion Method with Frame Selection for Classifying Video Events |
Author |
Giseok Choe
Seungbin Lee
Jongho Nang
|
Citation |
Vol. 13, No. 03, pp. 1689-1701 (Mar. 2019) |
Korean Abstract |
|
English Abstract |
In recent years, personal videos have been widely shared online due to the popularity of portable devices such as smartphones and action cameras. A recent report [1] predicted that 80% of Internet traffic would be video content by 2021. Several studies have been conducted on detecting the main events in a video in order to manage large-scale video collections, and they show fairly good performance in certain genres. However, the methods of previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how keyframes are extracted from the video. Considering the characteristics of personal videos, we selected frame segments that can represent the video. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments on the LSVC [2] dataset.
|
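The pipeline described in the abstract (per-segment features from several modalities, a temporal model, and a fusion module producing one representative vector) can be sketched in minimal form. This is an illustrative assumption, not the authors' code: mean pooling stands in for the paper's CNN-based recurrent model, concatenation stands in for the fusion module, and all names and dimensions are hypothetical.

```python
# Sketch of a multi-modal late-fusion pipeline (illustrative only).
# Per-segment feature vectors for each modality are pooled over time,
# the pooled modality vectors are concatenated into one representative
# vector, and a linear classifier scores the fused representation.
from typing import Dict, List

def mean_pool(segments: List[List[float]]) -> List[float]:
    """Average per-segment feature vectors over time (stand-in for a recurrent model)."""
    dim = len(segments[0])
    return [sum(seg[i] for seg in segments) / len(segments) for i in range(dim)]

def fuse(features: Dict[str, List[List[float]]]) -> List[float]:
    """Concatenate pooled modality vectors into one representative vector."""
    fused: List[float] = []
    for modality in sorted(features):  # e.g. object, location, food, audio
        fused.extend(mean_pool(features[modality]))
    return fused

def score(fused: List[float], weights: List[float], bias: float) -> float:
    """Linear event score over the fused representation."""
    return bias + sum(w * x for w, x in zip(weights, fused))

# Toy example: two segments, two modalities, 2-D features each.
feats = {
    "visual": [[1.0, 0.0], [0.0, 1.0]],
    "audio":  [[0.5, 0.5], [0.5, 0.5]],
}
rep = fuse(feats)  # 4-D fused vector
print(rep)         # [0.5, 0.5, 0.5, 0.5]
```

In practice each modality would come from a dedicated CNN backbone and the temporal model would be learned, but the data flow (extract per segment, summarize over time, fuse across modalities, classify) is the same.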
Keyword |
Multimedia
Computer Vision Systems
Artificial Intelligence
Video Classification
|