Journal
MULTIMEDIA TOOLS AND APPLICATIONS
Volume 80, Issue 7, Pages 11079-11094
Publisher
SPRINGER
DOI: 10.1007/s11042-020-10157-4
Keywords
Deep learning; DNA sequence; Searching; Event summarization; Local alignment; Text query; Video
This study proposes an efficient summarization technique for multi-view videos, using a deep learning framework and local alignment to capture inter-view dependencies. The technique can effectively handle the rapid growth of multimedia data, enabling event summarization and search in the cloud.
In the digital era, multimedia data is growing at a rapid pace, which demands both effective and efficient summarization techniques. Such techniques are required so that users can quickly access video content recorded by multiple cameras over a certain period. At present, it is very challenging to manage and search huge amounts of multi-view video data, which contain inter-view dependencies, significant illumination changes, and many low-active frames. This work presents an efficient technique to summarize and then search for events in such multi-view videos over the cloud through text queries. A deep learning framework is employed to extract the features of moving objects in the frames. The inter-view dependencies among multiple views of the video are captured via local alignment. Parallel Virtual Machines (VMs) in the cloud environment are used to process multiple video clips independently at a time. Object tracking is applied to filter out the low-active frames. Experimental results indicate that the model successfully reduces the video content while preserving the significant information in the form of events. A computing analysis also indicates that it meets the requirements of real-time applications.
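The keywords ("DNA sequence; Local alignment") suggest the inter-view dependency step resembles sequence local alignment. Below is a minimal, hypothetical Smith-Waterman sketch illustrating the idea: it assumes each camera view's frames have been quantized into a symbol sequence (an assumption, not the paper's actual encoding) and scores the best-matching sub-segments between two views.

```python
# Hypothetical illustration: Smith-Waterman local alignment between two
# symbol sequences. In this sketch each "view" is assumed to be a string
# of per-frame symbols; the paper's real feature encoding may differ.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    # H[i][j] holds the best score of a local alignment ending at a[i-1], b[j-1].
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores are clamped at zero so alignments can restart.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Example: two hypothetical per-frame symbol sequences from different views.
view1 = "ACACACTA"
view2 = "AGCACACA"
print(smith_waterman(view1, view2))
```

A higher score indicates a longer well-matched sub-segment between the two views, which is the kind of signal local alignment provides for capturing inter-view dependencies.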