Journal
DIGITAL SIGNAL PROCESSING
Volume 104
Publisher
ACADEMIC PRESS INC ELSEVIER SCIENCE
DOI: 10.1016/j.dsp.2020.102801
Keywords
Video; Super-resolution; Convolutional neural networks; Generative adversarial networks; Perceptual loss functions
Funding
- Sony 2016 Research Award Program Research Project
- Spanish Ministry of Economy and Competitiveness [DPI2016-77869-C2-2-R]
- Visiting Scholar program at the University of Granada
- Spanish Ministry of Science, Innovation and Universities through the FPU program
Abstract
The popularity of high and ultra-high definition displays has created a need for methods that improve the quality of videos originally captured at much lower resolutions. Many current CNN-based Video Super-Resolution methods are designed and trained to handle a specific degradation operator (e.g., bicubic downsampling) and are not robust to a mismatch between the training and testing degradation models, which causes their performance to deteriorate in real-life applications. Furthermore, many of them use the Mean-Squared-Error as the only loss during learning, causing the resulting images to be too smooth. In this work we propose a new Convolutional Neural Network for video super-resolution which is robust to multiple degradation models. During training, which is performed on a large dataset of scenes with slow and fast motions, it uses the pseudo-inverse image formation model as part of the network architecture, in conjunction with perceptual losses and a smoothness constraint that eliminates the artifacts originating from these perceptual losses. The experimental validation shows that our approach outperforms current state-of-the-art methods and is robust to multiple degradations. (C) 2020 Elsevier Inc. All rights reserved.
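The abstract's "pseudo-inverse image formation model" can be illustrated with a minimal 1-D NumPy sketch. This is not the paper's implementation; the 2x averaging operator `A` and the signal sizes are hypothetical, chosen only to show how a Moore-Penrose pseudo-inverse of a known degradation operator yields an upsampling that is consistent with that degradation.

```python
import numpy as np

# Hypothetical 1-D illustration: A is a 2x average-downsampling operator
# mapping an 8-sample "high-resolution" signal to 4 "low-resolution"
# samples. Its Moore-Penrose pseudo-inverse gives a back-projection that
# a network could refine.
n_hr, n_lr = 8, 4
A = np.zeros((n_lr, n_hr))
for i in range(n_lr):
    A[i, 2 * i:2 * i + 2] = 0.5   # average each pair of HR samples

A_pinv = np.linalg.pinv(A)        # Moore-Penrose pseudo-inverse A+

x = np.arange(n_hr, dtype=float)  # "high-resolution" signal
y = A @ x                         # degraded "low-resolution" observation
x_up = A_pinv @ y                 # pseudo-inverse back-projection

# Key property: re-degrading the back-projection reproduces y exactly,
# since A @ A+ @ A == A for the pseudo-inverse.
print(np.allclose(A @ x_up, y))   # True
```

In the paper's setting this consistency with the known degradation operator is what makes the architecture adaptable to multiple degradation models, since only `A` changes.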