The aim of this challenge is to solicit original contributions addressing the restoration of mobile videos, helping to improve the quality of experience of video viewers and advance the state of the art in video restoration. Although quality degradation of videos occurs in various phases (e.g., capturing, encoding, storage, transmission), we simplify the problem in this challenge to a post-processing problem.
Mobile internet has been infiltrating and changing people’s lives. For instance, capturing and sharing videos has become a new trend; in fact, a huge amount of video is generated and consumed every day. However, compared with videos produced by professionals, mobile videos may suffer from low quality, since they are often shot by unskilled amateurs with low-end capture devices under poor shooting conditions. In addition, content shared across different platforms or social networks may be transcoded multiple times, which can cause serious quality degradation.
A set of degraded videos captured by mobile devices will be given to participants, where the degradation may come from multiple sources. Partial decoding information will also be provided, and participants are encouraged to utilize it. Participants are requested to recover high-quality versions of the degraded videos. We are interested in all kinds of mobile video restoration algorithms, from traditional signal processing to deep learning approaches.
Restoration results of each proposal will be evaluated using objective (e.g., PSNR/SSIM) and subjective (MOS) quality metrics. The speed of each algorithm under limited resources will also be measured. Each algorithm will be run on a machine equipped with a GeForce GTX 1080 Ti GPU or an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz, whichever is more favorable to your algorithm.
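For orientation, the PSNR objective metric mentioned above can be computed per frame as follows. This is only an illustrative sketch using NumPy, not the challenge's official scoring code; the `psnr` function name and the dummy frames are our own:

```python
import numpy as np

def psnr(ref, restored, max_val=255.0):
    """Peak signal-to-noise ratio (in dB) between two 8-bit frames."""
    mse = np.mean((ref.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

# Dummy 720p grayscale frame with a single pixel off by one.
ref = np.full((720, 1280), 128, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 129
score = psnr(ref, noisy)  # very high PSNR, since only one pixel differs
```

SSIM is structurally more involved (it compares local luminance, contrast, and structure statistics); off-the-shelf implementations such as `skimage.metrics.structural_similarity` are commonly used for it.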
We emphasize both the quality of the restored videos and the usability of the algorithm in the real world, so we will sort the submitted proposals into two categories. In category one, we will rank the proposals based only on restored video quality. In category two, we will rank the proposals based on both quality and speed/efficiency (e.g., 720p @ 30 fps on the server set-up mentioned above).
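As a rough illustration of the category-two speed target, participants can self-check throughput with a simple timing harness. This is a sketch under our own assumptions: `meets_realtime`, the dummy byte-buffer frames, and the identity "restorer" are placeholders, not part of the challenge interface:

```python
import time

def meets_realtime(process_frame, frames, target_fps=30.0):
    """Time process_frame over all frames and compare against target_fps."""
    start = time.perf_counter()
    for frame in frames:
        process_frame(frame)
    elapsed = time.perf_counter() - start
    achieved_fps = len(frames) / elapsed
    return achieved_fps >= target_fps, achieved_fps

# Dummy 720p-sized grayscale frames and a trivial identity "restorer".
frames = [bytearray(1280 * 720) for _ in range(30)]
ok, fps = meets_realtime(lambda f: f, frames)
```

A real submission would substitute its actual restoration step for the identity lambda and decoded video frames for the dummy buffers.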
We will award $1500 to the solution which achieves the best restored video quality, and award $1500 to the solution which achieves the best balance between visual quality and speed.
Submissions should provide the following items:
1. Source code (compilable, with necessary descriptions)
2. Restoration results of corresponding degraded videos (Parameters/models should NOT be set manually for each video)
3. Training/inferencing scripts and models
4. Any other necessary files if applicable to reproduce the results
5. A detailed technical description and complexity analysis, in the form of a short paper, for evaluation
Participants are encouraged (but not required) to submit a paper on their proposal following the ICIP 2019 guidelines. The authors of the companion papers will be notified after a technical review process, and the authors of accepted papers need to prepare a camera-ready version so that their papers can be published on IEEE Xplore under the name "International Conference on Image Processing Challenges (ICIPC)". For the exact formatting guidelines (e.g., template, page limit, etc.), refer to the ICIP 2019 website (http://2019.ieeeicip.org).
Yunfei Zheng, Kwai Inc., USA (firstname.lastname@example.org)
Bing Yu, Kuaishou, China (email@example.com)
Xing Wen, Kwai Inc., USA (firstname.lastname@example.org)
Jiajie Zhang, Kuaishou, China (email@example.com)
If you have any questions or requests, or need further clarifications, please contact the organizers.