Grand Challenges

Content-Based Video Relevance Prediction

Organizers: Yin Zheng, Bangsheng Tang, Xiaohui Xie, Hanning Zhou (Hulu LLC)

The goal of this challenge is to explore solutions for content-based video relevance prediction in recommender systems. Video streaming services such as Hulu depend heavily on recommender systems to help their users discover videos they would enjoy. Most existing recommender systems compute video relevance from users' implicit feedback, e.g. watch and search behavior. For example, one can use collaborative-filtering methods to model user-to-video preferences and compute video-to-video relevance scores. However, when a new video is added to the library, the recommender system must solve the content cold-start problem, i.e. bootstrap the video's relevance scores with very little user behavior for the newly added video, a task on which traditional collaborative-filtering methods perform poorly.

One promising approach to the content cold-start problem is content-based video relevance prediction, where one predicts video relevance by analyzing the audio-visual features and metadata of the videos. In this grand challenge, participants will be given the necessary materials from a video library, and the task is to predict a video-to-video relevance table from those materials. The ground truth is a video-to-video relevance table learned from user behavior.
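As an illustration of the task, here is a minimal sketch of building a video-to-video relevance table from per-video content features. The use of pooled feature vectors and cosine similarity is an assumption for illustration only, not the organizers' method or the challenge baseline:

```python
import numpy as np

def relevance_table(features, top_k=5):
    """Content-based video-to-video relevance sketch.

    features: (n_videos, d) array, one content feature vector per video
              (e.g. pooled audio-visual descriptors -- an assumption here).
    Returns an (n_videos, top_k) array of most-relevant video indices,
    ranked by cosine similarity.
    """
    # L2-normalize so the dot product equals cosine similarity.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)  # a video is not its own recommendation
    return np.argsort(-sim, axis=1)[:, :top_k]
```

In the challenge setting, the predicted table would then be scored against the ground-truth table learned from user behavior.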

For more details, please visit the CBVRP grand challenge website:

Light Field Image Coding

Organizers: Touradj Ebrahimi (EPFL), Fernando Pereira (IT), Peter Schelkens (VUB/imec), Siegfried Foessel (FhG IIS)

Tremendous progress has been achieved in the way consumers and professionals capture, store, deliver, display and process visual content. Emerging cameras and displays allow for the capture and visualization of new and rich forms of visual data. A new activity of the JPEG Standardization Committee, called JPEG Pleno, intends to provide a standard framework to facilitate capture, representation and exchange of plenoptic content in omnidirectional, depth-enhanced, point cloud, light field, and holographic imaging modalities. It aims to define new tools for improved compression while providing advanced functionalities at the system level.

The ICIP 2017 Grand Challenge on Light Field Image Coding solicits technical contributions that demonstrate efficient coding of light field image content captured either with lenslet cameras or with high-density camera arrays. The challenge is designed to be in sync with the JPEG Pleno call for proposals on light field coding and will use the same content, evaluation methodologies, and deadlines. The intention is to give academic organizations that do not contribute to standardization an opportunity to compare their light field coding algorithms with those submitted for standardization, and likewise to allow actors in standardization to inform academic conferences about their submitted algorithms.

For more details and specific instructions please visit:

The schedule is available at:

Use of Image Restoration for Video Coding Efficiency Improvement

Organizers: Zoe Liu and Debargha Mukherjee (Google)

In this grand challenge, we call for image restoration schemes that can be used to enhance video compression efficiency as a post-processing tool, within or outside the prediction loop, after a standard deblocking filter has been applied. Unlike the traditional blind use case for image restoration, when restoration is used for compression, side information may be extracted and compressed at the encoder and transmitted with the bit-stream to facilitate restoration at the decoder, thereby improving the decoded video quality according to accepted metrics. Note that tools such as Sample Adaptive Offset in HEVC, as well as Wiener-filter-based schemes, already belong to this category. To keep the barrier to entry low and encourage researchers in image restoration to get involved in video compression, we focus on out-of-loop schemes independent of the specific codec used. Participants will be provided with a compressed video test set containing pre-compressed HEVC and VP9 bit-streams at a variety of bit-rates, together with the corresponding source and decoded sequences. Participants will be required to design a restoration scheme and a side-information layer driving the proposed scheme, such that, when applied to the decoded frames, it reduces their distortion relative to the source. BDRATE will be computed by comparing the new rate-distortion points for each test sequence, accounting for the side information and the improved fidelity to the original. To keep things simple, the metric used for BDRATE computation will be based on PSNR, computed in a universally accepted manner. Tools, including executable files for both the encoder and the decoder, together with PSNR and BDRATE metric calculators in MATLAB, will be provided.
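The official metric calculators are provided in MATLAB; for orientation, the sketch below shows the standard Bjøntegaard delta-rate (BD-rate) computation in Python. It follows the common cubic-fit formulation (fit log-rate as a polynomial of PSNR, integrate the gap over the overlapping quality range); the exact formulation used by the provided calculators is not specified here, so treat this as illustrative:

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Bjontegaard delta rate: average % bitrate change at equal PSNR.

    Each curve is a set of rate-distortion points (bitrate, PSNR).
    Negative result means the test codec needs less rate than the
    reference for the same quality.
    """
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    # Cubic fit of log-rate as a function of PSNR, per curve.
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo = max(np.min(psnr_ref), np.min(psnr_test))
    hi = min(np.max(psnr_ref), np.max(psnr_test))
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_ref) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0
```

For example, a test curve whose rates are uniformly 10% lower than the reference at every PSNR point yields a BD-rate of about -10%.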

For more details and specific instructions, please refer to:

Video Compression Technology

Organizers: David Bull and Angeliki Katsenou (University of Bristol), Patrick Le Callet (University of Nantes), Jens-Rainer Ohm and Mathias Wien (RWTH Aachen University)

This challenge intends to identify technology that improves compression beyond the current state of the art in video coding, represented by the most recent standard, HEVC. Video compression continues to be one of the most important areas in image and signal processing, as the ever-increasing amount and resolution of video data demand more efficient binary representations.

Participants will be asked to deliver bitstreams with pre-defined maximum target rates for a given set of sequences, along with a decoder executable for reconstructing the decoded videos. Objective criteria (such as PSNR, SSIM, and VQM) will be computed over the entire set of data. Additionally, subjective tests will be run for selected test cases. A paper for publication in the proceedings should also be submitted. The best performers will have the opportunity to present a summary of the underlying technology during the ICIP session where the results are presented.
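Of the objective criteria listed, PSNR is the simplest and most widely used; a minimal per-frame sketch follows. The 8-bit peak value of 255 is an assumption (higher bit depths use a larger peak), and the organizers' evaluation tooling may aggregate across frames and channels differently:

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a distorted
    frame, in dB. `peak` assumes 8-bit content; adjust for 10-bit etc."""
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```

Sequence-level PSNR is commonly reported as the average of per-frame values, though averaging MSE first is also seen; the challenge rules govern which convention applies.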

For more information, please refer to: