Spatio-temporal Disocclusion Filling Using Novel Sprite Cells

Abstract

Depth image-based rendering is an important technique for virtual view synthesis with limited 3-D data. However, occluded areas produce disocclusions in the synthesized images, and filling these disocclusions plausibly is a critical task in virtual view synthesis. In addition to spatial consistency, the temporal consistency of the filled regions also affects visual quality. In this paper, we propose a novel codebook method, called the sprite cell, for filling disocclusions with high spatial and temporal consistency. Each codeword consists of the color vector, depth value, frame log, and confidence score of the corresponding pixel. In contrast to existing methods that reuse the filling results of previous frames without considering their accuracy, the proposed method estimates the confidence scores of the filling results to prevent filling errors from propagating over time. Moreover, we introduce a method that corrects the luminance of filled disocclusions to compensate for scene changes. Experimental results show that the proposed method achieves both objective and subjective improvements over state-of-the-art methods, and the sequences it synthesizes have higher spatio-temporal consistency.
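The abstract describes each sprite-cell codeword as holding a color vector, a depth value, a frame log, and a confidence score for one pixel. A minimal sketch of that record and a simplified confidence-based update rule might look as follows; the field names, types, and the update criterion are illustrative assumptions, not the paper's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class SpriteCellCodeword:
    """One codeword of a sprite cell, per the abstract's description.
    Field names and types are illustrative, not taken from the paper."""
    color: tuple        # (R, G, B) color vector of the pixel
    depth: float        # depth value of the pixel
    frame_log: int      # frame index at which the pixel was filled/observed
    confidence: float   # confidence score of the filling result

def update_codeword(current, candidate):
    """Hypothetical update rule: reuse a previous fill only while no more
    confident candidate exists, so low-confidence filling errors are not
    carried forward indefinitely. The paper's actual criterion may differ."""
    return candidate if candidate.confidence > current.confidence else current
```

For example, a newly estimated fill with confidence 0.9 would replace a stored codeword with confidence 0.4, whereas a less confident candidate would leave the stored codeword untouched.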

Publication
IEEE Transactions on Multimedia