|Title||View Synthesis from multi-view RGB data using multilayered representation and volumetric estimation|
|Publication Type||Journal Article|
|Year of Publication||2020|
|Authors||Z Su, T Zhou, K Li, D Brady, and Y Liu|
|Journal||Virtual Reality and Intelligent Hardware|
|Pagination||43 - 55|
Background: Aiming at free-viewpoint exploration of complicated scenes, this paper presents a method for interpolating views among multiple RGB cameras. Methods: We combine the idea of a cost volume, which represents 3D information, with 2D semantic segmentation of the scene to accomplish view synthesis of complicated scenes. The cost volume is used to estimate the depth and confidence maps of the scene, and a multi-layer representation of the data is used to optimize the view synthesis of the main object. Results and Conclusions: By applying different treatments to different layers of the volume, we can handle complicated scenes containing multiple persons and plentiful occlusions. We also propose a view interpolation → multi-view reconstruction → view interpolation pipeline to iteratively optimize the result. We test our method on a variety of multi-view scene data and generate convincing results.
|Short Title||Virtual Reality and Intelligent Hardware|
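To make the cost-volume idea mentioned in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of winner-take-all depth and confidence estimation from a cost volume. It assumes a rectified two-view setup so that the per-plane warp reduces to a horizontal pixel shift; the function name `cost_volume_depth` and the margin-based confidence are illustrative choices, not from the paper.

```python
import numpy as np

def cost_volume_depth(ref, src, max_disp):
    """Build a plane-sweep style cost volume over disparity hypotheses
    (for a rectified pair, warping to each depth plane is a horizontal
    shift) and return a winner-take-all disparity map plus a confidence
    map. This is an illustrative sketch, not the paper's method."""
    h, w = ref.shape
    volume = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        shifted = np.roll(src, d, axis=1)       # hypothesize disparity d
        cost = np.abs(ref - shifted)            # photometric (SAD) cost
        volume[d, :, d:] = cost[:, d:]          # mask wrapped-around columns
    disp = volume.argmin(axis=0)                # winner-take-all depth label
    # confidence: margin between best and second-best cost per pixel
    sorted_costs = np.sort(volume, axis=0)
    conf = sorted_costs[1] - sorted_costs[0]
    return disp, conf

# toy rectified pair: src is ref cyclically shifted by 3 pixels
rng = np.random.default_rng(0)
ref = rng.random((8, 32))
src = np.roll(ref, -3, axis=1)
disp, conf = cost_volume_depth(ref, src, max_disp=8)
```

In the full multi-view setting described by the abstract, the shift would be replaced by per-plane homography warps from each source camera, and the per-layer treatment would operate on slices of this volume selected by the 2D segmentation.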