Estimating spatially-varying lighting in urban scenes with disentangled representation
Published in European Conference on Computer Vision (ECCV), 2022
Recommended citation: Tang, Jiajun, Yongjie Zhu, Haoyu Wang, Jun Hoong Chan, Si Li, and Boxin Shi. "Estimating spatially-varying lighting in urban scenes with disentangled representation." In European Conference on Computer Vision, pp. 454-469. Cham: Springer Nature Switzerland, 2022.
We present an end-to-end network for spatially-varying outdoor lighting estimation in urban scenes, given a single limited field-of-view LDR image and any assigned 2D pixel position. Our network learns three disentangled latent spaces that represent sky light, sun light, and lighting-independent local contents, respectively. At inference time, the lighting estimation network runs efficiently in an end-to-end manner, merging the estimated global lighting with the local appearance produced by the local appearance renderer according to the predicted local silhouette. We enhance an existing synthetic dataset with more realistic material models and diverse lighting conditions for more effective training. We also capture the first real dataset with HDR labels for evaluating spatially-varying outdoor lighting estimation. Experiments on both synthetic and real datasets show that our method achieves state-of-the-art performance with more flexible editability.
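To make the composition step concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: an encoder maps the LDR image to three disentangled latent codes (sky light, sun light, and lighting-independent local content), and the outputs are merged into a spatially-varying environment map via the predicted silhouette. All module names, dimensions, and the network internals here are hypothetical placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SpatiallyVaryingLightingNet(nn.Module):
    """Hypothetical sketch: three disentangled latent codes (sky, sun,
    local content) are decoded into global lighting and local layers,
    then composited with a predicted silhouette mask."""

    def __init__(self, latent_dim: int = 64, env_h: int = 32, env_w: int = 64):
        super().__init__()
        self.env_h, self.env_w = env_h, env_w
        # Shared backbone over the limited-FoV LDR input.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Heads for the three disentangled latent spaces.
        self.to_sky = nn.Linear(64, latent_dim)
        self.to_sun = nn.Linear(64, latent_dim)
        self.to_local = nn.Linear(64, latent_dim)
        # Placeholder decoders: global HDR lighting panorama from the two
        # lighting codes; local appearance + silhouette logit from the
        # content code and the queried 2D pixel position.
        self.global_decoder = nn.Sequential(
            nn.Linear(2 * latent_dim, env_h * env_w * 3), nn.Softplus())
        self.local_decoder = nn.Linear(latent_dim + 2, env_h * env_w * 4)

    def forward(self, image: torch.Tensor, pixel_pos: torch.Tensor) -> torch.Tensor:
        feat = self.backbone(image)
        z_sky, z_sun, z_local = self.to_sky(feat), self.to_sun(feat), self.to_local(feat)
        # Global lighting decoded from the sky and sun codes (HDR, nonnegative).
        global_light = self.global_decoder(torch.cat([z_sky, z_sun], dim=-1))
        global_light = global_light.view(-1, 3, self.env_h, self.env_w)
        # Local appearance and silhouette, conditioned on the queried position.
        local = self.local_decoder(torch.cat([z_local, pixel_pos], dim=-1))
        local = local.view(-1, 4, self.env_h, self.env_w)
        appearance, silhouette = local[:, :3], torch.sigmoid(local[:, 3:])
        # Merge: the silhouette selects local content over the global lighting.
        return silhouette * appearance + (1 - silhouette) * global_light

# Usage with dummy inputs (shapes and normalization are assumptions):
net = SpatiallyVaryingLightingNet()
img = torch.rand(1, 3, 128, 128)   # limited-FoV LDR image
pos = torch.tensor([[0.5, 0.7]])   # queried 2D pixel position, normalized
env_map = net(img, pos)            # per-position HDR environment map
```

Because the lighting and content codes are separate, one could in principle edit a scene by swapping or interpolating the sky/sun codes while keeping the local content code fixed, which is the kind of editability the abstract refers to.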