Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in European Conference on Computer Vision (ECCV), 2022
We present an end-to-end network for spatially-varying outdoor lighting estimation in urban scenes given a single limited field-of-view LDR image and any assigned 2D pixel position. We use three disentangled latent spaces learned by our network to represent sky light, sun light, and lighting-independent local contents, respectively. At inference time, our lighting estimation network can run efficiently in an end-to-end manner by merging the global lighting and the local appearance rendered by the local appearance renderer with the predicted local silhouette. We enhance an existing synthetic dataset with more realistic material models and diverse lighting conditions for more effective training. We also capture the first real dataset with HDR labels for evaluating spatially-varying outdoor lighting estimation. Experiments on both synthetic and real datasets show that our method achieves state-of-the-art performance with more flexible editability.
Recommended citation: Tang, Jiajun, Yongjie Zhu, Haoyu Wang, Jun Hoong Chan, Si Li, and Boxin Shi. "Estimating spatially-varying lighting in urban scenes with disentangled representation." In European Conference on Computer Vision, pp. 454-469. Cham: Springer Nature Switzerland, 2022.
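As a rough illustration of the disentangled design described in the abstract (this is not the authors' released code), the sketch below wires a shared image encoder to three separate latent heads for sky light, sun light, and lighting-independent local content, then merges them to predict a small environment map for a queried 2D pixel. All module names, layer sizes, and the toy decoder are assumptions made purely for illustration.

```python
# Hypothetical sketch (not the paper's network): three disentangled latent
# codes for sky light, sun light, and local content, merged per query pixel.
import torch
import torch.nn as nn

class DisentangledLightingNet(nn.Module):
    """Toy encoder with three separate latent spaces (illustrative only)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        # Shared image encoder (placeholder backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Three disentangled heads: sky light, sun light, local content.
        self.sky_head = nn.Linear(64, latent_dim)
        self.sun_head = nn.Linear(64, latent_dim)
        self.local_head = nn.Linear(64 + 2, latent_dim)  # + 2D query pixel
        # Toy decoder that merges the three codes into a tiny HDR sky map.
        self.decoder = nn.Linear(3 * latent_dim, 3 * 16 * 32)

    def forward(self, image, pixel_xy):
        feat = self.backbone(image)
        z_sky = self.sky_head(feat)
        z_sun = self.sun_head(feat)
        z_local = self.local_head(torch.cat([feat, pixel_xy], dim=1))
        env = self.decoder(torch.cat([z_sky, z_sun, z_local], dim=1))
        return env.view(-1, 3, 16, 32)  # lighting estimate at the query pixel

# Usage: one LDR image and one normalized 2D query position.
net = DisentangledLightingNet()
env_map = net(torch.rand(1, 3, 128, 128), torch.tensor([[0.4, 0.7]]))
print(env_map.shape)  # torch.Size([1, 3, 16, 32])
```

Keeping the three heads separate is one simple way to leave the sky, sun, and local codes individually editable, in the spirit of the disentangled representation the abstract describes.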
Published in International Conference on Computer Vision (ICCV), 2023
Illumination planning in photometric stereo aims to balance surface normal estimation accuracy against image capturing efficiency by selecting optimal light configurations. It depends on factors such as the unknown shape and general reflectance of the target object, global illumination, and the choice of photometric stereo backbones, which are too complex for existing methods based on handcrafted illumination planning rules. This paper proposes a learning-based illumination planning method that jointly considers these factors by integrating a neural network and a generalized image formation model. As it is impractical to supervise illumination planning directly, due to the enormous search space for ground-truth light configurations, we formulate illumination planning using reinforcement learning, which explores the light space in a photometric-stereo-aware and reward-driven manner. Experiments on synthetic and real-world datasets demonstrate that photometric stereo under the 20-light configurations selected by our method is comparable to, or even surpasses, that using lights from all available directions.
Recommended citation: Chan, Jun Hoong, Bohan Yu, Heng Guo, Jieji Ren, Zongqing Lu, and Boxin Shi. "ReLeaPS: Reinforcement Learning-based Illumination Planning for Generalized Photometric Stereo." In International Conference on Computer Vision, 2023.
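To make the reward-driven idea concrete, here is a minimal, hypothetical sketch that greedily adds light directions for calibrated Lambertian photometric stereo on random synthetic normals, using the reduction in mean angular error as the reward. This greedy loop is a stand-in for the paper's learned reinforcement-learning policy; the synthetic scene, noise level, candidate light set, and 20-light budget are all assumptions for illustration.

```python
# Hypothetical sketch (not the paper's method): greedy, reward-driven light
# selection for Lambertian photometric stereo on a random synthetic scene.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: random camera-facing unit normals, albedo, candidate lights.
normals = rng.normal(size=(500, 3)); normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals[:, 2] = np.abs(normals[:, 2])
albedo = rng.uniform(0.2, 1.0, size=(500, 1))
candidates = rng.normal(size=(100, 3)); candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
candidates[:, 2] = np.abs(candidates[:, 2])

def render(L):
    """Noisy Lambertian images under light matrix L (n_lights x 3)."""
    I = np.clip(normals @ L.T, 0, None) * albedo
    return I + rng.normal(scale=0.01, size=I.shape)

def mean_angular_error(L):
    """Least-squares photometric stereo, then mean angular error in degrees."""
    I = render(L)
    G = I @ np.linalg.pinv(L.T)            # per-pixel scaled normals
    N = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-8)
    cos = np.clip(np.sum(N * normals, axis=1), -1, 1)
    return np.degrees(np.arccos(cos)).mean()

# Greedy planning: start from 3 random lights, then repeatedly add the
# candidate whose reward (error reduction) is largest, up to a 20-light budget.
chosen = list(rng.choice(len(candidates), size=3, replace=False))
while len(chosen) < 20:
    base_err = mean_angular_error(candidates[chosen])
    free = [j for j in range(len(candidates)) if j not in chosen]
    rewards = [base_err - mean_angular_error(candidates[chosen + [j]]) for j in free]
    chosen.append(free[int(np.argmax(rewards))])

print(f"20-light configuration -> {mean_angular_error(candidates[chosen]):.2f} deg error")
```

The reward here is evaluated by actually solving a toy photometric stereo problem after each candidate light is added, which is what makes the selection "photometric-stereo-aware" in this simplified setting.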
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.