Chenguang Huang


I am a research scientist at the Robotics and AI Institute in Zurich. Previously, I was a research scientist at the Technical University of Nuremberg, where I worked on scene representation and vision-language-action models. I received my PhD from the University of Freiburg, supervised by Prof. Wolfram Burgard. Before that, I obtained my Master's degree from the Technical University of Munich, majoring in Robotics, Cognition, Intelligence. During my Master's studies, I was fortunate to spend two semesters as an exchange student at ETH Zurich, doing research on visual-LiDAR fusion for mapping. Earlier, I received my Bachelor's degree in Vehicle Engineering from Jilin University in China. My current research interests lie in scene representation for robot applications (such as open-vocabulary mapping, especially in dynamic scenes), vision-language-action models for robot navigation and manipulation, and 3D-LLMs.

News

Aug 08, 2025 Our work DiWA: Diffusion Policy Adaptation with World Models was accepted to CoRL 2025.
Aug 08, 2025 Our work Articulated Object Estimation in the Wild was accepted to CoRL 2025.
Jun 10, 2025 Our work Articulated Object Estimation in the Wild was accepted as a spotlight paper at the RSS workshop EgoAct: 1st Workshop on Egocentric Perception and Action for Robot Learning.
Jun 06, 2025 Our work DiWA: Diffusion Policy Adaptation with World Models was accepted for an oral presentation at the RSS Structured World Models for Robotic Manipulation workshop.
May 26, 2025 Our work Multimodal Spatial Language Maps for Robot Navigation and Manipulation was accepted to the International Journal of Robotics Research. Visit our project website at mslmaps.github.io.

Selected Publications

2025

  1. Diffusion Policy Adaptation with World Models
    Akshay L Chandra*, Iman Nematollahi*, Chenguang Huang, Tim Welschehold, Wolfram Burgard, and 1 more author
    In Proc. of the Conference on Robot Learning (CoRL), 2025
  2. Articulated Object Estimation in the Wild
    Abdelrhman Werby*, Martin Büchner*, Adrian Röfer*, Chenguang Huang, Wolfram Burgard, and 1 more author
    In Proc. of the Conference on Robot Learning (CoRL), 2025
  3. Multimodal Spatial Language Maps for Robot Navigation and Manipulation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    International Journal of Robotics Research, 2025
  4. BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding
    Chenguang Huang, Shengchao Yan, and Wolfram Burgard
    IEEE Robotics and Automation Letters, 2025

2024

  1. Open X-Embodiment: Robotic Learning Datasets and RT-X Models
    Abby O’Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, and 274 more authors
    In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 2024
  2. What Matters in Employing Vision Language Models for Tokenizing Actions in Robot Control?
    Nicolai Dorka*, Chenguang Huang*, Tim Welschehold, and Wolfram Burgard
    In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA, 2024
  3. Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation
    Abdelrhman Werby*, Chenguang Huang*, Martin Büchner*, Abhinav Valada, and Wolfram Burgard
    In Proc. of Robotics: Science and Systems (RSS), 2024

2023

  1. Audio Visual Language Maps for Robot Navigation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    In Proc. of the International Symposium on Experimental Robotics (ISER), 2023
  2. Visual Language Maps for Robot Navigation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 2023

Teaching