Chenguang Huang


I am a final-year (4th year) PhD student at the University of Freiburg, supervised by Prof. Wolfram Burgard. I am currently visiting my supervisor's new lab at the Technical University of Nuremberg, doing research on scene representation and vision-language-action models. Previously, I obtained my Master's degree from the Technical University of Munich, majoring in Robotics, Cognition, Intelligence. During my Master's studies, I was fortunate to spend two semesters as an exchange student at ETH Zurich, doing research on visual-LiDAR fusion for mapping. Earlier, I received my Bachelor's degree in Vehicle Engineering from Jilin University in China. My research interests lie in scene representation for robot applications (such as open-vocabulary mapping, especially in dynamic scenes), vision-language-action models for robot navigation and manipulation, and 3D-LLMs. I am open to positions now!

News

May 26, 2025 Our work Multimodal Spatial Language Maps for Robot Navigation and Manipulation was accepted to the International Journal of Robotics Research.
Jan 31, 2025 Our work BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding was accepted to IEEE Robotics and Automation Letters.
Dec 03, 2024 New arXiv preprint of our latest work on tackling object association in long-term dynamic scenes with a customized encoder: BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding.
Oct 26, 2024 Our work BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding and Navigation was accepted to the Lifelong Learning for Home Robots workshop at CoRL 2024 in Munich, Germany.
May 15, 2024 Our work Open X-Embodiment: Robotic Learning Datasets and RT-X Models received a Best Paper Award at ICRA 2024 in Yokohama, Japan.

Selected Publications

2025

  1. Multimodal Spatial Language Maps for Robot Navigation and Manipulation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    International Journal of Robotics Research, 2025
  2. BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding
    Chenguang Huang, Shengchao Yan, and Wolfram Burgard
    IEEE Robotics and Automation Letters, 2025

2024

  1. Open X-Embodiment: Robotic Learning Datasets and RT-X Models
    Open X-Embodiment Collaboration: Abby O'Neill, Abdul Rehman, Abhiram Maddukuri, Abhishek Gupta, Abhishek Padalkar, and 274 more authors
    In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 2024
  2. What Matters in Employing Vision Language Models for Tokenizing Actions in Robot Control?
    Nicolai Dorka*, Chenguang Huang*, Tim Welschehold, and Wolfram Burgard
    In First Workshop on Vision-Language Models for Navigation and Manipulation at ICRA, 2024
  3. Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation
    Abdelrhman Werby*, Chenguang Huang*, Martin Büchner*, Abhinav Valada, and Wolfram Burgard
    In Proc. of Robotics: Science and Systems (RSS), 2024

2023

  1. Audio Visual Language Maps for Robot Navigation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    In Proc. of the International Symposium on Experimental Robotics (ISER), 2023
  2. Visual Language Maps for Robot Navigation
    Chenguang Huang, Oier Mees, Andy Zeng, and Wolfram Burgard
    In Proc. of the IEEE International Conference on Robotics & Automation (ICRA), 2023

Teaching