May 26, 2025 | Our work Multimodal Spatial Language Maps for Robot Navigation and Manipulation got accepted to the International Journal of Robotics Research. |
Jan 31, 2025 | Our work BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding got accepted to IEEE Robotics and Automation Letters. |
Dec 03, 2024 | New arXiv preprint for our latest work on tackling object association in long-term dynamic scenes with a customized encoder, BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding. |
Oct 26, 2024 | Our work BYE: Build Your Encoder with One Sequence of Exploration Data for Long-Term Dynamic Scene Understanding and Navigation got accepted to the CoRL 2024 Lifelong Learning for Home Robot workshop in Munich, Germany. |
May 15, 2024 | Our work Open X-Embodiment: Robotic Learning Datasets and RT-X Models received the Best Paper Award at ICRA 2024 in Yokohama, Japan. |
May 14, 2024 | Our latest work Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation got accepted to RSS 2024 in Delft, Netherlands. |
May 10, 2024 | Google Scholar citation count reached 300 (see here). |
Apr 05, 2024 | Our work Open X-Embodiment: Robotic Learning Datasets and RT-X Models was nominated for the Best Paper, Best Student Paper, and Best Manipulation Paper awards at ICRA 2024 in Yokohama, Japan. |
Mar 29, 2024 | Our latest work Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation got accepted to the ICRA 2024 VLMNM workshop in Yokohama, Japan. |
Mar 29, 2024 | Our latest work What Matters in Employing Vision Language Models for Tokenizing Actions in Robot Control? got accepted to the ICRA 2024 VLMNM workshop in Yokohama, Japan. |
Mar 27, 2024 | New preprint for our latest work on hierarchical scene representation for language-grounded navigation, Hierarchical Open-Vocabulary 3D Scene Graphs for Language-Grounded Robot Navigation. |
Mar 01, 2024 | Google Scholar citation count reached 200 (see here). |
Feb 26, 2024 | The code for Audio Visual Language Maps for Robot Navigation is released: https://github.com/avlmaps/AVLMaps. |
Jan 29, 2024 | One paper accepted at ICRA 2024, Yokohama, Open X-Embodiment. |
Oct 03, 2023 | Excited to have contributed to the Open X-Embodiment worldwide collaboration paper by evaluating the generalist robot model in a real-world setup at the University of Freiburg. |
Sep 11, 2023 | One paper accepted at ISER 2023, Audio Visual Language Maps for Robot Navigation. |
Mar 15, 2023 | New preprint for our latest collaboration with Google Research, Audio Visual Language Maps for Robot Navigation. |
Jan 17, 2023 | One paper accepted at ICRA 2023, London, Visual Language Maps for Robot Navigation. |
Oct 13, 2022 | New preprint for our latest collaboration with Google Research, Visual Language Maps for Robot Navigation. |