M3DMap: Object-aware Multimodal 3D Mapping for Dynamic Environments

Dmitry Yudin¹,²
¹Moscow Institute of Physics and Technology (MIPT), ²AIRI
Contact the author by e-mail: yuddim@yandex.ru

Abstract

3D mapping in dynamic environments remains a challenge for researchers in robotics and autonomous transportation: there is no universal representation for dynamic 3D scenes that incorporates multimodal data such as images, point clouds, and text. This article takes a step toward solving this problem. It proposes a taxonomy of methods for constructing multimodal 3D maps, classifying contemporary approaches by scene type and representation, learning method, and practical application. Using this taxonomy, a brief structured analysis of recent methods is provided. The article also describes an original modular method called M3DMap, designed for object-aware construction of multimodal 3D maps of both static and dynamic scenes. It consists of several interconnected components: a neural module for multimodal object segmentation and tracking; an odometry estimation module, including trainable algorithms; a module for 3D map construction and updating, with different implementations depending on the desired scene representation; and a multimodal data retrieval module. The article highlights original implementations of these modules and their advantages in solving practical tasks ranging from 3D object grounding to mobile manipulation. Additionally, it presents theoretical propositions demonstrating the positive effect of using multimodal data and modern foundation models in 3D mapping methods.

Figure: Taxonomy of Multimodal 3D Mapping Methods.

Figure: Scheme of the proposed M3DMap approach.
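
To make the modular structure outlined in the abstract concrete, below is a minimal, runnable Python sketch of how the four modules could fit together. All class and method names (Observation, SegmenterTracker, M3DMapPipeline, etc.) are hypothetical placeholders with toy internals, not the project's actual API.

# Illustrative sketch of the M3DMap pipeline; names and internals are
# assumptions, not the released implementation.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Observation:
    image: np.ndarray              # H x W x 3 RGB frame
    point_cloud: np.ndarray        # N x 3 points in the sensor frame
    text: Optional[str] = None     # optional natural-language query

@dataclass
class TrackedObject:
    track_id: int
    label: str
    points: np.ndarray             # object points, later in the map frame

class SegmenterTracker:
    """Stand-in for the neural multimodal segmentation/tracking module."""
    def __init__(self):
        self._next_id = 0
    def process(self, obs: Observation) -> list:
        # Toy logic: treat the whole cloud as one tracked object; a real
        # module would produce per-object masks and persistent track IDs.
        obj = TrackedObject(self._next_id, "object", obs.point_cloud)
        self._next_id += 1
        return [obj]

class Odometry:
    """Stand-in for the odometry module (trainable in the real system)."""
    def estimate(self, obs: Observation) -> np.ndarray:
        return np.eye(4)           # identity 4x4 pose for this sketch

class ObjectMap:
    """Object-aware 3D map; the real backend depends on the chosen scene
    representation (point clouds, voxels, Gaussians, neural fields, ...)."""
    def __init__(self):
        self.objects = {}          # track_id -> TrackedObject
    def update(self, objs, pose: np.ndarray) -> None:
        R, t = pose[:3, :3], pose[:3, 3]
        for o in objs:
            o.points = o.points @ R.T + t   # sensor frame -> map frame
            self.objects[o.track_id] = o    # insert or refresh the track

class Retriever:
    """Stand-in for multimodal retrieval (e.g. text-to-object grounding)."""
    def query(self, m: ObjectMap, text: str) -> list:
        return [o for o in m.objects.values() if text in o.label]

class M3DMapPipeline:
    def __init__(self):
        self.segmenter, self.odometry = SegmenterTracker(), Odometry()
        self.map, self.retriever = ObjectMap(), Retriever()
    def step(self, obs: Observation) -> None:
        objs = self.segmenter.process(obs)   # per-object segments + IDs
        pose = self.odometry.estimate(obs)   # current sensor pose
        self.map.update(objs, pose)          # fuse objects into the map
    def ground(self, text: str) -> list:
        return self.retriever.query(self.map, text)

if __name__ == "__main__":
    pipe = M3DMapPipeline()
    frame = Observation(image=np.zeros((480, 640, 3), dtype=np.uint8),
                        point_cloud=np.random.rand(1000, 3))
    pipe.step(frame)
    print(len(pipe.ground("object")), "object(s) grounded")

Each class corresponds to one block of the scheme above, so individual modules can be swapped out (for example, a different odometry estimator or map backend) without changing the surrounding pipeline.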

BibTeX

@misc{yudin2025m3dmap,
      title={M3DMap: Object-aware Multimodal 3D Mapping for Dynamic Environments}, 
      author={Dmitry Yudin},
      year={2025},
}