Doctoral degree (PhD)

cycle: XXXVI

supervisor: Giorgio Grisetti

Thesis title: Unified Multimodal 3D Reconstructions

In our modern digital era, taking photos with smartphones has become a common habit. However, these 2D photos capture only part of the whole scene. 3D reconstruction, on the other hand, provides a comprehensive, multi-faceted view of scenes, enriching experiences ranging from personal memories to professional fields such as urban planning and archaeology. The technique is fundamental in augmented reality and robotic navigation, where a deep grasp of the world's 3D structure is essential. Current advancements, while significant, have yet to realize an "ideal" 3D reconstruction system. Such a system would consistently capture nearly all visible surfaces, remain extremely robust in camera tracking, and deliver detailed reconstructions rapidly. It would seamlessly scale from small to large environments without losing accuracy and perform well across diverse environmental and lighting conditions. Beyond classic cameras, LiDAR has gained significant attention over the past decade as another tool to perceive and reconstruct our surroundings. While cameras provide visually rich data, their depth perception is often limited and hard to estimate, especially in low-light conditions. By contrast, LiDARs, relying on their own emitted light, excel across lighting scenarios and in larger environments, but their measurements are sparse and lack color. Hence, an "ideal" 3D reconstruction pipeline should rely on both sensors, so that each mitigates the other's limitations. This research exploits the similarities between the two sensors, underscoring the value of integrating their data uniformly to achieve a more comprehensive environmental understanding and accurate performance without long processing times. However, integrating camera and LiDAR data presents challenges due to their distinct natures, necessitating precise calibration, synchronization, and complex multimodal processing pipelines.
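The uniform treatment of both sensors mentioned above has a concrete starting point: an unordered LiDAR point cloud can be projected onto a spherical image grid, after which it can be processed with the same image-based machinery used for camera data. A minimal NumPy sketch follows; the function name and the sensor parameters (vertical field of view, image size) are illustrative assumptions, not values from the thesis:

```python
import numpy as np

def lidar_to_range_image(points, h=64, w=1024,
                         fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Project an (N, 3) LiDAR point cloud onto an (h, w) spherical range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1.0, 1.0))  # elevation angle
    # map angles to pixel coordinates
    u = ((1.0 - yaw / np.pi) * 0.5 * w).astype(int) % w
    v = np.clip((fov_up - pitch) / (fov_up - fov_down) * h, 0, h - 1).astype(int)
    img = np.full((h, w), np.inf)
    # keep the closest return per pixel
    for ui, vi, ri in zip(u, v, r):
        if ri < img[vi, ui]:
            img[vi, ui] = ri
    return img
```

Per-return intensity values can be binned onto the same grid as a second channel, yielding the kind of LiDAR "image" on which camera-style processing becomes applicable.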
Advances in technology have made LiDAR-generated images increasingly similar to those from passive sensors, opening avenues for visual place recognition. Our exploration in this domain yielded promising results, particularly highlighting LiDAR's consistent performance under diverse lighting. Our research then moved to bridging the gap between LiDAR and RGB-D sensors. By devising a Simultaneous Localization and Mapping (SLAM) pipeline adaptable to both sensors and rooted in photometric alignment, we obtained results comparable with those of specialized systems. Delving into Bundle Adjustment, our generalized strategy showed remarkable efficiency, especially when merging data from both sensors. Further refinement incorporated geometric information, balancing robustness with precision and achieving high accuracy across varied environments. In addition, we introduce a robotics perception dataset recorded in Rome, encompassing RGB images, dense depth, 3D LiDAR point clouds, IMU, and GPS data. Recognizing the limitations of current datasets and the proficiency of contemporary SLAM and 3D reconstruction methods, our dataset offers a fresh challenge to push algorithmic boundaries. We emphasize precise calibration and synchronization, capturing varied settings from indoor scenes to highways with modern equipment. Collected both by hand and from vehicles, it is tailored to a range of robotic applications. In essence, this thesis encapsulates the pursuit of enhancing SLAM and 3D reconstruction through multimodality. By harnessing the capabilities of diverse depth sensors, we have made significant progress in the domain, paving the way for more integrated, compact, robust, and detailed systems in the future.
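The photometric alignment at the core of the direct SLAM pipeline described above can be sketched as follows: each pixel of a reference intensity image with per-pixel depth (native for RGB-D, or obtained from a LiDAR range image) is back-projected to 3D, transformed by a candidate pose, reprojected into the current image, and compared by intensity. The function below is an illustrative sketch under a simple pinhole camera model, not the thesis implementation; a real pipeline would minimize these residuals over the pose, typically with a robust kernel:

```python
import numpy as np

def photometric_residuals(ref_img, ref_depth, cur_img, T, K):
    """Intensity residuals of ref pixels reprojected into cur under pose T.

    ref_img, cur_img: (H, W) float intensities; ref_depth: (H, W) depths;
    T: 4x4 pose of the current frame w.r.t. the reference; K: 3x3 intrinsics."""
    H, W = ref_img.shape
    vs, us = np.mgrid[0:H, 0:W]
    d = ref_depth.ravel()
    valid = d > 0
    # back-project reference pixels to 3D points in the reference frame
    pix = np.stack([us.ravel(), vs.ravel(), np.ones(H * W)])
    pts = (np.linalg.inv(K) @ pix) * d
    # transform into the current frame and project with the pinhole model
    pts_cur = (T[:3, :3] @ pts) + T[:3, 3:4]
    proj = K @ pts_cur
    z = np.where(proj[2] > 1e-6, proj[2], 1.0)  # guard divide-by-zero
    u2 = np.round(proj[0] / z).astype(int)
    v2 = np.round(proj[1] / z).astype(int)
    in_img = (valid & (proj[2] > 1e-6) &
              (u2 >= 0) & (u2 < W) & (v2 >= 0) & (v2 < H))
    res = np.zeros(H * W)
    res[in_img] = ref_img.ravel()[in_img] - cur_img[v2[in_img], u2[in_img]]
    return res
```

Because the residual depends only on an intensity image with depth, the same error term applies unchanged whether the frame came from an RGB-D camera or from a projected LiDAR scan.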

Scientific output

11573/1684477 - 2023 - Photometric LiDAR and RGB-D Bundle Adjustment
Di Giammarino, Luca; Giacomini, Emanuele; Brizi, Leonardo; Salem, Omar; Grisetti, Giorgio - 01a Journal article
journal: IEEE ROBOTICS AND AUTOMATION LETTERS (USA, Piscataway, NJ: IEEE Robotics and Automation Society) pp. 4362-4369 - issn: 2377-3766 - wos: WOS:001012840800008 (2) - scopus: 2-s2.0-85161598575 (3)

11573/1699009 - 2023 - Enhancing LiDAR Performance: Robust De-Skewing Exclusively Relying on Range Measurements
Salem, O. A. A. K.; Giacomini, E.; Brizi, L.; Di Giammarino, L.; Grisetti, G. - 04b Conference paper in proceedings
conference: AIxIA 2023 22nd International Conference of the Italian Association for Artificial Intelligence (Roma Tre University - ICITA Department, Via Vito Volterra 62, 00146 Roma)
book: AIxIA 2023 22nd International Conference of the Italian Association for Artificial Intelligence - (978-3-031-47545-0; 978-3-031-47546-7)

11573/1673357 - 2022 - MD-SLAM: Multi-cue Direct SLAM
Di Giammarino, Luca; Brizi, Leonardo; Guadagnino, Tiziano; Stachniss, Cyrill; Grisetti, Giorgio - 04b Conference paper in proceedings
conference: IEEE/RSJ International Conference on Intelligent Robots and Systems (Kyoto, Japan)
book: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) - (978-1-6654-7927-1; 978-1-6654-7928-8)

11573/1621085 - 2022 - HiPE: Hierarchical Initialization for Pose Graphs
Guadagnino, T.; Di Giammarino, L.; Grisetti, G. - 01a Journal article
journal: IEEE ROBOTICS AND AUTOMATION LETTERS (USA, Piscataway, NJ: IEEE Robotics and Automation Society) pp. 287-294 - issn: 2377-3766 - wos: WOS:000719559700003 (1) - scopus: 2-s2.0-85118652799 (2)

© Università degli Studi di Roma "La Sapienza" - Piazzale Aldo Moro 5, 00185 Roma