Vision and Learning in the Context of Exploratory Rovers
ETH Zürich
978-620-3-92453-4
6203924539
96
2021-06-17
43.90 €
eng
https://images.our-assets.com/cover/230x230/9786203924534.jpg
https://images.our-assets.com/fullcover/230x230/9786203924534.jpg
https://images.our-assets.com/cover/2000x/9786203924534.jpg
https://images.our-assets.com/fullcover/2000x/9786203924534.jpg
Generative Adversarial Networks (GANs) have found tremendous applications in Computer Vision. Yet, in the context of space science and planetary exploration, the door is open for major advances. We introduce tools to handle planetary data from the Chang’E-4 mission and present a framework for Neural Style Transfer using cycle consistency from rendered images. We also introduce a new real-time pipeline for Simultaneous Localization and Mapping (SLAM) and Visual Inertial Odometry (VIO) in the context of planetary rovers. We leverage prior information about the location of the lander to propose an object-level SLAM approach that optimizes the pose and shape of the lander together with the camera trajectories of the rover. As a further refinement step, we propose interpolation between adjacent temporal samples, namely synthesizing non-existing intermediate images, to improve the overall accuracy of the system. The experiments are conducted in the context of the Iris Lunar Rover, a nano-rover to be deployed on lunar terrain in 2021 as Carnegie Mellon's flagship, set to be America's first unmanned rover on the Moon.
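The cycle-consistency idea behind the style-transfer framework can be sketched in a few lines: a generator G maps rendered images toward the real domain, F maps back, and the loss penalizes failure to recover the input after a round trip. The functions below are toy stand-ins chosen only to illustrate the objective (the book's actual generators are neural networks; all names here are hypothetical):

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    forward = np.abs(F(G(x)) - x).mean()   # x -> other domain -> back to x
    backward = np.abs(G(F(y)) - y).mean()  # y -> other domain -> back to y
    return forward + backward

# Toy "generators": scaling by 2 and by 0.5 are exact inverses,
# so the round trip is perfect and the cycle loss is zero.
G = lambda img: img * 2.0
F = lambda img: img * 0.5

x = np.random.rand(64, 64)  # stand-in rendered image
y = np.random.rand(64, 64)  # stand-in real image
print(cycle_consistency_loss(G, F, x, y))  # exact inverses -> 0.0
```

In a real CycleGAN-style setup this term is added to the adversarial losses of both generators, which is what lets the model learn the mapping without paired rendered/real images.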
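The refinement step of synthesizing non-existing images between adjacent temporal samples can, at its very simplest, be pictured as a blend of the two neighboring frames. The book's pipeline uses learned interpolation; the linear blend below is a purely illustrative stand-in with hypothetical names:

```python
import numpy as np

def interpolate_frame(frame_a, frame_b, t=0.5):
    """Synthesize an intermediate frame: t=0 gives frame_a, t=1 gives frame_b."""
    return (1.0 - t) * frame_a + t * frame_b

frame_a = np.zeros((4, 4))   # stand-in frame at time k
frame_b = np.ones((4, 4))    # stand-in frame at time k+1
mid = interpolate_frame(frame_a, frame_b)  # synthesized frame at k + 0.5
print(mid.mean())  # 0.5
```

The synthesized frames densify the image sequence, giving the SLAM/VIO front end more temporal samples to track against.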
https://morebooks.shop/books/gb/published_by/lap-lambert-academic-publishing/47/products
Air and space technology
https://morebooks.shop/store/gb/book/vision-and-learning-in-the-context-of-exploratory-rovers/isbn/978-620-3-92453-4