Cooperative World Modeling

The online generation of maps is one of the core foundations for intelligent cognitive functionalities on board Autonomous Underwater Vehicles (AUVs). Here, only a very short overview of selected contributions during year 2 of the project is given. Please see the publications for more detailed information.


Fig 1: An example of cooperative mapping with 4 robots. On the left, the final map with the robots' paths. Details of the areas indicated by red squares are shown on the right. The details show that mapping without cooperation leads to worse results than the new multi-robot strategy.

Based on the work on 2D image registration and Simultaneous Localization and Mapping (SLAM) from year 1 of the project, a cooperative online method for underwater mapping was developed. The main challenge in this context is the severe communication constraints that restrict the amount of information that can be exchanged between the robots. The newly developed strategy determines the best possible information to be sent to the other robots within the bandwidth limits.
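
The selection step can be viewed as a budgeted ranking problem. The following minimal sketch is illustrative only and is not the project's actual strategy: the names, the information-gain scores, and the gain-per-byte heuristic are assumptions, but it shows the general idea of packing the most valuable map updates into a limited acoustic message.

```python
# Minimal sketch (not the project's algorithm): greedily pick the map updates
# with the best estimated information gain per transmitted byte, subject to a
# per-cycle byte budget of the acoustic link. All names are hypothetical.
from dataclasses import dataclass


@dataclass
class MapUpdate:
    payload_bytes: int   # size of the update if transmitted
    info_gain: float     # estimated value of the update to the other robots


def select_updates(updates: list, budget_bytes: int) -> list:
    """Select the most informative updates that fit into the bandwidth budget."""
    # Rank by information gain per byte, then pack greedily (knapsack heuristic).
    ranked = sorted(updates, key=lambda u: u.info_gain / u.payload_bytes, reverse=True)
    chosen, used = [], 0
    for u in ranked:
        if used + u.payload_bytes <= budget_bytes:
            chosen.append(u)
            used += u.payload_bytes
    return chosen


# Example: a 2 kB message slot per communication cycle
candidates = [MapUpdate(1200, 5.0), MapUpdate(400, 3.5), MapUpdate(900, 1.0)]
to_send = select_updates(candidates, budget_bytes=2000)
```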


Fig 2: On the left, a view of a 3D map of the Lesumer Sperrwerk in Bremen, generated by registering 17 individual sonar scans. On the right, an overlay of the top view of the 3D map of the Lesumer Sperrwerk with the corresponding view in Google Earth.

Furthermore, significant results on 3D underwater mapping have been achieved. Figure 2 shows an example of a 3D map of a lock gate. The map is generated by registration alone, i.e., without any information about the vehicle's motion. The registered data consist of 17 sonar scans. The overall result shows a good correspondence with ground truth, as indicated by the overlay of the top view with Google Earth. This work was further extended to include an uncertainty estimation, which allows proper SLAM. This can be used for optimization when larger areas are mapped, as illustrated in Figure 3.
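
To make the chaining idea concrete, the sketch below shows how pairwise scan registrations can be composed into global poses without any motion information, while the per-pair covariances are kept as pose-graph constraints for later SLAM optimization. This is a simplified illustration under assumptions, not the project's spectral-registration pipeline; `register_scans` is a stand-in stub for the actual registration routine.

```python
# Minimal sketch under assumptions: chain pairwise registrations into map-frame
# poses and record the constraints (with covariances) of a pose graph.
import numpy as np


def register_scans(scan_a, scan_b):
    """Stub standing in for the real registration: would return the relative
    4x4 transform aligning scan_b to scan_a and a 6x6 registration covariance."""
    return np.eye(4), 1e-3 * np.eye(6)


def build_map_poses(scans):
    """Compose relative transforms so every scan receives a pose in the map frame,
    using registration only, i.e., no vehicle motion information."""
    poses = [np.eye(4)]                       # first scan defines the map frame
    edges = []                                # (i, j, T_rel, cov) pose-graph constraints
    for i in range(1, len(scans)):
        T_rel, cov = register_scans(scans[i - 1], scans[i])
        poses.append(poses[-1] @ T_rel)       # pure chaining of registrations
        edges.append((i - 1, i, T_rel, cov))  # kept for loop closures / SLAM optimization
    return poses, edges


# Example with 17 dummy "scans" (point clouds of shape Nx3)
scans = [np.random.rand(1000, 3) for _ in range(17)]
poses, edges = build_map_poses(scans)
```

The recorded covariances are what turn the plain chain of registrations into a proper pose graph: once loop closures are added, the graph can be optimized to reduce accumulated drift over larger areas.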


Fig 3: A 3D spectral registration map before (left) and after (right) SLAM optimization. The map contains approximately 3 million points. The underlying pose graph structure is shown in blue.