The Constructor Robotics group contributed within the EU project “Cognitive Autonomous Diving Buddy (CADDY)” to the collection of underwater gesture data and its interpretation, developing methods for machine recognition of gestures under the challenging conditions of Underwater Human-Robot Interaction (U-HRI) [1].
The CADDY Underwater Gestures Dataset consists of 10,000 stereo image pairs collected in 8 different real-world scenarios [3]. The gestures belong to CADDIAN, a diver sign language developed for communication with Autonomous Underwater Vehicles (AUVs) [2].
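For illustration, the following minimal Python sketch shows how one might iterate over such stereo pairs together with their gesture labels. The directory layout, file names, and CSV fields used here are assumptions made for the example only; the actual dataset format is documented in [3].

```python
# Minimal sketch of iterating over CADDY-style stereo pairs with OpenCV.
# NOTE: the paths, file layout, and annotation fields below are assumptions
# for illustration; see the dataset documentation [3] for the real format.
import csv
from pathlib import Path

import cv2

DATASET_ROOT = Path("caddy-gestures")        # hypothetical local dataset path
LABELS_CSV = DATASET_ROOT / "labels.csv"     # hypothetical annotation file


def iter_stereo_pairs(labels_csv=LABELS_CSV):
    """Yield (left image, right image, gesture label) triples."""
    with open(labels_csv, newline="") as f:
        for row in csv.DictReader(f):
            left = cv2.imread(str(DATASET_ROOT / row["left_path"]))
            right = cv2.imread(str(DATASET_ROOT / row["right_path"]))
            if left is not None and right is not None:
                yield left, right, row["gesture"]


# Example usage: inspect the first annotated stereo pair.
for left, right, gesture in iter_stereo_pairs():
    print(gesture, left.shape, right.shape)
    break
```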
Our group has also released code for underwater gesture recognition [2], which implements several classical machine learning (ML) and deep learning (DL) methods and compares them against each other on the CADDY dataset.
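As a sketch of what a classical ML baseline in such a comparison could look like (this is not the released code from [2]), the snippet below extracts HOG features from the left camera images and trains a linear SVM; the image size, HOG parameters, and train/test split are assumptions. A DL method would replace the hand-crafted feature extractor with a learned one, e.g. a convolutional network.

```python
# Illustrative classical ML baseline for gesture recognition:
# HOG features + linear SVM. Assumed parameters, not the code from [2].
import cv2
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Default HOG geometry uses a 64x128 detection window.
hog = cv2.HOGDescriptor()


def hog_features(image):
    """Convert to grayscale, resize to the HOG window, return a flat descriptor."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (64, 128))
    return hog.compute(gray).ravel()


def evaluate_baseline(images, labels):
    """Train/test a linear SVM on HOG features; returns test accuracy."""
    X = np.stack([hog_features(img) for img in images])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, random_state=0
    )
    clf = LinearSVC().fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))
```

The `images` and `labels` inputs could, for instance, be collected with the loader sketched above.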
References
[1] A. Birk, “A Survey of Underwater Human-Robot Interaction (U-HRI),” Current Robotics Reports, Springer Nature, vol. 3, pp. 199-211, 2022. https://doi.org/10.1007/s43154-022-00092-7 [Open Access]
[2] A. G. Chavez, A. Ranieri, D. Chiarella, and A. Birk, “Underwater Vision-Based Gesture Recognition: A Robustness Validation for Safe Human-Robot Interaction,” IEEE Robotics and Automation Magazine (RAM), vol. 28, pp. 67-78, 2021. https://doi.org/10.1109/MRA.2021.3075560 [Preprint PDF]
[3] A. G. Chavez, A. Ranieri, D. Chiarella, E. Zereik, A. Babic, and A. Birk, “CADDY Underwater Stereo-Vision Dataset for Human-Robot Interaction (HRI) in the Context of Diver Activities,” Journal of Marine Science and Engineering (JMSE), special issue Underwater Imaging, vol. 7, 2019. https://doi.org/10.3390/jmse7010016 [Open Access]