Marine Robotics - Autonomous Underwater Vehicle (AUV) Design
I lead RGB-D detection and machine learning for task completion on the software sub-team of our AUV project, focusing on underwater navigation and object interaction.
Industry
Robotics
Hours
25-30
Skills
ROS2, Ubuntu, RGB-D Detection
Challenge
As part of a marine robotics project, our team is designing an Autonomous Underwater Vehicle (AUV) for competition, capable of independently navigating underwater environments, detecting objects, and completing specific tasks. In my role as the lead for RGB-D detection and machine learning, I am responsible for developing systems that enable the AUV to interpret depth and visual data in real time, even in challenging underwater conditions.
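To give a sense of how that data enters the pipeline, here is a minimal sketch of a ROS2 node that pairs synchronized color and depth frames; the topic names are placeholders that depend on the camera driver, not our exact configuration.

```python
# Minimal sketch: subscribe to color and depth streams and pair them by timestamp.
# Topic names (/camera/color/image_raw, /camera/aligned_depth_to_color/image_raw)
# are illustrative assumptions, not our actual launch configuration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from message_filters import Subscriber, ApproximateTimeSynchronizer
from cv_bridge import CvBridge


class RGBDListener(Node):
    def __init__(self):
        super().__init__('rgbd_listener')
        self.bridge = CvBridge()
        color_sub = Subscriber(self, Image, '/camera/color/image_raw')
        depth_sub = Subscriber(self, Image, '/camera/aligned_depth_to_color/image_raw')
        # Approximate sync tolerates small timestamp offsets between the two streams.
        self.sync = ApproximateTimeSynchronizer(
            [color_sub, depth_sub], queue_size=10, slop=0.05)
        self.sync.registerCallback(self.on_frames)

    def on_frames(self, color_msg, depth_msg):
        color = self.bridge.imgmsg_to_cv2(color_msg, desired_encoding='bgr8')
        depth = self.bridge.imgmsg_to_cv2(depth_msg, desired_encoding='passthrough')
        # Detection and depth fusion would run here on the paired frames.
        self.get_logger().info(f'Frame pair: color {color.shape}, depth {depth.shape}')


def main():
    rclpy.init()
    node = RGBDListener()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```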
One of the primary challenges lies in managing RGB-D data (color and depth information) to accurately detect and classify objects, such as buoys or underwater markers, and to determine optimal paths for task execution. Underwater visibility and lighting variations add complexity to RGB-D processing, so my focus has been on refining algorithms that adjust to environmental changes. Additionally, I oversee the integration of machine learning for task-specific actions, such as approaching and manipulating targets, while ensuring the models are optimized for efficient processing on our AUV’s limited hardware.
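Two of those ideas can be sketched briefly: a contrast-limited histogram equalization step that compensates for uneven underwater lighting, and a pinhole back-projection that turns a detection's pixel location and depth reading into a 3D point for path planning. The camera intrinsics and pixel values below are illustrative, not our calibration.

```python
# Sketch of lighting-adaptive preprocessing and depth-based 3D localization.
# The intrinsics (fx, fy, cx, cy) are placeholder values for illustration.
import cv2
import numpy as np


def enhance_underwater(bgr):
    """Apply contrast-limited histogram equalization (CLAHE) to the luminance
    channel, which helps compensate for uneven underwater lighting."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)


def pixel_to_point(u, v, depth_m, fx=615.0, fy=615.0, cx=320.0, cy=240.0):
    """Back-project a pixel (e.g., the center of a detected buoy's bounding box)
    into camera-frame coordinates using the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])


if __name__ == '__main__':
    # Example: a detection centered at pixel (410, 260) with a 2.3 m depth reading.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    _ = enhance_underwater(frame)
    print(pixel_to_point(410, 260, 2.3))
```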
Results
Our AUV performed strongly at the RoboSub competition, placing 15th overall and outperforming well-known teams from UC Berkeley, UCLA, and the University of Toronto. The RGB-D detection system allowed the AUV to accurately identify objects and navigate effectively, demonstrating that our machine learning models can adapt in real time to underwater conditions.
This experience enhanced my leadership skills and deepened my expertise in RGB-D processing, machine learning, and embedded system optimization. Our result reflects the effectiveness of combining depth perception, computer vision, and machine learning in a demanding marine robotics setting. We continue to improve and optimize the system, iterating as the team grows and our knowledge progresses.