
Relevance to Mission

The Mission

"...As the AUVs approach the anomaly, a high-resolution underwater camera aboard the lead AUV engages, sending a live feed to the MOC. In the MOC, AI-assisted target recognition rapidly analyzes the object’s structure. The command and control system fuses all incoming data from the seabed-mapping drones, surface vessels, and subsurface sensors. The anomaly was not just a lone object—it was one of three devices placed along the cable route. The MOC orders deployment of an intervention-capable undersea vehicle with fine-motor manipulators to investigate further. This vehicle maneuvers in to safely secure the object in a containment unit and transports it to shore for later analysis..."


Our Relevant Solution

We built a prototype mobile robot that autonomously navigates an area, detects a target object (trash), and physically picks it up with a manipulator. This project, developed at Cornell Tech, combined computer vision (for object recognition and localization), path planning, and robotic arm control into one system. The result was a robot that could patrol a defined space, avoid obstacles, and perform a pick-and-place task without human intervention. In this video, the robot:

  1. Accurately navigates and maps an environment via SLAM (top right).
  2. Uses the YOLOv3 computer vision algorithm to detect trash (bottom right).
  3. Performs manipulation via MoveIt to pick up the trash and place it in the trash can (left).

The robot used is a Turtlebot3 Waffle Pi. The video is shot at 4× speed.
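The three stages above run as one continuous loop: patrol until the detector fires, approach, pick, and drop. The state machine below is an illustrative sketch of that control flow, not the actual ROS node interfaces; the state names and observation flags are assumptions.

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()    # SLAM-guided patrol of the mapped area
    APPROACH = auto()  # drive toward a detected piece of trash
    PICK = auto()      # MoveIt-style pick with the manipulator
    DROP = auto()      # carry the trash to the bin and place it

def tick(state, trash_detected, at_trash, holding_trash, at_bin):
    """One tick of the hypothetical patrol/pick-and-place loop.

    Returns the next state given the robot's current observations.
    """
    if state is State.PATROL and trash_detected:
        return State.APPROACH        # detector flagged trash: go get it
    if state is State.APPROACH and at_trash:
        return State.PICK            # within reach: trigger the grasp
    if state is State.PICK and holding_trash:
        return State.DROP            # grasp succeeded: head to the trash can
    if state is State.DROP and at_bin and not holding_trash:
        return State.PATROL          # placed: resume patrolling
    return state                     # otherwise keep doing what we were doing

# Example: a detection while patrolling triggers the approach.
s = tick(State.PATROL, trash_detected=True, at_trash=False,
         holding_trash=False, at_bin=False)
print(s)  # State.APPROACH
```

Keeping perception, navigation, and manipulation behind a small state machine like this is what lets each subsystem (SLAM, YOLO, the arm controller) be developed and tested independently.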


Alignment to High-Resolution Undersea Imaging Focus Areas

1. Provides detailed, real-time seabed imaging to support anomaly detection and subsea infrastructure monitoring.

Real-Time Imaging to Support Infrastructure Monitoring

The robot patrols the environment in real time, mapping the surrounding infrastructure within the Gazebo simulator.

Anomaly Detection and Collision Avoidance with Reinforcement Learning

Using Deep Q-Network (DQN) algorithms, we trained an AI agent to navigate toward goals while avoiding hazards in a simulated environment. One demo shows a virtual underwater vehicle learning to traverse a map to reach a target location; another demonstrates obstacle avoidance behaviors learned through reinforcement learning. These experiments validate that our AI can learn navigation policies transferable to AUV route-planning and threat-evasion tasks, and can provide decision support (e.g., recommending rerouting an AUV to inspect an anomaly safely).
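The core reward structure behind these experiments can be shown with tabular Q-learning, the simpler ancestor of DQN (which replaces the table with a neural network). The 4×4 grid, reward values, and hyperparameters below are illustrative assumptions, not our actual training setup.

```python
import random

# Tabular Q-learning on a toy grid: reach the goal cell while avoiding a
# hazard cell. A deliberately simplified stand-in for the DQN experiments.
SIZE, GOAL, HAZARD = 4, (3, 3), (1, 1)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2            # learning rate, discount, exploration

def env_step(s, a):
    """Move within grid bounds; goal pays +1, hazard -1, each step -0.01."""
    s2 = (min(max(s[0] + a[0], 0), SIZE - 1), min(max(s[1] + a[1], 0), SIZE - 1))
    if s2 == GOAL:
        return s2, 1.0, True
    if s2 == HAZARD:
        return s2, -1.0, True
    return s2, -0.01, False

random.seed(0)
for _ in range(2000):                        # training episodes
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, rwd, done = env_step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (rwd + gamma * best_next - Q[(s, a)])
        s = s2

# Greedy rollout: the learned policy should route around the hazard to the goal.
s, path = (0, 0), [(0, 0)]
for _ in range(10):
    a = max(ACTIONS, key=lambda a: Q[(s, a)])
    s, _, done = env_step(s, a)
    path.append(s)
    if done:
        break
print(path[-1])
```

The same reward shaping (positive for reaching the target, negative for hazards, a small per-step cost) carries over directly to AUV route-planning, where the "hazard" cells become detected obstacles or threat zones.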

2. Integrates AI-driven data analysis for rapid object classification and mapping.

Simultaneous Localization and Mapping

We developed a SLAM system that fuses LiDAR, IMU, and encoder data to generate real-time maps while tracking vehicle location. In our demo, a robot mapped an indoor environment using a particle filter to handle sensor uncertainty. This is analogous to mapping undersea environments with sonar or cameras—critical for GPS-denied underwater navigation.
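The particle-filter idea behind the demo can be shown in one dimension: maintain many position hypotheses, reweight them by how well they explain a noisy range reading, and resample. This is a toy localization sketch, not our SLAM code; the noise values and map are assumptions.

```python
import random, math

# Minimal 1-D particle filter: localize a robot on a line from noisy
# range readings to a wall at x = 0.
random.seed(1)
TRUE_X, N = 5.0, 500
particles = [random.uniform(0.0, 10.0) for _ in range(N)]  # initial hypotheses

def sense(x, noise=0.3):
    return x + random.gauss(0.0, noise)          # noisy distance measurement

def likelihood(p, z, noise=0.3):
    # Gaussian measurement model: how well particle p explains reading z
    return math.exp(-((p - z) ** 2) / (2 * noise ** 2))

for _ in range(10):                              # ten sensor updates
    z = sense(TRUE_X)
    weights = [likelihood(p, z) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample in proportion to weight, then jitter to keep diversity.
    particles = random.choices(particles, weights=weights, k=N)
    particles = [p + random.gauss(0.0, 0.05) for p in particles]

estimate = sum(particles) / N
print(round(estimate, 1))  # converges near the true position, 5.0
```

The same mechanism scales to 2-D poses with LiDAR scan likelihoods, which is how the particle filter absorbs sensor uncertainty during mapping; it transfers naturally to sonar-based undersea localization.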

3. Supports operations in shallow water environments with precision navigation and mapping capabilities.

Autonomous Robot Patrol Algorithms Using LiDAR and Map-Based Navigation

We implemented intelligent navigation and path-planning algorithms for continuous patrol and obstacle avoidance, using LiDAR scans matched against the SLAM-built map.
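Map-based patrol reduces to repeated graph search between waypoints on the occupancy grid the SLAM system builds. Below is an A* sketch of that idea; the grid, waypoints, and function names are made-up examples, not our actual planner.

```python
import heapq

# A* path planning on an occupancy grid (1 = obstacle, 0 = free), the kind
# of map a LiDAR-equipped patrol robot maintains.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# A patrol is just A* runs between successive waypoints.
waypoints = [(0, 0), (4, 4), (0, 0)]
legs = [astar(GRID, a, b) for a, b in zip(waypoints, waypoints[1:])]
print(len(legs[0]) - 1)  # shortest route from (0,0) to (4,4) takes 8 steps
```

Replanning each leg against the latest map is what lets the patrol route adapt when new obstacles appear mid-mission.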