Relevance to Mission
The Mission
"The Department of Defense (DoD) seeks low-cost, autonomous systems (e.g., unmanned aerial systems, drones, unmanned ground vehicles, unmanned ground sensors) capable of carrying and employing EW payloads or other sensors to create a realistic Multi-Domain Operations Environment (MDOE) for training. The DoD currently lacks affordable, effective, and easy-to-deploy autonomous systems capable of creating realistic electromagnetic spectrum (EMS) effects in a contested Multi-Domain Operations Environment (MDOE). Current training environments do not adequately replicate the complex interactions of multiple autonomous platforms (such as UAS, drones, UGV, and UGS) carrying EW payloads and sensors that warfighters encounter in large scale combat operations (LSCO). The DoD seeks solutions that leverage existing autonomous platform technologies, combined with EW payloads and sensors, to create a realistic training environment. Proposals shall identify specific autonomous platforms and associated payloads, detail their employment in creating an effective training environment, and outline all required resources (material and personnel) for implementation."
Our Relevant Solution
We built a prototype mobile robot that can autonomously navigate an area, detect a target object (trash), and physically pick it up with a manipulator. This project, developed at Cornell Tech, involved combining computer vision (for object recognition and localization), path planning, and robotic arm control into one system. The result was a robot that could patrol a defined space, avoid obstacles, and perform a pick-and-place task without human intervention. In this video, the robot:
- Accurately navigates and maps an environment via SLAM (top right).
- Uses the YOLOv3 computer vision algorithm to detect trash (bottom right).
- Performs manipulation via MoveIt to pick up the trash and place it in the trash can (left).
The robot used is a Turtlebot3 Waffle Pi. The video is shot at 4x speed.
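To make the detect-then-pick pipeline concrete, the sketch below shows roughly how the pieces fit together. It assumes a custom-trained YOLOv3 model (hypothetical file names yolov3-trash.cfg / yolov3-trash.weights) loaded through OpenCV's DNN module, and MoveIt planning groups named "arm" and "gripper" with "home"/"close" named targets, as in a typical TurtleBot3 manipulation setup; the actual project code may differ.

```python
import sys
import cv2
import rospy
import moveit_commander

# Hypothetical config/weight file names for a YOLOv3 model fine-tuned on a trash class.
net = cv2.dnn.readNetFromDarknet("yolov3-trash.cfg", "yolov3-trash.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_trash(frame, conf_threshold=0.5):
    """Return (x, y, w, h) bounding boxes for detections above conf_threshold."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes = []
    for output in net.forward(out_layers):
        for det in output:
            if float(det[5:].max()) > conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append((int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)))
    return boxes

def pick(target_pose):
    """Drive the arm to a pose over the detected object, grasp, and retreat home."""
    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("trash_pickup_sketch", anonymous=True)
    arm = moveit_commander.MoveGroupCommander("arm")          # planning group names assumed
    gripper = moveit_commander.MoveGroupCommander("gripper")  # from the robot's SRDF
    arm.set_pose_target(target_pose)
    arm.go(wait=True)
    arm.stop()
    gripper.set_named_target("close")   # named gripper target assumed in the SRDF
    gripper.go(wait=True)
    arm.set_named_target("home")        # named arm pose assumed in the SRDF
    arm.go(wait=True)
```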
Alignment to EW Battlefield Realism and Autonomous Systems Focus Areas
End-to-End Autonomous System for Infrastructure Mapping and Patrol
We present a fully autonomous robotic platform that conducts real-time patrol missions and maps surrounding infrastructure using onboard sensors. In simulation environments such as Gazebo, our system autonomously navigates pre-defined zones, dynamically adapting to the environment without manual intervention (a waypoint-patrol sketch follows the use cases below).
Use Case Alignment:
- UGVs/UGS with mounted mapping sensors for initial situational awareness in contested MDOE zones.
- Prepares terrain data for later EW payload delivery or surveillance sweeps.
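As a rough illustration of the patrol behavior, the sketch below cycles the ROS navigation stack (move_base) through a set of map-frame waypoints; the waypoint list, node name, and frame are illustrative placeholders, and the actual mission logic may differ.

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

# Hypothetical patrol waypoints: (x, y, orientation quaternion z, w) in the map frame.
WAYPOINTS = [(1.0, 0.0, 0.0, 1.0), (1.0, 1.5, 0.707, 0.707), (0.0, 1.5, 1.0, 0.0)]

def patrol():
    rospy.init_node("patrol_sketch")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    while not rospy.is_shutdown():
        for x, y, qz, qw in WAYPOINTS:
            goal = MoveBaseGoal()
            goal.target_pose.header.frame_id = "map"
            goal.target_pose.header.stamp = rospy.Time.now()
            goal.target_pose.pose.position.x = x
            goal.target_pose.pose.position.y = y
            goal.target_pose.pose.orientation.z = qz
            goal.target_pose.pose.orientation.w = qw
            client.send_goal(goal)
            client.wait_for_result()   # move_base plans around obstacles en route

if __name__ == "__main__":
    patrol()
```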
Anomaly Detection & Collision Avoidance with Reinforcement Learning
We use Deep Q-Network (DQN) reinforcement learning to train autonomous agents to avoid hazards and make intelligent route decisions. These agents reroute in real time when anomalies (e.g., unexpected electromagnetic signals or mobile threats) are detected (a minimal DQN sketch follows the use cases below).
Use Case Alignment:
- Autonomous threat evasion by unmanned aerial/ground systems in contested environments.
- Enables AUVs or UGVs to respond to EW threats dynamically without manual oversight.
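The sketch below illustrates the kind of DQN agent described above, written in PyTorch under assumed state/action sizes (a flattened range/state vector and a small discrete action set). It shows a minimal epsilon-greedy policy and a single replay update, not the trained agent itself.

```python
import random
import torch
import torch.nn as nn

# Illustrative sizes: a flattened LiDAR/state vector in, discrete steering actions out.
STATE_DIM, N_ACTIONS, GAMMA = 24, 5, 0.99

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

policy_net, target_net = QNet(), QNet()
target_net.load_state_dict(policy_net.state_dict())
optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)

def select_action(state, epsilon):
    """Epsilon-greedy: explore with probability epsilon, else take the greedy action."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(policy_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step(batch):
    """One DQN update on a replay batch of (states, actions, rewards, next_states, dones)."""
    s, a, r, s2, done = (torch.as_tensor(x, dtype=torch.float32) for x in batch)
    q = policy_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s2).max(1).values * (1.0 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```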
Simultaneous Localization and Mapping (SLAM) with Sensor Fusion
Our SLAM system integrates LiDAR, IMU, and encoder data using a particle filter for uncertainty management. This allows real-time map generation and localization, even in GPS-denied or electromagnetically contested environments (a simplified fusion sketch follows the use cases below).
Use Case Alignment:
- Navigation in GPS-denied or EMS-contested zones (e.g., tunnels, buildings, undersea).
- Foundational for coordinating multiple autonomous platforms during LSCO simulation.
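For reference, a stripped-down version of the particle-filter fusion step might look like the following, assuming (x, y, theta) particles, odometry deltas derived from the encoders/IMU, and a hypothetical expected_range_fn that ray-casts each particle into the current map; the noise and sigma values are illustrative.

```python
import numpy as np

N_PARTICLES = 500

def predict(particles, delta, noise=(0.02, 0.02, 0.01)):
    """Propagate (x, y, theta) particles by the odometry delta, adding motion noise."""
    dx, dy, dtheta = delta
    particles[:, 0] += dx + np.random.normal(0, noise[0], len(particles))
    particles[:, 1] += dy + np.random.normal(0, noise[1], len(particles))
    particles[:, 2] += dtheta + np.random.normal(0, noise[2], len(particles))
    return particles

def update(particles, weights, measured_range, expected_range_fn, sigma=0.1):
    """Reweight particles by how well predicted LiDAR ranges match the measurement."""
    expected = expected_range_fn(particles)          # e.g. ray-cast into the current map
    weights *= np.exp(-0.5 * ((measured_range - expected) / sigma) ** 2)
    weights += 1e-300                                # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling: keep particles in proportion to their weights."""
    positions = (np.arange(N_PARTICLES) + np.random.uniform()) / N_PARTICLES
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                             # absorb floating-point drift
    idx = np.searchsorted(cumulative, positions)
    return particles[idx], np.full(N_PARTICLES, 1.0 / N_PARTICLES)
```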
Autonomous Robot Patrol with LiDAR-Based Navigation
We've developed intelligent path-planning algorithms that enable persistent area patrol while avoiding dynamic and static obstacles. These systems are adaptable and cost-efficient, making them ideal for deployment in complex training environments (a reactive avoidance sketch follows the use cases below).
Use Case Alignment:
- UGVs/UGS patrolling mock battlefields, emulating adversary movement or signal sources.
- Enables realistic EMS effects simulation via distributed patrol networks.
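A minimal reactive layer of the LiDAR-based avoidance described above could look like this ROS sketch: it subscribes to the laser scan, checks the sector straight ahead, and either continues the patrol leg or turns away. Topic names, thresholds, and speeds are illustrative placeholders.

```python
import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

SAFE_DISTANCE = 0.5   # metres; illustrative threshold

class ObstacleAvoider:
    """Slow down and turn away when the forward LiDAR sector reports a close return."""

    def __init__(self):
        self.pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("scan", LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Closest valid return in a window around straight ahead
        # (roughly +/-30 degrees for a 360-sample scan).
        n = len(scan.ranges)
        window = scan.ranges[:n // 12] + scan.ranges[-(n // 12):]
        ahead = min((r for r in window if scan.range_min < r < scan.range_max),
                    default=float("inf"))
        cmd = Twist()
        if ahead < SAFE_DISTANCE:
            cmd.angular.z = 0.8       # turn in place away from the obstacle
        else:
            cmd.linear.x = 0.2        # continue the patrol leg
        self.pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("obstacle_avoidance_sketch")
    ObstacleAvoider()
    rospy.spin()
```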
System Integration and Deployment Readiness
Our modular architecture allows integration with a range of EW payloads and sensors (RF emitters, spectrum analyzers, spoofing modules). The entire system can be scaled across aerial and ground platforms and deployed using a compact, rapidly operable kit. Personnel requirements are minimal: 1-2 operators can initiate and monitor multiple units remotely.
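As an illustration of the modular payload concept (not the actual interface), a mounted payload might expose a small, uniform contract like the sketch below, so RF emitters, spectrum analyzers, or spoofing modules can be swapped without changing the platform software.

```python
from abc import ABC, abstractmethod

class PayloadModule(ABC):
    """Illustrative contract a mounted EW payload or sensor would implement so the
    autonomous platform can drive any payload type generically."""

    @abstractmethod
    def start(self, mission_config: dict) -> None:
        """Power up and configure the payload for the current training scenario."""

    @abstractmethod
    def report(self) -> dict:
        """Return telemetry or spectrum data for the operator console."""

    @abstractmethod
    def stop(self) -> None:
        """Safe the payload before recovery or battery swap."""
```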
Summary
This platform offers a cost-effective, autonomous training environment that realistically mimics EW and sensor-driven interactions across domains. It satisfies key DoD criteria for:
- Replicating EMS effects in contested environments
- Supporting coordinated multi-agent behavior
- Reducing personnel overhead for training operations
- Enabling rapid deployment and scalability