The RoboPI laboratory is the home of field robotics research at the Department of ECE, University of Florida (UF). We develop novel computational systems that enable autonomous robots to sense, navigate, and operate effectively in the wild: GPS-denied, communication-constrained, and unstructured marine environments.
[Photo gallery: RoboPI Aqua and NemoSens AUV deployments at sea, field trials at Blue Grotto and the Gulf of Mexico, subsea mapping trials with Ocean Rescue Alliance, and RoboPI lab showcases.]
Active Localization and Stealth Recovery of Subsea UUVs
Safeguarding Underwater Data Centers from Acoustic Attacks
Human-in-the-Lead Control in Construction Robotics
Low-light Rolling Shutter Imagery for Robot Motion
NemoGator: Edge-AI for Bio-inspired AUVs
STEM Curricula Development (Micro-P2 and AuRo)
YouTube: Word2Wave (W2W) for Subsea Mission Programming
Word2Wave: Subsea Mission Programming Using Natural Language. This project explores the design and development of a language interface for dynamic mission programming of autonomous underwater vehicles (AUVs). The proposed W2W framework enables interactive programming and parameter configuration of AUVs for remote subsea missions. See More
YouTube: CavePI AUV Demonstration
We showcased our latest CavePI AUV at RSS 2025 at USC. In this work, we demonstrate the system design and algorithmic integration of a visual servoing framework for semantically guided autonomous underwater cave exploration. See More
YouTube: AquaFuse: Waterbody Fusion
AquaFuse is a physics-guided waterbody fusion method that can perform waterbody crossover, image enhancement, and geometry-preserving data augmentation for 3D view synthesis. It leverages closed-form solutions to estimate and exploit the scene depth, background light, backscatter, and medium illumination parameters for the fusion process. See More
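The depth, background light, and backscatter parameters above are the terms of the standard underwater image formation model. A minimal sketch of how those estimated parameters combine to render a scene through a given waterbody (the function name and toy values are illustrative, not AquaFuse's actual API):

```python
import numpy as np

def synthesize_waterbody(J, depth, B, beta):
    """Render scene radiance J as seen through a water column.

    Uses the standard underwater image formation model (an assumption;
    AquaFuse's closed-form estimators are detailed in the paper):
        I = J * exp(-beta * d) + B * (1 - exp(-beta * d))
    The first term is the attenuated direct signal; the second is the
    backscatter contributed by the background light B.
    """
    d = depth[..., None]                  # (H, W, 1) per-pixel scene depth
    transmission = np.exp(-beta * d)      # per-channel attenuation
    direct = J * transmission
    backscatter = B * (1.0 - transmission)
    return direct + backscatter

# Toy usage: re-render a scene into a different (greener) waterbody.
J = np.random.rand(4, 4, 3)               # restored scene radiance
depth = np.full((4, 4), 2.0)              # metres to each pixel
B = np.array([0.1, 0.5, 0.3])             # background light (RGB)
beta = np.array([0.8, 0.3, 0.4])          # attenuation coefficients (RGB)
I = synthesize_waterbody(J, depth, B, beta)
```

Swapping in a different (B, beta) pair is what makes waterbody crossover possible: the same restored scene radiance J can be re-rendered under another medium's parameters.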
YouTube: Ego-to-Exo: AR-based Exocentric View Synthesis
Ego-to-Exo: AR-based Exocentric View Synthesis for Improved ROV Teleoperation. This project aims to develop an augmented reality (AR)-inspired ROV teleoperation interface that generates on-demand third-person perspectives and provides interactive controls for viewpoint selection. The Ego-to-Exo (egocentric-to-exocentric) framework is integrated into a monocular visual SLAM system and uses estimated poses and past egocentric views to generate exocentric views. We demonstrate that the generated exocentric views embed significantly more information and global context about the scene than typical egocentric views, ensuring safe and efficient ROV teleoperation. See More
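One way to picture the viewpoint-selection step: given the ROV's current SLAM pose, a virtual third-person camera can be placed by offsetting that pose in the robot's own body frame. A minimal sketch under that assumption (the function name and offset values are hypothetical, not the Ego-to-Exo implementation, which renders from past egocentric frames):

```python
import numpy as np

def exocentric_camera_pose(T_world_robot, back=1.5, up=0.5):
    """Place a virtual third-person camera behind and above the ROV.

    T_world_robot is a 4x4 homogeneous SLAM pose. The offset is applied
    in the robot's body frame (x forward, z up, by assumption), so the
    virtual camera trails the vehicle as it moves and turns.
    """
    offset = np.eye(4)
    offset[:3, 3] = [-back, 0.0, up]      # behind along x, above along z
    return T_world_robot @ offset

# Toy usage: robot at the world origin, identity orientation.
T_cam = exocentric_camera_pose(np.eye(4))
```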
YouTube: UDC Surveillance and Security
The main advantages of placing a data center underwater are free cooling and isolation from variable on-land environments. But these advantages can also become liabilities: dense water carries acoustic signals faster than air, and an isolated data center is difficult to monitor or service when components fail. Our research explores methods to identify vulnerabilities that an assailant may exploit to carry out acoustic attacks, and develops ways to eliminate or mitigate them. See More
YouTube: Weakly Supervised Caveline Detection
Caveline Following for Safe AUV Navigation. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation. For autonomous cave mapping by underwater robots, the major challenge lies in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup. Detecting and following the caveline as navigation guidance is therefore paramount for robots on autonomous cave mapping missions. In this project, we design CL-ViT, a computationally light caveline detection model built on a novel Vision Transformer (ViT)-based learning pipeline. We address the scarcity of annotated training data with a weakly supervised formulation in which learning is reinforced through a series of noisy predictions from intermediate sub-optimal models. See More
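The weakly supervised formulation above can be pictured as a self-training loop: confident predictions from an intermediate, sub-optimal model become pseudo-labels for the next training round. A minimal sketch of one such round (the helper names, data layout, and threshold are illustrative assumptions, not the CL-ViT implementation):

```python
import numpy as np

def self_training_round(model_predict, images, labels, confidence=0.9):
    """Build the label set for the next training round.

    model_predict maps an image to a per-pixel caveline probability map
    in [0, 1]. True annotations (labels[i] is not None) are kept as-is;
    unlabeled images receive pseudo-labels wherever the current model's
    noisy prediction exceeds the confidence threshold.
    """
    next_labels = {}
    for i, img in enumerate(images):
        if labels.get(i) is not None:
            next_labels[i] = labels[i]          # keep human annotations
            continue
        prob = model_predict(img)               # noisy prediction
        mask = prob >= confidence
        if mask.any():                          # only keep confident frames
            next_labels[i] = mask.astype(np.float32)
    return next_labels

# Toy usage: one annotated frame, two unlabeled frames, a stub model.
def fake_predict(img):
    return np.full(img.shape, 0.95)             # uniformly confident stub

images = [np.zeros((2, 2)) for _ in range(3)]
labels = {0: np.ones((2, 2), dtype=np.float32), 1: None, 2: None}
pseudo = self_training_round(fake_predict, images, labels)
```

Iterating this loop is what "reinforced through a series of noisy predictions" refers to: each round's model is trained on the previous round's expanded label set.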
YouTube: UDepth In Action
Monocular Depth Estimation of Low-light Underwater Scenes. Unlike terrestrial robots, visually guided underwater robots have few low-cost options for dense 3D visual sensing because of the high cost and domain-specific operational complexities of deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this, we propose a fast monocular depth estimation method that enables 3D perception on low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline named UDepth, which incorporates the image formation characteristics of natural underwater scenes. Project Page