The RoboPI laboratory is home to field robotics research at the Department of ECE, UF. We focus on developing novel and improved computational systems for autonomous field robots. In particular, we address the long-standing challenges of deploying autonomous mobile robots and AI-capable sensors for long-term operation in the wild. The systems and algorithms we develop enable efficient perception and intelligent task execution in applications such as remote inspection, surveillance, environmental monitoring, and beyond. Check out our current projects!

We have three overlapping teams: the software team, the hardware team, and the domain team. The software team designs and develops novel features for autonomous robots and intelligent machines, pushing the boundaries of the state of the art in robotics, on-device AI, machine vision, and deep learning. The hardware team handles device integration of those novel algorithms and makes sure that real-time constraints are met on the robot platforms. The domain team then deploys the robotic systems in the wild, validates the application-specific features, and provides feedback for iterative improvements. We follow this DDD (Design, Develop, and Deploy) principle in our research for rapid innovation. We thank our sponsors: the National Science Foundation (NSF), the Office of Naval Research (ONR), Texas Instruments (TI), and UF Research initiatives for supporting our work!

Research Highlights

AquaFuse is a physics-guided waterbody fusion method that performs waterbody crossover, image enhancement, and geometry-preserving data augmentation for 3D view synthesis. It leverages closed-form solutions to estimate and exploit the scene depth, background light, backscatter, and medium illumination parameters for the fusion process. See More
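The physics behind such fusion is the standard underwater image formation model: the observed pixel is the scene radiance attenuated by the medium plus a depth-dependent backscatter term. Below is a minimal sketch of waterbody crossover under that model, assuming per-channel attenuation coefficients and background light have already been estimated; the function and parameter names are hypothetical, not AquaFuse's actual API.

```python
import numpy as np

def fuse_waterbody(image, depth, beta_src, B_src, beta_tgt, B_tgt):
    """Re-render an underwater image as if captured in a different
    waterbody, using the exponential attenuation model (a sketch, not
    AquaFuse's implementation; all parameter names are hypothetical).

    image  : HxWx3 float RGB in [0, 1], observed underwater image
    depth  : HxW float array, estimated per-pixel scene distance
    beta_* : length-3 per-channel attenuation coefficients
    B_*    : length-3 per-channel background (veiling) light
    """
    d = depth[..., None]                       # HxWx1 for broadcasting
    t_src = np.exp(-np.asarray(beta_src) * d)  # source transmission map
    t_tgt = np.exp(-np.asarray(beta_tgt) * d)  # target transmission map

    # Invert the source medium: remove backscatter, undo attenuation.
    J = (image - np.asarray(B_src) * (1.0 - t_src)) / np.clip(t_src, 1e-6, None)
    J = np.clip(J, 0.0, 1.0)                   # restored scene radiance

    # Re-render the same geometry under the target waterbody.
    return np.clip(J * t_tgt + np.asarray(B_tgt) * (1.0 - t_tgt), 0.0, 1.0)
```

Since the depth map itself is untouched, the re-rendered views stay geometrically consistent, which is what makes this kind of augmentation safe for 3D view synthesis.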


Our latest CavePI robot made its first swim recently. We are developing it for autonomous guided exploration and mapping inside underwater caves. The goal is to enable autonomous visual servoing (by caveline following) as well as shared autonomy features for working in a team. In the video, Ruo Chen, Nare Karapetyan, and Ioannis Rekleitis were swimming with it and testing the early prototype! More details are coming soon!


Word2Wave: Subsea Mission Programming Using Natural Language. This project explores the design and development of a language-based interface for dynamic mission programming of autonomous underwater vehicles (AUVs). The proposed W2W framework enables interactive programming and parameter configuration of AUVs for remote subsea missions. It includes language rules for efficient language-to-mission mapping, visualization, and validation of mission commands generated from human speech or text. The learning pipeline adapts a T5-Small language model to learn the language-to-mission mapping effectively from processed language data, providing robust and efficient performance. See More
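For a sense of the model side, here is a minimal sketch of querying a T5-Small checkpoint for sequence-to-sequence mission translation with the Hugging Face transformers API; the prompt prefix and the structured output format shown in the comments are assumptions for illustration, not W2W's actual grammar.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Base checkpoint; a W2W-style system would fine-tune this on
# (language, mission) pairs before deployment.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

command = "survey the reef in a 20 meter lawnmower pattern at 3 meter depth"
inputs = tokenizer("translate command to mission: " + command,
                   return_tensors="pt")

# A fine-tuned model would emit structured mission tokens, e.g.
# "MISSION(pattern=lawnmower, width=20m, depth=3m)" (hypothetical format);
# the untuned base checkpoint will not, since it has never seen such pairs.
ids = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```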


Safeguarding Underwater Data Centers (UDCs) from Acoustic Attacks. UDCs hold promise as a futuristic concept for data storage and remote computing infrastructures. Their main advantages are the free natural cooling provided by ocean water and the isolation from on-land vulnerabilities. However, these advantages can also become liabilities: dense water carries acoustic signals faster and farther than air, and an isolated data center is difficult to monitor or service if components break. This project, led by Dr. Islam with Co-PI Dr. Rampazzi, will explore these vulnerabilities to design and develop an effective defense system and intelligent surveillance strategies for mitigating physical or remote acoustic attacks on UDCs. In a unique collaboration between the UF Center for Coastal Solutions (CCS) and the Florida Institute for Cybersecurity Research (FICS), we will develop novel UDC defense systems and surveillance capabilities and deploy them for subsea operational validation. This exciting project is funded by the Cyber Security and Complex Software Systems program at the Office of Naval Research (ONR). See More


Ego-to-Exo: AR-based Exocentric View Synthesis for Improved ROV Teleoperation. This project aims to develop an augmented reality (AR)-inspired ROV teleoperation interface that generates on-demand third-person perspectives and provides interactive control choices for viewpoint selection. The Ego-to-Exo (egocentric-to-exocentric) framework is integrated into a monocular visual SLAM system and uses estimated poses and past egocentric views for exocentric view generation. We demonstrate that the generated exocentric views embed significantly more information and global context about the scene than typical egocentric views, ensuring safe and efficient ROV teleoperation. See More
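The geometric core of such a system is simple: given the SLAM-estimated pose of the ROV, place a virtual camera behind and above the vehicle and render past egocentric frames from it. A minimal sketch of that pose computation follows, assuming an x-forward, z-up body frame; the offsets are illustrative, not Ego-to-Exo's actual parameters.

```python
import numpy as np

def exocentric_pose(T_world_rov, back=1.5, up=0.6, pitch_deg=-15.0):
    """Place a virtual third-person camera behind and above the ROV
    (a sketch; offsets and frame conventions are assumptions).

    T_world_rov : 4x4 SLAM-estimated ROV pose in the world frame
    back, up    : offsets (meters) along the body -x and +z axes
    pitch_deg   : downward tilt so the camera looks at the vehicle
    Returns a 4x4 world pose for the exocentric camera.
    """
    # Camera position: behind and above the ROV, expressed in the body frame.
    cam_pos = T_world_rov @ np.array([-back, 0.0, up, 1.0])

    # Tilt the ROV's orientation about the body y-axis by pitch_deg.
    p = np.radians(pitch_deg)
    pitch = np.array([[ np.cos(p), 0.0, np.sin(p)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(p), 0.0, np.cos(p)]])

    T = np.eye(4)
    T[:3, :3] = T_world_rov[:3, :3] @ pitch
    T[:3, 3] = cam_pos[:3]
    return T
```

Past egocentric frames, reprojected into this virtual viewpoint, compose the third-person view that shows the vehicle within its surroundings.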


Active Localization and Stealth Recovery of Unmanned Underwater Vehicles (UUVs). The ability to operate for long periods of time and then return safely is a critical feature for autonomous underwater robots. A major challenge for a robot in such long-term missions is estimating its location accurately, since GPS signals cannot penetrate the ocean's surface and Wi-Fi or radio communication infrastructures are not available underwater. Using a dedicated surface vessel for acoustic referencing, or surfacing for GPS signals, is power-hungry, computationally expensive, and often impossible (in stealth applications). This project, led by Dr. Islam with Co-PIs Dr. Koppal and Dr. Shin, will develop a novel optics-based framework and on-board AI technologies to solve this problem. The research and educational activities are funded by the NSF Foundational Research in Robotics (FRR) program. See More


Caveline Following for Safe AUV Navigation. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation. For autonomous cave mapping by underwater robots, the major challenge lies in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup. Detecting and following the caveline as navigation guidance is therefore paramount for robots in autonomous cave mapping missions. In this project, we design CL-ViT, a computationally light caveline detection model built on a novel Vision Transformer (ViT)-based learning pipeline. We address the scarcity of annotated training data with a weakly supervised formulation in which learning is reinforced through a series of noisy predictions from intermediate sub-optimal models. See More
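The weak supervision here follows the spirit of classic self-training: fit an initial model on the few annotated frames, pseudo-label unlabeled footage with it, keep only confident predictions, and refit. A generic sketch is below; the model API and the confidence threshold are assumptions for illustration, not CL-ViT's exact recipe.

```python
import numpy as np

def self_train(model, images, masks, unlabeled, rounds=3, conf_thresh=0.8):
    """Generic self-training loop in the spirit of the weakly supervised
    scheme above. The model API (.fit / .predict) and the threshold are
    assumptions, not CL-ViT's exact recipe.

    images, masks : small annotated caveline set (numpy arrays)
    unlabeled     : numpy array of frames without annotations
    """
    model.fit(images, masks)                 # initial sub-optimal model

    for _ in range(rounds):
        # The intermediate model produces noisy pseudo-labels + confidences.
        pseudo_masks, conf = model.predict(unlabeled)
        keep = conf >= conf_thresh           # drop low-confidence frames
        # Reinforce learning with the confident (but still noisy) labels.
        model.fit(np.concatenate([images, unlabeled[keep]]),
                  np.concatenate([masks, pseudo_masks[keep]]))
    return model
```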


Where to look? is an intriguing computer vision problem that deals with finding interesting or salient pixels in an image or video. As seen in this GANGNAM video!, the problem of Salient Object Detection (SOD) aims to identify the most important objects in a scene. It is a successor to the human fixation prediction problem, which aims to highlight the pixels that human viewers would focus on at first glance. More
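As a concrete non-learned baseline for this problem, the classic spectral residual method (Hou & Zhang, CVPR 2007) flags pixels whose frequency content deviates from the image's smooth, expected spectrum; modern SOD models replace it with deep networks but pursue the same goal. A short sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray):
    """Spectral-residual saliency (Hou & Zhang, CVPR 2007): a tiny
    non-learned baseline for 'where to look'.

    gray : HxW float array, grayscale image
    Returns an HxW saliency map in [0, 1].
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)     # log amplitude spectrum
    phase = np.angle(f)
    # The residual is what remains after removing the smooth,
    # "expected" part of the log spectrum.
    residual = log_amp - uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return gaussian_filter(sal / sal.max(), sigma=2)
```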


Monocular Depth Estimation of Low-light Underwater Scenes. Unlike terrestrial robots, visually guided underwater robots have very few low-cost solutions for dense 3D visual sensing because of the high cost and domain-specific operational complexities of deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this issue, we propose a fast monocular depth estimation method that enables 3D perception capabilities on low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes. Project Page
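One example of such domain knowledge: red light attenuates much faster in water than green or blue, so the gap between the red channel and the stronger of the green/blue channels grows with distance. The sketch below turns that cue into a coarse relative-depth prior (illustrative only, not UDepth's exact pipeline):

```python
import numpy as np

def red_attenuation_prior(image):
    """Coarse relative-depth prior from a raw underwater image, using
    the fact that red light attenuates fastest in water (a sketch of
    the kind of domain prior UDepth exploits, not its actual pipeline).

    image : HxWx3 float RGB in [0, 1]
    Returns an HxW map where larger values suggest more distant pixels.
    """
    r = image[..., 0]
    m = np.maximum(image[..., 1], image[..., 2])  # max of green and blue
    prior = m - r          # distant pixels lose red energy first
    # Normalize to [0, 1] for use as an auxiliary network input.
    return (prior - prior.min()) / (np.ptp(prior) + 1e-6)
```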