The RoboPI laboratory is home to field robotics research at the Department of ECE, UF. We focus on developing novel and improved computational systems for autonomous field robots. In particular, we address the long-held challenges of deploying autonomous mobile robots and AI-capable sensors for long-term operation in the wild. Our systems and algorithms enable efficient perception and intelligent task execution in applications such as remote inspection, surveillance, environmental monitoring, and beyond. Check out our current projects!

We have three overlapping teams: a software team, a hardware team, and a domain team. The software team designs and develops novel features for autonomous robots and intelligent machines, pushing the boundaries of the state of the art in robotics, on-device AI, machine vision, and deep learning. The hardware team handles device integration of those novel algorithms and ensures that real-time constraints are met on the robot platforms. The domain team then deploys the robotic systems in the wild, validates the application-specific features, and provides feedback for iterative improvements. We follow this Design-Develop-Deploy (DDD) principle in our research for rapid innovation. We thank our sponsors, the National Science Foundation (NSF), Texas Instruments (TI), and UF Research initiatives, for supporting our work!

Research Highlights

Ego-to-Exo: AR-based Exocentric View Synthesis for Improved ROV Teleoperation. This project develops an augmented reality (AR)-inspired ROV teleoperation interface that generates on-demand third-person perspectives and provides interactive control choices for viewpoint selection. The Ego-to-Exo (egocentric-to-exocentric) framework is integrated into a monocular visual SLAM system and uses the estimated poses and past egocentric views for exocentric view generation. We demonstrate that the generated exocentric views embed significantly more information and global context about the scene than typical egocentric views, ensuring safe and efficient ROV teleoperation.
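As a rough illustration of the geometry involved (a simplified sketch, not the actual Ego-to-Exo implementation), the snippet below places a hypothetical virtual exocentric camera behind and above the ROV using a SLAM-estimated pose, and selects the past egocentric keyframe nearest to that virtual viewpoint; the function names and offset values are illustrative assumptions.

```python
import numpy as np

def virtual_exo_pose(T_world_ego, back=2.0, up=1.0):
    """Virtual third-person camera pose, placed behind and above the ROV.

    T_world_ego: 4x4 SE(3) pose of the ROV camera in the world frame,
    as estimated by visual SLAM. The offset is expressed in the camera
    frame (OpenCV convention: x right, y down, z forward), so -z is
    backward and -y is upward. Offsets here are illustrative choices.
    """
    T_ego_exo = np.eye(4)
    T_ego_exo[:3, 3] = [0.0, -up, -back]
    return T_world_ego @ T_ego_exo

def closest_past_view(T_world_exo, past_poses):
    """Index of the past egocentric keyframe whose camera position is
    nearest the virtual exocentric viewpoint; that image would then be
    warped into the exocentric frame by a view-synthesis stage."""
    p = T_world_exo[:3, 3]
    dists = [np.linalg.norm(T[:3, 3] - p) for T in past_poses]
    return int(np.argmin(dists))
```

In practice the keyframe selection and warping are handled by the learned synthesis pipeline; the sketch only shows how SLAM poses anchor the virtual viewpoint.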

Active Localization and Stealth Recovery of Unmanned Underwater Vehicles (UUVs). The ability to operate for long periods of time and then return safely is a critical feature for autonomous underwater robots. A major challenge for a robot in such long-term missions is estimating its location accurately, since GPS signals cannot penetrate the ocean's surface and Wi-Fi or radio communication infrastructure is not available underwater. Using a dedicated surface vessel for acoustic referencing, or surfacing for GPS signals, is power-hungry, computationally expensive, and often impossible (e.g., in stealth applications). This project, led by Dr. Islam with co-PIs Dr. Koppal and Dr. Shin, will make scientific and engineering advances by using a novel optics-based framework and on-board AI technologies to solve this problem. The research and educational activities are funded by the NSF Foundational Research in Robotics (FRR) program.

Underwater Cave Exploration. Underwater caves play a crucial role in groundwater flows in karst topography, a type of landscape featuring sinkholes, sinking streams, caves, springs, and other characteristic features. Almost 25% of the earth's population relies on karst freshwater resources. In the United States, withdrawals from karst bedrock aquifers accounted for 23% of freshwater withdrawals. More water is withdrawn from the Floridan aquifer than from all other carbonate aquifers combined, which is why monitoring the water quality and mapping these topographies are critically important. Our robots assist human divers by collecting water-quality data and topography-mapping information, and by navigating dangerous cave regions autonomously. This will help us explore the unexplored, map the unmapped, and reveal archaeological mysteries of underwater caves.

Caveline Following for Safe AUV Navigation. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation. For autonomous cave mapping by underwater robots, the major challenge lies in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup. Thus, detecting and following the caveline as navigation guidance is paramount for robots in autonomous cave-mapping missions. In this project, we design a computationally light caveline detection model based on a novel Vision Transformer (ViT)-based learning pipeline: CL-ViT. We address the scarcity of annotated training data with a weakly supervised formulation, where learning is reinforced through a series of noisy predictions from intermediate sub-optimal models.
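The weakly supervised idea can be sketched as a generic self-training loop (a simplified stand-in for the actual CL-ViT pipeline; `fit` and `predict_proba` are caller-supplied placeholders, not our real models): each intermediate, sub-optimal model pseudo-labels the unlabeled pool, and only its confident, if noisy, predictions are folded into the next round of training.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, fit, predict_proba, rounds=3, conf=0.9):
    """Generic self-training loop for weak supervision.

    Each round: train on the current labeled set, pseudo-label the
    unlabeled pool, and keep only predictions whose top-class
    probability exceeds `conf` as new (noisy) training labels.
    `fit(X, y)` returns a model; `predict_proba(model, X)` returns an
    (n, n_classes) probability matrix. Both are placeholders.
    """
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        model = fit(X, y)
        if len(pool) == 0:
            break
        proba = predict_proba(model, pool)
        confident = proba.max(axis=1) >= conf
        if not confident.any():
            break  # no usable pseudo-labels this round
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        pool = pool[~confident]
    return fit(X, y)
```

The confidence threshold is the knob that trades label noise against training-set growth; in our setting, the "models" are successive ViT checkpoints rather than the toy estimators used here.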

Where to look? This is an intriguing computer vision problem that deals with finding interesting or salient pixels in an image or video. As seen in this GANGNAM video, the problem of Salient Object Detection (SOD) aims at identifying the most important or distinct objects in a scene. It is a successor to the human fixation prediction problem, which aims to highlight the pixels that human viewers would focus on at first glance.
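For a flavor of bottom-up saliency, here is the classic spectral-residual method (Hou & Zhang, CVPR 2007) in a few lines of numpy; it is a textbook baseline, not our SOD model, and the 3x3 spectrum smoothing is the standard choice from that paper.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Bottom-up saliency via the spectral residual (Hou & Zhang, 2007).

    Salient regions show up as deviations of the log-amplitude spectrum
    from its local average, approximated here with a 3x3 box filter.
    `img` is a 2-D grayscale float array; returns a map in [0, 1].
    """
    F = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(F))
    phase = np.angle(F)
    avg = np.zeros_like(log_amp)          # 3x3 box filter (circular wrap)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            avg += np.roll(np.roll(log_amp, dy, axis=0), dx, axis=1)
    avg /= 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.expm1(residual) * np.exp(1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)
```

Modern learned SOD models far outperform this kind of hand-crafted pipeline, but the baseline makes the "distinctness" intuition concrete.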

Monocular Depth Estimation of Low-light Underwater Scenes. Unlike terrestrial robots, visually guided underwater robots have very few low-cost solutions for dense 3D visual sensing, because of the high cost and domain-specific operational complexities of deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this issue, we develop a fast monocular depth estimation method that enables 3D perception capabilities on low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes.
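The wavelength-dependent attenuation underlying that domain knowledge can be illustrated with a simple hand-crafted prior (an illustrative sketch, not the learned UDepth model): red light is absorbed fastest underwater, so a large gap between the red channel and the stronger of green/blue suggests a more distant surface.

```python
import numpy as np

def red_attenuation_prior(img):
    """Coarse relative-distance prior from underwater image formation.

    Red light attenuates fastest with distance, so pixels whose red
    channel is weak relative to max(green, blue) are likely farther
    away. `img` is an HxWx3 RGB float array in [0, 1]; returns an HxW
    map where larger values suggest greater distance. Illustrative
    only; a learned pipeline like UDepth refines far beyond this cue.
    """
    r = img[..., 0]
    m = np.maximum(img[..., 1], img[..., 2])
    return np.clip(m - r, 0.0, 1.0)
```

A cue like this is cheap enough to run on embedded hardware and can serve as an input feature or supervision signal for the learned model.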