The RoboPI laboratory is the home of field robotics at the Department of ECE, UF. We focus on developing novel and improved computational systems
for autonomous field robots. In particular, we address the long-held challenges of deploying autonomous mobile robots and
AI-capable sensors for long-term operation in the wild. Our systems and algorithms enable efficient perception
and intelligent task execution in applications such as remote inspection,
surveillance, environmental monitoring, and beyond.
Check out our
current projects!
We have three overlapping teams: a software team, a hardware team, and a domain team.
The software team designs and develops novel features for autonomous robots and intelligent machines, pushing the boundaries of the state of the art
in robotics, on-device AI, machine vision, and deep learning. The hardware team handles the device integration of those
novel algorithms and makes sure that real-time constraints are met on the robot platforms. The domain team then deploys the robotic systems in the wild,
validates the application-specific features, and provides feedback for iterative improvements. We follow this Design, Develop, and Deploy (DDD)
principle in our research for rapid innovation. We thank our sponsors: the National Science Foundation (NSF), Texas Instruments (TI), and UF Research initiatives
for supporting our work!
Ego-to-Exo: AR-based Exocentric View Synthesis for Improved ROV Teleoperation. This project aims at developing
an augmented reality (AR)-inspired ROV teleoperation interface that generates on-demand third-person perspectives and
provides interactive control choices for viewpoint selection. The Ego-to-Exo (egocentric-to-exocentric)
framework is integrated into a monocular visual SLAM system and uses estimated poses and past
egocentric views for exocentric view generation. We demonstrate that the generated exocentric views embed significantly more information and
global context about the scene than typical egocentric views, ensuring safe and efficient ROV teleoperation.
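The core geometric step, placing a virtual third-person camera relative to the SLAM-estimated ego pose, can be sketched in a few lines. The sketch below is illustrative only, not our Ego-to-Exo implementation; the offsets (`back`, `above`), the z-forward/x-right/y-down camera convention, and the `look_at` construction are all assumptions:

```python
import numpy as np

def look_at(cam_pos, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a world-to-camera rotation so the virtual camera faces `target`."""
    fwd = target - cam_pos
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right = right / np.linalg.norm(right)
    down = np.cross(fwd, right)
    # Rows are the camera axes (x-right, y-down, z-forward) in world coordinates.
    return np.stack([right, down, fwd])

def virtual_exo_pose(T_world_ego, back=2.0, above=1.0):
    """Place a virtual third-person camera behind and above the ego camera.

    T_world_ego: 4x4 homogeneous ego pose from the SLAM estimate.
    Returns a 4x4 world-to-camera transform for the exocentric viewpoint.
    """
    R, t = T_world_ego[:3, :3], T_world_ego[:3, 3]
    fwd_world = R[:, 2]  # ego camera's forward (z) axis expressed in world frame
    cam_pos = t - back * fwd_world + np.array([0.0, 0.0, above])
    R_exo = look_at(cam_pos, t)  # keep the robot centered in the virtual view
    T = np.eye(4)
    T[:3, :3] = R_exo
    T[:3, 3] = -R_exo @ cam_pos
    return T
```

Past egocentric frames could then be warped into this virtual viewpoint using the estimated scene geometry; by construction, the robot's own position projects onto the optical axis of the virtual camera.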
Active Localization and Stealth Recovery of Unmanned Underwater Vehicles (UUVs). The ability to operate for long periods of time and then
return safely is a critical feature for autonomous underwater robots. A major challenge for a robot in such long-term missions is
to estimate its location accurately, since GPS signals cannot penetrate the ocean's surface and Wi-Fi or radio
communication infrastructures are not available underwater. Using a dedicated surface vessel for
acoustic referencing or surfacing for GPS signals is power-hungry, computationally expensive,
and often impossible (in stealth applications). This project, led by
Dr. Islam,
with Co-PIs: Dr. Koppal and
Dr. Shin
will make scientific and engineering
advances by using a novel optics-based framework and on-board AI technologies to solve this problem.
The research and educational activities are funded by the NSF Foundational Research in Robotics
(FRR) program.
Underwater Cave Exploration. Underwater caves play a crucial role in groundwater flows in karst topography, a type of landscape featuring sinkholes,
sinking streams, caves, springs, and other characteristic features. Almost 25% of the earth's population relies on karst freshwater resources.
In the United States, withdrawals from bedrock aquifers accounted for 23% of freshwater withdrawals. More water is withdrawn from the Floridan aquifer than
from all other carbonate aquifers combined, which is why monitoring the water quality and mapping those topographies are critically important. Our robots help
human divers by collecting water quality data and topography mapping information, and by navigating through dangerous cave regions autonomously.
This will help us explore the unexplored, map the unmapped, and reveal the archaeological mysteries of underwater caves.
Caveline Following for Safe AUV Navigation. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation.
For autonomous cave mapping by underwater robots, the major challenge lies
in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup.
Thus, detecting and following the caveline as navigation guidance is paramount for robots in autonomous cave mapping missions. In this project, we design a computationally
light caveline detection model based on a novel Vision Transformer (ViT)-based learning pipeline: CL-ViT. We address the problem of scarce annotated
training data with a weakly
supervised formulation in which learning is reinforced through a series of noisy predictions from intermediate sub-optimal models.
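The weak supervision idea follows the general self-training (pseudo-labeling) pattern: an intermediate model's confident but noisy predictions become labels for the next training round. Below is a minimal toy sketch of that pattern; the logistic-regression stand-in, synthetic 2D data, and 0.9 confidence threshold are illustrative choices, not the actual CL-ViT pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, epochs=200, lr=0.5):
    """Tiny logistic-regression stand-in for the caveline detector."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Toy data: two clusters; only 10 of 200 samples carry labels, mimicking
# the scarcity of annotated caveline frames.
X_all = np.vstack([rng.normal(+2.0, 1.0, (100, 2)),
                   rng.normal(-2.0, 1.0, (100, 2))])
y_all = np.concatenate([np.ones(100), np.zeros(100)])
labeled = np.concatenate([np.arange(5), np.arange(100, 105)])
unlabeled = np.setdiff1d(np.arange(200), labeled)

# Round 0: train only on the few labeled samples.
w = train_logreg(X_all[labeled], y_all[labeled])

# Self-training rounds: each intermediate (sub-optimal) model pseudo-labels
# the unlabeled pool; its confident-but-noisy predictions reinforce the next.
for _ in range(3):
    p = predict_proba(w, X_all[unlabeled])
    confident = (p > 0.9) | (p < 0.1)
    X_train = np.vstack([X_all[labeled], X_all[unlabeled][confident]])
    y_train = np.concatenate([y_all[labeled],
                              (p[confident] > 0.5).astype(float)])
    w = train_logreg(X_train, y_train)

accuracy = float(np.mean((predict_proba(w, X_all) > 0.5) == y_all))
```

The same loop structure carries over when the stand-in model is replaced by a segmentation network and the pseudo-labels by predicted caveline masks.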
Where to look? This is an intriguing problem of computer vision that deals with finding interesting or salient pixels in an image or video.
As seen in this GANGNAM video!, the problem of Salient Object Detection (SOD) aims at identifying the most important or distinct objects in a scene.
It is a successor to the human fixation prediction problem, which aims to highlight the pixels that human viewers would focus on at first glance.
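As a toy illustration of what "salient" means computationally, here is a naive global-contrast baseline (a simplification in the spirit of classic contrast-based saliency, not a model we use): a pixel scores high when its colour deviates strongly from the mean image colour.

```python
import numpy as np

def global_contrast_saliency(img):
    """Naive saliency map: per-pixel distance from the mean image colour.

    img: H x W x 3 float array in [0, 1]. Returns an H x W map in [0, 1].
    Pixels far from the dominant colour (i.e. distinct objects) score high.
    """
    mean = img.reshape(-1, 3).mean(axis=0)
    sal = np.linalg.norm(img - mean, axis=-1)
    return sal / (sal.max() + 1e-8)
```

On a mostly gray frame with a small red patch, the patch lights up in the resulting map; modern SOD models learn far richer cues, but the objective is the same.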
Monocular Depth Estimation of Low-light Underwater Scenes. Unlike terrestrial robots, visually guided underwater robots have very few
low-cost solutions for dense 3D visual sensing because of the high cost and domain-specific operational
complexities involved in deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this issue, we propose a fast monocular depth estimation method for enabling 3D perception
capabilities of low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline
named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes.
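One commonly exploited piece of such domain knowledge is wavelength-dependent attenuation: red light dies off much faster with range than green or blue, so the gap between the red channel and max(G, B) can serve as a cheap relative-depth prior. The sketch below is purely illustrative; the attenuation coefficients `beta`, the veiling-light values `B`, and the simplified image formation model are assumed numbers, not UDepth's actual parameters:

```python
import numpy as np

def synth_pixel(J, d,
                beta=np.array([0.8, 0.2, 0.1]),
                B=np.array([0.05, 0.3, 0.4])):
    """Render one RGB pixel of scene radiance J at range d metres using a
    simplified underwater image formation model:
        I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
    beta (per-channel attenuation) and B (veiling light) are illustrative."""
    t = np.exp(-beta * d)
    return J * t + B * (1.0 - t)

def rmi_prior(img):
    """Coarse relative-depth prior: max(G, B) minus R, normalised to [0, 1].

    Red attenuates fastest underwater, so this gap grows with range;
    larger values suggest pixels that are farther from the camera.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    prior = np.maximum(g, b) - r
    prior = prior - prior.min()
    return prior / (prior.max() + 1e-8)
```

In a learning pipeline, a prior like this can be fed as an extra input channel or used to supervise coarse depth, letting the network concentrate on refining fine structure.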
Our @RoboPI_UF
group attended the 2024 Marine Robotics Workshop and Field Trials at the Bellairs Research Institute of Barbados.
Dec 2023:
Our paper on caveline detection on edge platforms for AUV navigation was accepted at ICMLA 2023.
This is collaborative work led by the ICAS Lab @USC; check out their
LinkedIn post!
Dec 2023:
Our ongoing work on light pollution monitoring was showcased at the Optimizing Solutions for Resilient Coasts (OSRC) Summit, organized by CCS.
See this featured news!
Sept 2023: Our project on optics-based underwater robot localization and stealth navigation got funded ($600K) by the
NSF IIS (FRR) program.
Sept 2023: Our collaborative project on human-in-the-lead construction robotics got funded ($700K) by the
NSF CNS (FW-HTF) program.
Aug 2023: Our project on bio-mimetic robots
for coastal eco-monitoring got funded ($90K) by the
UF Research
Opportunity Seed Fund (ROSF) 2023.
July 2023:
Two best paper awards this summer!
One at the IEEE Conf. on Artificial Intelligence (CAI) 2023,
and another at the ACM HotStorage 2023.
See more in this LinkedIn post.
Dr. @XahidBuffon &
his @RoboPI_UF lab deployed robots w/ Dr. Ioannis Rekleitis 300 ft in an underwater
#CaveSystem in Orange Grove, which will support human divers
"explore the unexplored, map the unmapped & reveal archeological mysteries."