The RoboPI laboratory is home to field robotics research at the Department of ECE, UF. We focus on developing novel and improved computational systems
for autonomous field robots. In particular, we address the long-held challenges of deploying autonomous mobile robots and
AI-capable sensors for long-term operation in the wild. Our systems and algorithms enable efficient perception
and intelligent task execution in applications such as remote inspection,
surveillance, environmental monitoring, and beyond.
Check out our
current projects!
We have three overlapping teams: software team, hardware team, and domain team.
The software team designs and develops novel features for autonomous robots and intelligent machines, pushing the boundaries of the state-of-the-art
literature on robotics, on-device AI, machine vision, and deep learning. The hardware team takes care of the device integration of those
novel algorithms and makes sure that the real-time constraints are met on the robot platforms. Then the domain team deploys the robotic systems in the wild,
validates the application-specific features, and provides feedback for iterative improvements. We follow this Design, Develop, and Deploy (DDD)
principle in our research for rapid innovation. We thank our sponsors: the National Science Foundation (NSF), Office of Naval Research (ONR), Texas Instruments (TI), and UF research initiatives
for supporting our work!
Word2Wave: Subsea Mission Programming Using Natural Language. This project explores the design and development of a language-based interface for dynamic mission programming of
autonomous underwater vehicles (AUVs). The proposed W2W framework enables interactive programming and parameter configuration of AUVs for remote subsea missions.
It includes language rules for efficient language-to-mission mapping, visualization, and validation of mission commands generated from human speech or text. The proposed learning
pipeline adapts a T5-Small language model to learn the language-to-mission mapping effectively from processed language data, providing robust and efficient performance.
See More
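To illustrate the idea of language-to-mission mapping, here is a toy, rule-based sketch. The mission primitives, parameter names, and command phrasing below are illustrative assumptions, not the actual W2W grammar or its learned T5 pipeline:

```python
# Toy sketch of mapping a natural-language command to structured mission
# primitives. All primitive names and phrasings are hypothetical examples,
# not the W2W language rules.
import re

# Assumed mission primitives (illustrative only).
PATTERNS = {
    "waypoint": re.compile(r"go to \(?(-?\d+\.?\d*),\s*(-?\d+\.?\d*)\)?"),
    "depth":    re.compile(r"dive to (\d+\.?\d*)\s*m"),
    "survey":   re.compile(r"survey a (\d+\.?\d*)\s*m grid"),
}

def text_to_mission(command: str) -> list[dict]:
    """Map a free-form command to an ordered list of mission primitives."""
    mission = []
    for clause in command.lower().split(", then "):
        for name, pattern in PATTERNS.items():
            m = pattern.search(clause)
            if m:
                mission.append({"op": name,
                                "params": [float(g) for g in m.groups()]})
    return mission

mission = text_to_mission("Dive to 10 m, then go to (4.5, -2.0)")
```

A learned model like T5-Small replaces such brittle hand-written rules with a mapping trained from paired language-and-mission data, but the output target, a validated sequence of mission primitives, is the same.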
Safeguarding Underwater Data Centers (UDCs) from Acoustic Attacks. UDCs hold promise as a futuristic concept for
data storage and remote computing infrastructure. The main advantages of UDCs are the free natural cooling provided by ocean water and the isolation from on-land vulnerabilities.
However, these advantages can also become liabilities: dense water carries acoustic signals faster than air does, and an isolated data center is difficult to monitor or service if components break.
This project, led by
Dr. Islam,
with Co-PI Dr. Rampazzi, will explore these vulnerabilities to design and develop an effective defense system and intelligent surveillance
strategies for mitigating physical or remote acoustic attacks on UDCs. This is a unique collaboration between the UF Center for Coastal Solutions (CCS)
and the Florida Institute for Cybersecurity Research (FICS); we will develop novel UDC defense systems and surveillance capabilities and deploy those for subsea operational validation.
This exciting project is funded by the
Cyber Security and Complex Software Systems program
at the Office of Naval Research (ONR). See More
Ego-to-Exo: AR-based Exocentric View Synthesis for Improved ROV Teleoperation. This project aims at developing
an augmented reality (AR) inspired ROV teleoperation interface that generates on-demand third-person perspectives as well as
provides interactive control choices for viewpoint selection. The Ego-to-Exo (egocentric
to exocentric) framework is integrated into a monocular visual SLAM system and uses estimated poses and past
egocentric views for exocentric view generation. We demonstrate that the generated exocentric views embed significantly more information and
global context about the scene than typical egocentric views, ensuring safe and efficient ROV teleoperation.
See More
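The geometric intuition behind an on-demand third-person perspective can be shown with a toy 2D example. This is not the Ego-to-Exo renderer; it only sketches the pose arithmetic of placing a virtual "chase" camera behind a robot pose, such as one estimated by SLAM (the standoff distance is an illustrative assumption):

```python
# Toy 2D sketch: place a virtual chase camera a fixed distance behind the
# robot's current pose, facing the same direction, to yield an exocentric
# viewpoint of the robot. Not the actual Ego-to-Exo view-synthesis pipeline.
import math

def chase_camera_pose(x, y, heading, standoff=2.0):
    """Return (cx, cy, cam_heading) for a camera `standoff` metres behind
    the robot along its heading, looking the same way the robot faces."""
    cx = x - standoff * math.cos(heading)
    cy = y - standoff * math.sin(heading)
    return cx, cy, heading

# A robot at (5, 0) heading along +x gives a camera at (3, 0) looking at it.
cam = chase_camera_pose(5.0, 0.0, 0.0)
```

In the full system, past egocentric frames are warped into this virtual viewpoint to actually render the scene, which is the hard part the learned framework addresses.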
Active Localization and Stealth Recovery of Unmanned Underwater Vehicles (UUVs). The ability to operate for long periods of time and then
return safely is a critical feature for autonomous underwater robots. A major challenge for a robot in such long-term missions is
to estimate its location accurately since GPS signals cannot penetrate the ocean's surface, and Wi-Fi or radio
communication infrastructures are not available underwater. Using a dedicated surface vessel for
acoustic referencing or surfacing for GPS signals is power-hungry, computationally expensive,
and often impossible (in stealth applications). This project, led by
Dr. Islam,
with Co-PIs Dr. Koppal and
Dr. Shin,
will develop a novel optics-based framework and on-board AI technologies to solve this problem.
The research and educational activities are funded by the NSF Foundational Research in Robotics
(FRR) program. See More
Caveline Following for Safe AUV Navigation. Mapping underwater caves is a time-consuming, labor-intensive, and hazardous operation.
For autonomous cave mapping by underwater robots, the major challenge lies
in vision-based estimation in the complete absence of ambient light, which results in constantly moving shadows due to the motion of the camera-light setup.
Thus, detecting and following the caveline as navigation guidance is paramount for robots in autonomous cave mapping missions. In this project, we design a computationally
light caveline detection model based on a novel Vision Transformer (ViT)-based learning pipeline: CL-ViT. We address the scarcity of annotated
training data with a weakly
supervised formulation in which learning is reinforced through a series of noisy predictions from intermediate sub-optimal models. See More
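The weakly supervised idea, reinforcing learning with noisy predictions from intermediate models, can be sketched with a toy self-training loop. The model below is a trivial two-class nearest-centroid classifier on scalar features; the confidence threshold and data are illustrative, not the CL-ViT pipeline:

```python
# Toy self-training sketch: fit a model on labeled data, pseudo-label the
# unlabeled pool with it, keep only confident predictions, and re-fit.
# Everything here is a minimal illustration, not the CL-ViT training code.

def fit(xs, ys):
    """Fit a two-class nearest-centroid model on scalar features."""
    neg = [x for x, y in zip(xs, ys) if y == 0]
    pos = [x for x, y in zip(xs, ys) if y == 1]
    return (sum(neg) / len(neg), sum(pos) / len(pos))

def predict(model, x):
    """Return (label, confidence) where confidence is the centroid margin."""
    c0, c1 = model
    label = 1 if abs(x - c1) < abs(x - c0) else 0
    conf = abs(abs(x - c0) - abs(x - c1))
    return label, conf

def self_train(labeled, unlabeled, rounds=3, tau=0.5):
    """Grow the training set with confident pseudo-labels, then re-fit."""
    xs = [x for x, _ in labeled]
    ys = [y for _, y in labeled]
    for _ in range(rounds):
        model = fit(xs, ys)
        for x in unlabeled:
            label, conf = predict(model, x)
            if conf >= tau and x not in xs:  # drop low-confidence pseudo-labels
                xs.append(x)
                ys.append(label)
    return fit(xs, ys)

model = self_train(labeled=[(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)],
                   unlabeled=[0.5, 9.5, 5.0])
```

The same loop structure applies when the model is a ViT-based segmentation network and the pseudo-labels are noisy caveline masks from intermediate sub-optimal checkpoints.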
Where to look? It is an intriguing problem of computer vision that deals with finding interesting or salient pixels in an image or video.
As seen in this GANGNAM video, the problem of Salient Object Detection (SOD) aims at identifying the most important objects in a scene.
It is a successor to the human fixation prediction problem that aims to highlight pixels that human viewers would focus on at first glance.
More
Monocular Depth Estimation of Low-light Underwater Scenes. Unlike terrestrial robots, visually guided underwater robots have very few
low-cost solutions for dense 3D visual sensing because of the high cost and domain-specific operational
complexities involved in deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this issue, we propose a fast monocular depth estimation method that enables 3D perception
capabilities on low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline
named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes.
Project Page
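A widely used simplification of underwater image formation illustrates the kind of domain knowledge such pipelines can exploit: the direct signal attenuates with range while backscatter grows toward the veiling light, so observed intensity carries a distance cue. The coefficient values below are illustrative assumptions, not values from UDepth:

```python
# Sketch of the simplified underwater image-formation model
#   I = J * exp(-beta * d) + B_inf * (1 - exp(-beta * d)),
# where J is scene radiance, d is range, beta is the attenuation
# coefficient, and B_inf is the veiling (background) light.
import math

def observed_intensity(J, d, beta, B_inf):
    """Observed intensity: attenuated direct signal plus backscatter."""
    t = math.exp(-beta * d)  # transmission along the line of sight
    return J * t + B_inf * (1.0 - t)

def range_from_intensity(I, J, beta, B_inf):
    """Invert the model for range d, assuming J, beta, B_inf are known."""
    t = (I - B_inf) / (J - B_inf)
    return -math.log(t) / beta
```

In practice J, beta, and B_inf are unknown and vary across the scene, which is why a learned pipeline estimates depth end-to-end while using this physical structure as a prior rather than inverting the model directly.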
Our @RoboPI_UF
group attended the 2024 Marine Robotics Workshop and Field Trials at the Bellairs Research Institute of Barbados.
Dec 2023:
Our paper on caveline detection on edge platforms for AUV navigation was accepted at ICMLA 2023.
This is our collaborative work led by the ICAS Lab @USC; check out their
LinkedIn post!
Dec 2023:
Our ongoing work on light pollution monitoring was showcased at the Optimizing Solutions for Resilient Coasts (OSRC) Summit, organized by CCS.
See this featured news!
July 2023:
Two best paper awards this summer!
One at the IEEE Conf. on Artificial Intelligence (CAI) 2023,
and another at the ACM HotStorage 2023.
See more in this LinkedIn post.
Dr. @XahidBuffon &
his @RoboPI_UF lab deployed robots w/ Dr. Ioannis Rekleitis 300 ft in an underwater
#CaveSystem in Orange Grove, which will help human divers
"explore the unexplored, map the unmapped & reveal archeological mysteries."