Monocular depth estimation is challenging in low-light underwater scenes, particularly on computationally constrained devices.
In the UDepth project, we demonstrate that it is possible to achieve state-of-the-art depth estimation performance while
ensuring a small computational footprint. While the full model offers over 66 FPS inference rates on a single GPU, our domain projection module for coarse depth prediction runs at 51.5 FPS on single-board Jetson TX2s.
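Inference rates like these are typically measured by averaging repeated forward passes after a short warm-up; below is a minimal timing sketch (the DepthNet class and the input resolution are illustrative placeholders, not the released UDepth code).

```python
# Minimal FPS-benchmarking sketch; 'DepthNet' and the input size are placeholders.
import time
import torch

def benchmark_fps(model, input_size=(1, 3, 240, 320), n_warmup=10, n_runs=100, device="cuda"):
    """Average end-to-end inference rate (frames per second) of a single model."""
    model = model.to(device).eval()
    x = torch.rand(*input_size, device=device)
    with torch.no_grad():
        for _ in range(n_warmup):          # warm-up passes (kernel setup, caching)
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_runs):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_runs / (time.perf_counter() - start)

# fps = benchmark_fps(DepthNet())   # 'DepthNet' stands in for any depth estimation model
```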
We are working on developing robust sensing and estimation capabilities for on-device cameras in the thermal, acoustic, and spectral domains. In particular,
our focus is on wearable cameras used by firefighters and deployed in various search-and-rescue (SAR) applications.
We are hiring RoboGators (UF UGs) for this project.
FUnIE-GAN is a GAN-based model for fast underwater image enhancement. It can be used as a visual filter in
the robot autonomy pipeline for improved perception in noisy low-light conditions underwater. In addition to SOTA performance, it
offers 48+ FPS inference rates on Jetson Xavier and 25+ FPS on TX2 devices.
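As a rough illustration of how such an enhancement model can sit in the perception pipeline as a per-frame filter, here is a minimal sketch (the load_funie_generator helper, the weights path, and the [-1, 1] input scaling are assumptions, not the released FUnIE-GAN API):

```python
# Minimal sketch: an enhancement generator as a per-frame visual filter ahead of
# downstream perception. Loader, weights path, and scaling are hypothetical.
import cv2
import numpy as np
import torch

def enhance_frame(generator, frame_bgr, device="cuda"):
    """Run one BGR camera frame through the enhancement generator."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 127.5 - 1.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).to(device)   # NCHW in [-1, 1]
    with torch.no_grad():
        y = generator(x).squeeze(0).permute(1, 2, 0).cpu().numpy()
    out = ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_RGB2BGR)

# generator = load_funie_generator("weights/funie_gan.pth")   # hypothetical loader
# enhanced = enhance_frame(generator, raw_frame)              # feed to detectors/trackers
```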
Light scattering and attenuation underwater cause range- and wavelength-dependent non-linear distortions that severely affect machine
vision, even with high-end cameras. Physics-based approximations with prior knowledge and/or learning-based image enhancement models
can help restore the input image quality, which in turn improves visual perception. However, performing such visual filtering
on embedded robotic systems without any prior knowledge remains challenging.
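For intuition, physics-based restoration methods commonly build on a simplified image formation model, I_c = J_c * t_c + B_c * (1 - t_c) with transmission t_c = exp(-beta_c * d), where the attenuation coefficient beta_c differs per wavelength (red attenuates fastest) and d is the scene range. The sketch below simulates that degradation; the coefficients are illustrative, not measured values:

```python
# Minimal sketch of the simplified underwater image formation model.
# 'beta' and 'backscatter' values are illustrative only.
import numpy as np

def degrade(clean_rgb, depth_m, beta=(0.40, 0.12, 0.08), backscatter=(0.05, 0.35, 0.45)):
    """Simulate range- and wavelength-dependent underwater degradation of an RGB image in [0, 1]."""
    out = np.empty_like(clean_rgb, dtype=np.float32)
    for c in range(3):                               # per channel (R, G, B)
        t = np.exp(-beta[c] * depth_m)               # transmission map from scene range (meters)
        out[..., c] = clean_rgb[..., c] * t + backscatter[c] * (1.0 - t)
    return np.clip(out, 0.0, 1.0)

# Restoration models effectively learn (or estimate) the inverse of this mapping.
```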
We are working with the FOCUS Laboratory and other external collaborators on these problems
for a variety of degraded settings. The goal of these projects is to design adaptable solutions for combating degraded machine vision by harnessing the power of online
learning and deep reinforcement learning.
Despite recent advancements in interactive vision APIs and AutoML technologies, there are no universal platforms
or criteria for measuring the goodness of underwater visual sensing conditions or for extrapolating the performance bounds
of visual perception algorithms. Our current work attempts to address these issues for real-time underwater robot vision.
More details: coming soon...
An essential capability of visually-guided robots is to identify interesting and salient objects in images for
accurate scene parsing and to make important operational decisions. Our work on saliency-guided visual attention modeling (SVAM)
develops robust and efficient solutions for saliency estimation by combining the power of bottom-up and top-down deep visual learning.
Our proposed model, named SVAM-Net, integrates deep visual features at various scales and semantics for effective salient object detection (SOD) in natural underwater images.
SVAM-Net incorporates two spatial attention modules to jointly learn coarse-level and fine-level semantic features for accurate SOD in underwater
imagery. It provides SOTA scores on benchmark datasets, exhibits better generalization performance on challenging test cases than existing approaches, and achieves fast
end-to-end run-times on single-board machines.
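For illustration, a generic spatial attention block re-weights a feature map with a per-pixel gate predicted from channel-pooled statistics; the sketch below shows this general idea, not the actual SVAM-Net module:

```python
# Minimal sketch of a generic spatial attention block (illustrative, not SVAM-Net's modules).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, feats):                        # feats: (N, C, H, W)
        avg_pool = feats.mean(dim=1, keepdim=True)   # channel-averaged map, (N, 1, H, W)
        max_pool = feats.amax(dim=1, keepdim=True)   # channel-max map, (N, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return feats * gate                          # spatially re-weighted features

# attn = SpatialAttention(); refined = attn(torch.rand(1, 64, 60, 80))
```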
While SVAM is a class-agnostic approach, we are also working on the problem of class-aware visual attention modeling
by semantic scene parsing.
We empirically demonstrated that a general-purpose solution for spatial attention modeling can facilitate over 45% faster
processing in robot perception tasks such as salient ROI enhancement, image super-resolution, and uninformed visual search.
We are currently developing efficient deep visual models to achieve the desired performance bounds.
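The intuition behind that speedup is that expensive processing can be confined to the salient region of interest rather than the full frame; below is a minimal saliency-gated processing sketch (the heavy_model callable and the threshold are illustrative placeholders):

```python
# Minimal sketch of saliency-gated processing: run a costly model only on the salient ROI.
# 'heavy_model' is a placeholder (assumed to preserve the spatial size of its input crop).
import numpy as np

def process_salient_roi(frame, saliency_map, heavy_model, thresh=0.5):
    """Crop the bounding box of the thresholded saliency map and process only that region."""
    ys, xs = np.where(saliency_map > thresh)
    if ys.size == 0:                                  # nothing salient: skip the heavy model
        return frame
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    out = frame.copy()
    out[y0:y1, x0:x1] = heavy_model(frame[y0:y1, x0:x1])   # cost scales with ROI area, not frame area
    return out
```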
Not all desired robot behaviors can be modeled as tractable optimization problems or scripted by traditional
robot programming paradigms. Our research attempts to identify such problems and subsequently design practical solutions by
using learning from demonstration (LfD) techniques on the TurtleBot-4 platform.
Over 54% of the US population thinks that drones and UAVs
(unmanned aerial vehicles) should not be allowed to fly in residential areas, as it undermines their ability to assess
context and measure trust. Similar concerns are growing across cyberspace toward numerous other human-centric
robots and intelligent systems. We are trying to address these issues by devising effective technological and/or educational
solutions to ensure transparency and trust. We are exploring various forms of implicit and explicit human-robot interaction for
companion robots (e.g., Piaggio Fast Forward, Mabu, Staaker, Skydio, Pepper) in the manufacturing, health care, and entertainment industries.
With a broader goal of ensuring safe and effective human-robot cooperation in various application-specific use cases,
we are extending our prior work to define and
quantify these interactions and implement other socially-compliant features for companion robots.
Focusing on the Florida coastlines, we are working closely with the
Center for Coastal Solutions (CCS) and
Warren B. Nelms IoT Institute to
develop technological solutions to address the practicalities of important subsea applications such as monitoring water quality,
surveying seabed or seagrass habitats, and farming artificial reefs. We are exploring deployable systems
for passive sensing and prediction (of hazards or salient events) as well as for coordinated active tracking by autonomous
mobile robots.
We are hiring RoboGators (UF UGs) for this project.
We are hiring SURF UGs for this project.
We are further exploring thermal imaging and sonar imaging modalities to formulate improved techniques that will facilitate useful
augmented visuals in autonomous exploration, manned or unmanned rescue operations, and other remote monitoring applications.
We are particularly focusing on low-power aerial surveillance cameras of SAR drones deployed in adverse conditions.
More details: coming soon...
We are conducting several projects on developing low-cost, low-power robotic systems and software infrastructures
for enabling autonomous and semi-autonomous underwater robots to work alongside human divers in subsea structure inspection
and invasive fish tracking (e.g., lionfish, jewelfish).
More details: coming soon...