Subsea Telerobotics Research


Subsea telerobotics technologies have advanced significantly over the past few decades, driven by the need for remote inspection, maintenance, and research expeditions in underwater environments. Remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) are typically deployed to perform remote tasks in subsea environments that are beyond the reach of human scuba divers. The robots are generally teleoperated from a surface vessel or a base station for applications such as underwater infrastructure inspection, seabed mapping, remote surveillance, environmental monitoring, scientific expeditions, and more.

Existing subsea telerobotics technologies offer only human-to-machine (H2M) control interfaces for communicating mission instructions to the vehicle. The sole feedback in this process is a delayed sensory update on the teleop console: a noisy, low-dimensional representation of the environment perceived by the vehicle (e.g., a camera feed or a sonar scan). Our research focuses on enabling effective and adaptive human-machine collaborative task execution in subsea telerobotics. Beyond direct teleoperation, our goal is to develop an interactive human-machine interface (HMI) that lets teleoperators influence mission planning and execution by subsea ROVs.

A chronological evolution of subsea HMI technologies is shown in the figure. The top row highlights early achievements and milestones. As technology advanced, recent decades have seen industry solutions for interfaces and simulators incorporating XR, haptic features, and natural language interaction driven by LLMs.

Review Paper



Ego-to-Exo Project

In this project, we introduce an active vision interface that offers three new capabilities: (i) interactive view synthesis driven by the operator's active gaze; (ii) on-demand viewpoint selection; and (iii) SLAM pipeline integration paired with a cross-embodied mission simulator interface. Our proposed Ego-to-Exo (egocentric-to-exocentric) framework is an active view generation pipeline integrated into a visual SLAM system for improved subsea teleoperation. Key features of the proposed interface are shown in the figure on the right: (a) an underwater ROV is teleoperated to map underwater caves; (b) the corresponding egocentric view; (c) the Ego-to-Exo console with embodied robot poses; (d-f) other viewpoints selected by the teleoperator; and (g) the SLAM pipeline in the mission simulator. Such on-demand exocentric views offer comprehensive peripheral and global semantics for improved teleoperation.
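As a rough illustration of the geometry behind such exocentric view generation, the sketch below places a virtual camera at a fixed body-frame offset behind the ROV's SLAM-estimated pose and orients it to look back at the vehicle. The function names, the OpenCV-style axis convention, and the offset values are illustrative assumptions, not the framework's actual implementation.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """World-from-camera rotation whose optical (+z) axis points from
    `eye` toward `target` (OpenCV convention: x right, y down)."""
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(z, up)          # camera right
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)           # camera down
    return np.stack([x, y, z], axis=1)

def exo_pose(ego_pose, offset=np.array([-1.5, 0.0, 0.5])):
    """Place a virtual exocentric camera at a body-frame `offset`
    (here: 1.5 m behind, 0.5 m above; an assumed value) relative to
    the ROV's 4x4 world-from-body pose, oriented toward the ROV."""
    R, t = ego_pose[:3, :3], ego_pose[:3, 3]
    eye = t + R @ offset         # offset rotates with the vehicle
    T = np.eye(4)
    T[:3, :3] = look_at(eye, t)
    T[:3, 3] = eye
    return T
```

Because the offset is expressed in the body frame, the virtual camera trails the vehicle through turns; the actual exocentric imagery would then be synthesized at this pose from the SLAM map.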

We validate the geometric accuracy of the proposed framework through extensive experiments on 2-DOF indoor navigation and 6-DOF underwater cave exploration in challenging low-light conditions. See the project page for more details!

Project page



EOB (Eye On the Back)

While subsea industries and naval defense teams deploy underwater ROVs with high-end cameras and sonars, remote teleoperation remains challenging in adverse visibility conditions and around complex structures. This is because the typical first-person camera feed is not the best view for ROV teleoperation, especially in deep-water missions under low visibility. As illustrated in the figure on the right, a third-person view from behind the ROV provides more informative visuals with peripheral semantics.

We address these issues by introducing a novel EOB (Eye On the Back) concept that provides a third-person perspective for remote ROV teleoperation in subsea missions. We demonstrate that third-person camera views from immediately behind the ROV offer significantly more peripheral information than typical first-person feeds alone. We experiment with multiple configurations for an exocentric camera setup (shown on the right) in tasks such as subsea structure inspection and underwater cave exploration by a surface operator. We find that EOB views allow us to design more interactive consoles, which teleoperators can use to obtain global (exocentric) views, plan complex maneuvers, and conduct efficient missions with higher safety margins. Teleoperation consoles designed with these capabilities will offer more flexible and interactive features for human-machine collaborative task execution in subsea missions.

As shown in the figure below, we consider an EOB Arc behind the ROV from which third-person perspectives are likely to provide useful peripheral visuals for safe robot maneuvering. For proof of concept, three EOB configurations are considered: two physical (Config-1, Config-2) and one virtual (Config-n). The first two attach dedicated cameras, either on a rigid rail or wrapped around the tether, and thus require physical modifications. In contrast, the virtual configuration is envisioned to provide on-demand views along the arc without any structural modifications.
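One simple way to parameterize such an arc is sketched below: candidate viewpoints are sampled on a vertical arc of fixed radius behind the ROV, each oriented toward the vehicle. The radius, the sampled angles, and the body-frame conventions here are our own illustrative assumptions, not the configurations evaluated in the paper.

```python
import numpy as np

def _look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    # World-from-camera rotation with the optical (+z) axis toward target.
    z = target - eye
    z = z / np.linalg.norm(z)
    x = np.cross(z, up)
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.stack([x, y, z], axis=1)

def eob_viewpoints(rov_pose, radius=1.2, angles_deg=(-30, 0, 30)):
    """Sample candidate third-person viewpoints on a vertical arc of
    `radius` behind the ROV (in the body x-z plane), returned as 4x4
    world-from-camera poses looking at the vehicle."""
    R, t = rov_pose[:3, :3], rov_pose[:3, 3]
    poses = []
    for a in np.deg2rad(angles_deg):
        # angle 0 = directly astern; positive = raised above the stern
        p_body = np.array([-radius * np.cos(a), 0.0, radius * np.sin(a)])
        eye = t + R @ p_body
        T = np.eye(4)
        T[:3, :3] = _look_at(eye, t)
        T[:3, 3] = eye
        poses.append(T)
    return poses
```

A physical config (Config-1/Config-2) would realize one fixed point on this arc with a mounted camera, whereas the virtual config would render any sampled pose on demand.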

Paper



Publications