We have three overlapping teams: a software team, a hardware team, and a domain team. The software team designs and develops intelligent features for autonomous robots and intelligent machines, pushing the boundaries of the state of the art in on-device AI, machine vision, on-device deep learning, and 3D computer vision. The hardware team handles the device integration of those novel algorithms and ensures that real-time constraints are met on the robot platforms. The domain team then deploys the robotics system in the wild, validates the application-specific features, and provides feedback for iterative improvements. We follow this Design-Develop-Deploy (DDD) principle in our research for rapid innovation.
"Where to look?" is an intriguing problem in computer vision that deals with finding interesting or salient pixels in an image or video. As seen in this GANGNAM video, the problem of Salient Object Detection (SOD) aims at identifying the most important or distinct objects in a scene. It is a successor to the human fixation prediction problem, which aims to highlight the pixels that human viewers would focus on at first glance.
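As a sense of what "salient pixels" means computationally, a classical bottom-up baseline is the spectral-residual method of Hou and Zhang, which suppresses the statistically redundant part of an image's log-amplitude spectrum and keeps the "surprising" remainder. This is only an illustrative sketch of the general idea, not the deep SOD models discussed above:

```python
import numpy as np

def spectral_residual_saliency(gray):
    """Spectral-residual saliency map for a 2-D grayscale float image.

    The residual is the log-amplitude spectrum minus its local (3x3)
    average; transforming it back with the original phase highlights
    unexpected, i.e. salient, image regions.
    """
    f = np.fft.fft2(gray)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # 3x3 box blur of the log-amplitude spectrum via circular shifts
    avg = sum(np.roll(np.roll(log_amp, i, axis=0), j, axis=1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / (sal.max() + 1e-8)  # normalize to [0, 1]
```

A distinct bright or textured object against a smooth background produces a peak in the returned map; modern SOD networks learn this notion of distinctness rather than hand-crafting it.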
Unlike terrestrial robots, visually guided underwater robots have very few low-cost solutions for dense 3D visual sensing, because of the high cost and domain-specific operational complexities of deploying underwater LiDARs, RGB-D cameras, or laser scanners. To address this issue, we propose a fast monocular depth estimation method that enables 3D perception capabilities on low-cost underwater robots. We formulate a novel end-to-end deep visual learning pipeline named UDepth, which incorporates domain knowledge of the image formation characteristics of natural underwater scenes.
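One concrete form such domain knowledge can take is the input representation: red light attenuates much faster underwater than green and blue, so the gap between the red channel and the other channels carries a strong depth cue. A minimal sketch of an RMI-style transform (red channel, max of green/blue, and intensity) is shown below; the function name and exact channel definitions are our illustrative assumptions, not a verbatim reproduction of the UDepth pipeline:

```python
import numpy as np

def rgb_to_rmi(rgb):
    """Map an RGB image (H, W, 3), float in [0, 1], to a 3-channel
    RMI input: R = red channel, M = max(G, B), I = mean intensity.

    Since red attenuates fastest in water, the contrast between the
    R and M channels correlates with scene distance, giving a depth
    network a physics-informed prior before any learning happens.
    """
    r = rgb[..., 0]                          # strongly depth-attenuated
    m = np.maximum(rgb[..., 1], rgb[..., 2])  # weakly attenuated
    i = rgb.mean(axis=-1)                    # overall brightness
    return np.stack([r, m, i], axis=-1)
```

In a full pipeline this transform would precede the depth network, so the model consumes RMI channels instead of raw RGB.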