Older robot vacuum cleaners used random, inefficient, and slow cleaning patterns. Smart robots use sensors and simultaneous localization and mapping (SLAM) to build a map of their environment and localize within it, providing users with a floorplan on which areas can be selected for cleaning or restricted from access. This whitepaper describes an approach based on novel time-of-flight cameras that enables slimmer robot designs with new features.
Visual SLAM (vSLAM) uses one or more cameras and requires high computational power to extract depth from the captured 2D images. SLAM based on time-of-flight (ToF) depth cameras instead works directly on true high-resolution 3D images. Depth cameras therefore enable leaner and more efficient SLAM implementations that run on embedded processing platforms, making them well suited for applications such as cleaning robots, warehouse robots, and tracking drones.
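To illustrate why ToF-based SLAM is computationally lean, the sketch below back-projects a depth image into a 3D point cloud using a standard pinhole camera model: each pixel already carries a measured depth, so no correspondence search across images is needed. This is a minimal illustration, not pmd's actual depth-processing library; the intrinsics and image size are hypothetical placeholders.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a ToF depth image (meters) into a 3D point cloud.

    Unlike vSLAM, no stereo correspondence search is required: the
    projection is a direct per-pixel computation.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical intrinsics and resolution for illustration only; real
# values come from the camera's calibration data.
cloud = depth_to_pointcloud(np.random.uniform(0.2, 5.0, (172, 224)),
                            fx=210.0, fy=210.0, cx=112.0, cy=86.0)
print(cloud.shape)  # (N, 3) point cloud in camera coordinates
```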
REAL3™ hybrid Time-of-Flight (hToF) combines a single ToF image sensor with two illumination types to address three requirements (see the capture-scheduling sketch after this list):
- precise long-distance 3D point-cloud data enabling SLAM
- depth data with high lateral resolution for advanced obstacle avoidance
- accurate, high-resolution close-range data for cliff detection
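The whitepaper does not detail how the single sensor interleaves the two illumination types; the sketch below shows one plausible frame-scheduling policy. All names and the flood-to-spot ratio are illustrative assumptions, not values from the source.

```python
from enum import Enum

class Illumination(Enum):
    FLOOD = "flood"  # homogeneous illumination: dense close-range depth
    SPOT = "spot"    # spot grid: concentrated optical power for long range

def illumination_for(frame_index: int) -> Illumination:
    """Hypothetical scheduling policy for the single ToF sensor.

    Every third frame uses flood illumination (cliff detection,
    close-range obstacles); the rest use the spot grid (long-range
    point clouds for SLAM). The 1:2 ratio is an assumption.
    """
    return Illumination.FLOOD if frame_index % 3 == 0 else Illumination.SPOT

# Route each captured frame to the matching consumer.
for i in range(6):
    mode = illumination_for(i)
    print(f"frame {i}: {mode.value} illumination")
```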
Infineon, pmd, and partners developed hToF as a cost-efficient depth-sensing technology for consumer products. In consumer robotics in particular, the smaller hToF sensors enable slimmer robot designs than laser-distance-sensor (LDS) tower robots. This whitepaper demonstrates hToF sensors in consumer robots, presenting the excellent map accuracy achieved with open-source SLAM implementations as well as the reduced processing resources required.
The unique hToF technology combines Infineon’s and pmd's high-resolution REAL3™ ToF sensor with a homogeneous flood illumination and a powerful spot-grid illumination, offering a cost-efficient solution. The hToF camera’s broad 110° horizontal field of view (FoV) is ideal for both SLAM and obstacle avoidance. hToF works in all lighting conditions, from complete darkness to bright sunlight, and across a wide variety of furniture and floor reflectivities and textures. This is a major advantage over structured light (SL), which struggles in bright sunlight, and over stereo vision (SV), which struggles in darkness and with repetitive textures. Older robots were routinely immobilized under low-clearance furniture. Smart robots with hToF technology navigate better even under such furniture, thanks to their reduced height and the additional height-clearance information delivered by the hToF sensor.
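As a rough illustration of how the height-clearance information might be used, the following sketch checks whether the robot fits under an overhanging obstacle given a point cloud in the robot frame. The frame convention, corridor dimensions, and safety margin are assumptions made for the example, not values from the whitepaper.

```python
import numpy as np

def clearance_ahead(points, robot_height, corridor_half_width=0.20,
                    lookahead=0.60, margin=0.02):
    """Check whether the robot fits under overhanging furniture ahead.

    points: Nx3 cloud in the robot frame (x forward, y left, z up, meters).
    Returns the lowest overhead surface in the corridor the robot is
    about to drive through, and whether it clears the robot's height.
    All thresholds are illustrative assumptions.
    """
    ahead = points[(points[:, 0] > 0.05) & (points[:, 0] < lookahead) &
                   (np.abs(points[:, 1]) < corridor_half_width) &
                   (points[:, 2] > 0.02)]          # ignore the floor itself
    if ahead.size == 0:
        return None, True                          # nothing overhead
    lowest = float(ahead[:, 2].min())
    return lowest, lowest > robot_height + margin
```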
In an evaluation study, an R&D robot with two hToF cameras was used in various office and home environments. The hToF sensor solution enables the robot to generate precise and consistent maps with excellent accuracy even in challenging environments with glass walls, non-rectangular rooms, dark floor materials, and cluttered furniture. The 3D depth image data from hToF handles glass walls properly, as the frame around the glass is detected as a wall.
The computational effort for depth processing of hToF data using pmd’s library, and for SLAM using the open-source Google Cartographer implementation, is benchmarked on three embedded SoC platforms commonly used in robotics: NVIDIA Jetson Nano, Qualcomm RB5, and Raspberry Pi 3B/4B. We show that depth processing for hToF and the corresponding SLAM algorithm can be executed on a single Cortex-A53 core (Raspberry Pi 3B), in stark contrast to vSLAM systems, which usually need more than two cores due to the high computational load of the required correspondence search.
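Cartographer's 2D mode consumes planar range data, so a common pattern when feeding a depth camera into 2D SLAM is to collapse the depth image into a virtual laser scan first. The sketch below shows that conversion under a pinhole model; it illustrates the general technique, not the exact pipeline used in the benchmark, and the band width is an assumed parameter.

```python
import numpy as np

def depth_to_scan(depth, fx, cx, band=4):
    """Collapse a horizontal band of a depth image into a 2D laser scan.

    Takes the rows around the optical axis, computes each column's
    bearing from the pinhole model, and keeps the nearest return per
    column, yielding (angle, range) pairs a 2D SLAM backend can consume.
    """
    h, w = depth.shape
    rows = depth[h // 2 - band: h // 2 + band]     # band around image center
    rows = np.where(rows > 0, rows, np.inf)        # mask invalid pixels
    z = rows.min(axis=0)                           # nearest return per column
    u = np.arange(w)
    angles = np.arctan2(u - cx, fx)                # bearing of each column
    ranges = z / np.cos(angles)                    # ray length in the plane
    valid = np.isfinite(ranges)
    return angles[valid], ranges[valid]
```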
In summary, hToF technology with Infineon’s REAL3™ sensors and pmd’s depth processing offers a powerful solution for consumer robots, simultaneously supporting SLAM, advanced obstacle avoidance, and cliff detection while reducing overall BOM cost, compute demand, and system complexity. The maps generated by open-source SLAM algorithms are accurate and reliable, proving the fidelity of the hToF spot-illuminated depth data. The solution is computationally lean, requiring only one core for depth processing and SLAM computation.