State-of-the-Art Photography and Immersive AR/MR Experiences

This story was originally published on EETimes Europe.

When buying a new smartphone, consumers today attach growing importance to the device's cameras. On the one hand, they want to take high-quality photos; on the other, they want comprehensive augmented-reality functionality to support emerging applications in the AR/MR space, such as AR/MR games. The smartphone's camera has thus become a decisive purchase criterion and must be able to link reality with the digital world, whether in commercial, educational, or business/industrial applications. 3D cameras using 3D time-of-flight (ToF) imagers deliver precisely this functionality.

In recent years, the development of 3D cameras in mobile phones has focused particularly on the front side, as this camera is crucial for various security and convenience applications. For example, the front camera is used for advanced security functions such as biometric face authentication for screen and app unlocking and for secure payment. It is also used for virtual-reality avatars in social media interaction and for photography beautification with special depth effects. These and other features have boosted the deployment of 3D cameras on the front side of the phone, and this trend is now established on the market.

At the same time, computational photography with professional quality and immersive AR/MR experiences are the next game changers driving smartphone development and providing substantial opportunities for the further adoption of 3D image-sensing cameras on the rear side of the phone. This will in turn enable countless new applications and drive further development. According to Strategy Analytics [1], the market for ToF rear cameras in smartphones is expected to exceed 500 million units per year worldwide within the next five years (Fig. 1).

 

Figure 1. Market forecast for rear 3D ToF cameras in smartphones [1]

At present, however, the use of augmented/mixed-reality applications is still very limited. Augmented-reality applications that were popularized some years ago in gaming and 3D design, where users place a character or object in the scene, are usually nothing more than an overlay of computer-generated visuals on the physical environment: they supplement the real world with digital details and result in a limited, non-immersive user experience. The two key elements in this case are just a camera phone and an AR app with software to calculate and project the image onto the scene on the phone's screen.

True MR, however, merges the real world with digital elements, making them interact and blend in real time. Mixed reality enables a much higher level of immersion and ubiquitous interaction of virtual and real objects, with virtual images perfectly rendered into the real world. It requires accurate 3D depth-data acquisition under all light conditions in real time, and it imposes much higher demands on processing power than simpler AR effects based on image superposition. Solutions based on 3D image-sensing technologies have recently gained attention because they deliver the level of depth detail that 2D cameras fail to provide. Still, challenges remain: current commercial devices deploying existing 3D image-sensing solutions cannot yet provide reasonable resolution without significant trade-offs in extended distance range and power consumption.

Such shortcomings are now being resolved by the latest 3D ToF imager generation in Infineon’s REAL3™ portfolio. Before we explain the advantages of this new device in detail, let us have a look at the ToF principle.

 

3D imager based on the time-of-flight principle

The new REAL3™ 3D ToF sensor chip works with infrared light and uses the time-of-flight operating principle (Fig. 2).

Figure 2. Time-of-flight (ToF) principle of REAL3™ 3D ToF imagers.

The camera's illumination system emits infrared light, modulated at a frequency typically in the range of several tens of MHz, toward the object, which reflects it back. Each individual pixel measures the travel time of the modulated infrared light from the camera to the object and back. Every pixel of the 3D image sensor chip is equipped with a microlens, so the majority of the incident light is directed onto the sensitive area of the pixel: virtually no light energy is lost on inactive surfaces, and the optical pixel sensitivity is significantly higher. Algorithms use the measured data to determine the distance, size, movement, and shape of objects and to create precise 3D models on this basis.
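In a continuous-wave ToF system of this kind, the travel time is typically recovered from the phase shift between the emitted and received modulation signal. As a minimal sketch (not Infineon's implementation; the function name and values are chosen for illustration), the distance follows from the measured phase and the modulation frequency:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_phase(phase_rad, f_mod_hz):
    """Distance implied by a measured phase shift at one modulation
    frequency: d = (c / (2 * f_mod)) * (phase / 2*pi).
    The factor of 2 accounts for the round trip to the object and back."""
    return (C / (2.0 * f_mod_hz)) * (phase_rad / (2.0 * math.pi))

# Example: at 60 MHz, a half-cycle phase shift (pi radians)
# corresponds to roughly 1.25 m.
d = distance_from_phase(math.pi, 60e6)
```

Note that the phase wraps around every 2π, so a single modulation frequency can only measure unambiguously up to c / (2·f_mod), which is why the modulation scheme matters for long-range operation.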

Infineon developed its family of REAL3™ 3D ToF sensors together with pmdtechnologies, a ToF-focused company from Siegen in Germany. So far, only a few companies have managed to bring suitable solutions to the market because broad system competence and deep understanding of the technology is required. Infineon and pmdtechnologies see themselves in a pioneering technology role and have been cooperating since 2013. They have jointly developed five generations of 3D ToF imagers, which have been used in several smartphones as well as in AR headsets, smart home devices, industrial cameras and cars.

Compared with previous 3D image sensor generations, the sixth generation of REAL3™ 3D ToF sensors has been optimized specifically to address smartphone pain points in photo enhancement and AR/MR. The sensor requires 40 percent less power than previous generations, extending the smartphone's battery life. This is key for AR/MR gaming, for example, where the ToF camera is active for long periods. Thanks to a high level of SoC integration, the camera design with this new imager can be 35 percent smaller, giving more freedom for advanced phone designs and greater cost efficiency. Finally, the new REAL3™ imager and system is designed to provide depth data at distances of up to 10 m without losing high resolution at short range (Fig. 3).

Figure 3: Fact sheet of the long-range REAL3™ 3D ToF sensor

To reach a distance of 10 m, the modulation sequence of the emitted light has to be changed significantly (Fig. 4).

Figure 4: The distance can be precisely measured using two modulation frequencies [3]

The modulation frequency of the ToF camera's projected IR light determines the depth quality and accuracy. The higher the modulation frequency, the better the depth quality, but at the same time the unambiguity range (UR) decreases: for a single modulation frequency f_mod, the phase measurement wraps around at a distance of c / (2·f_mod). Thus, high depth accuracy requires a high modulation frequency, while a long unambiguity range requires a low one. To solve this problem, REAL3™ 3D ToF sensors use two different modulation frequencies: a distance measurement is performed for each frequency, and the two measurements are then compared to determine the correct position of the object. In this way, the measurement range can be extended beyond the UR of each single modulation frequency. All these steps are handled by the sophisticated camera driver software from pmdtechnologies.
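The dual-frequency comparison can be sketched as follows. This is a simplified brute-force illustration of the principle, not pmdtechnologies' driver algorithm; the frequencies, tolerance, and function names are example values:

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguity_range(f_mod_hz):
    # Distance at which a single-frequency phase measurement wraps around.
    return C / (2.0 * f_mod_hz)

def disambiguate(d1, d2, f1, f2, d_max=10.0, tol=0.05):
    """Combine two wrapped distance readings (d1 measured at f1, d2 at f2)
    by searching for the pair of wrap counts whose unwrapped distances
    agree within `tol` metres, up to a maximum range d_max."""
    ur1, ur2 = unambiguity_range(f1), unambiguity_range(f2)
    best, best_err = None, float("inf")
    for n1 in range(int(d_max / ur1) + 1):
        cand1 = d1 + n1 * ur1          # candidate true distance from f1
        for n2 in range(int(d_max / ur2) + 1):
            cand2 = d2 + n2 * ur2      # candidate true distance from f2
            err = abs(cand1 - cand2)
            if err < best_err:
                best_err, best = err, 0.5 * (cand1 + cand2)
    return best if best_err < tol else None
```

For example, a true distance of 7 m measured at 80 MHz (UR ≈ 1.87 m) and 60 MHz (UR ≈ 2.50 m) wraps to about 1.38 m and 2.00 m respectively; only the unwrapped candidates near 7 m agree, so the combined measurement range extends well beyond either single-frequency UR.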

The latest generation of 3D imagers and associated VCSEL drivers from Infineon allows REAL3™ imagers to use significantly higher modulation-frequency pairs than previous 3D cameras. In this way, depth accuracy increases while, at the same time, the environment can be scanned in high quality at a range of up to 10 m. At distances of up to 5 m, even high-resolution images with 40 k depth points are possible (Fig. 5).

 

Figure 5: The new long-range REAL3™ image sensor offers flexible and superior resolution for improved AR/MR experiences over extended distance ranges.

In contrast, other solutions available on the market offer a resolution of only approx. 600 depth points at this distance, which means that small structures cannot be resolved. With this range and resolution, Infineon's sensor family is suitable for the entire spectrum of AR/MR applications, whether AR/MR games, 3D scanning of rooms and objects, or 3D imaging for furniture planning and other design applications.

In addition, "always-on" applications such as mobile AR games benefit in particular from the new sensor's low energy consumption: the 40 percent reduction in power consumption translates into significantly longer battery running times for mobile devices.

Accurate and robust depth data under all ambient light conditions

The REAL3™ image sensors have another advantage: they provide accurate and robust depth data under all lighting conditions. This means that high-quality depth data can be generated even in difficult ambient light, a factor that is becoming increasingly important in modern photography. For this purpose, pmdtechnologies has developed a patented circuit, integrated in every pixel, called Suppression of Background Illumination (SBI) [2]. SBI expands the dynamic range of the sensor chip and reduces pixel saturation caused by ambient light, e.g. direct sunlight.

Low-light conditions are among the most challenging environments for a 2D RGB camera and prevent high-quality night-mode portraits and fast autofocus. Figure 6 shows an example of a ToF-assisted night-mode portrait with a beautifully blurred background, alongside the same image without this effect. The secret behind the ToF-assisted image is the new 3D sensor's ability to create a night-vision image of the scene, shown as the gray-value image.

Figure 6: High resolution, ToF-assisted portraits with perfect bokeh effects in low-light situations

The gray-value image is available in addition to the depth data. Here, too, resolution is key: the new 3D sensor provides a clear night-vision image, while other commercial solutions in the same application category deliver only several hundred pixels. The gray-value image and the depth map of the captured portrait are shown in the center of Figure 6. The left image shows a 2D image without ToF assistance; the right image shows the result of using the gray-value image and depth map to create a beautifully blurred background. ToF assistance also enables very efficient image processing, so the beautification effects can be applied in real time to video streams.

In a suitably designed camera system, the sensor chip thus ensures reliable operation under all ambient light conditions, whether in strong sunlight, in darkness, inside buildings or vehicles, or outdoors. The 3D image sensors can also be dynamically configured via the I²C interface: during operation, the frame rate can be adjusted just as easily as the exposure time or predefined operating modes, allowing the functionality to be optimized for the lighting and operating environment. In addition, intelligent power management ensures low power consumption.
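The kind of runtime reconfiguration described above can be sketched at the host-software level as follows. This is a purely illustrative sketch, not the REAL3™ register map or pmdtechnologies' API; the mode names, lux thresholds, and settings are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    frame_rate_hz: int
    exposure_us: int

# Hypothetical operating-mode presets; a real device exposes such
# presets through its driver software, not these exact values.
MODES = {
    "bright_outdoor": CameraConfig(frame_rate_hz=30, exposure_us=250),
    "indoor":         CameraConfig(frame_rate_hz=30, exposure_us=1000),
    "low_light":      CameraConfig(frame_rate_hz=15, exposure_us=4000),
}

def select_mode(ambient_lux):
    """Pick a preset from a coarse ambient-light estimate: short
    exposures in bright light to limit saturation, longer exposures
    (and a lower frame rate) in the dark to gather more signal."""
    if ambient_lux > 10_000:
        return MODES["bright_outdoor"]
    if ambient_lux > 100:
        return MODES["indoor"]
    return MODES["low_light"]
```

In a real system, the chosen preset would then be written to the sensor over I²C by the camera driver; the point here is only that the trade-off between frame rate, exposure time, and ambient light can be renegotiated at runtime.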

Furthermore, with REAL3™ image sensors, ToF cameras can be reliably calibrated within seconds at the end of production. With an overall size of only 4.4 x 4.8 mm², the new long-range REAL3™ image sensor is well suited to compact 3D camera module designs where board space is limited. Thanks to the highly integrated CMOS image sensor and the Infineon VCSEL driver, fewer components are needed than in other commercially available 3D sensors, which also significantly reduces the cost of the 3D camera system. Imager features such as coded modulation and increased configuration flexibility also improve performance and robustness in various applications and multi-camera scenarios.

The combination of performance, low power consumption, functionality, size, and low cost makes ToF 3D sensors particularly suitable for reliable 3D camera applications in mobile devices. The advanced configurability of the new REAL3™ 3D ToF sensor enables differentiated camera performance to support different object distance ranges, ambient light conditions, and use cases. Depending on requirements, the new sensor enables techniques such as real-time augmented reality (capturing the scene in 3D in a flash), long-range scanning, reconstruction of small objects, fast autofocus with low power consumption, and image segmentation. This allows effects such as background blurring in videos and images to be realized easily, independently of lighting conditions and without heavy post-processing.

As a semiconductor manufacturer, Infineon currently offers its REAL3™ portfolio of image sensors for mobile and other consumer applications as bare dies, while system design with its wide range of requirements is covered by design partners.
pmdtechnologies provides the camera driver software for the full range of REAL3™ 3D ToF imagers, enabling easy configuration of the imagers via a comprehensive and versatile API for the above use cases and beyond. Volume delivery of the new sensor chip will start in the second quarter of 2021. Demo kits are already available.

Summary

The new sensor generation, developed by Infineon in cooperation with pmdtechnologies, is suitable for applications such as real-time mixed reality, enhanced photography, 3D object scanning, and room reconstruction. Effects such as background blur in videos and images of moving scenes are easily achieved regardless of ambient light conditions. In addition, high-quality 3D depth-data acquisition is enabled at distances of up to 10 m, without losing resolution at shorter distances. Always-on applications such as mobile mixed-reality games benefit from the new sensor's low power consumption, offering users significantly longer runtimes. In applications such as 3D scanning for room and object reconstruction, or 3D mapping for furniture, architectural, and industrial interior design, the sensor doubles the measurement range compared with other commercially available sensor solutions.

 

References

[1] Strategy Analytics experts, Oct. 2020.
[2] Frey J., Kref H., Möller T., Riedel H., Xu Z. (2005), European Patent No. EP1585234B1 (pmdtechnologies ag)
[3] Acknowledgements to Rothermel T. (pmdtechnologies), Bell W. (Infineon Technologies), Tschinder R. (Infineon Technologies)
