H04N2213/005

Hybrid depth sensing pipeline
10497140 · 2019-12-03 ·

An apparatus for hybrid tracking and mapping is described herein. The apparatus includes logic to determine a plurality of depth sensing techniques. The apparatus also includes logic to vary the plurality of depth sensing techniques based on a camera configuration. Additionally, the apparatus includes logic to generate a hybrid tracking and mapping pipeline based on the depth sensing techniques and the camera configuration.

OBJECT IDENTIFICATION AND MATERIAL ASSESSMENT USING OPTICAL PROFILES
20190364262 · 2019-11-28 ·

An image processing system with a camera system and a processing system analyzes a series of images of a scene to assess an optical property of an object in the scene. A surface of the object is differentiable from other objects and has a common point that can be identified and analyzed in the series of images, which are captured from different distances of the camera system to the object and at different angular orientations of the camera system relative to the object. A set of characteristic values of the image pixels corresponding to the common point is determined, including a point intensity value, a distance from the camera system to the common point, a normal vector for the common point, and an angular orientation between an optical path of the image pixel and the normal vector. The set of characteristic values is used to create an optical profile of the common point, which is compared to a set of predefined characteristic profiles to identify the object. In embodiments, the camera system is an active camera, and the illumination wavelength includes near infrared.
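The characteristic values named above can be computed with standard vector geometry. The sketch below is a minimal illustration, not the patent's method; the function name, inputs, and the use of the angle between the viewing ray and the surface normal are assumptions for clarity.

```python
import numpy as np

def characterize_point(intensity, camera_pos, point_pos, normal):
    """Illustrative computation of one common point's characteristic values:
    point intensity, camera-to-point distance, and the angle between the
    pixel's optical path and the surface normal (names are hypothetical)."""
    ray = np.asarray(point_pos, float) - np.asarray(camera_pos, float)
    distance = float(np.linalg.norm(ray))       # camera-to-point distance
    ray_unit = ray / distance                   # unit vector along the optical path
    n_unit = np.asarray(normal, float) / np.linalg.norm(normal)
    # angle between the reversed viewing ray and the normal, in degrees
    cos_a = np.clip(np.dot(-ray_unit, n_unit), -1.0, 1.0)
    angle = float(np.degrees(np.arccos(cos_a)))
    return {"intensity": intensity, "distance": distance, "angle_deg": angle}
```

Collecting these values across images taken at several distances and orientations yields the per-point optical profile that is matched against the predefined characteristic profiles.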

Image Processing Method and Apparatus
20190335155 · 2019-10-31 ·

An image processing method includes obtaining multiple video frames, where the multiple video frames are collected from the same scene at different angles, and determining a depth map of each video frame according to corresponding pixels among the multiple video frames; and supplementing background missing regions of the multiple video frames according to the depth maps of the multiple video frames, to obtain supplemented video frames of the multiple video frames and depth maps of the multiple supplemented video frames. The method also includes generating an alpha image of each video frame according to an occlusion relationship, in a background missing region, between each of the multiple video frames and the supplemented video frame of that video frame, and generating a browsing frame at a specified browsing angle according to the multiple video frames, the supplemented video frames of the multiple video frames, and the alpha images of the multiple video frames.
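The final step blends a frame with its background-supplemented counterpart using the per-pixel alpha image. A minimal per-pixel compositing sketch, assuming standard alpha blending stands in for the browsing-frame generation (function and argument names are illustrative):

```python
import numpy as np

def composite_browsing_frame(frame, supplemented, alpha):
    """Blend an original frame (H, W, 3) with its supplemented frame using a
    per-pixel alpha image (H, W); a simplified stand-in for the abstract's
    browsing-frame step, not the patented procedure."""
    a = alpha[..., None]                  # broadcast alpha over color channels
    return a * frame + (1.0 - a) * supplemented
```

Where alpha is 1 the original frame dominates; where it falls toward 0 (occluded or background-missing regions), the supplemented frame shows through.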

METHOD AND CIRCUIT OF ASSIGNING SELECTED DEPTH VALUES TO RGB SUBPIXELS AND RECOVERING SELECTED DEPTH VALUES FROM RGB SUBPIXELS FOR COLORED DEPTH FRAME PACKING AND DEPACKING
20190320202 · 2019-10-17 ·

A method comprises: obtaining two depth values from each of a first pixel depth value and a fourth pixel depth value, and obtaining one depth value from each of a second pixel depth value and a third pixel depth value; and assigning the two depth values obtained from the first pixel depth value to the R-subpixel and B-subpixel values of the first pixel, assigning the depth value obtained from the second pixel depth value to the R-subpixel, G-subpixel and B-subpixel values of the second pixel, assigning the depth value obtained from the third pixel depth value to the R-subpixel, G-subpixel and B-subpixel values of the third pixel, and assigning the two depth values obtained from the fourth pixel depth value to the G-subpixel value of the first pixel and the R-subpixel, G-subpixel and B-subpixel values of the fourth pixel.
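The assignment above can be sketched as a pack/unpack round trip over a four-pixel block. The byte layout below is an assumption for illustration: the first and fourth pixel depths are treated as 16-bit values split into byte pairs, and the choice of which of the fourth pixel's two bytes lands in the first pixel's G-subpixel is not specified by the abstract.

```python
def pack_block(d1, d2, d3, d4):
    """Pack four pixel depth values into RGB subpixels of four pixels.
    d1 and d4 are 16-bit depths split into two bytes each; d2 and d3 are
    single-byte depths (this bit layout is an illustrative assumption)."""
    d1a, d1b = d1 >> 8, d1 & 0xFF
    d4a, d4b = d4 >> 8, d4 & 0xFF
    p1 = (d1a, d4a, d1b)   # R and B carry pixel 1's depth; G carries one byte of pixel 4's
    p2 = (d2, d2, d2)      # pixel 2's single depth replicated across R, G, B
    p3 = (d3, d3, d3)      # pixel 3's single depth replicated across R, G, B
    p4 = (d4b, d4b, d4b)   # pixel 4's remaining byte replicated across R, G, B
    return p1, p2, p3, p4

def unpack_block(p1, p2, p3, p4):
    """Recover the four original depth values from the packed subpixels."""
    d1 = (p1[0] << 8) | p1[2]
    d4 = (p1[1] << 8) | p4[0]
    return d1, p2[0], p3[0], d4
```

Replicating the single-byte depths across all three subpixels gives a grayscale appearance and some robustness to chroma subsampling, which is a plausible motivation for this kind of colored depth frame packing.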

VEHICLE HEADLAMP LIGHTING CONTROL MODULE FOR ADAS AND AUTONOMOUS VEHICLES
20240147062 · 2024-05-02 ·

Vehicle headlamps are typically used to illuminate scenes for better vision. The same headlamps can be used as part of an active camera at discrete times during the camera's imaging periods. In embodiments, a lighting control module includes an electronic control module configured to control on/off states of the headlamps during a set of non-imaging periods of the camera system and a camera control module configured to control multiple on/off states of the set of headlamps during a set of imaging periods of the camera system. Embodiments of lighting control modules can be used in Advanced Driver Assistance Systems (ADAS) and autonomous control systems for commercial and passenger vehicles.
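The split between steady illumination outside the exposure window and camera-synchronized switching inside it can be sketched as a toy schedule. All timings and the alternating pulse pattern below are assumptions for illustration, not the patent's control scheme:

```python
def headlamp_on(t_ms, frame_period_ms=33, exposure_ms=5):
    """Toy headlamp schedule: steady ON during the non-imaging portion of each
    frame period (electronic control module), pulsed under camera control
    during the imaging (exposure) window. Timings are hypothetical."""
    phase = t_ms % frame_period_ms
    if phase >= exposure_ms:       # non-imaging period: illuminate the scene
        return True
    return phase % 2 == 0          # imaging period: example camera-driven pulsing
```

In a real system the imaging-period pattern would be driven by the camera's exposure control rather than a fixed modulus, so that lit and unlit captures can be interleaved.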

PRECISION REFLECTIVITY AND AMBIENT LIGHT REMOVAL FOR A GEIGER MODE/SINGLE PHOTON ACTIVE SENSOR SYSTEM
20240171857 · 2024-05-23 ·

Geiger mode photodiodes are solid state photodetectors that are able to detect single photons. Such Geiger mode photodiodes are also referred to as single-photon detectors (SPDs). An array of SPDs can be used as a single detector element in an active sensing system, but sensor systems based on SPD arrays have at least two shortcomings due to ambient light. First, solar background light can hamper the ability to accurately determine depth. Second, ambient light impacts the reflectivity precision because of challenges differentiating between reflected light and ambient light. Embodiments enable active sensors that remove the ambient signal from a sensor's optical input. Other embodiments produce sensor intensity values that have higher precision than typical SPD array devices. Further embodiments produce sensor depth values that have higher precision than typical SPD array devices.
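A common simplified model of ambient removal is to capture one frame with the emitter off and subtract its photon counts from an emitter-on capture. The sketch below illustrates that idea only; it is not the patent's method and ignores SPD dead-time and pile-up nonlinearity:

```python
def remove_ambient(active_counts, ambient_counts):
    """Estimate reflected-signal photon counts per detector by subtracting an
    ambient-only capture (emitter off) from an active capture (emitter on),
    clamped at zero. A simplified illustration of ambient-signal removal."""
    return [max(a - b, 0) for a, b in zip(active_counts, ambient_counts)]
```

The clamp at zero reflects that photon counts cannot be negative; shot noise in both captures is what ultimately limits the reflectivity precision this subtraction can achieve.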

Methods and apparatus for a lighting-invariant image sensor for automated object detection and vision systems

An active camera system images successive frames of a scene utilizing an array of detectors configured to produce a response having a linear relationship to a number of incident photons. Multi-frame capture is utilized with differentiated frame illumination for successive frames. Frame processing allows the camera to produce, at the pixel level, different image intensities representing maximum signal intensity, ambient signal intensity, and object signal intensity. The linearized response of detectors in the lighting-invariant image sensor is used to establish non-attenuated signal strength that enables shadow removal and glare removal for one or more of the image signals. Embodiments of a lighting-invariant image sensor may be used in Autonomous and Semi-Autonomous Vehicle Control/Assist Systems.
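Because the detector response is linear in photon count, a frame captured with the source on and a frame with the source off can be subtracted per pixel. The sketch below shows that decomposition under those assumptions; names and the two-frame scheme are illustrative, not the sensor's actual frame sequence:

```python
import numpy as np

def decompose_frames(lit_frame, dark_frame):
    """From two successive frames with differentiated illumination (source on
    vs. source off), recover per-pixel maximum, ambient, and object signal
    intensities. Valid only because the detector response is linear."""
    maximum = lit_frame.astype(np.int32)        # source + ambient
    ambient = dark_frame.astype(np.int32)       # ambient only
    obj = np.clip(maximum - ambient, 0, None)   # signal due to the source alone
    return maximum, ambient, obj
```

Since shadows and glare live almost entirely in the ambient component, the object-signal image is largely free of them, which is the basis for the shadow and glare removal claimed above.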

Image capture device and image capture method

Provided is an image capture device capable of performing multiple image captures using multiple image capture units and of measuring the distance between each of the image capture units and a target more accurately. An image capture device according to the present invention is an image capture device with multiple image capture units. The image capture device comprises: one light emission unit for distance measurement that emits a reference beam; and the multiple image capture units, which capture images of a reflected beam of the reference beam with a common image capture timing.
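With a shared emission time and a common capture timing across units, each unit's distance follows from the round-trip delay of the reference beam. A minimal time-of-flight conversion, assuming the delay has already been measured (the abstract does not specify the measurement method):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_delay(delay_s):
    """Convert the round-trip delay of the reflected reference beam into a
    unit-to-target distance: half the round-trip path length."""
    return C * delay_s / 2.0
```

A 20 ns round trip, for example, corresponds to roughly 3 m; the common timing across units is what lets one emission serve every capture unit's measurement.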

Image encoding and display

Aspects of the technology encompass image encoding, which includes generating image content according to a viewpoint defined by image viewpoint data. Successive output images are generated such that each output image includes image content generated according to one or more viewpoints, and metadata associated with each output image is encoded which indicates each viewpoint relating to image content contained in that output image and which defines which portions of that output image were generated according to each of those viewpoints. An image display method for generating successive display images from successive input images is also provided, which includes re-projecting portions of each input image to form a respective display image according to any differences between a desired display viewpoint and the particular viewpoint defined for that portion by the metadata associated with the respective input image, the metadata indicating the viewpoints relating to image content contained in the respective input images.
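The per-portion metadata and the display-side decision can be sketched with a small record type. The structure below is an illustrative assumption; the abstract does not define the metadata encoding or the viewpoint parameterization:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class PortionMeta:
    """Hypothetical per-portion metadata: where the portion sits in the
    output image, and the viewpoint it was rendered from."""
    region: Tuple[int, int, int, int]   # (x, y, w, h) within the output image
    viewpoint: Tuple[float, float]      # e.g. (yaw, pitch) in degrees

def needs_reprojection(meta, display_viewpoint, tol=1e-3):
    """Decide per portion whether re-projection is required, based on the
    difference between the desired display viewpoint and the portion's
    encoded viewpoint (the tolerance is illustrative)."""
    return any(abs(a - b) > tol
               for a, b in zip(meta.viewpoint, display_viewpoint))
```

On display, only portions whose encoded viewpoint differs from the desired viewpoint are re-projected; portions already rendered from the matching viewpoint pass through unchanged.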