Patent classification: H04N2213/003
Vehicle, vehicle positioning system, and vehicle positioning method
A vehicle, a vehicle positioning system, and a vehicle positioning method are provided. The vehicle positioning system includes a 2D image sensor, a 3D sensor, and a processor. The 2D image sensor is configured to obtain 2D image data. The 3D sensor is configured to obtain 3D point cloud data. The processor is coupled to the 2D image sensor and the 3D sensor, and is configured to merge the 2D image data and the 3D point cloud data into 3D image data, identify at least one static object in the 2D image data, obtain, for each identified static object, the corresponding 3D point cloud data from the 3D image data, and calculate a relative coordinate of the vehicle from that point cloud data.
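As a toy illustration of the final step above (computing a vehicle-relative coordinate from a static object's point cloud), one simple approach is to place the object at the centroid of its matched points; the centroid heuristic and all names below are assumptions for illustration, not the patent's exact computation:

```python
def vehicle_relative_coordinate(points):
    """points: 3D point-cloud samples [(x, y, z), ...] matched to one
    static object, expressed in the vehicle's sensor frame.
    Illustrative estimate: locate the object at the centroid of its
    points; the vehicle's coordinate relative to the object is then
    the negated centroid."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return (-cx, -cy, -cz)
```

For example, points clustered 2 m ahead and 1 m above the sensor yield a vehicle position of roughly (-2, 0, -1) relative to the object.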
EFFICIENT MULTI-VIEW CODING USING DEPTH-MAP ESTIMATE AND UPDATE
This disclosure is directed to coding a multi-view signal, which includes processing a list of a plurality of motion vector candidates associated with a coding block of a current picture in a dependent view of the multi-view signal. The processing includes estimating a first motion vector from a second motion vector associated with a reference block in the current picture of a reference view of the multi-view signal, the reference block corresponding to the coding block in the dependent view. The first motion vector is added to the list, and an index specifies at least one candidate from the list to be used for motion-compensated prediction. The coding block in the current picture is coded by performing motion-compensated prediction based on the at least one candidate indicated by the index.
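The candidate-list handling described above can be sketched as follows; the function names, the tuple representation of motion vectors, and the use of spatial candidates as the pre-existing list entries are assumptions for illustration, not the codec's normative list construction:

```python
def build_candidate_list(spatial_candidates, inter_view_mv):
    """Build the motion-vector candidate list for a coding block in the
    dependent view. `inter_view_mv` plays the role of the first motion
    vector, estimated from the motion vector of the corresponding
    reference block in the reference view; it is appended to the list
    of ordinary (e.g. spatially derived) candidates."""
    candidates = list(spatial_candidates)
    candidates.append(inter_view_mv)
    return candidates

def select_candidate(candidates, index):
    """The coded index picks which candidate is used for
    motion-compensated prediction."""
    return candidates[index]
```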
CMOS image sensor for RGB imaging and depth measurement with laser sheet scan
An imaging unit includes a light source and a pixel array. The light source projects a line of light that is scanned in a first direction across the field of view of the light source; the line of light is oriented in a second direction that is substantially perpendicular to the first direction. The pixel array is arranged in at least one row of pixels extending in a direction substantially parallel to the second direction. At least one pixel in a row is capable of generating two-dimensional color information of an object in the field of view based on a first light reflected from the object, and is capable of generating three-dimensional (3D) depth information of the object based on the line of light reflecting off the object. The 3D depth information includes time-of-flight information.
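The time-of-flight relation underlying the 3D depth information reduces to a one-line computation; this is the standard ToF formula only, with the scan geometry omitted, and the names are illustrative:

```python
def tof_depth_m(round_trip_time_s, c_m_per_s=299_792_458.0):
    """Depth from a time-of-flight measurement: the reflected line of
    light travels out to the object and back, so the one-way distance
    is c * t / 2."""
    return c_m_per_s * round_trip_time_s / 2.0
```

For example, a 10 ns round trip corresponds to roughly 1.5 m of depth.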
Systems and methods for using depth information to extrapolate two-dimensional images
The disclosed computer-implemented method may include (1) receiving a first 2D frame depicting an evolving 3D scene and elements in the evolving 3D scene, (2) receiving a second 2D frame depicting the evolving 3D scene and the elements, (3) deriving 2D motion vectors from the first 2D frame and the second 2D frame that each include an estimated offset from coordinates of an element in the first 2D frame to coordinates of the element in the second 2D frame, (4) receiving depth information for the evolving 3D scene, (5) using the 2D motion vectors and the depth information to extrapolate a synthetic 2D frame, and (6) displaying the synthetic 2D frame to a user. Various other methods, systems, and computer-readable media are also disclosed.
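Steps (3) through (5) above can be sketched as a toy extrapolation in which each element is advanced along its 2D motion vector and depth resolves visibility when elements collide; the element representation and the "nearer element wins" compositing rule are assumptions for illustration, not the disclosed rendering pipeline:

```python
def extrapolate_positions(elements):
    """elements: list of dicts with 'id', 'pos' (x, y), 'mv' (dx, dy),
    and 'depth'. Each element is extrapolated one frame forward along
    its 2D motion vector; when two elements land on the same pixel,
    the depth information decides which is visible (smaller depth =
    nearer = drawn on top)."""
    out = {}
    for e in sorted(elements, key=lambda e: -e['depth']):  # far elements first
        x, y = e['pos']
        dx, dy = e['mv']
        out[(x + dx, y + dy)] = e['id']  # nearer elements overwrite farther ones
    return out
```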
MULTIVIEW IMAGE CREATION SYSTEM AND METHOD
A multiview image creation system and method create a multiview image from a single view image. Creating a multiview image includes importing a single view image and assigning the single view image to a first multiview image layer of a composite multiview image. Creating a multiview image further includes replicating the single view image into a plurality of view images of the first multiview image layer, converting a depth setting of the first multiview image layer into a plurality of shift values of corresponding view images based on an ordered number of the view images, and shifting the view images of the first multiview image layer according to the corresponding shift values. A plurality of multiview image layers, including the first, may be automatically rendered in a predefined sequence as the composite multiview image on a multiview display.
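The depth-to-shift conversion can be sketched with a linear mapping from each view's ordered position to a horizontal shift; the symmetric-about-center scaling below is an assumed, common choice, not necessarily the patented mapping:

```python
def view_shift_values(depth_setting, num_views):
    """Convert one layer's depth setting into per-view horizontal
    shifts. Views are numbered 0..num_views-1; a view's shift scales
    linearly with its offset from the central view, so the layer
    appears displaced in depth across the multiview display."""
    center = (num_views - 1) / 2.0
    return [depth_setting * (i - center) for i in range(num_views)]
```

A layer with depth setting 2.0 on a four-view display would shift its view images by -3, -1, +1, and +3 pixels, respectively.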
CMOS IMAGE SENSOR FOR 2D IMAGING AND DEPTH MEASUREMENT WITH AMBIENT LIGHT REJECTION
Using the same image sensor to capture both a two-dimensional (2D) image of a three-dimensional (3D) object and 3D depth measurements for the object. A laser point-scans the surface of the object with light spots, which are detected by a pixel array in the image sensor to generate the 3D depth profile of the object using triangulation. Each row of pixels in the pixel array forms an epipolar line of the corresponding laser scan line. Timestamping provides a correspondence between the pixel location of a captured light spot and the respective scan angle of the laser to remove any ambiguity in triangulation. An Analog-to-Digital Converter (ADC) in the image sensor generates a multi-bit output in the 2D mode and a binary output in the 3D mode to generate timestamps. Strong ambient light is rejected by switching the image sensor to a 3D logarithmic mode from a 3D linear mode.
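The triangulation at the heart of the 3D mode can be sketched with the textbook two-angle formula; the timestamp's role is to pair the pixel location (which gives the camera's viewing angle) with the laser's scan angle. This is the generic formula, not taken verbatim from the patent, and the parameter names are illustrative:

```python
import math

def triangulate_depth(baseline_m, cam_angle_rad, laser_angle_rad):
    """Two-angle triangulation: the camera and the laser sit
    `baseline_m` apart; each observes/projects the light spot at the
    given angle measured from the baseline. The intersection of the
    two rays gives the spot's depth perpendicular to the baseline."""
    ta = math.tan(cam_angle_rad)
    tb = math.tan(laser_angle_rad)
    return baseline_m * ta * tb / (ta + tb)
```

With both angles at 45 degrees and a 1 m baseline, the rays meet at a depth of 0.5 m.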
Fusing measured multifocal depth data with object data
A depth sensor receives depth data indicative of a distance from the depth sensor to a three-dimensional spatial zone. The depth data is based on an in-focus status of a projection of the three-dimensional spatial zone onto a multi-pixel sensing zone of an imaging sensor. The three-dimensional spatial zone is one of at least two distinct three-dimensional spatial zones. The multi-pixel sensing zone is one of at least two distinct multi-pixel sensing zones of the imaging sensor. Object data of an object residing in at least the three-dimensional spatial zone is received. Fused data is generated, comprising the depth data and the object data.
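The fusion step can be sketched as a join between per-zone depth records and object records that reside in the same spatial zone; the zone identifiers and record shapes below are assumptions for illustration:

```python
def fuse(depth_records, object_records):
    """Join per-zone depth data with object data residing in the same
    three-dimensional spatial zone. Each fused record carries both the
    object data and the measured depth for that zone."""
    depth_by_zone = {d['zone']: d['depth_m'] for d in depth_records}
    return [{'object': o['label'],
             'zone': o['zone'],
             'depth_m': depth_by_zone.get(o['zone'])}
            for o in object_records]
```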
Image processing device configured to generate depth map and method of operating the same
Provided is an image processing device. The device includes an active-pixel sensor array including a plurality of pixels configured to generate a plurality of signals corresponding to a target, and an image processor configured to generate a depth map of the target based on an intensity difference between two of the signals.
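One way two-signal intensity comparison can yield depth is gated indirect time-of-flight, where the split of returned light between two shutter windows encodes the delay; this is an assumed interpretation of the abstract's "intensity difference of two signals", and the formula is the standard gated-ToF relation, not necessarily the patented one:

```python
def depth_from_two_signals(q1, q2, pulse_width_s, c_m_per_s=299_792_458.0):
    """Gated indirect time-of-flight sketch: two consecutive shutter
    windows of width `pulse_width_s` capture charges q1 and q2 from the
    reflected pulse. The fraction of light falling into the second
    window grows with the round-trip delay, so
    depth = (c * T / 2) * q2 / (q1 + q2)."""
    return (c_m_per_s * pulse_width_s / 2.0) * q2 / (q1 + q2)
```

Equal charges in both windows with a 20 ns pulse place the target at about 1.5 m.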
Image-Processing Method and Electronic Device
An image-processing method is provided for an electronic device that includes a visible-light camera and a depth camera, where the field of view (FOV) of the visible-light camera partially overlaps the FOV of the depth camera, so the visible-light image includes an overlapping region that overlaps the depth image and a non-overlapping region that does not. The method includes determining whether an object appears in both the overlapping region and the non-overlapping region of the visible-light image simultaneously; if so, obtaining first depth information from the depth image, the first depth information indicating the depth of a first part of the object (the part in the overlapping region); deriving second depth information from the first depth information, the second depth information indicating the depth of a second part of the object (the part in the non-overlapping region); and obtaining synthesized depth information of the object from the first depth information and the second depth information.
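A minimal sketch of the synthesis step, assuming the simplest derivation strategy (the part of the object outside the depth camera's FOV inherits the mean depth of the part inside it, since one object tends to have continuous depth); the function name, record shapes, and the mean-depth rule are all illustrative assumptions:

```python
def synthesize_object_depth(overlap_depths, nonoverlap_pixel_count):
    """overlap_depths: per-pixel depths of the object's first part,
    read directly from the depth image (the first depth information).
    The second depth information is derived from the first (here, its
    mean) and assigned to the object's second part, which lies outside
    the depth camera's FOV. The synthesized depth covers both parts."""
    first = list(overlap_depths)
    derived = sum(first) / len(first)
    second = [derived] * nonoverlap_pixel_count
    return first + second
```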