G06T7/557

Vehicle positioning method and system based on laser device

The present application discloses a positioning method for a movable platform, including: detecting, by a laser positioning system (LPS) mounted on the movable platform, a plurality of reflectors mounted on a target object, wherein the movable platform is moving; calculating in real time, by the LPS, according to the current position information, relative positions of the plurality of reflectors with respect to the LPS; and obtaining, by the LPS, a relative position of the movable platform with respect to the target object based on the relative positions of the plurality of reflectors with respect to the LPS. The present application also discloses a positioning system that performs the positioning method.
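The abstract does not specify how the reflector positions are turned into a platform pose. A common way to do this, sketched below as an illustrative assumption rather than the patent's actual algorithm, is a least-squares rigid alignment (Kabsch) between the reflectors' known layout in the target object's frame and their measured positions in the LPS frame:

```python
import numpy as np

def relative_pose(reflectors_target, reflectors_lps):
    """Least-squares rigid alignment (Kabsch) of two 2D point sets.

    Returns (R, t) such that lps_point ~= R @ target_point + t.
    """
    P = np.asarray(reflectors_target, float)   # Nx2, target frame
    Q = np.asarray(reflectors_lps, float)      # Nx2, LPS frame
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so R is a proper rotation.
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Example: the target appears rotated 90 degrees and shifted as seen from the LPS.
theta = np.pi / 2
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([5.0, 2.0])
reflectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # non-collinear
measured = reflectors @ R_true.T + t_true
R, t = relative_pose(reflectors, measured)
```

Inverting the recovered transform gives the platform's position and heading relative to the target object.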

Determining the relative position between a point cloud generating camera and another camera

A method for determining the relative position between a first camera and a second camera used in a medical application. The first camera captures a 2D image of a phantom; the second camera emits light onto the phantom and analyzes the reflected light, thus generating a 3D point cloud representing points on the surface of the phantom. The phantom has a planar surface forming a background on which a plurality of 2D markers are formed, wherein one of the background and the 2D markers is reflective, thus reflecting light emitted by the second camera back to the second camera, and the other one is non-reflective, thus not reflecting that light back. The method involves that:
a) the first camera captures a 2D image of the phantom,
b) the second camera generates a 3D point cloud representing the planar surface of the phantom,
c) the 2D markers are identified in the 2D image, thus obtaining 2D marker data representing the locations of the 2D markers in the 2D image,
d) the 2D markers are identified in the 3D point cloud using the property that points on a non-reflective part of the planar surface are identified as having a larger distance to the second camera than points on a reflective part of the planar surface, thus obtaining 3D marker data representing the locations of the 2D markers in a reference system of the second camera, and
e) the relative position between the first camera and the second camera is found by applying a Perspective-n-Point algorithm to the 2D marker data and the 3D marker data.
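Step d) rests on a simple geometric property: non-reflective regions of the planar phantom are measured as farther away than reflective ones, so a distance threshold separates marker points from background points. A minimal sketch of that segmentation, with an illustrative threshold and toy data that are assumptions rather than values from the source:

```python
import numpy as np

def marker_points(points, threshold):
    """Return the points closer to the camera than `threshold` (reflective part)."""
    points = np.asarray(points, float)          # Nx3, camera reference system
    dist = np.linalg.norm(points, axis=1)       # radial distance of each point
    return points[dist < threshold]

# Toy cloud: reflective marker points near 1.0 m, non-reflective background near 1.3 m.
rng = np.random.default_rng(0)
marker = np.array([0.0, 0.0, 1.0]) + 0.01 * rng.standard_normal((50, 3))
background = np.array([0.0, 0.0, 1.3]) + 0.01 * rng.standard_normal((200, 3))
cloud = np.vstack([marker, background])
sel = marker_points(cloud, threshold=1.15)
```

The resulting 2D–3D marker correspondences would then be passed to a Perspective-n-Point solver (for example OpenCV's `cv2.solvePnP`) to recover the pose of the first camera relative to the second, as in step e).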

Quotidian scene reconstruction engine

A stored volumetric scene model of a real scene is generated from data defining digital images of a light field in a real scene containing different types of media. The digital images have been formed by a camera from opposingly directed poses, and each digital image contains image data elements defined by stored data representing light field flux received by light sensing detectors in the camera. The digital images are processed by a scene reconstruction engine to form a digital volumetric scene model representing the real scene. The volumetric scene model (i) contains volumetric data elements defined by stored data representing one or more media characteristics and (ii) contains solid angle data elements defined by stored data representing the flux of the light field. Adjacent volumetric data elements form corridors; at least one of the volumetric data elements in at least one corridor represents media that is partially light transmissive. The constructed digital volumetric scene model data is stored in a digital data memory for subsequent uses and applications.
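The data layout described, volumetric elements carrying media characteristics plus solid-angle elements carrying directional flux, can be sketched as follows. The class and field names are illustrative assumptions, not the patent's terminology:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SolidAngleElement:
    direction: np.ndarray   # unit vector identifying the solid-angle bin
    flux: float             # light-field flux sampled in that direction

@dataclass
class VolumetricElement:
    transmissivity: float                       # media characteristic; 1.0 = fully transparent
    angles: list = field(default_factory=list)  # SolidAngleElement samples for this voxel

def corridor_transmittance(corridor):
    """Fraction of light passing through an ordered run of adjacent voxels."""
    out = 1.0
    for voxel in corridor:
        out *= voxel.transmissivity
    return out

# A corridor in which one element represents partially transmissive media (e.g. glass).
corridor = [VolumetricElement(1.0), VolumetricElement(0.5), VolumetricElement(1.0)]
passed = corridor_transmittance(corridor)
```

Storing transmissivity per volumetric element is what lets a corridor model partially light-transmissive media rather than only opaque surfaces.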

Method and apparatus for imaging circadiometer
11503195 · 2022-11-15

A system and method for an imaging circadiometer that measures the spatial distribution of eye-mediated, non-image-forming optical radiation within the visible spectrum.
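One plausible way to map eye-mediated, non-image-forming radiation spatially is to weight each band of a multispectral image by an action spectrum (such as the melanopic sensitivity curve) and sum per pixel. This is a hedged sketch of that weighting; the band values and weights below are made-up placeholders, not calibrated quantities from the source:

```python
import numpy as np

def circadian_map(bands, weights, delta_lambda):
    """Per-pixel weighted spectral sum: sum_k E_k(x, y) * s_k * d_lambda."""
    bands = np.asarray(bands, float)      # (K, H, W) per-band irradiance images
    weights = np.asarray(weights, float)  # (K,) action-spectrum samples
    # Contract the spectral axis, then scale by the band width.
    return np.tensordot(weights, bands, axes=1) * delta_lambda

# Two pixels, three spectral bands; weights peak in the middle band.
bands = np.array([[[1.0, 0.0]], [[2.0, 1.0]], [[0.0, 4.0]]])  # shape (3, 1, 2)
weights = np.array([0.1, 1.0, 0.2])
cmap = circadian_map(bands, weights, delta_lambda=10.0)
```

The output is a spatial map in which each pixel holds the circadian-effective radiation rather than a photometric (image-forming) quantity.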

MULTI-APERTURE RANGING DEVICES AND METHODS

Embodiments of systems and methods for multi-aperture ranging are disclosed. An embodiment of an image processing system includes at least one processor and memory configured to receive a multi-aperture image set that includes a high-resolution subaperture image and a low-resolution subaperture image, wherein the high-resolution subaperture image and the low-resolution subaperture image were captured simultaneously from a camera using dissimilar focal lengths, predict a high-resolution predicted disparity map from the high-resolution subaperture image using a neural network, predict a low-resolution predicted disparity map from the low-resolution subaperture image using the neural network, and generate an integrated range map from the high-resolution and low-resolution predicted disparity maps, wherein the integrated range map includes an array of range information that corresponds to the multi-aperture image set and that is generated by overlaying common points in both the high-resolution predicted disparity map and the low-resolution predicted disparity map.
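The final merge step can be illustrated with a small sketch: upsample the low-resolution predicted disparity map to the high-resolution grid, then combine the two maps at points where both carry predictions. The averaging rule, the integer scale factor, and the convention that 0 marks an invalid prediction are assumptions for illustration; the abstract does not specify the fusion rule:

```python
import numpy as np

def merge_disparities(d_hi, d_lo, scale):
    """Overlay a low-res disparity map onto a high-res one (0 = no prediction)."""
    d_hi = np.asarray(d_hi, float)
    # Nearest-neighbour upsample by `scale` per axis; disparities are scaled
    # by the same factor to match the high-res pixel pitch.
    d_lo_up = np.kron(np.asarray(d_lo, float) * scale, np.ones((scale, scale)))
    both = (d_hi > 0) & (d_lo_up > 0)
    merged = np.where(both, 0.5 * (d_hi + d_lo_up),
                      np.where(d_hi > 0, d_hi, d_lo_up))
    return merged

d_hi = np.array([[4.0, 0.0], [0.0, 8.0]])   # high-res predicted disparity
d_lo = np.array([[3.0]])                    # low-res predicted disparity (1x1)
merged = merge_disparities(d_hi, d_lo, scale=2)
```

The merged disparities would then be converted to metric range using the cameras' calibration (baseline and focal lengths) to yield the integrated range map.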