Patent classification: H04N13/271
MARKER-BASED GUIDED AR EXPERIENCE
Systems, devices, media, and methods are presented for producing an augmented reality (AR) experience for display on a smart eyewear device. The AR production system includes a marker registration utility for setting and storing markers, a localization utility for locating the eyewear device relative to a marker location and to the mapped environment, and a virtual object rendering utility for presenting one or more virtual objects having a desired size, shape, and orientation. A high-definition camera captures an input image of the environment. If the input image includes a marker, the system retrieves from memory a set of data including a first marker location expressed in terms relative to a marker coordinate system. The localization utility determines a local position of the eyewear device relative to the marker location. The virtual object rendering utility prepares one or more virtual objects for display based on the eyewear location, the head pose of the wearer, and the location of one or more physical object landmarks in the environment.
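A minimal sketch of the localization step this abstract describes, assuming the marker's pose in the world (map) frame is already stored and the eyewear camera has estimated the marker's pose in its own frame (e.g., from fiducial detection). All function names and the example matrices are hypothetical, not from the patent.

```python
import numpy as np

def invert_pose(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 rigid transform."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

def localize_device(T_world_marker: np.ndarray, T_camera_marker: np.ndarray) -> np.ndarray:
    """Device (camera) pose in the world frame: T_world_camera."""
    return T_world_marker @ invert_pose(T_camera_marker)

def place_virtual_object(T_world_camera: np.ndarray, p_world: np.ndarray) -> np.ndarray:
    """Express a virtual object's world-frame anchor point in the camera frame for rendering."""
    p_h = np.append(p_world, 1.0)
    return (invert_pose(T_world_camera) @ p_h)[:3]

# Example: marker 2 m in front of the world origin; camera sees it 1 m straight ahead.
T_world_marker = np.eye(4); T_world_marker[2, 3] = 2.0
T_camera_marker = np.eye(4); T_camera_marker[2, 3] = 1.0
T_world_camera = localize_device(T_world_marker, T_camera_marker)
print(place_virtual_object(T_world_camera, np.array([0.0, 0.0, 2.0])))  # -> [0, 0, 1]
```

Chaining the stored marker-to-world transform with the observed camera-to-marker transform is what lets virtual content stay anchored to physical landmarks as the wearer moves.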
DETECTING OBJECT SURFACES IN EXTENDED REALITY ENVIRONMENTS
Techniques and systems are provided for detecting object surfaces in extended reality environments. In some examples, a system obtains image data associated with a portion of a scene within a field of view (FOV) of a device. The portion of the scene includes at least one object. The system determines, based on the image data, a depth map of the portion of the scene. The system also determines, using the depth map, one or more planes within the portion of the scene. The system then generates, using the one or more planes, at least one planar region with boundaries corresponding to boundaries of a surface of the at least one object. The system also generates a three-dimensional representation of the portion of the scene using the at least one planar region and updates a three-dimensional representation of the scene using the three-dimensional representation of the portion of the scene.
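A minimal sketch of plane detection from a depth map via RANSAC, one common way to realize the surface detection the abstract describes. The intrinsics, thresholds, and synthetic scene are assumptions for illustration, not values from the patent.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project a depth map (meters) into an N x 3 point cloud."""
    v, u = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    return np.column_stack([x, y, z])

def ransac_plane(points: np.ndarray, iters: int = 200, tol: float = 0.01, seed: int = 0):
    """Fit one dominant plane; return (normal, d, inlier mask) for n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Example: a flat tabletop at z = 1 m fills the synthetic depth map.
depth = np.full((48, 64), 1.0)
pts = depth_to_points(depth, fx=60.0, fy=60.0, cx=32.0, cy=24.0)
n, d, inliers = ransac_plane(pts)
print(n, d, inliers.mean())  # normal ~ +/-[0, 0, 1], d ~ -/+1, all pixels inliers
```

The inlier mask plays the role of the planar region: its image-space extent gives the boundaries that get merged into the scene's 3D representation.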
Reconstructing A Three-Dimensional Scene
In one embodiment, a method includes identifying, in each image of a stereoscopic pair of images of a scene at a particular time, every pixel as either a static pixel, corresponding to a portion of the scene that has no local motion at that time, or a dynamic pixel, corresponding to a portion of the scene that has local motion at that time. For each static pixel, the method includes comparing each of a plurality of depth calculations for the pixel and, when the depth calculations differ by at least a threshold amount, re-labeling that pixel as a dynamic pixel. For each dynamic pixel, the method includes comparing a geometric 3D calculation for the pixel with a temporal 3D calculation for that pixel and, when the two calculations are within a threshold amount of each other, re-labeling the pixel as a static pixel.
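A minimal sketch of the re-labeling step, assuming the per-pixel depth estimates are already available: two independent depth calculations for pixels labeled static, and geometric vs. temporal 3D estimates for pixels labeled dynamic. The array names and threshold are illustrative, not from the patent.

```python
import numpy as np

def relabel(static_mask, depth_a, depth_b, depth_geom, depth_temp, thresh=0.05):
    """Return an updated static-pixel mask (True = static)."""
    updated = static_mask.copy()
    # Static pixels whose independent depth calculations disagree become dynamic.
    disagree = np.abs(depth_a - depth_b) >= thresh
    updated[static_mask & disagree] = False
    # Dynamic pixels whose geometric and temporal 3D estimates agree become static.
    agree = np.abs(depth_geom - depth_temp) < thresh
    updated[~static_mask & agree] = True
    return updated

# Example on a 2 x 2 image: pixel (0,0) is static but inconsistent; (1,1) is dynamic but consistent.
static = np.array([[True, True], [False, False]])
a = np.array([[1.0, 1.0], [2.0, 2.0]]); b = np.array([[1.2, 1.0], [2.0, 2.0]])
g = np.array([[1.0, 1.0], [2.0, 2.0]]); t = np.array([[1.0, 1.0], [2.5, 2.01]])
print(relabel(static, a, b, g, t))  # [[False  True] [False  True]]
```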
Non-uniform spatial resource allocation for depth mapping
A method for depth mapping includes projecting a pattern of optical radiation into a volume of interest containing an object while varying an aspect of the pattern over the volume of interest. The optical radiation reflected from the object responsively to the pattern is sensed, and a depth map of the object is generated based on the sensed optical radiation.
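A minimal sketch of the "varying an aspect of the pattern" idea: a projected dot pattern whose density is higher near the center of the volume of interest than at the periphery, so spatial resolution is spent where it matters. The radial density profile is an assumption for illustration, not the patent's specific pattern.

```python
import numpy as np

def nonuniform_dot_pattern(n_dots=2000, seed=0):
    """Sample projector-plane dot positions in [-1, 1]^2, denser near the center."""
    rng = np.random.default_rng(seed)
    dots = []
    # Rejection-sample with acceptance probability falling off with radius.
    while len(dots) < n_dots:
        xy = rng.uniform(-1.0, 1.0, size=2)
        accept_p = 1.0 / (1.0 + 3.0 * np.hypot(*xy))  # high near center -> dense dots
        if rng.uniform() < accept_p:
            dots.append(xy)
    return np.array(dots)

dots = nonuniform_dot_pattern()
center = np.hypot(dots[:, 0], dots[:, 1]) < 0.5
print(f"{center.mean():.0%} of dots fall within radius 0.5 (about 20% of the area)")
```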
DEPTH CAMERA ASSEMBLY, DEVICE FOR COLLECTING DEPTH IMAGE AND MULTI-SENSOR FUSION SYSTEM
A depth camera assembly is provided. The depth camera assembly includes: a depth camera, configured to generate a trigger signal, in which the trigger signal is configured to instruct the depth camera to perform a first exposure operation to obtain first image information; a red-green-blue (RGB) camera, communicatively connected to the depth camera to receive the trigger signal, in which the trigger signal is configured to instruct the RGB camera to perform a second exposure operation to obtain second image information; and a processor, communicatively connected respectively to the depth camera and the RGB camera to receive the trigger signal, the first image information, and the second image information, and configured to record a timestamp for the first image information and the second image information based on the local time of receiving the trigger signal.
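A minimal sketch of the trigger/timestamp flow described above: the depth camera issues a trigger, both cameras expose, and the processor stamps the resulting frame pair with its local time of trigger receipt. The class and method names are hypothetical.

```python
import time
from dataclasses import dataclass, field

@dataclass
class FramePair:
    trigger_id: int
    timestamp: float            # processor-local time of trigger receipt
    depth_image: bytes = b""    # first image information (depth camera)
    rgb_image: bytes = b""      # second image information (RGB camera)

@dataclass
class FusionProcessor:
    pairs: dict = field(default_factory=dict)

    def on_trigger(self, trigger_id: int) -> None:
        # Record the local receive time so both images share one timestamp.
        self.pairs[trigger_id] = FramePair(trigger_id, time.monotonic())

    def on_depth_frame(self, trigger_id: int, data: bytes) -> None:
        self.pairs[trigger_id].depth_image = data

    def on_rgb_frame(self, trigger_id: int, data: bytes) -> None:
        self.pairs[trigger_id].rgb_image = data

proc = FusionProcessor()
proc.on_trigger(0)                   # trigger fans out to both cameras
proc.on_depth_frame(0, b"\x01" * 4)  # first exposure result arrives
proc.on_rgb_frame(0, b"\x02" * 4)    # second exposure result arrives
print(proc.pairs[0].timestamp > 0)
```

Stamping both images with the single trigger-receipt time is what keeps the depth and RGB streams aligned for downstream fusion, even if the frames themselves arrive at different times.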
MULTI-SENSOR FUSION SYSTEM AND AUTONOMOUS MOBILE APPARATUS
The present disclosure relates to a multi-sensor fusion system and an autonomous mobile apparatus. The multi-sensor fusion system includes: a trigger module including a pulse signal output, the pulse signal output being used to output a pulse signal; and a plurality of depth camera modules, at least one depth camera module including a trigger signal generation module and a trigger signal output, the trigger signal generation module being used to generate a trigger signal according to the pulse signal, and the trigger signal output being connected to the trigger signal generation module, and used to output the trigger signal, where the trigger signal is used for triggering the at least one depth camera module to perform an exposure operation, and other depth camera modules perform exposure operations according to the received trigger signal output by the trigger signal output.
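A minimal sketch of the fan-out this abstract describes: a trigger module emits a periodic pulse, one depth camera module derives a trigger signal from it, and the remaining modules expose when they receive that derived signal. The rate division and module count are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class DepthCameraModule:
    name: str
    exposures: list = field(default_factory=list)

    def on_trigger(self, tick: int) -> None:
        self.exposures.append(tick)  # perform an exposure operation

@dataclass
class PrimaryModule(DepthCameraModule):
    subscribers: list = field(default_factory=list)
    divisor: int = 2  # generate one trigger per N input pulses (assumed)

    def on_pulse(self, tick: int) -> None:
        # Trigger signal generation module: derive a trigger from the pulse.
        if tick % self.divisor == 0:
            self.on_trigger(tick)            # expose itself
            for module in self.subscribers:  # trigger signal output to the others
                module.on_trigger(tick)

primary = PrimaryModule("primary")
others = [DepthCameraModule(f"module{i}") for i in (1, 2)]
primary.subscribers.extend(others)
for tick in range(6):  # pulse signal output, one pulse per tick
    primary.on_pulse(tick)
print(primary.exposures, others[0].exposures)  # [0, 2, 4] [0, 2, 4]
```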
Apparatus and methods for three-dimensional sensing
A three-dimensional (3D) sensing apparatus together with a projector subassembly is provided. The 3D sensing apparatus includes two cameras, which may be configured to capture ultraviolet and/or near-infrared light. The 3D sensing apparatus may also contain an optical filter and one or more computing processors that signal a simultaneous capture by the two cameras and process the captured images into depth. The projector subassembly of the 3D sensing apparatus includes a laser diode, one or more optical elements, and a photodiode that are usable to enable 3D capture.
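A minimal sketch of turning a simultaneous two-camera capture into depth via stereo triangulation (Z = f * B / d). The focal length, baseline, and disparity values are illustrative; the patent does not specify the processing beyond converting the captured images into depth.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Depth in meters from a disparity map in pixels; zero disparity -> infinity."""
    with np.errstate(divide="ignore"):
        return np.where(disparity_px > 0, focal_px * baseline_m / disparity_px, np.inf)

# Example: 700 px focal length, 5 cm baseline (plausible for a compact module).
disparity = np.array([[35.0, 70.0], [7.0, 0.0]])
print(disparity_to_depth(disparity, focal_px=700.0, baseline_m=0.05))
# [[1.  0.5]
#  [5.  inf]]
```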
Monocular cued detection of three-dimensional structures from depth images
Detection of three-dimensional obstacles is performed using a system, mountable in a host vehicle, that includes a camera connectible to a processor. Multiple image frames are captured in the field of view of the camera. In the image frames, an imaged feature of an object in the environment of the vehicle is detected. The image frames are portioned locally around the imaged feature to produce image portions that include the imaged feature. The image portions are processed to compute a depth map locally around the detected imaged feature. Responsive to the depth map, it is determined whether the object is an obstacle to the motion of the vehicle.
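A minimal sketch of the final decision step described above: given a local depth patch computed around a detected feature, flag the object as an obstacle if enough of it stands above the road plane. The camera height, height threshold, and patch values are illustrative assumptions, not the patent's criteria.

```python
import numpy as np

def is_obstacle(depth_patch, rows, fy, cy, cam_height_m=1.2, min_height_m=0.3, frac=0.25):
    """True if >= frac of patch pixels sit at least min_height_m above the road."""
    # Back-project each pixel's vertical offset (image y points down,
    # road plane sits cam_height_m below the camera).
    y_cam = (rows - cy) * depth_patch / fy   # meters below the optical axis
    height_above_road = cam_height_m - y_cam # 0 at road level
    return np.mean(height_above_road >= min_height_m) >= frac

# Example: a 4 x 4 patch 10 m ahead whose pixels image points ~0.55 m above the road.
fy, cy = 800.0, 240.0
rows = np.arange(290, 294).reshape(-1, 1) * np.ones((1, 4))
depth = np.full((4, 4), 10.0)
print(is_obstacle(depth, rows, fy, cy))  # True: the structure rises above the road plane
```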