Patent classifications
G06T7/579
Photometric-based 3D object modeling
Aspects of the present disclosure involve a system and a method for performing operations comprising: accessing a source image depicting a target structure; accessing one or more target images depicting at least a portion of the target structure; computing correspondence between a first set of pixels in the source image of a first portion of the target structure and a second set of pixels in the one or more target images of the first portion of the target structure, the correspondence being computed as a function of camera parameters that vary between the source image and the one or more target images; and generating a three-dimensional (3D) model of the target structure from the correspondence between the first set of pixels in the source image and the second set of pixels in the one or more target images, using a joint optimization of the target structure and the camera parameters.
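The photometric correspondence described above can be sketched as follows. This is a minimal illustration, not the patent's actual algorithm: a source pixel is back-projected with a hypothetical depth, transformed by a hypothetical relative pose `(R, t)`, reprojected with intrinsics `K`, and the intensity difference forms the residual a joint optimizer would minimize over structure (depth) and camera parameters.

```python
import numpy as np

def project(K, R, t, p_src, depth):
    """Map a source pixel to the target image via depth and relative pose."""
    K_inv = np.linalg.inv(K)
    X = depth * (K_inv @ np.array([p_src[0], p_src[1], 1.0]))  # back-project
    X_t = R @ X + t                                            # transform to target frame
    uvw = K @ X_t
    return uvw[:2] / uvw[2]                                    # reproject

def photometric_residual(I_src, I_tgt, K, R, t, p_src, depth):
    """Intensity difference between a source pixel and its correspondence,
    the quantity a joint structure/camera optimization would drive to zero."""
    u, v = project(K, R, t, p_src, depth)
    ui, vi = int(round(u)), int(round(v))
    return float(I_src[p_src[1], p_src[0]]) - float(I_tgt[vi, ui])
```

With the true depth and pose, the residual vanishes; the joint optimization adjusts both until the residuals over all pixel pairs are minimized.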
Virtual content positioned based on detected object
Various implementations disclosed herein include devices, systems, and methods that use an object as a background for virtual content. Some implementations involve obtaining an image of a physical environment. A location of a surface of an object is detected based on the image. A virtual content location to display virtual content is determined, where the virtual content location corresponds to the location of the surface of the object. Then, a view of the physical environment and virtual content displayed at the virtual content location is provided.
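The placement step described above can be sketched as follows. This is a minimal illustration with hypothetical inputs: given a detected object surface (a point on the surface plus its outward normal, as might come from plane detection on the image), the virtual content location is the surface location pushed slightly out along the normal, so the object serves as a background for the content.

```python
import numpy as np

def virtual_content_location(surface_point, surface_normal, offset=0.01):
    """Return a world-space position for the virtual content: the detected
    surface location offset along the (normalized) surface normal."""
    n = np.asarray(surface_normal, dtype=float)
    n = n / np.linalg.norm(n)                      # unit outward normal
    return np.asarray(surface_point, dtype=float) + offset * n
```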
SYSTEM AND METHOD FOR USING IMAGE DATA TO TRIGGER CONTACTLESS CARD TRANSACTIONS
A method for controlling a near field communication between a device and a transaction card is disclosed. The method includes capturing, by a front-facing camera of the device, a series of images of the transaction card and processing each image of the series to identify a darkness level associated with the distance of the transaction card from the front of the device. Each identified darkness level is compared to a predetermined darkness level associated with a preferred distance for a near field communication read operation. When the identified darkness level corresponds to the predetermined darkness level, a near field communication read operation between the device and the transaction card is automatically triggered to communicate a cryptogram from an applet of the transaction card to the device.
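The trigger logic can be sketched as follows (function names and threshold values are hypothetical, not from the patent): as the card approaches the front-facing camera it blocks light, so frame darkness rises; the read is triggered once darkness matches the predetermined level associated with the preferred read distance.

```python
def darkness_level(frame):
    """Mean darkness of a grayscale frame with pixel values in 0..255
    (0.0 = fully bright, 1.0 = fully dark)."""
    flat = [p for row in frame for p in row]
    return 1.0 - (sum(flat) / len(flat)) / 255.0

def should_trigger_read(frame, preferred_darkness=0.8, tolerance=0.05):
    """Trigger the NFC read when the frame's darkness corresponds to the
    predetermined level (within a tolerance) for the preferred distance."""
    return abs(darkness_level(frame) - preferred_darkness) <= tolerance
```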
VIRTUAL AND AUGMENTED REALITY SYSTEMS AND METHODS
A virtual or augmented reality display system can include a first sensor to provide measurements of a user's head pose over time and a processor to estimate the user's head pose based on at least one head pose measurement and based on at least one calculated predicted head pose. The processor can combine the head pose measurement and the predicted head pose using one or more gain factors. The one or more gain factors may vary based upon the user's head pose position within a physiological range of movement.
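The blending described above can be sketched as a gain-weighted combination of measurement and prediction. This is a minimal illustration (the gain schedule and the physiological range are placeholders, not the patent's values): here the gain shifts weight toward the sensor measurement as the head pose nears the limits of its physiological range of movement.

```python
def blend_head_pose(measured, predicted, pose, range_min=-90.0, range_max=90.0):
    """Combine a head pose measurement with a predicted head pose using a
    gain that grows as the pose (degrees) nears the edge of its range."""
    center = 0.5 * (range_min + range_max)
    half = 0.5 * (range_max - range_min)
    nearness = min(abs(pose - center) / half, 1.0)   # 0 at center, 1 at limit
    gain = 0.5 + 0.5 * nearness                      # weight on the measurement
    return gain * measured + (1.0 - gain) * predicted
```

At mid-range the estimate averages measurement and prediction; at the range limit, where prediction models tend to be least reliable, it trusts the measurement entirely.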
TIME-OF-FLIGHT IMAGING CIRCUITRY, TIME-OF-FLIGHT IMAGING SYSTEM, TIME-OF-FLIGHT IMAGING METHOD
The present disclosure generally pertains to time-of-flight imaging circuitry configured to: obtain first image data from an image sensor, the first image data being indicative of a scene illuminated with spotted light; determine a first image feature in the first image data; obtain second image data from the image sensor, the second image data being indicative of the scene; determine a second image feature in the second image data; estimate a motion of the second image feature with respect to the first image feature; and merge the first and second image data based on the estimated motion.
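The feature-motion-merge pipeline can be sketched as follows. The real circuitry is hardware; this minimal software illustration just shows the data flow under simplifying assumptions: the "image feature" is the brightest spot of the spotted illumination, the motion is a pure translation, and the merge is an average of the aligned frames.

```python
import numpy as np

def spot_feature(img):
    """Pixel coordinates (row, col) of the brightest spot in the frame."""
    return np.unravel_index(np.argmax(img), img.shape)

def merge_frames(first, second):
    """Estimate the spot's motion between frames, shift the second frame
    back by that motion, and merge by averaging."""
    f1 = spot_feature(first)
    f2 = spot_feature(second)
    dy, dx = f1[0] - f2[0], f1[1] - f2[1]           # estimated motion
    aligned = np.roll(second, shift=(dy, dx), axis=(0, 1))
    return 0.5 * (first + aligned)                  # merge aligned frames
```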
NON-RIGID STEREO VISION CAMERA SYSTEM
A long-baseline and long depth-range stereo vision system is provided that is suitable for use in non-rigid assemblies where relative motion between two or more cameras of the system does not degrade estimates of a depth map. The stereo vision system may include a processor that tracks camera parameters as a function of time to rectify images from the cameras even during fast and slow perturbations to camera positions. Neither factory calibration nor manual calibration during regular operation is needed, which simplifies manufacturing of the system.
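Tracking camera parameters as a function of time can be sketched as follows. This is a minimal illustration, not the system's actual method: the relative orientation is reduced to a single hypothetical yaw angle, each frame yields a noisy per-frame estimate (e.g. from feature correspondences), and an exponential smoother folds it into a running value used to rectify, so the system follows both slow drift and faster perturbations without a factory calibration step.

```python
import numpy as np

class ExtrinsicsTracker:
    """Running estimate of the relative camera yaw (radians), updated
    from noisy per-frame estimates by exponential smoothing."""

    def __init__(self, smoothing=0.2):
        self.smoothing = smoothing   # weight on each new per-frame estimate
        self.yaw = None              # current relative yaw estimate

    def update(self, frame_yaw_estimate):
        """Fold one noisy per-frame estimate into the running value."""
        if self.yaw is None:
            self.yaw = frame_yaw_estimate
        else:
            a = self.smoothing
            self.yaw = (1 - a) * self.yaw + a * frame_yaw_estimate
        return self.yaw

    def rectifying_rotation(self):
        """Rotation that undoes the tracked relative yaw before matching."""
        c, s = np.cos(-self.yaw), np.sin(-self.yaw)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
```

The smoothing weight trades responsiveness to fast perturbations against noise rejection; a full system would track the complete rotation and translation rather than a single angle.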