Patent classifications
G06T7/514
Optical Surface Encoder
An apparatus to generate data relating to a specular surface of an object, the apparatus including a screen that is movable relative to the object, a graphic rendered on the screen, wherein the graphic is notionally divided into contiguous segments such that the graphic content in each segment allows that segment to be distinguished from a plurality of other segments, at least one camera for capturing successive frames of the object, illuminated by said graphic, during said relative movement of the screen, and a computing device that accepts data from pixels of the captured frames.
Estimating photometric properties using a camera and a display screen
A system determines what regions in a scene captured by a camera include highly reflective (specular) surfaces. The system illuminates the scene by outputting images to a display screen such as a television screen so as to illuminate the scene from different angles. Images of the scene are captured by the camera under the different illuminations. The images are analyzed to determine which regions in the scene exhibit changes in reflected luminance that correspond to the changes in illumination, indicating a specular surface. Regions containing diffuse surfaces are also identified based on such regions exhibiting a reflected luminance that is substantially independent of the changes in illumination.
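The region classification described above can be sketched as a per-pixel test on how strongly reflected luminance varies across the differently illuminated captures. The threshold, the normalization, and the function names below are illustrative assumptions, not the patent's specified procedure:

```python
from statistics import mean, pstdev

def classify_pixel(samples, spec_thresh=0.2):
    """Classify one pixel as 'specular' or 'diffuse' from its luminance
    samples captured under different screen illuminations.

    Specular surfaces mirror the changing illumination, so their reflected
    luminance varies strongly from frame to frame; diffuse surfaces
    integrate light over many directions and stay nearly constant.
    """
    m = mean(samples)
    # Normalized variation: std deviation relative to mean luminance,
    # with a small epsilon to avoid division by zero in dark regions.
    variation = pstdev(samples) / (m + 1e-6)
    return "specular" if variation > spec_thresh else "diffuse"
```

A pixel whose luminance swings with the display content (e.g. 1.0, 0.1, 1.0) is labeled specular, while a pixel that stays flat is labeled diffuse, matching the independence criterion in the abstract.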
SURFACE TEXTURING FROM MULTIPLE CAMERAS
System and method for texturing a 3D surface using 2D images sourced from a plurality of imaging devices. System and method for applying a realistic texture to a model, based on texture found in one or more two-dimensional (2D) images of the object, with the texture covering the entire 3D model even if there are portions of the object that were invisible in the 2D image. System and method which does not require machine learning, is capable of blending between images, and is capable of filling in portions of a 3D model that are invisible in the 2D image.
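The blending capability described above can be illustrated by weighting each camera's contribution to a mesh face by how directly that camera views it; faces seen by no camera are flagged for filling from visible neighbours. The cosine weighting and all names below are illustrative assumptions, not the patent's claimed algorithm:

```python
def blend_face_color(face_normal, camera_views):
    """Blend a per-face texture color from multiple cameras, weighting
    each camera by how directly it views the face.

    camera_views: list of (view_dir, color) pairs, where view_dir is a
    unit vector from the face toward the camera and color is (r, g, b).
    Cameras behind the face (non-positive dot product) get zero weight.
    Returns the blended color, or None if no camera sees the face, in
    which case a caller would fill the face from visible neighbours.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    weights, colors = [], []
    for view_dir, color in camera_views:
        w = max(0.0, dot(face_normal, view_dir))  # cosine of viewing angle
        if w > 0.0:
            weights.append(w)
            colors.append(color)
    if not weights:
        return None  # face invisible in every image
    total = sum(weights)
    return tuple(sum(w * c[i] for w, c in zip(weights, colors)) / total
                 for i in range(3))
```

A face viewed head-on by one camera and from behind by another takes its color entirely from the first; a face invisible to all cameras returns None and would be filled from neighbouring faces.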
Estimation of absolute depth from polarization measurements
A head mounted display comprises an eye tracking system configured to enable eye tracking using polarization. The eye tracking system includes one or more illumination sources and an optical detector comprising polarization sensitive pixels. The one or more illumination sources are configured to illuminate a user's eye and generate reflections directed towards the optical detector. The eye tracking system determines, for each polarization sensitive pixel in a subset of the polarization sensitive pixels, one or more estimation parameters. The eye tracking system determines, for the subset of the polarization sensitive pixels, depth information for one or more glints associated with one or more surfaces of the eye, based in part on the polarization of the reflections and the one or more estimation parameters. The determined depth information is used to update a model of the eye. The eye tracking system determines eye tracking information based on the updated model.
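A common first step with polarization-sensitive pixels of the kind described above is recovering linear-polarization parameters from a block of 0°/45°/90°/135° filtered intensities via the Stokes parameters. The sketch below shows that standard estimation; it is illustrative and is not the patent's specific depth-estimation method:

```python
import math

def polarization_parameters(i0, i45, i90, i135):
    """Estimate per-pixel polarization parameters from a 2x2 block of
    polarization-sensitive pixels with 0/45/90/135-degree filters.

    Returns (intensity, degree_of_linear_polarization, angle_radians),
    the kind of per-pixel estimation parameters from which depth of a
    glint could subsequently be inferred.
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (Stokes S0)
    s1 = i0 - i90                        # horizontal vs. vertical (S1)
    s2 = i45 - i135                      # +45 vs. -45 diagonal (S2)
    dolp = math.hypot(s1, s2) / (s0 + 1e-9)  # degree of linear polarization
    aolp = 0.5 * math.atan2(s2, s1)          # angle of linear polarization
    return s0, dolp, aolp
```

For fully horizontally polarized light (i0 = 1, i90 = 0, diagonals at 0.5) the estimate yields a degree of polarization near 1 and an angle of 0.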
Method Of Detecting And Describing Features From An Intensity Image
The invention provides methods of detecting and describing features from an intensity image. In one of several aspects, the method comprises the steps of providing an intensity image captured by a capturing device, providing a method for determining a depth of at least one element in the intensity image, in a feature detection process detecting at least one feature in the intensity image, wherein the feature detection is performed by processing image intensity information of the intensity image at a scale which depends on the depth of at least one element in the intensity image, and providing a feature descriptor of the at least one detected feature. For example, the feature descriptor contains at least one first parameter based on information provided by the intensity image and at least one second parameter which is indicative of the scale.
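The depth-dependent scale selection described above can be sketched with the pinhole projection relation s = f · S / d, which ties the image-space detection scale to a fixed physical size on the object regardless of its distance. The parameter values and names are illustrative assumptions, not the patent's claimed formulation:

```python
def detection_scale(depth, physical_size=0.05, focal_length_px=800.0):
    """Choose an image-space scale (in pixels) for feature detection so
    that it corresponds to a fixed physical size on the object.

    depth: distance to the image element in meters (from any depth source).
    physical_size: desired support region on the object, in meters.
    focal_length_px: camera focal length expressed in pixels.
    """
    if depth <= 0:
        raise ValueError("depth must be positive")
    # Pinhole projection: an object of size S at depth d spans
    # f * S / d pixels, so nearby elements are processed at larger scales.
    return focal_length_px * physical_size / depth
```

Doubling the depth halves the detection scale, so a feature descriptor built at this scale covers the same physical extent of the object at any distance, which is the invariance the abstract describes.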
System and method for compensation of reflection on a display device
A system and method for compensating for reflections caused by light-generating objects in the scene facing a display device includes capturing images of the scene. Reflection-inducing zones corresponding to the light-generating objects are identified from the captured images. The reflection effect on the display device from the reflection-inducing zones is estimated. A target image to be displayed on the display device is adjusted based on the estimated reflection effect.
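The adjustment step described above can be sketched as a per-pixel subtraction of the estimated reflection from the target image, clamped to the displayable range so the displayed image plus the reflection approximates the intended appearance. This is an illustrative scheme; the patent does not specify this exact compensation:

```python
def compensate(target, reflection, max_level=255):
    """Adjust target pixel intensities by subtracting the estimated
    reflection contribution, clamped to the displayable range.

    target, reflection: sequences of pixel intensities (0..max_level).
    Returns the adjusted intensities to send to the display.
    """
    # Where the reflection exceeds what can be subtracted (pixel would go
    # negative), the compensation saturates at zero and the reflection
    # cannot be fully cancelled.
    return [max(0, min(max_level, t - r)) for t, r in zip(target, reflection)]
```

For example, a pixel at 100 with an estimated reflection of 30 is driven at 70, while a dark pixel at 10 under a reflection of 20 clamps to 0.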