Patent classifications
G06T2207/10012
Systems and methods for providing mixed-reality experiences under low light conditions
Systems and methods are provided for facilitating computer vision tasks (e.g., simultaneous localization and mapping) and pass-through imaging. A head-mounted display (HMD) includes a first set of one or more cameras configured for performing computer vision tasks and a second set of one or more cameras configured for capturing image data of an environment for projection to a user of the HMD. The first set of one or more cameras is configured to detect at least visible-spectrum light and at least a particular band of wavelengths of infrared (IR) light. The second set of one or more cameras includes one or more detachable IR filters configured to attenuate IR light, including at least a portion of the particular band of wavelengths of IR light.
UNDERWATER ORGANISM IMAGING AID SYSTEM, UNDERWATER ORGANISM IMAGING AID METHOD, AND STORAGE MEDIUM
An underwater organism imaging aid system according to an aspect of the present disclosure includes at least one memory configured to store instructions and at least one processor configured to execute the instructions to: detect an underwater organism from an image acquired by a camera, determine a positional relationship between the detected underwater organism and the camera, and output, based on the positional relationship, auxiliary information for moving the camera in such a way that a side face of the underwater organism and the imaging face of the camera face each other.
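The core geometric step, judging how far the current viewpoint is from a broadside view, can be sketched as follows. This is a minimal illustration, not the patented method: it assumes the system can extract head and tail keypoints for the organism, and it works in a hypothetical top-down 2D world frame.

```python
import math

def side_view_error_deg(fish_head, fish_tail, camera_pos):
    """Degrees by which the viewpoint deviates from a broadside view.
    Returns 0 when the side face of the organism and the imaging face
    of the camera face each other. Points are (x, y) in a hypothetical
    top-down world frame."""
    # Heading of the organism, from tail to head
    heading = math.atan2(fish_head[1] - fish_tail[1],
                         fish_head[0] - fish_tail[0])
    # Viewing direction from the camera to the organism's centre
    centre = ((fish_head[0] + fish_tail[0]) / 2,
              (fish_head[1] + fish_tail[1]) / 2)
    view = math.atan2(centre[1] - camera_pos[1],
                      centre[0] - camera_pos[0])
    # Signed angle between heading and viewing direction, wrapped to +/-180
    off = math.degrees((heading - view + math.pi) % (2 * math.pi) - math.pi)
    # A side view means heading and viewing direction are perpendicular
    return abs(abs(off) - 90.0)
```

The auxiliary information output to the operator could then be as simple as "orbit until this error reaches zero".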
Arrangement for producing head related transfer function filters
When three-dimensional audio is produced over headphones, HRTF filters are used to modify the sound for the left and right headphone channels. As the morphology of every ear is different, it is beneficial to have HRTF filters designed specifically for the user of the headphones. Such filters may be produced by deriving ear geometry from a plurality of images taken with an ordinary camera, detecting the necessary features in the images, and fitting those features to a model built from accurately scanned ears that comprises representative values for different ear sizes and shapes. The captured images are sent to a server (52), which performs the necessary computations and either forwards the resulting data or produces the requested filter.
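How such a personalised filter is ultimately used can be sketched in a few lines: a mono source is convolved with the left-ear and right-ear head-related impulse responses (the time-domain form of the HRTF). The toy impulse responses below are stand-ins, not measured data.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal to two headphone channels by convolving it
    with a personalised head-related impulse response pair (the
    time-domain equivalent of an HRTF filter pair)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs standing in for personalised filters: the right ear hears the
# source later and quieter, as if it were located to the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.5])
out = binauralize(np.array([1.0, -1.0]), hrir_l, hrir_r)
```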
Determining Spatial Relationship Between Upper and Lower Teeth
A computer-implemented method includes receiving a 3D model of upper teeth (U1) of a patient (P) and a 3D model of lower teeth (L1) of the patient (P), and receiving a plurality of 2D images, each image representative of at least a portion of the upper teeth (U1) and lower teeth (L1) of the patient (P). The method also includes determining, based on the 2D images, a spatial relationship between the upper teeth (U1) and lower teeth (L1) of the patient (P).
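One plausible way to determine such a spatial relationship from 2D images is to score candidate rigid transforms of the lower-teeth model by reprojection error against landmarks detected in the images. The sketch below is an assumption about the general technique, not the patented method; the pinhole projection and landmark correspondences are hypothetical.

```python
import numpy as np

def project(points3d, focal=1.0):
    """Pinhole projection of Nx3 camera-frame points to Nx2 image coords."""
    return focal * points3d[:, :2] / points3d[:, 2:3]

def pose_score(lower_pts, rotation, translation, observed_2d):
    """Mean 2D reprojection error of lower-teeth landmarks under a
    candidate rigid transform; minimising this over (rotation,
    translation) yields the upper/lower spatial relationship."""
    moved = lower_pts @ rotation.T + translation
    return float(np.mean(np.linalg.norm(project(moved) - observed_2d, axis=1)))
```

An optimiser would sweep (rotation, translation) candidates and keep the one with the lowest score, ideally accumulated over all the 2D images.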
METHOD FOR AUTOMATICALLY RECONSTITUTING THE REINFORCING ARCHITECTURE OF A COMPOSITE MATERIAL
A method for automatically reconstituting the architecture, along a reinforcing axis, of the reinforcement of a composite material includes: acquiring images of the reinforcement of the composite material, each image being acquired along a section plane perpendicular to the reinforcing axis; for each acquired image, detecting, using a neural network, the barycentre and/or the circumference of each section of a reinforcing thread; for at least one acquired reference image, assigning a tag corresponding to a reinforcing thread to each detected barycentre or circumference; for each other acquired image, assigning, to each detected barycentre and/or circumference, the tag of the corresponding barycentre in the acquired reference image; and reconstituting the architecture of each reinforcing thread from each detected barycentre and/or circumference bearing the tag of that reinforcing thread and from the position on the reinforcing axis associated with the acquired image in which the barycentre and/or circumference was detected.
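The tag-assignment step amounts to matching each detected barycentre in a new section against the tagged barycentres of the reference section. A minimal sketch, assuming nearest-neighbour matching in the section plane (one simple way to find "the corresponding barycentre"; the patent does not commit to a specific matcher):

```python
import math

def propagate_tags(tagged_barycentres, next_barycentres):
    """Assign to each barycentre detected in the next section the tag of
    its nearest tagged barycentre in the reference section, so each
    reinforcing thread keeps its tag from slice to slice.

    tagged_barycentres: dict mapping tag -> (x, y) in the reference image.
    next_barycentres:   list of (x, y) detections in the next image.
    Returns dict mapping detection index -> tag."""
    tags = {}
    for i, (x, y) in enumerate(next_barycentres):
        best = min(tagged_barycentres,
                   key=lambda t: math.hypot(tagged_barycentres[t][0] - x,
                                            tagged_barycentres[t][1] - y))
        tags[i] = best
    return tags
```

Stacking the tagged barycentres by their position along the reinforcing axis then reconstitutes each thread's trajectory.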
Hand pose estimation from stereo cameras
Systems and methods described herein use a neural network to identify a first set of joint location coordinates and a second set of joint location coordinates, and identify a three-dimensional hand pose based on both the first and second sets of joint location coordinates.
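Given matched 2D joint coordinates from a rectified stereo pair, the lift to 3D can follow the classic disparity relation Z = f·B/d. This is a textbook sketch under an idealised pinhole/rectified-stereo assumption, not the specific method claimed:

```python
import numpy as np

def triangulate_joints(left_xy, right_xy, focal, baseline):
    """Recover 3D joint positions (in the left camera frame) from matched
    2D joint coordinates in a rectified stereo pair.

    left_xy, right_xy: Nx2 pixel coordinates of the same joints.
    focal:             focal length in pixels.
    baseline:          stereo baseline in metres."""
    left_xy = np.asarray(left_xy, float)
    right_xy = np.asarray(right_xy, float)
    disparity = left_xy[:, 0] - right_xy[:, 0]   # assumed nonzero
    z = focal * baseline / disparity             # depth from disparity
    x = left_xy[:, 0] * z / focal                # back-project x
    y = left_xy[:, 1] * z / focal                # back-project y
    return np.stack([x, y, z], axis=1)
```

Running this over all 21 hand joints (a common convention) yields the 3D hand pose from the two per-camera joint sets.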
Image processing system and method thereof for generating projection images based on inward or outward multiple-lens camera
An image processing system is disclosed, comprising an M-lens camera, a compensation device, and a correspondence generator. The M-lens camera generates M lens images. The compensation device generates a projection image according to a first vertex list and the M lens images. The correspondence generator is configured to: conduct calibration for vertices to define vertex mappings; horizontally and vertically scan each lens image to determine the texture coordinates of its image center; determine, according to the vertex mappings, the texture coordinates of P1 control points in each overlap region of the projection image; and determine two adjacent control points and a blending weight coefficient for each vertex in each lens image, according to the texture coordinates of the control points and of the image center in each lens image, to generate the first vertex list, where M >= 2.
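As a rough illustration of the last step, a per-vertex blending weight between two adjacent control points can be taken as the vertex's normalised position along the segment joining them. This is a simplified stand-in, assuming texture coordinates are 2D and the weight interpolates linearly; the patent's actual weighting scheme is not specified here.

```python
import numpy as np

def blend_weight(vertex_uv, cp_a, cp_b):
    """Blending coefficient in [0, 1] for a vertex in an overlap region,
    taken as the normalised projection of the vertex onto the segment
    joining its two adjacent control points."""
    v = np.asarray(vertex_uv, float) - np.asarray(cp_a, float)
    seg = np.asarray(cp_b, float) - np.asarray(cp_a, float)
    t = float(v @ seg) / float(seg @ seg)
    return min(max(t, 0.0), 1.0)   # clamp to the segment
```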
Systems and methods for digitally representing a scene with multi-faceted primitives
Disclosed is a system and associated methods for generating and rendering a polyhedral point cloud that represents a scene with multi-faceted primitives. Each multi-faceted primitive stores multiple sets of values that represent different non-positional characteristics associated with a particular point in the scene from different angles. For instance, the system generates a multi-faceted primitive for a particular point of the scene that is captured in a first capture from a first position and a second capture from a different second position. Generating the multi-faceted primitive includes defining a first facet with a first surface normal oriented towards the first position and first non-positional values based on descriptive characteristics of the particular point in the first capture, and defining a second facet with a second surface normal oriented towards the second position and second non-positional values based on different descriptive characteristics of the particular point in the second capture.
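The primitive described above can be sketched as a small data structure: each facet pairs a surface normal (pointing towards the capture position) with the non-positional values seen from that direction, and rendering picks the facet whose normal best faces the current camera. The class and selection rule below are an illustrative assumption, not the claimed implementation.

```python
import numpy as np

class MultiFacetedPoint:
    """A point-cloud primitive whose non-positional values (e.g. RGB)
    depend on viewing angle, stored as one facet per capture."""

    def __init__(self, position):
        self.position = np.asarray(position, float)
        self.facets = []   # list of (unit normal, values)

    def add_facet(self, capture_position, values):
        """Define a facet with a normal oriented towards the capture
        position and the values observed from that capture."""
        n = np.asarray(capture_position, float) - self.position
        self.facets.append((n / np.linalg.norm(n), values))

    def values_for_view(self, camera_pos):
        """Render with the facet whose normal best faces the camera,
        chosen by the largest normal/view-direction dot product."""
        v = np.asarray(camera_pos, float) - self.position
        v /= np.linalg.norm(v)
        return max(self.facets, key=lambda f: float(f[0] @ v))[1]
```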
Body size estimation apparatus, body size estimation method, and program
Provided are a body size estimation apparatus, a body size estimation method, and a program that enable estimation of the body size of a user even when the user has not taken a T-pose in advance. A body size data storage unit (50) stores body size data indicating a body size of a user. A posture data acquisition unit (52) acquires position data indicating positions of a plurality of the user's body parts that are spaced apart from one another. A body size estimation unit (54) estimates a body size of the user based on the positions of two or more body parts indicated by the position data. A body size update unit (56) updates, in a case where the estimated body size is larger than the body size indicated by the body size data stored in the body size data storage unit (50), the body size indicated by the body size data to the estimated body size.
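The estimate-then-keep-the-maximum logic can be sketched directly. The joint names and the head-to-ankle chain below are hypothetical; any tracked skeleton with comparable parts would do. The "keep the largest" rule makes sense because summing segment lengths can only under-estimate true height when limbs are bent, never over-estimate it.

```python
import math

def estimate_height(joints):
    """Estimate body height as the summed lengths of a joint chain
    (hypothetical names), so no T-pose is required.

    joints: dict mapping joint name -> (x, y, z) position."""
    chain = ["head", "neck", "pelvis", "knee", "ankle"]
    return sum(math.dist(joints[a], joints[b])
               for a, b in zip(chain, chain[1:]))

def update_body_size(stored_size, joints):
    """Keep the larger of the stored size and the new estimate,
    mirroring the body size update unit (56)."""
    return max(stored_size, estimate_height(joints))
```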
System and method for noise-based training of a prediction model
In some embodiments, noise data may be used to train a neural network (or other prediction model). In some embodiments, input noise data may be obtained and provided to a prediction model to obtain an output related to the input noise data (e.g., the output being a prediction related to the input noise data). One or more target output indications may be provided as reference feedback to the prediction model to update one or more portions of the prediction model, wherein the one or more portions of the prediction model are updated based on the related output and the target indications. Subsequent to the portions of the prediction model being updated, a data item may be provided to the prediction model to obtain a prediction related to the data item (e.g., a different version of the data item, a location of an aspect in the data item, etc.).
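The training loop described above can be sketched end to end with a minimal linear model. The task here (predicting the mean of each noise vector) and the plain gradient-descent update are illustrative assumptions, not the claimed embodiment: noise inputs are paired with target output indications, the model is updated from that feedback, and afterwards an ordinary data item is given to the updated model for prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Input noise data and a target output indication per noise sample;
# the (hypothetical) task is to predict the mean of each noise vector.
X = rng.normal(size=(256, 8))
y = X.mean(axis=1, keepdims=True)

# Minimal one-layer prediction model trained by gradient descent
W = np.zeros((8, 1))
for _ in range(500):
    pred = X @ W                       # output related to the input noise
    grad = X.T @ (pred - y) / len(X)   # feedback from the target indications
    W -= 0.5 * grad                    # update a portion of the model

# Subsequent to the update, a data item yields a related prediction
item = np.ones((1, 8))
prediction = float((item @ W)[0, 0])   # mean of all-ones is 1.0
```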