Patent classifications
G06T7/75
Determining Spatial Relationship Between Upper and Lower Teeth
A computer-implemented method includes receiving a 3D model of upper teeth (U1) of a patient (P) and a 3D model of lower teeth (L1) of the patient (P), and receiving a plurality of 2D images, each image representative of at least a portion of the upper teeth (U1) and lower teeth (L1) of the patient (P). The method also includes determining, based on the 2D images, a spatial relationship between the upper teeth (U1) and lower teeth (L1) of the patient (P).
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND PROGRAM
An image processing method performed by a processor and including detecting positions of plural vortex veins in a fundus image of an examined eye, and computing a center of distribution of the plural detected vortex vein positions.
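The center-of-distribution step described in the abstract reduces to a centroid over the detected positions. A minimal sketch is below; the upstream vortex-vein detector and the (x, y) pixel-coordinate convention are assumptions, not specified by the abstract:

```python
import numpy as np

def vortex_vein_distribution_center(positions):
    """Compute the center of distribution (centroid) of detected
    vortex vein positions in fundus-image coordinates.

    `positions` is an (N, 2) sequence of (x, y) pixel coordinates,
    assumed to come from an upstream detector (not shown here)."""
    pts = np.asarray(positions, dtype=float)
    return pts.mean(axis=0)

# Hypothetical detections for four vortex veins:
center = vortex_vein_distribution_center(
    [(100, 80), (300, 90), (120, 260), (310, 250)])
```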
SYSTEMS, METHODS AND PROGRAMS FOR GENERATING DAMAGE PRINT IN A VEHICLE
The disclosure relates to systems, methods, and computer-readable media for providing network-based identification, generation, and management of a unique damage (finger) print of a vehicle by geodetic mapping of stable key points onto a ground-truth 3D model of the vehicle and vehicle parts, identified from the raw images using supervised and unsupervised machine learning. Specifically, the disclosure relates to systems and methods for generating a unique damage print of a vehicle, obtained from captured images of the damaged vehicle and photogrammetrically localized to a specific vehicle part, and to the computer programs enabling the method; the damage print is configured to be used, for example, in fraud detection in insurance claims.
VIRTUAL THERMAL CAMERA IMAGING SYSTEM
System and method that includes mapping temperature values from a two dimensional (2D) thermal image of a component to a three dimensional (3D) drawing model of the component to generate a 3D thermal model of the component; mapping temperature values from the 3D thermal model to a 2D virtual thermal image corresponding to a virtual thermal camera perspective; and predicting an attribute for the component by applying a prediction function to the 2D virtual thermal image.
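The two mapping steps above (2D thermal image to 3D model, then 3D model to a virtual 2D view) can be sketched as follows. The per-vertex projection coordinates, the pinhole intrinsics `K`, and the nearest-vertex splatting are simplifying assumptions; a production renderer would rasterize triangles with proper depth testing:

```python
import numpy as np

def sample_vertex_temperatures(thermal_img, uv):
    """Map temperatures from a 2D thermal image onto 3D model vertices.
    `uv` is an (N, 2) array of integer pixel coordinates giving, for
    each vertex, where it projects into the real thermal camera's image
    (assumed known from calibration)."""
    return thermal_img[uv[:, 1], uv[:, 0]]

def render_virtual_thermal(vertices, temps, K, shape):
    """Project the 3D thermal model into a 2D virtual thermal image
    using a pinhole camera with intrinsics K (virtual camera at the
    origin, looking down +z)."""
    h, w = shape
    img = np.full((h, w), np.nan)
    depth = np.full((h, w), np.inf)
    proj = (K @ vertices.T).T                 # (N, 3) homogeneous pixels
    px = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
    for (x, y), z, t in zip(px, vertices[:, 2], temps):
        if 0 <= x < w and 0 <= y < h and z < depth[y, x]:
            depth[y, x] = z                   # keep the nearest surface point
            img[y, x] = t
    return img
```

A prediction function would then be applied to the resulting 2D virtual image, e.g. thresholding hot spots or feeding it to a learned model.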
Real-time virtual try-on item modeling
A method includes generating, based on user images, a user 3-D model. The method proceeds with obtaining, via a user interface, a request to graphically represent an accessory onto a user graphical representation. This user graphical representation is generated using the user 3-D model. In response to this request, an accessory 3-D model is obtained. Further, the method includes positioning, via the user interface and based on parameters of the user 3-D model and of the accessory 3-D model, an accessory graphical representation onto the user graphical representation. The method further includes updating, in response to detecting user movement, the user 3-D model and the accessory 3-D model and presenting, via the user interface and based on these updated 3-D models, the accessory graphical representation and the user graphical representation in accordance with the user movement.
Body size estimation apparatus, body size estimation method, and program
Provided are a body size estimation apparatus, a body size estimation method, and a program that enable the estimation of the body size of a user even when the user has not taken a T-pose in advance. A body size data storage unit (50) stores body size data indicating a body size of a user. A posture data acquisition unit (52) acquires position data indicating positions of a plurality of mutually separated body parts of the user. A body size estimation unit (54) estimates a body size of the user based on positions of two or more body parts indicated by the position data. A body size update unit (56) updates, in a case where the estimated body size is larger than the body size indicated by the body size data stored in the body size data storage unit (50), the body size indicated by the body size data to the estimated body size.
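The estimate-then-update logic of units (54) and (56) can be sketched as below. Using the largest pairwise distance between body parts as the size estimate is an illustrative assumption; the abstract does not specify the metric:

```python
import numpy as np

def estimate_body_size(part_positions):
    """Estimate body size from positions of two or more tracked body
    parts. As a simple proxy (an assumption, not the patent's exact
    metric), use the largest pairwise distance between parts, e.g.
    head-to-foot when the user happens to stand upright."""
    pts = np.asarray(part_positions, dtype=float)
    diffs = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diffs, axis=-1).max()

def update_stored_size(stored_size, estimate):
    """Monotone update: keep the larger value, so crouched or
    foreshortened poses never shrink the stored body size."""
    return max(stored_size, estimate)

size = 0.0
for frame in ([(0, 0, 0), (0, 1.2, 0)],      # crouched
              [(0, 0, 0), (0, 1.7, 0)],      # standing
              [(0, 0, 0), (0, 1.5, 0)]):     # mid-pose
    size = update_stored_size(size, estimate_body_size(frame))
# size retains the value from the tallest observed pose
```

The monotone update is what removes the need for an initial T-pose: the stored size converges to the true size as soon as any frame captures a fully extended pose.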
Single view tracking of cylindrical objects
The present invention relates to tracking objects. Specifically, the present invention relates to determining the position and/or orientation of styli from image data. Aspects and/or embodiments seek to provide a method for determining an orientation and/or a position of a cylindrical object from image data using a single viewpoint.
Network and system for pose and size estimation
A network for category-level 6D pose and size estimation, including a 3D-OCR module for 3D Orientation-Consistent Representation, a GeoReS module for Geometry-constrained Reflection Symmetry, and a MPDE module for Mirror-Paired Dimensional Estimation; wherein the 3D-OCR module and the GeoReS module are incorporated in parallel; the 3D-OCR module receives a canonical template shape including canonical category-specific keypoints; the GeoReS module receives an original input depth observation including pre-processed predicted category labels and potential masks of the target instances; the MPDE module receives the output from the GeoReS module as well as the original input depth observation; and the network outputs the estimation results based on the output of the MPDE module, the output of the 3D-OCR module, as well as the canonical template shape. Also provided are corresponding systems and methods.
System for generating a three-dimensional scene of a physical environment
A system configured to assist a user in scanning a physical environment in order to generate a three-dimensional scan or model. In some cases, the system may include an interface to assist the user in capturing data usable to determine a scale or depth of the physical environment and to perform a scan in a manner that minimizes gaps.
METHOD OF HIDING AN OBJECT IN AN IMAGE OR VIDEO AND ASSOCIATED AUGMENTED REALITY PROCESS
A method for generating a final image from an initial image including an object suitable to be worn by an individual. The presence of the object in the initial image is detected. A first layer is superposed on the initial image. The first layer includes a mask at least partially covering the object in the initial image. The appearance of at least one part of the mask is modified, enabling the suppression of all or part of an object in an image or a video. Also provided are an augmented-reality process intended for use by an individual wearing a vision device on the face, and a device for trying on a virtual object.
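The layer-superposition step above can be sketched as a masked composite. Filling the mask from a plain background estimate is an illustrative assumption; a real system would inpaint the occluded background:

```python
import numpy as np

def hide_object(image, object_mask, background_estimate):
    """Generate a final image in which the detected object is hidden.
    A first layer carrying a mask that covers the object is superposed
    on the initial image; the mask's appearance is taken from an
    estimate of the occluded background (here a plain fill, as a
    simplifying assumption). `image` and `background_estimate` are
    (H, W, 3) arrays; `object_mask` is a boolean (H, W) array."""
    alpha = object_mask[..., None]            # opaque only over the object
    layer = np.where(alpha, background_estimate, 0)
    return np.where(alpha, layer, image)
```

For a video or an augmented-reality try-on, the detection and compositing would simply be repeated per frame.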