
Systems, methods, and apparatus for correcting malocclusions of teeth

Methods and systems are provided for manufacturing an appliance for correcting malocclusions of a patient's teeth. The method may include measuring the positions of the patient's teeth and receiving tooth movement constraints. The method may also include generating an initial treatment plan based on the measured tooth positions and the tooth movement constraints, and measuring the patient's teeth for one or more types of dental malocclusion. The method may also include generating a plurality of treatment plans from the initial treatment plan based on the measured malocclusions, and generating a model of an appliance for each stage of the treatment plan. The method may also include generating instructions for fabricating the appliance for each stage of the treatment plan based on the corresponding model of the appliance.

Object reconstruction with texture parsing

Techniques are provided for generating one or more three-dimensional (3D) models. In one example, an image of an object (e.g., a face or other object) is obtained, and a 3D model of the object in the image is generated. The 3D model includes geometry information. Color information for the 3D model is determined, and a fitted 3D model of the object is generated based on a modification of the geometry information and the color information for the 3D model. In some cases, the color information (e.g., determination and/or modification of the color information) and the fitted 3D model can be based on one or more vertex-level fitting processes. A refined 3D model of the object is generated based on the fitted 3D model and depth information associated with the fitted 3D model. In some cases, the refined 3D model can be based on a pixel-level refinement or fitting process.

PREDICTING DISPLAY FIT AND OPHTHALMIC FIT MEASUREMENTS USING A SIMULATOR
20220350985 · 2022-11-03

A system and method of detecting display fit measurements and/or ophthalmic measurements for a head mounted wearable computing device including a display device is provided. The system and method may include capturing image data including a face of a user to be fitted for the head mounted wearable computing device. A three-dimensional head pose and gaze measurements may be extracted and a three-dimensional model may be developed from the captured image data. The system may detect display fit measurements and/or ophthalmic fit measurements from the three-dimensional model, and may provide one or more head mounted wearable computing devices that meet the display fit and/or ophthalmic fit requirements.

PROCESSING DEVICE, PROCESSING METHOD, AND COMPUTER-READABLE MEDIUM
20220343629 · 2022-10-27

A processing device (10) includes: a classification means (12) for classifying three-dimensional point cloud data, acquired from the light reflected by a plurality of reinforcing steel bars irradiated during a bar arrangement inspection, into clusters (shape units corresponding to the plurality of reinforcing steel bars) based on position information at each point in the three-dimensional point cloud data; a smoothing means (13) for smoothing the contours of the classified clusters; and a cluster association means (14) for determining, based on the positional relation between the smoothed clusters, whether a first cluster and a second cluster contained in the smoothed clusters correspond to the same reinforcing steel bar.
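The classification and association steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patent's method: the single-linkage clustering, the fixed bar axis, and all function names and thresholds are assumptions made for the example.

```python
import numpy as np

def cluster_points(points, eps):
    """Greedy single-linkage clustering: points closer than eps share a
    cluster. A simplified stand-in for the classification step, which
    groups the scan into one shape unit per reinforcing bar."""
    labels = np.full(len(points), -1, dtype=int)
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        stack = [seed]
        while stack:  # flood-fill the eps-neighbourhood graph
            j = stack.pop()
            dist = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((dist < eps) & (labels == -1)):
                labels[k] = next_label
                stack.append(k)
        next_label += 1
    return labels

def same_bar(cluster_a, cluster_b, axis, tol=0.05):
    """Associate two clusters to one bar when their centroids differ mainly
    along the bar axis (e.g. one bar split in two by an occlusion)."""
    offset = cluster_b.mean(axis=0) - cluster_a.mean(axis=0)
    lateral = offset - np.dot(offset, axis) * axis  # across-bar component
    return bool(np.linalg.norm(lateral) < tol)

# Two synthetic vertical bars, 0.5 m apart in x.
z = np.linspace(0.0, 1.0, 50)
bar1 = np.c_[np.zeros(50), np.zeros(50), z]
bar2 = bar1 + np.array([0.5, 0.0, 0.0])
labels = cluster_points(np.vstack([bar1, bar2]), eps=0.1)
print(len(set(labels)))  # 2: one cluster per bar
```

A real implementation would also smooth each cluster's contour before association, as the smoothing means (13) describes; that step is omitted here.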

Systems and methods for computer-based labeling of sensor data captured by a vehicle

Examples disclosed herein may involve (i) based on an analysis of 2D data captured by a vehicle while operating in a real-world environment during a window of time, generating a 2D track for at least one object detected in the environment comprising one or more 2D labels representative of the object, (ii) for the object detected in the environment: (a) using the 2D track to identify, within a 3D point cloud representative of the environment, 3D data points associated with the object, and (b) based on the 3D data points, generating a 3D track for the object that comprises one or more 3D labels representative of the object, and (iii) based on the 3D point cloud and the 3D track, generating a time-aggregated, 3D visualization of the environment in which the vehicle was operating during the window of time that includes at least one 3D label for the object.
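Step (ii) above, using a 2D label to pick out the object's 3D points, can be sketched with a simple pinhole projection. The camera model, intrinsics, and axis-aligned 3D box are illustrative assumptions, not the disclosed system's actual formulation.

```python
import numpy as np

def points_in_2d_label(points_3d, K, box_2d):
    """Select the 3D points whose pinhole projection lands inside a 2D label.

    points_3d: (N, 3) points in the camera frame (z forward).
    K: 3x3 camera intrinsics.  box_2d: (u_min, v_min, u_max, v_max).
    """
    in_front = points_3d[:, 2] > 0
    proj = (K @ points_3d.T).T
    uv = proj[:, :2] / proj[:, 2:3]          # perspective divide
    u_min, v_min, u_max, v_max = box_2d
    inside = ((uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) &
              (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max))
    return points_3d[in_front & inside]

def fit_3d_label(points_3d):
    """Axis-aligned 3D box (min corner, max corner) over the selected points,
    standing in for the generated 3D label."""
    return points_3d.min(axis=0), points_3d.max(axis=0)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
# One point near the image centre, one far off to the side.
cloud = np.array([[0.0, 0.0, 10.0], [5.0, 0.0, 10.0]])
obj_points = points_in_2d_label(cloud, K, (300, 220, 340, 260))
lo, hi = fit_3d_label(obj_points)
print(len(obj_points))  # 1: only the centred point falls inside the 2D label
```

Repeating this over every frame in the window of time yields the 3D track that the time-aggregated visualization is built from.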

Three-dimensional shape measuring method and three-dimensional shape measuring device

A three-dimensional shape measuring method includes: projecting a first grid pattern based on a first light and a second grid pattern based on a second light onto a target object in such a way that the first grid pattern and the second grid pattern intersect each other, the first light and the second light being lights of two colors included in three primary colors of light; picking up, by a three-color camera, an image of the first grid pattern and the second grid pattern projected on the target object, and acquiring a first picked-up image based on the first light and a second picked-up image based on the second light; and performing a phase analysis of a grid image with respect to at least one of the first picked-up image and the second picked-up image and calculating height information of the target object.
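The phase-analysis step can be illustrated with textbook Fourier-transform profilometry on a 1-D fringe: isolate the carrier sideband in the spectrum, transform back, and take the angle of the analytic signal. The carrier frequency, filter width, and synthetic "height" phase are assumptions for the example; the patent does not specify this particular analysis.

```python
import numpy as np

def wrapped_phase(fringe, carrier_bin):
    """Fourier-transform profilometry on a 1-D fringe signal: keep only the
    +carrier sideband, invert, and take the phase of the analytic signal."""
    spectrum = np.fft.fft(fringe)
    filtered = np.zeros_like(spectrum)
    filtered[carrier_bin - 2:carrier_bin + 3] = spectrum[carrier_bin - 2:carrier_bin + 3]
    analytic = np.fft.ifft(filtered)
    return np.angle(analytic)  # wrapped phase, carrier term included

n = 512
x = np.arange(n)
carrier = 32                                  # fringe frequency in FFT bins
phi = 0.5 * np.sin(2 * np.pi * x / n)         # height-induced phase term
fringe = 1.0 + 0.5 * np.cos(2 * np.pi * carrier * x / n + phi)
reference = 1.0 + 0.5 * np.cos(2 * np.pi * carrier * x / n)

# Subtracting the reference phase cancels the carrier, leaving phi;
# wrapping is handled by taking the angle of the complex ratio.
dphi = np.angle(np.exp(1j * (wrapped_phase(fringe, carrier)
                             - wrapped_phase(reference, carrier))))
err = np.max(np.abs(dphi - phi))
print(f"max phase recovery error: {err:.4f} rad")
```

In the two-color method described above, this analysis would run independently on the first and second picked-up images, with the recovered phase mapped to height through the projector-camera geometry.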

DETECTOR FOR OBJECT RECOGNITION

A detector for object recognition includes an illumination source for projecting an illumination pattern on an area including at least one object; an optical sensor having a light-sensitive area and configured for determining a first image including a two-dimensional image of the area, and a second image including a plurality of reflection features generated in response to illumination, each reflection feature including a beam profile; an evaluation device for determining beam profile information for each reflection feature by analyzing their beam profiles, determining a three-dimensional image using the determined beam profile information, identifying the reflection features located inside and/or outside an image region, determining a depth level from the beam profile information of the reflection features located inside and/or outside of the image region, determining a material property of the object from the beam profile information, and determining a position and/or orientation of the object.

DEEP LEARNING FOR OBJECT DETECTION USING PILLARS
20230080764 · 2023-03-16

Among other things, we describe techniques for detecting objects in the environment surrounding a vehicle. A computer system is configured to receive a set of measurements from a sensor of a vehicle. The set of measurements includes a plurality of data points that represent a plurality of objects in a 3D space surrounding the vehicle. The system divides the 3D space into a plurality of pillars. The system then assigns each data point of the plurality of data points to a pillar in the plurality of pillars. The system generates a pseudo-image based on the plurality of pillars. The pseudo-image includes, for each pillar of the plurality of pillars, a corresponding feature representation of data points assigned to the pillar. The system detects the plurality of objects based on an analysis of the pseudo-image. The system then operates the vehicle based upon the detecting of the objects.
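The pillar assignment and pseudo-image steps can be sketched as follows. Real pillar networks learn per-pillar features with a small neural network; here each pillar's features are just point count and mean height, which is enough to show the data flow from a point cloud to a dense 2D map. Grid size, channels, and names are assumptions, not the patent's design.

```python
import numpy as np

def pillar_pseudo_image(points, x_range, y_range, grid=(8, 8)):
    """Scatter 3D points into vertical pillars on an x-y grid and build a
    pseudo-image with one feature vector per pillar."""
    nx, ny = grid
    xi = np.clip(((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * nx)
                 .astype(int), 0, nx - 1)
    yi = np.clip(((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * ny)
                 .astype(int), 0, ny - 1)
    image = np.zeros((ny, nx, 2))
    for px, py, z in zip(xi, yi, points[:, 2]):
        image[py, px, 0] += 1                # channel 0: point count
        image[py, px, 1] += z                # channel 1: sum of heights
    occupied = image[..., 0] > 0
    image[occupied, 1] /= image[occupied, 0] # channel 1 -> mean height
    return image

# A small cluster of points near (2, 2) inside an 8 m x 8 m field of view.
pts = np.array([[2.1, 2.2, 0.5],
                [2.3, 2.1, 1.5],
                [6.0, 6.0, 0.2]])
img = pillar_pseudo_image(pts, (0.0, 8.0), (0.0, 8.0), grid=(8, 8))
print(img[2, 2])  # [2. 1.]: two points in that pillar, mean height 1.0
```

The resulting (H, W, C) pseudo-image can then be fed to an ordinary 2D convolutional detector, which is the point of the pillar representation.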

METHOD AND DEVICE FOR MULTI-SENSOR DATA-BASED FUSION INFORMATION GENERATION FOR 360-DEGREE DETECTION AND RECOGNITION OF SURROUNDING OBJECT

Presented are a method and a device for multi-sensor data-based fusion information generation for 360-degree detection and recognition of a surrounding object. The present invention proposes a method for multi-sensor data-based fusion information generation for 360-degree detection and recognition of a surrounding object, the method comprising the steps of: acquiring a feature map from a multi-sensor signal by using a deep neural network; converting the acquired feature map into an integrated three-dimensional coordinate system; and generating a fusion feature map for performing recognition by using the converted integrated three-dimensional coordinate system.

IMPROVEMENTS IN OR RELATING TO PHOTOGRAMMETRY

Photogrammetric analysis of an object is carried out by capturing multiple overlapping images of the object from various camera positions. These images are then processed to generate a three-dimensional (3D) point cloud representing the object in 3D space. A 3D model of the object is used to generate a model 3D point cloud. Based on the modelled point cloud and the camera optics, the visibility of each point in the model point cloud is determined for a range of possible camera positions. The radial component of the camera position is fixed by defining a shell of suitable camera positions around the object, and for each position on the defined shell an image-quality score is calculated as a function of camera position. This defines a density function over the potential camera positions, from which the initial camera positions are selected.
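The shell-and-density procedure above can be sketched with a toy quality score. The spherical shell parameterization, the facing-angle visibility test, and all names and thresholds are assumptions for illustration; a real system would also model occlusion and the lens.

```python
import numpy as np

def shell_positions(radius, n_azimuth=16, n_elevation=4):
    """Candidate camera positions on a spherical shell of fixed radius
    around the object (the 'radial component is fixed' step)."""
    az = np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False)
    el = np.linspace(-np.pi / 3, np.pi / 3, n_elevation)
    a, e = np.meshgrid(az, el)
    return radius * np.c_[np.cos(e.ravel()) * np.cos(a.ravel()),
                          np.cos(e.ravel()) * np.sin(a.ravel()),
                          np.sin(e.ravel())]

def quality(cam, points, normals):
    """Toy quality score: how many model points face the camera."""
    view = cam - points
    view /= np.linalg.norm(view, axis=1, keepdims=True)
    return np.sum(np.einsum('ij,ij->i', view, normals) > 0.2)

def camera_density(cams, points, normals):
    q = np.array([quality(c, points, normals) for c in cams], dtype=float)
    return q / q.sum()  # density function over the candidate positions

# Model point cloud: unit-sphere samples with outward normals.
rng = np.random.default_rng(1)
normals = rng.normal(size=(200, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
points = normals.copy()                 # points lie on the unit sphere
cams = shell_positions(radius=5.0)
density = camera_density(cams, points, normals)
initial = cams[rng.choice(len(cams), size=3, replace=False, p=density)]
print(density.shape, bool(np.isclose(density.sum(), 1.0)))
```

Sampling initial positions from the density concentrates cameras where the model predicts the most visible surface, which is the stated purpose of the density function.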