G06V20/653

METHOD AND DEVICE FOR 3D SHAPE MATCHING BASED ON LOCAL REFERENCE FRAME
20220343105 · 2022-10-27

A method and a device for 3D shape matching based on a local reference frame are proposed. After a 3D point cloud and feature points are acquired, the feature point set is projected onto a plane, and a feature transformation is applied to the projected points using at least one of the following factors: the distances between the 3D points and the feature points, the distances between the 3D points and their projected points, and the average distances between the 3D points and their 1-ring neighboring points. The transformation yields a point distribution whose variance in a certain direction is larger than that of the projected point set, and the local reference frame is determined from this transformed distribution. A 3D local feature descriptor established on this local reference frame encodes the 3D local surface information more robustly, yielding a better 3D shape matching result.
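The abstract does not disclose the exact transformation, but the general idea of deriving a local reference frame from a variance-stretched planar projection can be sketched as follows. This is an illustrative simplification, not the patented formulation: the function name, the choice of a single distance factor, and the weighting form are all assumptions.

```python
import numpy as np

def local_reference_frame(points, feature_point, normal):
    """Build an LRF by projecting the local point set onto the tangent
    plane at the feature point, stretching the projections with a
    distance-based weight, and taking the principal axes of the result."""
    normal = normal / np.linalg.norm(normal)
    diffs = points - feature_point
    # One candidate factor: distance of each 3D point to the feature point.
    d_feat = np.linalg.norm(diffs, axis=1)
    # Project onto the tangent plane through the feature point.
    heights = diffs @ normal                       # signed plane distances
    projected = diffs - np.outer(heights, normal)  # in-plane components
    # Feature transformation: weight projections so the transformed
    # distribution has larger variance along its dominant direction.
    weights = 1.0 + d_feat / (d_feat.max() + 1e-12)
    transformed = projected * weights[:, None]
    # Principal in-plane axis from the covariance of transformed points.
    cov = transformed.T @ transformed / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    x_axis = eigvecs[:, np.argmax(eigvals)]
    x_axis -= (x_axis @ normal) * normal           # keep strictly in-plane
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(normal, x_axis)
    return np.stack([x_axis, y_axis, normal])      # rows: x, y, z axes
```

The returned rows form an orthonormal frame; a descriptor computed in these coordinates becomes invariant to the pose of the local surface patch.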

Apparatus and method for identifying an articulatable part of a physical object using multiple 3D point clouds

An apparatus comprises an input interface configured to receive a first 3D point cloud associated with a physical object prior to articulation of an articulatable part, and a second 3D point cloud after articulation of the articulatable part. A processor is operably coupled to the input interface, an output interface, and memory. Program code, when executed by the processor, causes the processor to: align the first and second point clouds; find, among the points of the second point cloud, the nearest neighbors of points in the first point cloud; eliminate those nearest neighbors from the second point cloud, such that the remaining points in the second point cloud comprise points associated with the articulatable part and points associated with noise; generate an output comprising at least the remaining points associated with the articulatable part, without the noise points; and communicate the output to the output interface.
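The core elimination step can be sketched in a few lines: once the clouds are aligned, any "after" point that has a close neighbor in the "before" cloud belongs to the static surface and is removed; what survives is the moved part plus noise. A minimal sketch assuming pre-aligned clouds and a brute-force neighbor search (a KD-tree would replace this at scale); the function name and threshold are illustrative.

```python
import numpy as np

def extract_articulated_points(cloud_before, cloud_after, dist_thresh=0.05):
    """Keep the 'after' points whose nearest 'before' neighbor is far away;
    these are the points associated with the articulatable part (plus noise)."""
    # Pairwise distances from each 'after' point to every 'before' point.
    d = np.linalg.norm(cloud_after[:, None, :] - cloud_before[None, :, :], axis=2)
    nn_dist = d.min(axis=1)          # nearest-neighbor distance per 'after' point
    # Eliminate points matching the static surface; keep the remainder.
    return cloud_after[nn_dist > dist_thresh]
```

A subsequent density or cluster-size filter would then separate the articulated part from isolated noise points, as the abstract describes.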

Temporal and geometric consistency in physical setting understanding

A machine learning model is trained and used to perform a computer vision task such as semantic segmentation or normal direction prediction. The model uses a current image of a physical setting and input generated from three-dimensional (3D) anchor points that store information determined from prior assessments of the physical setting. The 3D anchor points store previously-determined computer vision task information for the physical setting at particular 3D point locations in a 3D world space, e.g., an x, y, z coordinate system that is independent of image capture device pose. For example, 3D anchor points may store previously-determined semantic labels or normal directions for 3D points identified by simultaneous localization and mapping (SLAM) processes. The 3D anchor points are stored and used to generate input for the machine learning model as the model continues to reason about future images of the physical setting.
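Turning world-space anchor points into model input for the current frame amounts to projecting them through the current camera pose and rasterizing their stored labels. A hedged sketch of that step (pinhole projection, one-hot label channels; the function name and input layout are assumptions, not from the patent):

```python
import numpy as np

def project_anchor_labels(anchors_xyz, anchor_labels, K, R, t, h, w, n_classes):
    """Rasterize per-anchor semantic labels (world coordinates) into a
    per-pixel one-hot map for the current camera pose, to be fed to the
    model alongside the current image."""
    cam = (R @ anchors_xyz.T).T + t              # world -> camera frame
    in_front = cam[:, 2] > 0                     # drop points behind camera
    cam, labels = cam[in_front], anchor_labels[in_front]
    uv = (K @ cam.T).T                           # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h) # keep in-image projections
    out = np.zeros((h, w, n_classes))
    out[v[ok], u[ok], labels[ok]] = 1.0
    return out
```

Because the anchors live in pose-independent world space, the same stored labels can be re-projected into every future frame, which is what gives the model its temporal consistency.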

VEHICLE USING SPATIAL INFORMATION ACQUIRED USING SENSOR, SENSING DEVICE USING SPATIAL INFORMATION ACQUIRED USING SENSOR, AND SERVER
20230077393 · 2023-03-16

A method of sensing a three-dimensional (3D) space using at least one sensor is proposed. The method can include acquiring spatial information over time for the sensed 3D space, and applying a neural-network-based object classification model to the acquired spatial information to identify at least one object in the sensed 3D space. The method can also include tracking the sensed 3D space, including the identified at least one object, and using information related to the tracked 3D space.

Device and method for registering three-dimensional data

A method and a device for registering three-dimensional data are disclosed. The method for registering three-dimensional data comprises: generating first two-dimensional data by two-dimensionally converting first three-dimensional data indicating a surface of a three-dimensional model of a target; generating second two-dimensional data by two-dimensionally converting second three-dimensional data indicating at least a part of the three-dimensional surface of the target; determining a first matching region in the first two-dimensional data and a second matching region in the second two-dimensional data by matching the second two-dimensional data to the first two-dimensional data; setting, as an initial position, a plurality of points of the first three-dimensional data corresponding to the first matching region and a plurality of points of the second three-dimensional data corresponding to the second matching region; and registering the first three-dimensional data and the second three-dimensional data using the initial position.
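The final step, registering the two data sets from corresponding point pairs, is typically a least-squares rigid alignment. A sketch using the standard Kabsch/SVD solution (the patent does not specify its solver; this is one common choice):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping matched source points
    onto destination points: dst ~= src @ R.T + t  (Kabsch algorithm)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Seeded with the matched-region correspondences as the initial position, this closed-form solve (or an ICP refinement started from it) completes the registration.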

Image processing and emphysema threshold determination

Methods, devices, systems and apparatus for determining emphysema thresholds for processing a pulmonary medical image are provided. In one aspect, a method includes determining lung lobe regions in the pulmonary medical image and, for each of the lung lobe regions: clustering CT values in the lung lobe region to divide it into a first sub-region and a second sub-region; and acquiring, as the emphysema threshold for the lung lobe region, the CT value corresponding to the intersection of a first CT value distribution function for the first sub-region and a second CT value distribution function for the second sub-region.
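The per-lobe procedure can be sketched concretely: split the lobe's CT values into two clusters, fit a distribution to each, and take the CT value where the two densities cross. A simplified sketch assuming Gaussian sub-region distributions and 1D two-means clustering (the patent does not specify either choice):

```python
import numpy as np

def emphysema_threshold(ct_values, iters=50):
    """Per-lobe threshold: two-means clustering of CT values, Gaussian fit
    per cluster, and the grid point between the means where the two fitted
    densities intersect."""
    ct = np.asarray(ct_values, float)
    c0, c1 = ct.min(), ct.max()                    # initial cluster centers
    for _ in range(iters):                         # 1D two-means clustering
        lab = np.abs(ct - c0) > np.abs(ct - c1)    # True -> upper cluster
        c0, c1 = ct[~lab].mean(), ct[lab].mean()
    lab = np.abs(ct - c0) > np.abs(ct - c1)
    lo, hi = ct[~lab], ct[lab]
    m0, s0 = lo.mean(), lo.std() + 1e-9
    m1, s1 = hi.mean(), hi.std() + 1e-9

    def pdf(x, m, s):                              # normal density
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # Intersection of the two densities, searched between the cluster means.
    grid = np.linspace(m0, m1, 2001)
    diff = pdf(grid, m0, s0) - pdf(grid, m1, s1)
    return grid[np.argmin(np.abs(diff))]
```

Voxels below the returned threshold in that lobe would then be scored as emphysematous.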

SYSTEM AND METHOD FOR CALIBRATING A THREE-DIMENSIONAL SCANNING DEVICE
20230126591 · 2023-04-27

A system for calibrating a three-dimensional scanning device includes a structured-light scanner capable of performing a structured-light operation, and a processor that performs calibration on a device under calibration (DUC). The structured-light scanner captures a base image by performing the structured-light operation prior to calibration. During calibration, the structured-light scanner captures a calibration image with respect to the corresponding DUC, and the calibration image is inputted to the processor, which determines a transformation mapping from the calibration image to the base image. The determined transformation is then transferred to the DUC.
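The abstract leaves the form of the transformation mapping unspecified; a common minimal model is a 2D affine (or, more generally, projective) map fitted to matched structured-light pattern points. A sketch of the affine case, with the affine assumption and the function name being illustrative choices:

```python
import numpy as np

def fit_affine_mapping(cal_pts, base_pts):
    """Least-squares 2x3 affine matrix M mapping calibration-image points
    onto base-image points: base ~= [x y 1] @ M.T."""
    A = np.hstack([cal_pts, np.ones((len(cal_pts), 1))])   # homogeneous [x y 1]
    M, *_ = np.linalg.lstsq(A, base_pts, rcond=None)       # solve A @ M = base
    return M.T                                             # shape (2, 3)
```

With at least three non-collinear correspondences the fit is exact; more correspondences average out detection noise, and the resulting matrix is what would be transferred to the DUC.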

Determining touch applied to an ultrasonic sensor

In a method for determining touch applied to an electronic device, ultrasonic signals are emitted from an ultrasonic sensor. A plurality of reflected ultrasonic signals from a finger interacting with the ultrasonic sensor is captured. First data, based at least in part on a first reflected ultrasonic signal of the plurality, is compared with second data, based at least in part on a second reflected ultrasonic signal of the plurality. A signal change due to a change in a feature of the finger during a touch interaction with the ultrasonic sensor is determined based on differences between the first data and the second data. A touch applied by the finger to the electronic device is determined based at least in part on this signal change.
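The comparison step reduces to a change metric between two captured reflections followed by a threshold decision. A toy sketch with an illustrative metric (normalized signal difference) and threshold, neither of which is claimed by the patent:

```python
import numpy as np

def detect_touch(sig_first, sig_second, change_thresh=0.1):
    """Compare two reflected ultrasonic frames: pressing deforms the finger
    and alters the reflection, so a large normalized change implies touch."""
    a = np.asarray(sig_first, float)
    b = np.asarray(sig_second, float)
    # Normalized magnitude of the frame-to-frame signal change.
    change = np.linalg.norm(b - a) / (np.linalg.norm(a) + 1e-12)
    return change > change_thresh, change
```

In practice the compared "data" could be amplitude, time-of-flight, or frequency features rather than raw samples, and the change magnitude could further be mapped to an applied-force estimate.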

System and method for providing eyewear try-on and recommendation services using truedepth camera
11475648 · 2022-10-18

The present invention relates to technology for providing eyewear recommendation and try-on services to a customer, allowing the customer to select optimal eyewear based on those services. A system for providing eyewear try-on and recommendation services according to one embodiment may include a feature data extraction unit for extracting feature data from the face mesh of a customer created using a TrueDepth camera, an eyewear adjustment unit for performing rendering by adjusting the specifications of the eyewear to reflect the extracted feature data, a matching processing unit for matching the facial image of the customer to the face mesh, and a try-on processing unit for performing a try-on process by overlaying the rendered eyewear on the facial image of the customer in an augmented reality manner, with reference to the matched face mesh.

System and method for deploying virtual replicas of real-world elements into a persistent virtual world system
11471772 · 2022-10-18

A system is disclosed for developing and deploying virtual replicas of real-world elements into a persistent virtual world system. Development of the virtual replicas is performed in a virtual environment that enables development and configuration of replicas mirroring the behavior and appearance of the corresponding real elements. The virtual replicas are enriched through data captured by sensing mechanisms that synchronize the virtual replicas with the real-world elements in real time. The virtual replicas are shared in a virtual world-based quality assurance system, where they can be either approved or rejected for subsequent adjustments when necessary. After approval and deployment, the replicas are shared in a deployed persistent virtual world system that is viewable to end users for management of, and interaction with, the virtual replicas. Methods thereof are also disclosed.