G06T2200/08

Automated supervision and inspection of assembly process
11568597 · 2023-01-31 ·

A method and apparatus for performing automated supervision and inspection of an assembly process. The method is implemented using a computer system. Sensor data is generated at an assembly site using a sensor system positioned relative to the assembly site. A three-dimensional global map of the assembly site, and of an assembly being built there, is generated using the sensor data. A current stage of the assembly process is identified using the three-dimensional global map. A context for the current stage is identified. A quality report for the assembly is generated based on the three-dimensional global map and the context for the current stage.
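The staged pipeline this abstract describes (map → stage → context → report) might be sketched as follows. Every name and value here is invented for illustration; the patent does not specify an implementation.

```python
# Illustrative sketch of the supervision pipeline: identify the current
# assembly stage from the 3D global map, then report quality in context.
# All names are hypothetical, not from the patent.

def identify_stage(global_map, stage_signatures):
    """Pick the stage whose expected part count best matches the map."""
    best = min(stage_signatures.items(),
               key=lambda kv: abs(kv[1] - len(global_map["parts"])))
    return best[0]

def quality_report(global_map, stage, context):
    """Combine the 3D map and stage context into a simple report."""
    missing = [p for p in context["expected_parts"]
               if p not in global_map["parts"]]
    return {"stage": stage, "missing_parts": missing, "complete": not missing}

# Toy 3D global map: parts observed so far at the assembly site.
gmap = {"parts": ["frame", "wing_left"]}
stages = {"frame_assembly": 1, "wing_attachment": 2, "final": 5}
stage = identify_stage(gmap, stages)
context = {"expected_parts": ["frame", "wing_left", "wing_right"]}
report = quality_report(gmap, stage, context)
print(report)
```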

Method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images

There is provided a method for generating a 3D physical model of a patient specific anatomic feature from 2D medical images. The 2D medical images are uploaded by an end-user via a Web Application and sent to a server. The server processes the 2D medical images and automatically generates a 3D printable model of a patient specific anatomic feature from the 2D medical images using a segmentation technique. The 3D printable model is 3D printed as a 3D physical model such that it represents a 1:1 scale of the patient specific anatomic feature. The method includes the step of automatically identifying the patient specific anatomic feature.
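As a minimal sketch of the segmentation step and the 1:1-scale requirement, the following uses simple intensity thresholding over a stack of 2D slices; the threshold, intensities, and voxel spacings are invented, and real systems would use far more sophisticated segmentation.

```python
import numpy as np

def segment(slices, threshold):
    """Binary mask per slice marking the anatomic feature (toy threshold)."""
    return slices > threshold

def physical_extent_mm(mask, spacing_mm):
    """Extent of the segmented feature along each axis, in millimetres,
    so the printed model reproduces the anatomy at 1:1 scale."""
    idx = np.argwhere(mask)
    size_vox = idx.max(axis=0) - idx.min(axis=0) + 1
    return size_vox * np.asarray(spacing_mm)

# Toy 3-slice "scan": a bright 2x2x2 block inside a dark volume.
vol = np.zeros((3, 4, 4))
vol[0:2, 1:3, 1:3] = 200.0
mask = segment(vol, threshold=100.0)
extent = physical_extent_mm(mask, spacing_mm=(1.0, 0.5, 0.5))
print(extent)  # extent along (slice, row, col) axes
```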

MULTI-VIEW NEURAL HUMAN RENDERING
20230027234 · 2023-01-26 ·

An image-based method of modeling and rendering a three-dimensional model of an object is provided. The method comprises: obtaining a three-dimensional point cloud at each frame of a synchronized, multi-view video of an object, wherein the video comprises a plurality of frames; extracting a feature descriptor for each point in the point cloud for the plurality of frames without storing the feature descriptor for each frame; producing a two-dimensional feature map for a target camera; and using an anti-aliased convolutional neural network to decode the feature map into an image and a foreground mask.
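The "two-dimensional feature map for a target camera" step can be sketched as splatting per-point feature descriptors into image space via a pinhole projection, keeping the nearest point per pixel. The CNN decoding stage is omitted, and the intrinsics and features below are invented for illustration.

```python
import numpy as np

def project_features(points, feats, K, hw):
    """Splat per-point features into a (h, w, d) map via pinhole projection."""
    h, w = hw
    fmap = np.zeros((h, w, feats.shape[1]))
    depth = np.full((h, w), np.inf)
    for p, f in zip(points, feats):
        x, y, z = p
        if z <= 0:
            continue
        u = int(K[0, 0] * x / z + K[0, 2])
        v = int(K[1, 1] * y / z + K[1, 2])
        if 0 <= u < w and 0 <= v < h and z < depth[v, u]:
            depth[v, u] = z          # z-buffer: nearest point wins
            fmap[v, u] = f
    return fmap

K = np.array([[10.0, 0, 2.0], [0, 10.0, 2.0], [0, 0, 1.0]])
pts = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 2.0]])   # same ray, two depths
feats = np.array([[1.0], [5.0]])
fmap = project_features(pts, feats, K, hw=(4, 4))
print(fmap[2, 2, 0])  # nearer point's feature survives the z-buffer
```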

METHOD AND APPARATUS FOR TRAINING A NEURAL NETWORK
20230230313 · 2023-07-20 ·

A first aspect of the invention provides a method of training a neural network for capturing volumetric video, comprising: generating a 3D model of a scene; using the 3D model to generate a high fidelity depth map; capturing a perceived depth map of the scene, having a field of view that is aligned with a field of view of the high fidelity depth map; and training the neural network based on the high fidelity depth map and the perceived depth map, wherein the high fidelity depth map has a higher fidelity to the scene than the perceived depth map has.
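The supervision signal pairs an aligned perceived depth map with a higher-fidelity target. As a deliberately trivial stand-in for the neural network, the sketch below fits a single scale-and-offset correction in closed form, just to show how the two maps are paired during training; everything here is illustrative.

```python
import numpy as np

def fit_correction(perceived, high_fidelity):
    """Least-squares scale/offset taking perceived depth to the target."""
    a, b = np.polyfit(perceived.ravel(), high_fidelity.ravel(), 1)
    return a, b

perceived = np.array([[1.0, 2.0], [3.0, 4.0]])
high_fid = 2.0 * perceived + 0.5     # toy high-fidelity map from the 3D model
a, b = fit_correction(perceived, high_fid)
residual = np.abs((a * perceived + b) - high_fid).mean()
print(a, b, residual)
```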

System and Method for Improved Generation of Avatars for Virtual Try-On of Garments

A system and a method for improved generation of 3D avatars for virtual try-on of garments are provided. Inputs from a first user type are received, via a first input unit, for generating one or more garment types in a graphical format. Further, a 3D avatar of a second user type is generated in a semi-automatic manner or an automatic manner based on a first input type or a second input type, respectively, received via a second input unit. The first input type comprises measurements of body specifications of the second user type, and the second input type comprises body images of the second user type. Further, the generated garments are rendered on the generated 3D avatar of the second user type for carrying out a virtual try-on operation.

Systems and methods for autonomous cardiac mapping

Methods and systems for autonomous cardiac mapping are disclosed. An example system for autonomous cardiac mapping of a heart chamber includes a processor being configured to acquire a representative geometric shell of the heart chamber, control a robotic device to autonomously navigate a mapping probe to a plurality of locations within the heart chamber based at least in part on the representative geometric shell, and generate a three-dimensional electroanatomical map of the heart chamber based on electrical data collected by the probe at the plurality of locations.

METHOD AND SYSTEM FOR ASSESSING VESSEL OBSTRUCTION BASED ON MACHINE LEARNING

Methods and systems are described for assessing a vessel obstruction. The methods and systems obtain a volumetric image dataset of a myocardium and at least one coronary vessel, wherein the myocardium comprises muscular tissue of the heart. A three-dimensional (3D) image corresponding to a coronary vessel of interest is created from the volumetric image dataset. Feature data that represents features of both the myocardium and the coronary vessel of interest is generated. At least some of the feature data is determined by a first machine learning-based model based on the 3D image. A second machine learning-based model is used to determine at least one parameter based on the feature data, wherein the at least one parameter represents functionally significant coronary lesion severity of the coronary vessel of interest.
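The two-stage structure (first model produces feature data from the 3D image; second model maps features to a severity parameter) might be sketched as below. Both "models" are trivial hand-written stand-ins for the learned ones, and all values are invented.

```python
import numpy as np

def first_model_features(vessel_volume):
    """Stand-in feature extractor: mean intensity and minimum cross-section."""
    cross_areas = (vessel_volume > 0).sum(axis=(1, 2))
    return np.array([vessel_volume.mean(), cross_areas.min()])

def second_model_severity(features, weights, bias):
    """Stand-in regressor producing a severity score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

vol = np.zeros((3, 4, 4))
vol[:, 1:3, 1:3] = 1.0           # toy vessel lumen
vol[1, 1, 1] = 0.0               # narrowing in the middle slice
feats = first_model_features(vol)
severity = second_model_severity(feats, weights=np.array([0.0, -1.0]),
                                 bias=3.0)
print(feats, severity)
```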

IMAGE PROCESSING APPARATUS AND METHOD

An image processing apparatus and method are provided. The image processing apparatus acquires a target image including a depth image of a scene, determines three-dimensional (3D) point cloud data corresponding to the depth image based on the depth image, and extracts an object included in the scene to acquire an object extraction result based on the 3D point cloud data.
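The depth-image-to-point-cloud step is commonly done by pinhole back-projection, as sketched below; the intrinsics here are invented for illustration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 2.0)               # flat surface 2 m away
pts = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(pts)
```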

SURFACE PROFILE ESTIMATION AND BUMP DETECTION FOR AUTONOMOUS MACHINE APPLICATIONS
20230230273 · 2023-07-20 ·

In various examples, surface profile estimation and bump detection may be performed based on a three-dimensional (3D) point cloud. The 3D point cloud may be filtered to a portion of the environment that includes drivable free-space and lies within a threshold height, to factor out objects or obstacles other than the driving surface and protuberances thereon. The 3D point cloud may be analyzed—e.g., using a sliding window of bounding shapes along a longitudinal or other heading direction—to determine one-dimensional (1D) signal profiles corresponding to heights along the driving surface. The profile itself may be used by a vehicle—e.g., an autonomous or semi-autonomous vehicle—to help in navigating the environment, and/or the profile may be used to detect bumps, humps, and/or other protuberances along the driving surface, in addition to a location, orientation, and geometry thereof.
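The sliding-window profile step can be sketched as binning filtered points along the heading direction and taking the per-window maximum height, then thresholding the resulting 1D signal for bumps. Window size and threshold values below are illustrative, not from the publication.

```python
import numpy as np

def height_profile(points, window=1.0, step=1.0):
    """1D max-height signal along the longitudinal (x) axis."""
    x, z = points[:, 0], points[:, 2]
    starts = np.arange(x.min(), x.max(), step)
    return np.array([z[(x >= s) & (x < s + window)].max(initial=0.0)
                     for s in starts])

def detect_bumps(profile, threshold=0.05):
    """Indices of windows whose height exceeds the road-surface threshold."""
    return np.flatnonzero(profile > threshold)

# Toy filtered point cloud: flat road with a speed bump near x = 2.5 m.
pts = np.array([[0.2, 0, 0.0], [1.1, 0, 0.01], [2.5, 0, 0.12], [3.4, 0, 0.0]])
profile = height_profile(pts)
print(profile, detect_bumps(profile))
```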

IMAGE PROCESSING SYSTEM AND METHOD

A computer-implemented method of determining a pose of each of a plurality of objects includes, for each given object: using image data and associated depth information to estimate a pose of the given object. The method includes iteratively updating the estimated poses by: sampling, for each given object, a plurality of points from a predetermined model of the given object transformed in accordance with the estimated pose of the given object; determining first occupancy data for each given object dependent on positions of the points sampled from the predetermined model, relative to a voxel grid containing the given object; determining second occupancy data for each given object dependent on positions of the points sampled from the predetermined models of the other objects, relative to the voxel grid containing the given object; and updating the estimated poses of the plurality of objects to reduce an occupancy penalty.
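The occupancy penalty in this abstract can be sketched as voxelising the points sampled from each posed model and counting voxels claimed by more than one object, i.e. physically impossible interpenetration; pose updates would then reduce that count. Grid resolution and point sets below are illustrative.

```python
import numpy as np

def voxel_occupancy(points, origin, voxel_size, dims):
    """Binary occupancy grid from points sampled off a posed object model."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(dims, dtype=bool)
    for i in idx:
        if np.all(i >= 0) and np.all(i < dims):
            grid[tuple(i)] = True
    return grid

def occupancy_penalty(grids):
    """Number of voxels occupied by two or more objects."""
    return int((np.sum(grids, axis=0) >= 2).sum())

# Two toy objects whose sampled points overlap in one voxel.
obj_a = np.array([[0.1, 0.1, 0.1], [1.1, 0.1, 0.1]])
obj_b = np.array([[1.2, 0.2, 0.2], [2.1, 0.1, 0.1]])
origin, vs, dims = np.zeros(3), 1.0, (3, 3, 3)
grids = np.stack([voxel_occupancy(o, origin, vs, dims)
                  for o in (obj_a, obj_b)])
print(occupancy_penalty(grids))  # both objects occupy voxel (1, 0, 0)
```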