G06T17/205

MODEL-BASED IMAGE SEGMENTATION

A method and system for mapping boundary detecting features of at least one source triangulated mesh of known topology to a target triangulated mesh of arbitrary topology. A region of interest in a volumetric image associated with each triangle of the target triangulated mesh is provided to a feature mapping network. The feature mapping network assigns a feature selection vector to each triangle of the target triangulated mesh. The associated region of interest and assigned feature selection vector for each triangle of the target triangulated mesh are provided to a boundary detection network. A predicted boundary based on features of the associated region of interest selected by the assigned feature selection vector is obtained from the boundary detection network.
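As a minimal sketch of the idea (not the networks themselves), the feature selection vector assigned to a triangle can be read as per-feature weights over candidate boundary responses sampled from that triangle's region of interest; the feature lists, the weighting scheme, and the argmax boundary rule below are all assumptions for illustration:

```python
def select_features(roi_features, selection_vector):
    """Combine candidate boundary features for one triangle.

    roi_features: one list of responses per candidate feature
        (e.g. gradient magnitude, intensity step), each sampled at
        the same positions within the region of interest.
    selection_vector: per-feature weights assigned by the feature
        mapping network (assumed non-negative, summing to 1).
    Returns the sample index of the predicted boundary.
    """
    n_samples = len(roi_features[0])
    combined = [0.0] * n_samples
    for weight, feature in zip(selection_vector, roi_features):
        for i, value in enumerate(feature):
            combined[i] += weight * value
    # the predicted boundary is the sample with the strongest
    # combined feature response
    return max(range(n_samples), key=lambda i: combined[i])
```

With a one-hot selection vector this reduces to picking the boundary from a single feature, which is the degenerate case of the mapping.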

Viewpoint dependent brick selection for fast volumetric reconstruction

A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low usage of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from a field of view of an image sensor from which the image data used to create the 3D reconstruction are obtained.
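A conservative frustum cull of this kind can be sketched as a plane test on each brick of the volume: a brick is safely discarded when all of its corners fall outside at least one frustum plane. The corner and plane representations below are assumptions for the example:

```python
def outside_any_plane(brick_corners, frustum_planes):
    """Conservative frustum cull for one brick of the volume.

    brick_corners: the brick's eight (x, y, z) corner tuples.
    frustum_planes: (a, b, c, d) plane tuples; a point (x, y, z) is
        on the inside of a plane when a*x + b*y + c*z + d >= 0.
    Returns True when the brick can be culled.
    """
    for a, b, c, d in frustum_planes:
        if all(a*x + b*y + c*z + d < 0 for x, y, z in brick_corners):
            return True   # whole brick outside this plane -> cull
    return False          # brick may intersect the frustum -> keep
```

The test is conservative: a brick outside the frustum but not fully outside any single plane is kept, which only costs extra work, never correctness. The depth-image cull described in the abstract would be a second, analogous per-brick test.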

Generating approximations of cardiograms from different source configurations
11576624 · 2023-02-14

Systems are provided for generating data representing electromagnetic states of a heart for medical, scientific, research, and/or engineering purposes. The systems generate the data based on source configurations such as dimensions of, and scar or fibrosis or pro-arrhythmic substrate location within, a heart and a computational model of the electromagnetic output of the heart. The systems may dynamically generate the source configurations to provide representative source configurations that may be found in a population. For each source configuration of the electromagnetic source, the systems run a simulation of the functioning of the heart to generate modeled electromagnetic output (e.g., an electromagnetic mesh for each simulation step with a voltage at each point of the electromagnetic mesh) for that source configuration. The systems may generate a cardiogram for each source configuration from the modeled electromagnetic output of that source configuration for use in predicting the source location of an arrhythmia.
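The final step, deriving a cardiogram trace from the modeled electromagnetic output, can be sketched as a weighted sum over the mesh voltages at each simulation step. The per-point lead weights here are a crude stand-in for the true forward model and are an assumption for the example:

```python
def cardiogram_from_simulation(voltage_frames, lead_weights):
    """Derive a single-lead cardiogram trace from simulated output.

    voltage_frames: one list of per-mesh-point voltages per
        simulation step (the electromagnetic mesh over time).
    lead_weights: per-mesh-point weights standing in for how
        strongly each point contributes to the measured lead.
    Returns one trace sample per simulation step.
    """
    return [sum(w * v for w, v in zip(lead_weights, frame))
            for frame in voltage_frames]
```

Repeating this for each simulated source configuration yields the library of cardiograms the abstract describes for predicting the source location of an arrhythmia.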

Systems and methods for scanning a patient in an imaging system

The present disclosure relates to systems and methods for scanning a patient in an imaging system. The imaging system may include at least one camera directed at the patient. The systems and methods may obtain a plurality of images of the patient that are captured by the at least one camera. Each of the plurality of images may correspond to one of a series of time points. The systems and methods may also determine a motion of the patient over the series of time points based on the plurality of images of the patient. The systems and methods may further determine whether the patient is ready for a scan based on the motion of the patient, and generate control information of the imaging system for scanning the patient in response to determining that the patient is ready for a scan.
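One simple way to turn such image series into a readiness decision is a frame-difference motion metric with a stillness window; the flat-list image format, the mean-absolute-difference metric, and the threshold rule are assumptions for this sketch:

```python
def patient_ready(frames, motion_threshold, still_frames_required):
    """Decide whether the patient is ready for scanning.

    frames: list of images, each a flat list of pixel intensities,
        one image per time point.
    The patient is considered ready when the mean absolute
    frame-to-frame difference stays below motion_threshold for the
    last still_frames_required consecutive frame pairs.
    """
    motions = []
    for prev, cur in zip(frames, frames[1:]):
        diff = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        motions.append(diff)
    recent = motions[-still_frames_required:]
    return (len(recent) == still_frames_required
            and all(m < motion_threshold for m in recent))
```

In this sketch the control information would be issued only once the function returns True.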

Systems and methods for orthosis design

The present disclosure is related to systems and methods for orthosis design. The method includes obtaining a three-dimensional (3D) model associated with a subject. The method includes obtaining one or more reference images associated with the subject. The method includes determining, based on the 3D model and the one or more reference images, orthosis design data for the subject. The orthosis design data may be used to determine an orthosis for the subject.

DIGITAL REALITY PLATFORM PROVIDING DATA FUSION FOR GENERATING A THREE-DIMENSIONAL MODEL OF THE ENVIRONMENT

The present invention relates to three-dimensional reality capturing of an environment, wherein data of various kinds of measurement devices are fused to generate a three-dimensional model of the environment. In particular, the invention relates to a computer-implemented method for registration and visualization of a 3D model provided by various types of reality capture devices and/or by various surveying tasks.

MODEL-BASED IMAGE SEGMENTATION

Presented are concepts for initialising a model for model-based segmentation of an image, which use specific landmarks (e.g. detected using other techniques) to initialise the segmentation mesh. Using such an approach, embodiments need not be limited to predefined model transformations, but can initialise a segmentation mesh with arbitrary shape. In this way, embodiments may provide an image segmentation algorithm that not only delivers a robust surface-based segmentation result but also does so for target structures whose shapes vary strongly.
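A minimal sketch of landmark-driven initialisation: map the mesh by the translation and uniform scale that align the model landmarks' centroid and spread to the landmarks detected in the image. Rotation and non-rigid deformation are deliberately omitted, so this is only the simplest instance of the idea, and all names below are assumptions:

```python
def init_mesh_from_landmarks(mesh_points, model_landmarks, image_landmarks):
    """Initialise a segmentation mesh from detected landmarks.

    mesh_points, model_landmarks, image_landmarks: lists of
        (x, y, z) tuples; model and image landmarks correspond
        index by index.
    """
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    def spread(pts, c):
        return sum(sum((p[i] - c[i]) ** 2 for i in range(3)) for p in pts) ** 0.5

    c_model = centroid(model_landmarks)
    c_image = centroid(image_landmarks)
    scale = spread(image_landmarks, c_image) / spread(model_landmarks, c_model)
    # move each mesh point with the landmark-derived similarity map
    return [tuple(c_image[i] + scale * (p[i] - c_model[i]) for i in range(3))
            for p in mesh_points]
```

Because the transformation is derived from the detected landmarks rather than drawn from a predefined family, the same code initialises the mesh for arbitrarily placed and sized target structures.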

MESH CORRECTION DEPENDING ON MESH NORMAL DIRECTION

The invention relates to a system and computer-implemented method for enabling correction of a segmentation of an anatomical structure in 3D image data. The segmentation may be provided by a mesh which is applied to the 3D image data to segment the anatomical structure. The correction may for example involve a user directly or indirectly selecting a mesh part, such as a mesh point, that needs to be corrected. The behaviour of the correction, e.g., in terms of direction, radius/neighbourhood or strength, may then be dependent on the mesh normal direction, and in some embodiments, on a difference between the mesh normal direction and the orientation of the viewing plane.
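The dependence on the mesh normal can be sketched as a scalar attenuation of the correction: parts of the mesh facing the viewing plane head-on are corrected at full strength, while near-tangential parts are attenuated. The dot-product weighting is an assumption for this example:

```python
def correction_strength(mesh_normal, view_normal, base_strength):
    """Scale a mesh correction by the mesh normal direction.

    mesh_normal: unit normal of the selected mesh part.
    view_normal: unit normal of the viewing plane.
    Corrections applied head-on act at full strength; corrections
    on parts nearly tangential to the view are attenuated.
    """
    alignment = abs(sum(m * v for m, v in zip(mesh_normal, view_normal)))
    return base_strength * alignment
```

The same alignment term could equally modulate the correction's radius or neighbourhood rather than its strength, matching the behaviours listed in the abstract.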

Mesh updates via mesh frustum cutting

Various implementations or examples set forth a method for scanning a three-dimensional (3D) environment. The method includes generating, based on sensor data captured by a depth sensor on a device, one or more 3D meshes representing a physical space, wherein each of the 3D meshes comprises a corresponding set of vertices and a corresponding set of faces comprising edges between pairs of vertices; determining that a mesh is visible in a current frame captured by an image sensor on the device; determining, based on the corresponding set of vertices and the corresponding set of faces for the mesh, a portion of the mesh that lies within a view frustum associated with the current frame; and updating the one or more 3D meshes by texturing the portion of the mesh with one or more pixels in the current frame onto which the portion is projected.
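Determining the portion of a mesh inside the view frustum can be sketched as a per-face plane test over the face's vertices; the half-space plane representation and the all-vertices-inside rule are assumptions for this example:

```python
def faces_in_frustum(vertices, faces, frustum_planes):
    """Return indices of mesh faces lying entirely within the
    view frustum of the current frame.

    vertices: list of (x, y, z) points.
    faces: list of (i, j, k) vertex-index triples.
    frustum_planes: (a, b, c, d) plane tuples; a point (x, y, z) is
        inside a plane when a*x + b*y + c*z + d >= 0.
    """
    def inside(p):
        return all(a*p[0] + b*p[1] + c*p[2] + d >= 0
                   for a, b, c, d in frustum_planes)

    return [f_idx for f_idx, face in enumerate(faces)
            if all(inside(vertices[v]) for v in face)]
```

Only the faces returned here would then be textured with the pixels of the current frame onto which they project; faces straddling the frustum boundary would in practice be cut along it rather than dropped.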

Multi-characteristic remeshing for graphical objects
11574444 · 2023-02-07

A multi-characteristic remeshing system that generates remeshed 3D graphical surfaces can include a compact geometric descriptive language (“CGDL”) conversion module, one or more geometric characteristic parsing modules, and a geometric computation module. The CGDL conversion module receives an input mesh for a 3D graphical object and CGDL source text that describes target characteristics of an output mesh of the 3D graphical object. Each geometric characteristic parsing module identifies inherent geometric characteristics of the input mesh and generates a geometric characteristic map. The geometric characteristic map includes instructions to generate the output mesh with a respective target characteristic; the instructions describe a relationship between the inherent geometric characteristics and that target characteristic. The geometric computation module generates the output mesh with the target characteristics, based on the geometric characteristic maps from the geometric characteristic parsing modules.