G06T2200/04

Systems and methods for scanning a patient in an imaging system

The present disclosure relates to systems and methods for scanning a patient in an imaging system. The imaging system may include at least one camera directed at the patient. The systems and methods may obtain a plurality of images of the patient captured by the at least one camera, each image corresponding to one of a series of time points. The systems and methods may also determine a motion of the patient over the series of time points based on the plurality of images. The systems and methods may further determine whether the patient is ready for a scan based on the motion of the patient and, in response to determining that the patient is ready, generate control information for the imaging system to scan the patient.
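The readiness check described above could be sketched as follows. This is a minimal illustration, not the patented method: the frame-difference motion metric and the threshold are assumptions introduced here.

```python
def mean_abs_diff(frame_a, frame_b):
    """Mean absolute pixel difference between two equal-sized grayscale frames."""
    n = len(frame_a) * len(frame_a[0])
    return sum(abs(a - b)
               for row_a, row_b in zip(frame_a, frame_b)
               for a, b in zip(row_a, row_b)) / n

def patient_ready(frames, motion_threshold=2.0):
    """Treat the patient as 'ready for a scan' when the motion between every
    pair of consecutive frames stays below the threshold."""
    diffs = [mean_abs_diff(f0, f1) for f0, f1 in zip(frames, frames[1:])]
    return all(d < motion_threshold for d in diffs)
```

A real system would use a more robust motion estimate (e.g. on detected body landmarks), but the control flow — images over time points, a motion measure, a readiness decision — follows the abstract.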

Spatial construction using guided surface detection
11580658 · 2023-02-14

Described herein are a system and methods for efficiently using depth and image information for a space to generate a 3D representation of that space. In some embodiments, an indication of one or more points is received with respect to image information and mapped to corresponding points within depth information. A boundary may then be calculated for each of the points based on the depth information at, and surrounding, each point. Each boundary is extended outward until junctions are identified that bound it in a given direction. The system may determine whether the process is complete based on whether any of the calculated boundaries remain unlimited in extent in any direction. Once the system determines that each boundary is limited in extent, a 3D representation of the space may be generated based on the identified junctions and/or boundaries.
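The outward boundary growth can be illustrated in one dimension: starting from a seed point in the depth data, extend in each direction until a depth discontinuity (a "junction") is reached, at which point the boundary is limited in extent. The depth-jump tolerance is an assumption for this sketch.

```python
def grow_boundary(depth_row, seed, tol=0.05):
    """Extend a boundary outward from a seed index along a row of depth
    values, stopping at a 'junction' where depth jumps by more than tol."""
    d0 = depth_row[seed]
    left = seed
    while left > 0 and abs(depth_row[left - 1] - d0) <= tol:
        left -= 1
    right = seed
    while right < len(depth_row) - 1 and abs(depth_row[right + 1] - d0) <= tol:
        right += 1
    # Boundary is limited in extent once both stopping junctions are found.
    return left, right
```

The full method works in 2-D/3-D and combines the junctions across all seed points into a representation of the space; this shows only the per-point growth step.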

3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network

A 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network is described. A machine learning method for low-dose computed tomography (LDCT) image correction is provided. The method includes training, by a training circuitry, a neural network (NN) based, at least in part, on two-dimensional (2-D) training data. The 2-D training data includes a plurality of 2-D training image pairs. Each 2-D image pair includes one training input image and one corresponding target output image. The training includes adjusting at least one of a plurality of 2-D weights based, at least in part, on an objective function. The method further includes refining, by the training circuitry, the NN based, at least in part, on three-dimensional (3-D) training data. The 3-D training data includes a plurality of 3-D training image pairs. Each 3-D training image pair includes a plurality of adjacent 2-D training input images and at least one corresponding target output image. The refining includes adjusting at least one of a plurality of 3-D weights based, at least in part, on the plurality of 2-D weights and based, at least in part, on the objective function. The plurality of 2-D weights includes the at least one adjusted 2-D weight.
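The abstract does not specify how 2-D weights seed the 3-D weights; one common transfer scheme (an assumption here, in the style of "inflated" convolutions) replicates each trained 2-D kernel along the new depth axis and rescales it so the initial 3-D response matches the 2-D one:

```python
def inflate_2d_kernel(kernel_2d, depth):
    """Initialize a 3-D conv kernel from trained 2-D weights by replicating
    the 2-D kernel 'depth' times along the new axis and dividing by depth,
    so the initial 3-D response on depth-constant input equals the 2-D one."""
    return [[[w / depth for w in row] for row in kernel_2d]
            for _ in range(depth)]
```

The inflated weights would then be refined on the 3-D training pairs (stacks of adjacent slices) under the same objective function, as the abstract describes.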

MODEL-BASED IMAGE SEGMENTATION

Presented are concepts for initializing a model for model-based segmentation of an image, which use specific landmarks (e.g. detected using other techniques) to initialize the segmentation mesh. Using such an approach, embodiments need not be limited to predefined model transformations, but can initialize a segmentation mesh with arbitrary shape. In this way, embodiments may provide an image segmentation algorithm that not only delivers a robust surface-based segmentation result but also does so for target structures whose shapes vary strongly.
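One simple way to initialize a mesh from landmarks — a sketch only, since the abstract leaves the fitting method open — is to estimate a translation plus isotropic scale that maps the model's landmark points onto the detected landmarks, then apply that transform to every mesh vertex:

```python
def initialize_mesh(mesh_points, model_landmarks, detected_landmarks):
    """Fit translation + isotropic scale from model landmarks to detected
    landmarks, then apply the transform to all mesh vertices."""
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(len(pts[0])))

    def spread(pts, c):
        # Root of summed squared distances to the centroid.
        return sum(sum((p[i] - c[i]) ** 2 for i in range(len(c)))
                   for p in pts) ** 0.5

    cm = centroid(model_landmarks)
    cd = centroid(detected_landmarks)
    s = spread(detected_landmarks, cd) / spread(model_landmarks, cm)
    return [tuple(cd[i] + s * (p[i] - cm[i]) for i in range(len(p)))
            for p in mesh_points]
```

A full similarity or affine fit (including rotation) would be used in practice; the point is that detected landmarks, not a predefined transformation, place the mesh.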

LEARNING-BASED ACTIVE SURFACE MODEL FOR MEDICAL IMAGE SEGMENTATION
20230043026 · 2023-02-09

A learning-based active surface model for medical image segmentation uses a method including: (a) data generation: obtaining medical images and associated ground truths, and splitting the images into a training set and a testing set; (b) raw segmentation: constructing a surface initialization network, the parameters of the network trained on the images and labels in the training set; (c) surface initialization: segmenting the images with the surface initialization network, and generating point cloud data as the initial surface from the segmentation; (d) fine segmentation: constructing a surface evolution network, the parameters of the network trained on the initial surface obtained in step (c); (e) surface evolution: deforming the initial surface points along predicted offsets to obtain the predicted surface, the offsets being the predictions of the surface evolution network; (f) surface reconstruction: reconstructing the 3D volumes from the set of predicted surface points to obtain the final segmentation results.
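Step (e), the surface evolution, reduces to moving each initial surface point along the offset the network predicts for it. A minimal sketch (the step size is an assumption; the networks themselves are not shown):

```python
def evolve_surface(points, offsets, step=1.0):
    """Step (e): deform each initial surface point along its predicted
    offset vector to obtain the predicted surface."""
    return [tuple(p + step * o for p, o in zip(point, offset))
            for point, offset in zip(points, offsets)]
```

The predicted point set is then passed to step (f), where a 3D volume is reconstructed from it to give the final segmentation.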

Method And Apparatus for Image Registration
20230038125 · 2023-02-09

An image registration apparatus includes at least one processor configured to project, onto a first model, a first image generated based on an image obtained from a first camera to generate a first intermediate image; map the first intermediate image to a first output model to generate a first output image; project, onto a second model, a second image generated based on an image obtained from a second camera to generate a second intermediate image; map the second intermediate image to a second output model to generate a second output image; and determine a match rate between the first output image and the second output image and transform at least one of the first model and the second model based on the determined match rate and a preset reference match rate.
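The final comparison step can be sketched as follows. The abstract does not define the match-rate metric, so the per-pixel agreement measure and the reference rate here are assumptions:

```python
def match_rate(img_a, img_b, tol=0):
    """Fraction of pixel positions where the two output images agree
    (within tol)."""
    pixels = [(a, b)
              for row_a, row_b in zip(img_a, img_b)
              for a, b in zip(row_a, row_b)]
    agree = sum(1 for a, b in pixels if abs(a - b) <= tol)
    return agree / len(pixels)

def needs_transform(rate, reference_rate=0.95):
    """Transform at least one projection model when the determined match
    rate falls below the preset reference match rate."""
    return rate < reference_rate
```

When `needs_transform` is true, the apparatus would adjust the first and/or second model and repeat the projection-and-compare loop.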

ROBOTIC SYSTEM WITH IMAGE-BASED SIZING MECHANISM AND METHODS FOR OPERATING THE SAME

A system and method for estimating aspects of target objects and/or associated task implementations is disclosed.

Technique for transferring a registration of image data of a surgical object from one surgical navigation system to another surgical navigation system

A method, a controller, and a surgical hybrid navigation system for transferring a registration of three-dimensional image data of a surgical object from a first to a second surgical navigation system are described. A first tracker detectable by a first detector of the first surgical navigation system is arranged in a fixed spatial relationship with the surgical object, and a second tracker detectable by a second detector of the second surgical navigation system is arranged in a fixed spatial relationship with the surgical object. The method includes registering the three-dimensional image data of the surgical object in a first coordinate system of the first surgical navigation system, and determining a first position and orientation of the first tracker in the first coordinate system and a second position and orientation of the second tracker in a second coordinate system of the second surgical navigation system.
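The transfer itself amounts to composing rigid transforms: if both trackers are rigidly fixed to the same object, the registration expressed in the first coordinate system can be re-expressed in the second. This sketch uses homogeneous 2-D transforms for brevity and assumes, for illustration only, that the two tracker frames coincide on the object:

```python
def mat_mul(a, b):
    """3x3 matrix product (homogeneous 2-D transforms for brevity)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def invert_rigid(t):
    """Invert a rigid transform [R|p]: the inverse is [R^T | -R^T p]."""
    r = [[t[j][i] for j in range(2)] for i in range(2)]
    p = [-(r[i][0] * t[0][2] + r[i][1] * t[1][2]) for i in range(2)]
    return [[r[0][0], r[0][1], p[0]],
            [r[1][0], r[1][1], p[1]],
            [0.0, 0.0, 1.0]]

def transfer_registration(reg_in_c1, tracker1_in_c1, tracker2_in_c2):
    """Re-express a registration from coordinate system C1 in C2:
    reg_in_c2 = T2 * T1^{-1} * reg_in_c1, where T1, T2 are the tracker
    poses in their respective systems."""
    return mat_mul(tracker2_in_c2,
                   mat_mul(invert_rigid(tracker1_in_c1), reg_in_c1))
```

A real system would use 4x4 transforms in 3-D and account for the fixed offset between the two trackers on the object; the composition pattern is the same.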

Technologies for time-delayed augmented reality presentations

Technologies for time-delayed augmented reality (AR) presentations include determining the location of each of a plurality of user AR systems within a presentation site and determining, for each user AR system, a time delay of an AR sensory stimulus event of an AR presentation based on the location of that user AR system within the presentation site. The AR sensory stimulus event is presented to each user AR system based on the determined time delay associated with it. Each user AR system generates the AR sensory stimulus event based on a timing parameter that defines its time delay, so that the generation of the AR sensory stimulus event is time-delayed according to the location of the user AR system within the presentation site.
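A plausible per-user timing parameter — an illustrative assumption, since the abstract does not specify how delay relates to location — is the propagation time from the stimulus source to the user's position:

```python
import math

def stimulus_delay(user_pos, source_pos, speed=343.0):
    """Timing parameter for one user AR system: delay the sensory stimulus
    by the propagation time from the event source to the user's location.
    The default speed (speed of sound in air, m/s) is an assumption."""
    return math.dist(user_pos, source_pos) / speed
```

Each user AR system would then schedule the stimulus event at the shared start time plus its own delay.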

Designing a 3D modeled object via user-interaction
11556678 · 2023-01-17

A computer-implemented method for designing a 3D modeled object via user-interaction. The method includes obtaining the 3D modeled object and a machine-learnt decoder. The machine-learnt decoder is a differentiable function taking values in a latent space and outputting values in a 3D modeled object space. The method further includes defining a deformation constraint for a part of the 3D modeled object and determining an optimal latent vector. The optimal latent vector minimizes an energy defined over latent vectors. The energy comprises a term which penalizes, for each explored latent vector, violation of the deformation constraint by the result of applying the decoder to that latent vector. The method further includes applying the decoder to the optimal latent vector. This constitutes an improved method for designing a 3D modeled object via user-interaction.
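The latent optimization can be sketched generically: minimize the energy of the decoded output over the latent vector. The abstract requires only that the decoder be differentiable; this toy version approximates gradients by finite differences so it runs with any callable decoder (the learning rate, step count, and toy decoder/energy are assumptions):

```python
def optimize_latent(decoder, energy, z0, lr=0.1, steps=200, eps=1e-4):
    """Minimize energy(decoder(z)) over the latent vector z by gradient
    descent, using central finite differences to approximate the gradient."""
    z = list(z0)
    for _ in range(steps):
        grads = []
        for i in range(len(z)):
            zp = z[:]; zp[i] += eps
            zm = z[:]; zm[i] -= eps
            grads.append((energy(decoder(zp)) - energy(decoder(zm))) / (2 * eps))
        z = [zi - lr * g for zi, g in zip(z, grads)]
    return z
```

In the described method, the energy's penalty term measures how far the decoded object deviates from the user's deformation constraint; applying the decoder to the returned optimal latent vector yields the edited 3D object.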