G06T7/30

Method and device for inpainting of colourised three-dimensional point clouds

A method for colourising a three-dimensional point cloud, including surveying a setting with a surveying instrument to obtain a point cloud. Each point of the point cloud may be characterised by coordinates within an instrument coordinate system having an instrument center. The method may include capturing a first image of the setting with a first camera. Each pixel value of the first image is assigned coordinates within a first camera coordinate system having a first projection center as origin, the first projection center having a first parallax shift relative to the instrument center. The method may include transforming the point cloud from the instrument coordinate system into the first camera coordinate system, resulting in a first transformed point cloud; detecting one or more uncovered points within the first transformed point cloud which are openly visible from the first projection center; and, for each uncovered point, assigning the pixel value having corresponding coordinates in the first camera coordinate system.
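
The pipeline in this abstract (transform into the camera frame, visibility test from the projection center, colour assignment) can be sketched as follows. This is a minimal reading of the abstract, not the patented method: the function and parameter names are invented, and a pinhole projection with a per-pixel z-buffer stands in for the actual uncovered-point detection.

```python
import numpy as np

def colourise_point_cloud(points, image, R, t, K):
    """Assign image colours to the points that are openly visible
    ("uncovered") from the camera's projection center.

    points : (N, 3) array in the instrument coordinate system
    image  : (H, W, 3) first camera image
    R, t   : rotation and translation from the instrument coordinate
             system into the first camera coordinate system (t carries
             the parallax shift between instrument and projection center)
    K      : (3, 3) pinhole intrinsic matrix of the first camera
    """
    # Step 1: transform the point cloud into the camera coordinate system.
    cam = points @ R.T + t

    # Step 2: project onto the image plane.
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]
    depth = cam[:, 2]

    h, w = image.shape[:2]
    px = np.round(uv).astype(int)
    in_frame = (depth > 0) & (px[:, 0] >= 0) & (px[:, 0] < w) \
                           & (px[:, 1] >= 0) & (px[:, 1] < h)

    # Step 3: z-buffer visibility test. Per pixel, only the nearest
    # point counts as "uncovered"; points behind it stay uncoloured.
    zbuf = np.full((h, w), np.inf)
    for i in np.flatnonzero(in_frame):
        u, v = px[i]
        zbuf[v, u] = min(zbuf[v, u], depth[i])

    # Step 4: assign the pixel value at the corresponding coordinates.
    colours = np.zeros((len(points), 3), dtype=image.dtype)
    visible = np.zeros(len(points), dtype=bool)
    for i in np.flatnonzero(in_frame):
        u, v = px[i]
        if depth[i] == zbuf[v, u]:
            visible[i] = True
            colours[i] = image[v, u]
    return colours, visible
```

With identity extrinsics and intrinsics, two points on the same ray show the intended behaviour: only the nearer one is coloured, the occluded one is not.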

Method and system for assistance in guiding an endovascular instrument

A system for assisting in guiding an endovascular instrument within the vascular structures of an anatomical region of interest of a patient. The system includes an imaging device for capturing three-dimensional images of parts of the patient's body, a programmable device, and a viewing unit. The imaging device captures partially superposed fluoroscopic images of the region, and the programmable device forms a first augmented image, representative of a complete panorama of the bones of the region, and cooperates with the imaging device to obtain a second augmented image including a representation of the vascular structures of the region. The imaging device then captures a current fluoroscopic image of a part of the region; the programmable device registers the current fluoroscopic image with respect to the first augmented image, and locates and displays, on the viewing unit, the image region corresponding to the current fluoroscopic image in the second augmented image.
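
The registration and localization step (finding where the current fluoroscopic frame sits inside the first augmented panorama, so that the matching region of the second augmented image can be displayed) can be illustrated with a brute-force normalized cross-correlation search. All names are invented, and the toy exhaustive search over grayscale images is a stand-in for the system's actual registration algorithm:

```python
import numpy as np

def locate_in_panorama(current, panorama):
    """Return the (row, col) position in `panorama` where the 2D frame
    `current` correlates best, using exhaustive normalized
    cross-correlation. The matched window is what would be looked up
    in the second augmented image."""
    ch, cw = current.shape
    ph, pw = panorama.shape
    c = current - current.mean()
    best, best_score = (0, 0), -np.inf
    for y in range(ph - ch + 1):
        for x in range(pw - cw + 1):
            win = panorama[y:y + ch, x:x + cw]
            w = win - win.mean()
            denom = np.sqrt((c ** 2).sum() * (w ** 2).sum())
            score = (c * w).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best, best_score = (y, x), score
    return best
```

An exact crop of the panorama scores 1.0 and is therefore recovered at its true offset; production systems would use a pyramid or gradient-based registration instead of this O(H·W) scan.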

Localization of a surveying instrument
11568559 · 2023-01-31

A method for surveying an environment by a movable surveying instrument configured to be carried by a human, with a progressional capturing of 2D images by at least one camera and application of a visual simultaneous localization and mapping (VSLAM) or visual-inertial simultaneous localization and mapping (VISLAM) algorithm, with a progressional deriving of a sparse evolving point cloud of at least part of the environment and a progressional deriving of a trajectory of movement. The method comprises a progressional matching of the sparse evolving point cloud with a known CAD-geometry, with a minimizing of a function configured to model a distance between the sparse point cloud and the known CAD-geometry, and deriving therefrom a spatial localization and orientation of the surveying instrument. At least one surveying measurement value of the environment, obtained by a spatial measurement unit, is combined with the sparse point cloud or the plan information.
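
The matching step (minimizing a function that models the distance between the sparse point cloud and the known CAD-geometry, then deriving the instrument's localization and orientation) resembles iterative-closest-point alignment. The sketch below assumes the CAD-geometry is available as sampled surface points and uses a brute-force nearest-neighbour search with a closed-form Kabsch update per iteration; names and choices are illustrative, not the patented minimization:

```python
import numpy as np

def match_to_cad(sparse_pts, cad_pts, iters=20):
    """Align an evolving sparse point cloud (N, 3) to points sampled from
    a known CAD geometry (M, 3) by iteratively minimizing the summed
    squared point-to-nearest-point distance.

    Returns R, t such that sparse_pts @ R.T + t approximates cad_pts,
    i.e. the pose of the sparse cloud relative to the plan."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = sparse_pts @ R.T + t
        # Correspondences: brute-force nearest CAD point per sparse point.
        d2 = ((moved[:, None, :] - cad_pts[None, :, :]) ** 2).sum(-1)
        nn = cad_pts[d2.argmin(axis=1)]
        # Kabsch: closed-form rigid update minimizing the distance model.
        mu_s, mu_n = moved.mean(0), nn.mean(0)
        H = (moved - mu_s).T @ (nn - mu_n)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        R = dR @ R
        t = dR @ t + (mu_n - dR @ mu_s)
    return R, t
```

For a cloud that is a small translation of the CAD samples, the nearest-neighbour correspondences are exact and the method converges in one iteration.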

Method and systems for anatomy/view classification in x-ray imaging

Various methods and systems are provided for x-ray imaging. In one embodiment, a method for an image pasting examination comprises acquiring, via an optical camera and/or depth camera, image data of a subject, controlling an x-ray source and an x-ray detector according to the image data to acquire a plurality of x-ray images of the subject, and stitching the plurality of x-ray images into a single x-ray image. In this way, optimal exposure techniques may be used for individual acquisitions in an image pasting examination such that the optimal dose is utilized, stitching quality is improved, and registration failures are avoided.
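
The stitching step of an image pasting examination can be illustrated in one dimension: find the row overlap between two vertically adjacent acquisitions that minimizes a difference measure, then paste past the overlap. Real examinations register in two dimensions and blend exposures; the integer search and all names below are illustrative assumptions:

```python
import numpy as np

def stitch_pair(top, bottom, nominal_overlap=8, search=4):
    """Stitch two acquisitions that overlap by roughly `nominal_overlap`
    rows. The overlap minimizing the mean squared difference between the
    bottom band of `top` and the top band of `bottom` is used, which is
    a crude stand-in for the registration step."""
    best_ov, best_err = nominal_overlap, np.inf
    for ov in range(max(1, nominal_overlap - search),
                    nominal_overlap + search + 1):
        err = np.mean((top[-ov:].astype(float)
                       - bottom[:ov].astype(float)) ** 2)
        if err < best_err:
            best_ov, best_err = ov, err
    # Paste: keep `top` whole, append `bottom` past the estimated overlap.
    return np.vstack([top, bottom[best_ov:]])
```

Splitting a synthetic image into two chunks with a known 6-row overlap and re-stitching reproduces the original exactly.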

AI-driven longitudinal liver focal lesion analysis

Systems and methods for performing an assessment of a lesion are provided. A plurality of input medical images of a lesion is received. The plurality of input medical images comprises an initial input medical image and one or more additional input medical images. The initial input medical image comprises a region of interest around the lesion. A mask of the lesion is curated for the initial input medical image based on the region of interest and a set of candidate masks. The region of interest in the initial input medical image is propagated to the one or more additional input medical images based on prior registration transformations. A mask of the lesion is curated for each of the one or more additional input medical images based on the propagated regions of interest and the set of candidate masks. One or more assessments of the lesion are performed based on the mask for the initial input medical image, the masks for the one or more additional input medical images, and prior assessments of lesions. Results of the one or more assessments of the lesion are output.
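
Two of the steps above, propagating the region of interest via a prior registration transformation and curating a mask from a set of candidates, can be sketched as follows. The 2x3 affine transform, the box representation of the ROI, and the fraction-inside-ROI selection criterion are illustrative assumptions about what "propagated" and "curated" mean here, not the disclosed method:

```python
import numpy as np

def propagate_roi(roi, A):
    """Warp an ROI box (x0, y0, x1, y1) by a 2x3 affine registration
    transform A, returning the axis-aligned box of the warped corners."""
    x0, y0, x1, y1 = roi
    corners = np.array([[x0, y0, 1.], [x1, y0, 1.],
                        [x0, y1, 1.], [x1, y1, 1.]])
    w = corners @ A.T  # (4, 2) warped corner coordinates
    return (w[:, 0].min(), w[:, 1].min(), w[:, 0].max(), w[:, 1].max())

def curate_mask(roi, candidates):
    """Pick from the candidate set the binary lesion mask with the
    largest fraction of its pixels inside the ROI box (one plausible
    reading of curating 'based on the region of interest and a set of
    candidate masks')."""
    x0, y0, x1, y1 = roi
    def inside_fraction(m):
        ys, xs = np.nonzero(m)
        if len(xs) == 0:
            return 0.0
        ok = (xs >= x0) & (xs <= x1) & (ys >= y0) & (ys <= y1)
        return ok.mean()
    return max(candidates, key=inside_fraction)
```

A translation-only registration shifts the ROI box accordingly, and a candidate mask lying inside the ROI wins over one lying outside it.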
