G06T7/30

Method for supporting a user, computer program product, data medium and imaging system
11576557 · 2023-02-14

A method for supporting a user, together with a corresponding computer program product, data medium, and imaging system, is provided. According to the method, a three-dimensional (3D) data set depicting a target object is provided, and at least one two-dimensional (2D) image of the target object is automatically acquired. The 2D image and the 3D data set are automatically registered with each other by a 2D/3D registration. The spatial direction in which the 2D/3D registration exhibits the greatest uncertainty is automatically determined. To support the user, a signal for aligning an instrument provided for examining the target object is then automatically generated and output as a function of that spatial direction.
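
The abstract does not say how the least-certain direction is found; a common approach (an assumption here, not necessarily the patented method) is to examine the curvature of the registration cost at the optimum: the Hessian eigenvector with the smallest eigenvalue marks the flattest, i.e. most uncertain, direction. A minimal sketch over translation parameters, with a toy cost that is nearly flat along the viewing axis:

```python
import numpy as np

def least_certain_direction(cost, t_opt, eps=1e-3):
    """Direction of greatest registration uncertainty at optimum t_opt."""
    n = len(t_opt)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.eye(n)[i] * eps
            e_j = np.eye(n)[j] * eps
            # central finite-difference approximation of d2C / dti dtj
            H[i, j] = (cost(t_opt + e_i + e_j) - cost(t_opt + e_i - e_j)
                       - cost(t_opt - e_i + e_j) + cost(t_opt - e_i - e_j)) / (4 * eps ** 2)
    w, v = np.linalg.eigh(H)   # eigenvalues ascending
    return v[:, 0]             # eigenvector of the smallest eigenvalue

# Toy cost: sharp in x and y, nearly flat along z (typical of depth
# ambiguity in single-view 2D/3D registration)
cost = lambda t: 10 * t[0] ** 2 + 10 * t[1] ** 2 + 0.01 * t[2] ** 2
d = least_certain_direction(cost, np.zeros(3))
```

Here `d` points (up to sign) along z, the direction in which the cost constrains the registration least; a signal could then ask the user to align the instrument away from that axis.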

SYSTEM AND METHOD FOR MULTI-MODAL MICROSCOPY
20230043803 · 2023-02-09

A system and method for processing multi-modal microscopy imaging data on small-scale computer architecture, avoiding restrictive manufacturer data formats and APIs. A web-based application made available to the microscopy instrument control hardware captures the direct visual output of the control hardware and transmits it to an edge computing device, where one or more inference models process it in parallel to construct a composite hyperimage.
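
The composite-hyperimage step can be pictured as several inference models consuming the same captured frame concurrently, their outputs stacked as channels. The following is an illustrative sketch only, with stand-in functions in place of the actual inference models:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def build_hyperimage(frame, models):
    """Run every model on the same frame in parallel; stack results as channels."""
    with ThreadPoolExecutor() as pool:
        channels = list(pool.map(lambda m: m(frame), models))
    return np.stack(channels, axis=-1)

# Stand-ins for trained inference models (e.g. segmentation, inversion, gamma)
frame = np.random.rand(64, 64)           # captured visual output of the scope
models = [lambda f: f > 0.5, lambda f: 1.0 - f, lambda f: f ** 2]
hyper = build_hyperimage(frame, models)  # one channel per model
```

The result is a (64, 64, 3) array: one spatial grid, one channel per model output.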

APPARATUS FOR ACQUIRING DEPTH IMAGE, METHOD FOR FUSING DEPTH IMAGES, AND TERMINAL DEVICE
20230042846 · 2023-02-09

Provided are an apparatus for acquiring a depth image, a method for fusing depth images, and a terminal device. The apparatus includes an emitting module, a receiving module, and a processing unit. The emitting module is configured to emit a speckle array, comprising p mutually spaced-apart speckles, onto an object. The receiving module includes an image sensor that generates a pixel signal. The processing unit is configured to receive the pixel signal and generate a sparse depth image from it, align an RGB image at a resolution of a*b with the sparse depth image, and fuse the aligned sparse depth image with the RGB image using a pre-trained image fusion model to obtain a dense depth image at the same a*b resolution.
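
To make the sparse-to-dense step concrete, here is an illustrative sketch, not the patented pipeline: a trivial nearest-valid-pixel fill stands in for the pre-trained fusion network, and the sparse depth map is assumed already aligned to the a*b RGB grid.

```python
import numpy as np

def fuse(sparse_depth, rgb):
    """Densify a sparse depth map to the RGB resolution (stand-in fusion)."""
    a, b, _ = rgb.shape
    assert sparse_depth.shape == (a, b), "depth must be aligned to the RGB grid"
    ys, xs = np.nonzero(sparse_depth)          # speckle hit locations
    gy, gx = np.mgrid[0:a, 0:b]
    # nearest-neighbour fill: each pixel takes the depth of the closest speckle
    nearest = ((gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2).argmin(-1)
    return sparse_depth[ys[nearest], xs[nearest]]

rgb = np.zeros((8, 8, 3))                      # a*b RGB image (a = b = 8)
sparse = np.zeros((8, 8))
sparse[1, 1], sparse[6, 6] = 2.0, 5.0          # two speckle depth samples
dense = fuse(sparse, rgb)                      # dense depth at a*b resolution
```

A learned fusion model would additionally exploit RGB edges to place depth discontinuities; the nearest-neighbour fill only shows the interface shape.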

Technique for transferring a registration of image data of a surgical object from one surgical navigation system to another surgical navigation system

A method, a controller, and a surgical hybrid navigation system for transferring a registration of three-dimensional image data of a surgical object from a first to a second surgical navigation system are described. A first tracker, detectable by a first detector of the first surgical navigation system, is arranged in a fixed spatial relationship with the surgical object; likewise, a second tracker, detectable by a second detector of the second surgical navigation system, is arranged in a fixed spatial relationship with the surgical object. The method includes registering the three-dimensional image data of the surgical object in a first coordinate system of the first surgical navigation system, and determining a first position and orientation of the first tracker in the first coordinate system as well as a second position and orientation of the second tracker in a second coordinate system of the second surgical navigation system.
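
Because both trackers are rigidly fixed to the object, the transfer reduces to a chain of rigid transforms. A hedged sketch of that chain (an interpretation of the abstract, not the claimed method): with 4x4 homogeneous matrices, an image-to-C1 registration R1, tracker poses in their respective coordinate systems, and an assumed known fixed relation T_rel between the two tracker frames, the registration can be re-expressed in C2.

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous 4x4 pure translation (helper for the example)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def transfer_registration(R1, T_t1_in_C1, T_t2_in_C2, T_rel):
    """Re-express an image->C1 registration in the second system's frame C2.

    Chain: C1 -> tracker1 -> tracker2 -> C2, then apply to R1.
    T_rel maps tracker1 coordinates into tracker2 coordinates (fixed mounting).
    """
    C1_to_C2 = T_t2_in_C2 @ T_rel @ np.linalg.inv(T_t1_in_C1)
    return C1_to_C2 @ R1

R1 = trans(1, 2, 3)  # image -> C1 registration (illustrative values)
R2 = transfer_registration(R1, trans(0, 0, 1), trans(5, 0, 0), np.eye(4))
```

With these made-up poses the transferred registration R2 is the translation (6, 2, 2); in practice all four matrices carry rotations as well, and T_rel comes from calibration of the two trackers' fixed mounting.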

Method and system for image registration using an intelligent artificial agent

Methods and systems for image registration using an intelligent artificial agent are disclosed. In the agent-based registration method, a current state observation of the artificial agent is determined from the medical images to be registered and the current transformation parameters. Action-values are calculated for a plurality of actions available to the agent, based on the current state observation, using a machine-learning model such as a trained deep neural network (DNN); the actions correspond to predetermined adjustments of the transformation parameters. The action with the highest action-value is selected, and the transformation parameters are adjusted by the corresponding predetermined adjustment. The determining, calculating, and selecting steps are repeated over a plurality of iterations, and the medical images are registered using the final transformation parameters.
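
The observe / score / act / repeat loop can be sketched in a few lines. In this minimal sketch a hand-written action-value function stands in for the trained DNN, and the action set is limited to unit translation tweaks; names and step sizes are illustrative only.

```python
import numpy as np

# Predetermined parameter adjustments available to the agent (tx, ty tweaks)
ACTIONS = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], float)

def register(fixed, moving, q_values, n_iter=50):
    """Greedy agent loop: observe state, score actions, apply the best one."""
    params = np.zeros(2)
    for _ in range(n_iter):
        state = (fixed, moving, params)       # current state observation
        q = q_values(state)                   # one action-value per action
        params = params + ACTIONS[int(np.argmax(q))]
    return params                             # final transformation parameters

# Stand-in for the trained DNN: scores each action by how close it would
# bring the parameters to a known ground-truth offset.
target = np.array([4.0, -3.0])
def q_values(state):
    _, _, p = state
    return [-np.linalg.norm(p + a - target) for a in ACTIONS]

final = register(None, None, q_values)
```

The greedy policy walks the parameters to within one step of the target offset; the trained network in the actual method infers action-values from image content rather than from a known answer.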
