G06T7/37

Systems and methods for performing intraoperative image registration

Systems and methods are provided for performing intraoperative fusion of two or more volumetric image datasets via surface-based image registration. The volumetric image datasets are separately registered with intraoperatively acquired surface data, thereby fusing the two volumetric image datasets into a common frame of reference while avoiding the need for complex and time-consuming preoperative volumetric-to-volumetric image registration and fusion. The resulting fused image data may be processed to generate one or more images for use during surgical navigation.
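The surface-based step can be sketched as a least-squares rigid fit (Kabsch algorithm) that maps a surface point set extracted from one volume onto the intraoperatively acquired surface. This assumes known point correspondences for illustration; a real system would establish them iteratively (e.g., with ICP). All names are illustrative, not the patented method.

```python
import numpy as np

def rigid_fit(src, dst):
    # Least-squares rigid transform (Kabsch) mapping src points onto dst
    # points, assuming known correspondences -- a simplification of the
    # iterative surface matching (e.g. ICP) a real system would use.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t                                  # dst_i ~= R @ src_i + t
```

Running `rigid_fit` once per volumetric dataset, each against the same intraoperative surface, brings both volumes into a common frame without any volume-to-volume registration, which is the fusion shortcut the abstract describes.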

Medical Image Registration Method Based on Progressive Images

A two-stage medical image registration method based on progressive images (PIs) addresses the low registration accuracy of traditional image registration methods. The method includes: merging a reference image with a floating image to generate multiple intermediate PIs; registering, by a speeded-up robust features (SURF) algorithm and an affine transformation, the floating image with the intermediate PIs to acquire coarse registration results; registering, by the SURF algorithm and the affine transformation, the reference image with the coarse registration results to acquire fine registration results; and comparing the fine registration results acquired by iterating over the intermediate PIs, and selecting the optimal registration result as the final registration image. The method can achieve multimodal registration for brain imaging with mutual information (MI), normalized cross-correlation (NCC), mean squared difference (MSD), and normalized mutual information (NMI) metrics superior to those of existing registration algorithms, and effectively improves registration accuracy through the progressive registration strategy.
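A minimal sketch of the progressive-image idea: intermediate images are generated by blending the reference and floating images, and candidate results are ranked by normalized mutual information (NMI). The uniform blend weights and the NMI-based selection are assumptions for illustration; the SURF feature matching and affine estimation stages are omitted.

```python
import numpy as np

def progressive_images(reference, floating, n=5):
    # Merge reference and floating images into n intermediate "progressive"
    # images; uniform blend weights are an assumption, the patent does not
    # fix the merge rule.
    ws = np.linspace(0.0, 1.0, n + 2)[1:-1]
    return [(1.0 - w) * floating + w * reference for w in ws]

def nmi(a, b, bins=32):
    # Normalized mutual information (H(a) + H(b)) / H(a, b), a standard
    # similarity metric for comparing candidate registration results.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (hx + hy) / hxy
```

An outer loop would run the coarse and fine registration stages against each intermediate PI and keep the candidate with the highest `nmi(result, reference)`.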

SYSTEMS AND METHODS FOR POINT CLOUD REGISTRATION
20220414821 · 2022-12-29

Systems and methods are provided for point cloud processing with an equivariant neural network and implicit shape learning that may produce correspondence-free registration. The systems and methods may provide for feature space preservation with the same rotation operation as a Euclidean input space, due to the equivariance property, which may provide for solving the feature-space registration in a closed form.
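The closed-form feature-space solve can be illustrated with vector-valued features: if the learned features are rotation-equivariant (each k×3 feature matrix rotates exactly as the input does), the rotation follows from orthogonal Procrustes with no point correspondences required. This is a sketch of the general principle only, not the patented network; the feature matrices here are stand-ins for learned equivariant features.

```python
import numpy as np

def feature_space_rotation(f_src, f_tgt):
    # Closed-form rotation between two sets of rotation-equivariant vector
    # features (k x 3 arrays) via orthogonal Procrustes. Because the feature
    # space preserves the same rotation as the Euclidean input space, the
    # rotation recovered here equals the one relating the point clouds.
    M = f_tgt.T @ f_src
    U, _, Vt = np.linalg.svd(M)
    d = np.sign(np.linalg.det(U @ Vt))           # keep a proper rotation
    return U @ np.diag([1.0, 1.0, d]) @ Vt       # f_tgt ~= f_src @ R.T
```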

METHOD OF COMPENSATING FOR SHRINKAGE AND DISTORTION USING SCANS
20220414904 · 2022-12-29

A method of compensating for shrinkage and distortion of an object resulting from a manufacturing process. A scan is performed of the object following the manufacturing process to produce scan data. The scan data is aligned to a part mesh of the object. The part mesh is adjusted to substantially coincide with the scan data by moving part mesh vertices. Delta vectors are computed by subtracting initial part mesh vertex positions from final part mesh vertex positions. The inverses of the delta vectors are applied to the pre-processed part mesh to give a scan-adjusted pre-processed shape.
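The delta-vector arithmetic can be sketched directly; the array names are illustrative, and each row is one mesh vertex (matched ordering across arrays is assumed):

```python
import numpy as np

def compensate(pre_mesh_vertices, initial_vertices, final_vertices):
    # Delta vectors: final (scan-fitted) vertex positions minus the initial
    # part-mesh vertex positions, i.e. the measured shrinkage/distortion.
    delta = final_vertices - initial_vertices
    # Apply the inverse (negated) deltas to the pre-processed mesh so that
    # the next build distorts onto the intended nominal shape.
    return pre_mesh_vertices - delta
```

If the next build distorts by the same delta vectors, the compensated shape lands back on the nominal geometry.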

METHOD AND APPARATUS FOR REGISTERING A NEUROSURGICAL PATIENT AND DETERMINING BRAIN SHIFT DURING SURGERY USING MACHINE LEARNING AND STEREOOPTICAL THREE-DIMENSIONAL DEPTH CAMERA WITH A SURFACE-MAPPING SYSTEM

A method for generating an intraoperative 3D brain model while a patient is being operated on. Before an opening in a patient's skull is made, the method includes: providing a preoperative 3D brain model of the patient's brain and converting it to a preoperative 3D brain point cloud; and providing a preoperative 3D face model of the patient's face and converting it to a preoperative 3D face point cloud. After the opening in the patient's skull is made, the method includes: acquiring an intraoperative 3D face point cloud and an intraoperative 3D brain point cloud; matching the intraoperative 3D face point cloud with the preoperative 3D face point cloud to find a face point cloud transformation; transforming the intraoperative 3D brain point cloud based on said face point cloud transformation; comparing the intraoperative 3D brain point cloud with the preoperative 3D brain point cloud to determine a brain shift; and converting the preoperative 3D brain model to generate an intraoperative 3D brain model based on said brain shift.
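The transform-then-compare steps can be sketched as follows. The rigid transform `(R, t)` stands in for the result of the face point cloud matching, and matched vertex ordering between the two brain point clouds is assumed (a simplification; a real system would establish correspondences).

```python
import numpy as np

def apply_transform(points, R, t):
    # Map an intraoperative point cloud into the preoperative frame using
    # the rigid transform found by matching the face point clouds.
    return points @ R.T + t

def brain_shift(intraop_brain, preop_brain):
    # Per-vertex displacement magnitude between the transformed
    # intraoperative brain point cloud and the preoperative one; this is
    # the brain shift used to deform the preoperative brain model.
    return np.linalg.norm(intraop_brain - preop_brain, axis=1)
```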

METHOD OF AUTONOMOUS HIERARCHICAL MULTI-DRONE IMAGE CAPTURING

A method for optimizing image capture of a scene by a swarm of drones that includes a root drone and first and second level-1 drones. The root drone follows a predetermined trajectory over the scene, capturing one or more root keyframe images at corresponding root drone orientations and root drone-to-scene distances. For each root keyframe image, the root drone generates a ground mask image and applies it to the root keyframe image to generate a target image. The root drone then analyzes the target image to generate first and second scanning tasks for the first and second level-1 drones to capture a plurality of images of the scene at a level-1 drone-to-scene distance smaller than the root drone-to-scene distance, and the first and second level-1 drones carry out the first and second scanning tasks, respectively.
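The masking step can be sketched as a simple array operation. The boolean mask convention (True where a pixel is ground) is an assumption for illustration:

```python
import numpy as np

def target_image(keyframe, ground_mask):
    # Zero out ground pixels so that only above-ground structures remain in
    # the target image the root drone analyzes when planning the level-1
    # scanning tasks. keyframe is HxWx3, ground_mask is HxW boolean
    # (True = ground), per the assumed convention.
    return np.where(ground_mask[..., None], 0, keyframe)
```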