G06T7/30

SYSTEMS AND METHODS FOR LOW FIELD MR/PET IMAGING

Systems and methods of PET attenuation correction using low-field MR image data include receiving a first set of image data and a set of low-field magnetic resonance (MR) image data. An attenuation correction map is generated from the low-field MR image data using a first trained neural network. At least one attenuation correction process is applied to the first set of image data based on the attenuation correction map to generate at least one clinical attenuation-corrected image.
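The correction pipeline described above can be sketched in Python. All names, the thresholding stand-in for the trained neural network, and the attenuation coefficients are illustrative assumptions, not from the patent:

```python
import numpy as np

def mu_map_from_mr(mr_image):
    # Stand-in for the first trained neural network: threshold
    # low-field MR intensities into tissue classes and assign
    # 511 keV linear attenuation coefficients (cm^-1, illustrative).
    return np.where(mr_image > 0.5, 0.096, 0.0)   # soft tissue vs. air

def attenuation_correct(pet_image, mu_map, path_length_cm=1.0):
    # First-order correction: scale each voxel by the inverse of the
    # attenuation factor exp(-mu * path length) along a nominal path.
    correction = np.exp(mu_map * path_length_cm)
    return pet_image * correction

mr = np.array([[0.1, 0.8], [0.9, 0.2]])   # toy low-field MR image
pet = np.ones((2, 2))                     # toy uncorrected PET image
corrected = attenuation_correct(pet, mu_map_from_mr(mr))
```

Voxels classified as soft tissue are boosted to compensate for attenuation; air voxels pass through unchanged.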

POINT CLOUD REGISTRATION METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM

A point cloud registration method, apparatus, device, and storage medium are provided. The method includes: acquiring target point cloud data; dividing the target point cloud data into a plurality of point cloud sets; determining a coincidence degree between every two point cloud sets; determining a fixed point cloud set and a registration point cloud set from a pair of point cloud sets whose coincidence degree is greater than a preset threshold; determining a target registration matrix between the fixed point cloud set and the registration point cloud set; and registering the registration point cloud set to the fixed point cloud set according to the target registration matrix.
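A minimal Python sketch of these steps, assuming a simple overlap-fraction definition of "coincidence degree" and a Kabsch/SVD rigid fit for the registration matrix (the patent does not specify either):

```python
import numpy as np

def coincidence_degree(a, b, tol=0.1):
    # Fraction of points in set `a` lying within `tol` of some point
    # in set `b` -- a stand-in for the patent's coincidence degree.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.mean(d.min(axis=1) < tol)

def registration_matrix(fixed, moving):
    # Least-squares rigid alignment (Kabsch/SVD) of `moving` onto
    # `fixed`; returns a 3x3 rotation and a translation vector.
    cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - cm).T @ (fixed - cf)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cf - R @ cm
    return R, t

fixed = np.random.default_rng(0).random((50, 3))
moving = fixed + np.array([0.05, 0.0, 0.0])    # shifted copy
if coincidence_degree(fixed, moving) > 0.5:    # preset threshold
    R, t = registration_matrix(fixed, moving)
    registered = moving @ R.T + t
```

Only pairs of sets with sufficient overlap are registered, which avoids fitting a transform between sets that share no common region.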

SYSTEM AND METHOD FOR COHESIVE MULTI-REGIONAL FUNCTIONAL-ANATOMICAL MEDICAL IMAGE REGISTRATION
20230049430 · 2023-02-16

A method includes applying both a first dedicated functional-anatomical registration scheme to a first volume of interest to deform the first volume of interest and a second dedicated functional-anatomical registration scheme to a second volume of interest to deform the second volume of interest, wherein the first volume of interest at least partially encompasses the second volume of interest. The method includes identifying or segmenting relevant organs or anatomical structures related to a first group and a second group in the first volume of interest and the second volume of interest, respectively; generating a spatially smooth-transition weight mask that gives higher weight to image data corresponding to the identified or segmented relevant organs or anatomical structures related to the first group and the second group; and generating a final cohesive registered image volume from the first image volume and the second image volume utilizing the spatially smooth-transition weight mask.
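The weighted fusion step can be sketched in Python. The box-blur smoothing and all function names are illustrative assumptions; the patent does not specify how the smooth-transition mask is built:

```python
import numpy as np

def smooth_weight_mask(organ_mask, radius=2):
    # Build a spatially smooth-transition weight in [0, 1] from a
    # binary organ segmentation by separable box blurring (a simple
    # stand-in for any smoothing kernel).
    w = organ_mask.astype(float)
    k = 2 * radius + 1
    for axis in range(w.ndim):
        w = np.apply_along_axis(
            lambda v: np.convolve(v, np.ones(k) / k, mode="same"),
            axis, w)
    return np.clip(w, 0.0, 1.0)

def cohesive_blend(volume_a, volume_b, organ_mask):
    # Weighted fusion of two deformed image volumes: voxels near the
    # segmented organs follow `volume_a`, the rest follow `volume_b`.
    w = smooth_weight_mask(organ_mask)
    return w * volume_a + (1.0 - w) * volume_b

a = np.full((8, 8), 100.0)          # first registered volume
b = np.zeros((8, 8))                # second registered volume
mask = np.zeros((8, 8)); mask[3:5, 3:5] = 1
fused = cohesive_blend(a, b, mask)
```

Because the weights fall off gradually, the fused volume transitions smoothly between the two registrations instead of showing a hard seam at the volume-of-interest boundary.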

SYSTEMS AND METHODS FOR VISUAL INSPECTION AND 3D MEASUREMENT

Systems and methods for inspecting the outer skin of a honeycomb body are provided. The inspection system comprises a rotational sub-assembly configured to rotate the honeycomb body, a camera sub-assembly configured to image at least a portion of the outer skin of the honeycomb body as it rotates, a three-dimensional (3D) line sensor sub-assembly configured to obtain height information from the outer skin of the honeycomb body; and an edge sensor sub-assembly configured to obtain edge data from the circumferential edges of the honeycomb body. In some examples, the inspection system utilizes a universal coordinate system to synchronize or align the data obtained from each of these sources to prevent redundant or duplicative detection of one or more defects on the outer skin of the honeycomb body.
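The deduplication role of the universal coordinate system can be sketched in Python. The (angle, axial position) frame, the tolerances, and all names are illustrative assumptions, not details from the patent:

```python
def to_universal(theta_deg, axial_mm):
    # Map a detection from any sub-assembly into a shared
    # (circumferential angle, axial position) coordinate frame.
    return (theta_deg % 360.0, axial_mm)

def dedupe(detections, angle_tol=2.0, axial_tol=1.0):
    # Merge detections of the same skin defect reported by different
    # sensors: two detections closer than the tolerances in universal
    # coordinates are treated as one defect.
    merged = []
    for theta, z in detections:
        duplicate = any(
            min(abs(theta - t), 360.0 - abs(theta - t)) < angle_tol
            and abs(z - z0) < axial_tol
            for t, z0 in merged)
        if not duplicate:
            merged.append((theta, z))
    return merged

camera_hit = to_universal(359.5, 10.0)   # from the camera sub-assembly
laser_hit = to_universal(360.8, 10.4)    # same defect, 3D line sensor
unique = dedupe([camera_hit, laser_hit])
```

The angular comparison wraps around 360°, so a defect straddling the encoder zero point is still recognized as a single detection.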

Method and apparatus for de-biasing the detection and labeling of objects of interest in an environment
11579625 · 2023-02-14

Described herein are methods of generating learning data to facilitate de-biasing the labeled location of an object of interest within an image. Methods may include: receiving sensor data, where the sensor data is a first image; determining reference corner locations of an object in the first image using image processing; generating observed corner locations of the object in the first image from the determined reference corner locations; generating a bias transformation based, at least in part, on a difference between the reference corner locations and the observed corner locations of the object in the first image; receiving sensor data comprising a second image from another image sensor; receiving observed corner locations of an object in the second image from a user; and applying the bias transformation to the observed corner locations of the object in the second image to generate de-biased corners for the object in the second image.
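A minimal Python sketch of the bias-transformation workflow, assuming a purely translational bias model (the patent leaves the form of the transformation open; all names are illustrative):

```python
import numpy as np

def fit_bias_transformation(reference, observed):
    # Estimate labeling bias as the mean per-corner offset between
    # machine-derived reference corners and the observed corners
    # (a simple translational model of annotator bias).
    return np.mean(np.asarray(observed) - np.asarray(reference), axis=0)

def debias(observed, bias):
    # Apply the bias transformation: subtract the learned offset from
    # corners labeled on a new image to recover de-biased corners.
    return np.asarray(observed) - bias

# First image: reference corners from image processing vs. labels.
ref = [(10, 10), (50, 10), (50, 40), (10, 40)]
obs = [(12, 11), (52, 11), (52, 41), (12, 41)]
bias = fit_bias_transformation(ref, obs)

# Second image: user-labeled corners, corrected with the same bias.
user_corners = [(102, 101), (142, 101)]
corrected = debias(user_corners, bias)
```

The bias learned on the first image (here a constant +2, +1 pixel shift) transfers to labels on the second image, yielding de-biased corner locations.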

Computational optics
11579514 · 2023-02-14

A system and method for controlling characteristics of collected image data are disclosed. The system and method include performing pre-processing of an image using GPUs, configuring an optic based on the pre-processing, the configuring being designed to account for features of the pre-processed image, acquiring an image using the configured optic, processing the acquired image using GPUs, and determining whether the processed acquired image accounts for the features of the pre-processed image; if the determination is affirmative, outputting the image, and if the determination is negative, repeating the configuring of the optic and re-acquiring the image.
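The configure/acquire/check loop can be sketched in Python. The callback decomposition and the toy focus model are illustrative assumptions; the patent describes the loop abstractly:

```python
def acquisition_loop(configure_optic, acquire, process, accounts_for,
                     target_features, max_iters=5):
    # Closed-loop capture: pre-processing yields target features, the
    # optic is configured toward them, an image is acquired and
    # processed on the GPU, and the loop repeats until the processed
    # image accounts for the features (or the budget is exhausted).
    config = configure_optic(target_features)
    for _ in range(max_iters):
        image = process(acquire(config))
        if accounts_for(image, target_features):
            return image
        config = configure_optic(target_features, previous=config)
    return None

# Toy stand-ins: the "optic" is a focus value stepping toward the
# target; the image "accounts for" the features once focus is reached.
target = 10
configure = lambda t, previous=0: previous + (1 if previous < t else 0)
result = acquisition_loop(configure, lambda c: c, lambda img: img,
                          lambda img, t: img >= t, target, max_iters=20)
```

Each negative determination feeds back into the optic configuration, so acquisition converges toward an image matching the pre-processed features rather than outputting a single fixed capture.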
