IMAGE PROCESSING METHOD AND DEVICE
20230103262 · 2023-03-30 ·

An image processing method includes obtaining a first quantity of to-be-analyzed images and performing fusion and enhancement processing on the first quantity of to-be-analyzed images through an image analysis model to obtain a first target image. Each to-be-analyzed image corresponds to a different target modality of a target imaging object. The first target image is used to enhance the display of the distribution area of an analysis object in the first quantity of to-be-analyzed images. The analysis object belongs to the imaging object. The image analysis model is obtained by training on a second quantity of sample images corresponding to different sample modalities. The first quantity is less than or equal to the second quantity, and the target modality belongs to the sample modalities.
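The fusion-and-enhancement idea can be illustrated with a heavily simplified sketch (plain NumPy, not the trained image analysis model of the abstract; the averaging fusion, the precomputed object mask and the `gain` factor are all assumptions for illustration):

```python
import numpy as np

def fuse_and_enhance(images, object_mask, gain=1.5):
    """Toy stand-in for the image analysis model: fuse one image per
    target modality, then boost intensity in the distribution area of
    the analysis object to enhance its display."""
    fused = np.mean(np.stack(images, axis=0), axis=0)   # naive modality fusion
    enhanced = fused.copy()
    enhanced[object_mask] = np.clip(fused[object_mask] * gain, 0.0, 1.0)
    return enhanced

# two "modalities" of a 4x4 imaging object; the analysis object sits at the centre
m1 = np.full((4, 4), 0.2)
m2 = np.full((4, 4), 0.4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
out = fuse_and_enhance([m1, m2], mask)
```

In the patent the mask is not given but produced implicitly by the trained model; here it is supplied externally purely to show the enhancement step.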

METHOD TO ACQUIRE A 3D IMAGE OF A SAMPLE STRUCTURE

In a method to acquire a 3D image of a sample structure, a first raw 2D set of 2D images of the sample structure is initially acquired at a limited number of raw sample planes. From this first raw 2D set, a 3D image of the sample structure, represented by a 3D volumetric image data set, is calculated, and a measurement parameter is extracted from the 3D volumetric image data set. This measurement parameter is assigned to the number of 2D image acquisitions recorded during the acquisition step. Then, a further interleaving 2D set of 2D images of the sample structure is acquired by recording a further number of interleaving 2D image acquisitions at a further number of interleaved sample planes which do not coincide with the previous acquisition sample planes. The steps “calculating,” “extracting” and “assigning” are repeated for the further interleaving 2D set. The current and the last extracted measurement parameters are compared to check whether a convergence criterion is met. If not, the steps “acquiring,” “calculating,” “extracting,” “assigning” and “comparing” are repeated for a further interleaving 2D set including a further number of interleaving 2D image acquisitions at a further number of interleaved sample planes which do not coincide with the previous acquisition sample planes. This continues until the convergence criterion is met or until a given maximum number of 2D image acquisitions is recorded. The measurement parameter and the total number of recorded 2D image acquisitions are output. A projection system used for such a method comprises a projection light source, a rotatable sample structure holder and a spatially resolving detector. Alternatively or in addition, such a method can be used by a data processing system to acquire virtual tomographic images of a sample. With such a method, sample throughput is improved.
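The acquire–extract–compare loop can be sketched as follows (illustrative Python, not the patented method; the surrogate "measurement parameter", the midpoint interleaving scheme and the numeric tolerances are all assumptions):

```python
def midpoints(planes):
    # interleaved planes halfway between existing ones, so they never
    # coincide with previous acquisition sample planes
    return [(a + b) / 2 for a, b in zip(planes, planes[1:])]

def measure(planes):
    # surrogate "measurement parameter": trapezoidal estimate of the
    # integral of x**2 over the sampled planes
    vals = [p ** 2 for p in planes]
    return sum((vals[i] + vals[i + 1]) * (planes[i + 1] - planes[i]) / 2
               for i in range(len(planes) - 1))

def acquire_until_converged(measure, max_planes=64, tol=1e-3, n0=4):
    # initial raw sample planes, evenly spaced on [0, 1]
    planes = [i / (n0 - 1) for i in range(n0)]
    prev = measure(planes)
    while len(planes) < max_planes:
        planes = sorted(planes + midpoints(planes))  # further interleaving set
        cur = measure(planes)
        if abs(cur - prev) < tol:                    # convergence criterion met
            return cur, len(planes)
        prev = cur
    return prev, len(planes)                         # maximum budget reached

value, n_acquisitions = acquire_until_converged(measure)
```

Each pass doubles the plane density; the loop outputs the measurement parameter together with the total number of recorded acquisitions, as in the abstract.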

AN IMPROVED LESION DETECTION METHOD
20230162339 · 2023-05-25 ·

The present invention relates to lesion detection. In order to improve lesion detection, it is proposed to combine deep learning techniques with the silhouette strategy, i.e. subtraction between contrast-enhanced and non-contrast-enhanced images, in order to reflect only the difference between the two images that relates to the lesion.
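The silhouette step itself is a simple subtraction; a minimal sketch (the clipping to non-negative enhancement is an assumption, and the deep learning stage of the invention is not shown):

```python
import numpy as np

def silhouette(contrast_img, noncontrast_img):
    """Silhouette strategy: subtract the non-contrast-enhanced image from
    the contrast-enhanced one so only contrast uptake (the lesion) remains."""
    diff = contrast_img.astype(float) - noncontrast_img.astype(float)
    return np.clip(diff, 0.0, None)   # keep only positive enhancement

pre = np.full((2, 2), 10.0)           # non-contrast image
post = pre.copy()
post[0, 0] += 5.0                     # the lesion enhances at one pixel
s = silhouette(post, pre)
```

In the proposed method this difference image would then be fed to (or produced by) a deep network rather than thresholded directly.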

SYSTEMS AND METHODS FOR AUTOMATED IDENTIFICATION AND CLASSIFICATION OF LESIONS IN LOCAL LYMPH AND DISTANT METASTASES

Presented herein are systems and methods that provide automated analysis of 3D images to classify representations of lesions identified therein. In particular, in certain embodiments, approaches described herein allow hotspots representing lesions to be classified based on their spatial relationship with (e.g., whether they are in proximity to, overlap with, or are located within) one or more pelvic lymph node regions in detailed fashion.
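The spatial-relationship test can be sketched on binary masks (an illustration, not the patented classifier; the class labels, the Chebyshev-style dilation and its wrap-around behaviour at array borders are simplifying assumptions):

```python
import numpy as np

def classify_hotspot(hotspot_mask, region_mask, proximity=1):
    """Classify a hotspot by its spatial relationship to a region mask:
    located within it, overlapping it, in proximity to it, or distant."""
    inter = hotspot_mask & region_mask
    if inter.sum() == hotspot_mask.sum():
        return "within"
    if inter.any():
        return "overlap"
    # dilate the region by `proximity` voxels along each axis and re-test
    # (np.roll wraps at borders -- acceptable for this toy example only)
    dilated = region_mask.copy()
    for axis in range(region_mask.ndim):
        for shift in (-proximity, proximity):
            dilated |= np.roll(region_mask, shift, axis=axis)
    return "proximal" if (hotspot_mask & dilated).any() else "distant"

region = np.zeros((5, 5), dtype=bool); region[2, 2] = True   # lymph node region
inside = np.zeros_like(region); inside[2, 2] = True
near = np.zeros_like(region); near[2, 3] = True
far = np.zeros_like(region); far[0, 0] = True
```

A production system would use anatomically defined 3D pelvic lymph node regions and proper morphological dilation rather than this wrap-around shift.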

SYSTEMS AND METHODS FOR IMAGE-BASED NERVE FIBER EXTRACTION

The present disclosure provides methods and systems for image-based nerve fiber extraction. The methods may include obtaining an anatomical image of a subject and a diffusion image of the subject. The subject may include at least one region of interest (ROI) that relates to extraction of at least one target nerve fiber in the subject. The methods may further include determining, based on the anatomical image, the at least one ROI in the diffusion image; and extracting, from the diffusion image, at least one of the at least one target nerve fiber based on the at least one ROI.
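Once the ROI has been mapped into the diffusion image, extraction amounts to keeping the fibers that intersect it. A minimal sketch (streamlines as lists of voxel coordinates and a set-based ROI are assumptions; real tractography works in continuous space):

```python
def fibers_through_roi(streamlines, roi):
    """Keep only streamlines (lists of voxel coordinates in the diffusion
    image) that pass through the ROI determined from the anatomical image."""
    return [s for s in streamlines if any(tuple(p) in roi for p in s)]

roi = {(1, 1), (1, 2)}                       # ROI voxels mapped into diffusion space
fibers = [
    [(0, 0), (1, 1), (2, 2)],                # crosses the ROI -> target fiber
    [(0, 3), (1, 4), (2, 5)],                # misses the ROI -> discarded
]
kept = fibers_through_roi(fibers, roi)
```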

Systems and methods for identifying anatomically relevant blood flow characteristics in a patient

Systems and methods are disclosed for identifying anatomically relevant blood flow characteristics in a patient. One method includes: receiving, in an electronic storage medium, a patient-specific representation of at least a portion of vasculature of the patient having a lesion at one or more points; receiving values for one or more metrics of interest associated with one or more locations in the vasculature of the patient; receiving one or more observed lumen measurements of the vasculature of the patient; determining the location of a diseased region in the vasculature of the patient using the received values for the one or more metrics of interest, wherein the determination of the location includes predicting or receiving one or more healthy lumen measurements of the vasculature of the patient; determining the extent of the diseased region; and generating a visualization of at least the diseased region.
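The diseased-region determination can be sketched as a comparison of observed lumen measurements against predicted healthy ones along a centreline (an illustration only; the 0.8 narrowing threshold and the index-range notion of "extent" are assumptions, not values from the patent):

```python
def diseased_extent(observed, healthy, threshold=0.8):
    """Flag centreline points where the observed lumen falls below a
    fraction of the predicted healthy lumen, and return the extent
    (first and last flagged indices) of the diseased region."""
    flags = [o / h < threshold for o, h in zip(observed, healthy)]
    idx = [i for i, f in enumerate(flags) if f]
    return (idx[0], idx[-1]) if idx else None

healthy = [4.0, 4.0, 4.0, 4.0, 4.0]     # predicted healthy diameters (mm)
observed = [4.0, 3.0, 2.5, 3.9, 4.0]    # observed diameters with a lesion
extent = diseased_extent(observed, healthy)
```

The returned extent would then feed the visualization step described in the abstract.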

SPATIALLY-AWARE INTERACTIVE SEGMENTATION OF MULTI-ENERGY CT DATA

Segmentation of multi-energy CT data, including data in three or more energy bands. A user is enabled to input one or more region indicators in displayed CT data. Probability maps are generated and may be refined using distance metrics, which may include geodesic and Euclidean distance metrics. Segmentation may be based on the probability maps and/or refined probability maps. Segmentation of medical image data is also disclosed.
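The distance-based refinement can be sketched with the Euclidean variant (the exponential down-weighting, the `alpha` decay and the 0.5 segmentation threshold are assumptions; the geodesic metric mentioned in the abstract would additionally follow image intensities and is not shown):

```python
import numpy as np

def distance_refined_probability(prob, seed, alpha=0.5):
    """Refine a per-voxel probability map by down-weighting voxels far
    (in Euclidean distance) from a user-placed region indicator at `seed`."""
    yy, xx = np.indices(prob.shape)
    dist = np.sqrt((yy - seed[0]) ** 2 + (xx - seed[1]) ** 2)
    return prob * np.exp(-alpha * dist)

prob = np.full((3, 3), 0.9)              # raw probability map (e.g. per energy band)
refined = distance_refined_probability(prob, seed=(1, 1))
seg = refined > 0.5                      # segment by thresholding the refined map
```

With these values the centre and its four neighbours survive the threshold while the corners do not, mimicking how a region indicator localizes the segmentation.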

Simplified method for robust estimation of parameter values

A method for estimating parameter values includes acquiring image data with an imaging apparatus, deriving a parameter model function from the image data, generating an N-dimensional grid, wherein N is the number of values of one or more non-linear terms of the derived model function, pre-calculating the one or more non-linear terms given the parameter model function and the designated values of the non-linear parameters, calculating one or more remaining model terms of the parameter model function, and displaying at least one of the one or more non-linear terms and the remaining linear model terms.
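The grid-plus-linear-solve idea can be sketched for a one-dimensional case, fitting y = A·exp(-t/T) (the exponential model, the grid range and the synthetic data are assumptions; the patent's model function may differ):

```python
import numpy as np

def fit_on_grid(t, y, T_grid):
    """Grid the non-linear parameter T of y = A * exp(-t / T): pre-calculate
    the non-linear term for each designated grid value, solve the remaining
    linear term A in closed form, and keep the best-fitting pair."""
    best = None
    for T in T_grid:
        basis = np.exp(-t / T)               # pre-calculated non-linear term
        A = (basis @ y) / (basis @ basis)    # linear least-squares for A
        res = np.sum((y - A * basis) ** 2)   # residual for this grid point
        if best is None or res < best[2]:
            best = (T, A, res)
    return best

t = np.linspace(0.0, 5.0, 20)
y = 2.0 * np.exp(-t / 1.5)                   # noiseless synthetic "image" signal
T, A, res = fit_on_grid(t, y, np.linspace(0.5, 3.0, 26))
```

Because the non-linear term is tabulated once per grid point and only a linear solve remains, the estimation stays robust and cheap, which is the point of the method.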

SYSTEM FOR OCT IMAGE TRANSLATION, OPHTHALMIC IMAGE DENOISING, AND NEURAL NETWORK THEREFOR

An OCT system includes a machine learning (ML) model trained to receive a single OCT scan/image and provide an image translation and/or denoising function. The ML model may be based on a neural network (NN) architecture including a series of encoding modules in a contracting path followed by a series of decoding modules in an expanding path leading to an output convolution module. An intermediate error module determines a deep error measure, e.g., between a training output image and at least one encoding module and/or decoding module, and an error from the output convolution module is combined with the deep error measure. The NN may be trained using true averaged images as ground-truth training outputs. Alternatively, the NN may be trained using randomly selected, individual OCT images/scans as training outputs.
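The combination of output error and deep error can be sketched as a training loss (mean-squared errors and the 0.5 weighting are assumptions; the abstract does not specify the error measures or how they are combined):

```python
import numpy as np

def combined_loss(output, target, deep_feature, deep_target, weight=0.5):
    """Training-loss sketch: error at the output convolution module plus a
    weighted deep error measured at an intermediate encoding/decoding module."""
    out_err = np.mean((output - target) ** 2)          # output-level error
    deep_err = np.mean((deep_feature - deep_target) ** 2)  # deep error measure
    return out_err + weight * deep_err

loss = combined_loss(np.array([1.0, 1.0]), np.array([0.0, 0.0]),
                     np.array([2.0]), np.array([0.0]))
```

Supervising an intermediate module in this way (deep supervision) pushes useful structure into the contracting/expanding path rather than only the final output.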

Method and system of defining a region of interest on medical scan images

A method, medical imaging workstation (1000) and hybrid medical imaging scanner (1100) are provided for defining a region of interest (RoI) for display on at least two medical scan images. When displaying a first medical scan image (740), input data defining a RoI on the image is captured, and stored as at least a first region representation (760). The RoI is displayed on a second medical scan image (750), based on the first region representation (760). Changes to the RoI on the second medical scan image (750) are used to update the first region representation (760). There may be separate region representations (760, 770) associated with each of several medical scan images. The invention may improve the definition of a region of interest, by allowing editing on each of multiple image displays (820, 830, 880) to feed through to all medical scan images.
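The feed-through of RoI edits across images can be sketched with a toy store of per-image region representations (the dictionary store, string image identifiers and the circle-style RoI are assumptions; the patent's representations may be image-specific rather than identical copies):

```python
class RoiStore:
    """Toy store of region representations, one per medical scan image;
    an edit made on any display feeds through to every other image."""

    def __init__(self, image_ids):
        self.regions = {i: None for i in image_ids}

    def edit(self, image_id, roi):
        # update the edited image's representation first, then propagate
        # the change to the representations of all other scan images
        self.regions[image_id] = dict(roi)
        for other in self.regions:
            if other != image_id:
                self.regions[other] = dict(roi)

    def roi_for(self, image_id):
        return self.regions[image_id]

store = RoiStore(["ct", "pet"])
store.edit("ct", {"x": 10, "y": 20, "r": 5})   # draw an RoI on the CT display
```

After the edit on the CT display, the PET display can render the same RoI from its own representation, matching the feed-through behaviour the abstract describes.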