
IMAGING A HOLLOW ORGAN

The present invention relates to imaging a hollow organ. In order to provide improved and facilitated imaging of a hollow organ of interest, a device (10) for providing three-dimensional data of a hollow organ is provided that comprises a measurement input (12), a data processor (14) and an output interface (16). The measurement input is configured to receive a plurality of local electric field measurements (18) of at least one electrode on a catheter inserted in a lumen of a hollow organ of interest. The measurement input is also configured to receive geometrical data (20) representative of the location of the at least one electrode inside the lumen during the measurements. The data processor is configured to receive pre-set electric field characteristics (22) associated with predetermined anatomical landmarks expected in the lumen, depending on the type of the hollow organ. The data processor is also configured to compare at least one of the plurality of local electric field measurements with the pre-set electric field characteristics to determine matching electric field measurements. The data processor is further configured to allocate local electric field measurements to matching electric field characteristics based on the geometrical data, thereby identifying those local field measurements in the plurality of measurements that correspond to anatomical landmarks of the hollow organ. The data processor is still further configured to generate a three-dimensional image data cloud (24) by transforming the allocated electric field measurements into portions of the three-dimensional image data cloud based on the identified anatomical landmarks. The output interface is configured to provide the three-dimensional image data cloud.
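The matching and allocation steps described above can be illustrated with a minimal sketch: compare each local field measurement against pre-set landmark characteristics, and tag the corresponding electrode location in a labelled point cloud. All names, values, and the relative-tolerance rule are hypothetical, not taken from the patent.

```python
def match_landmarks(measurements, positions, characteristics, tol=0.05):
    """measurements: list of field magnitudes, one per electrode sample.
    positions: list of (x, y, z) electrode locations for each sample.
    characteristics: dict mapping landmark name -> expected field magnitude.
    Returns a point cloud: list of (x, y, z, landmark-or-None)."""
    cloud = []
    for m, pos in zip(measurements, positions):
        label = None
        for name, expected in characteristics.items():
            if abs(m - expected) <= tol * expected:  # relative tolerance match
                label = name
                break
        cloud.append((*pos, label))
    return cloud

# Hypothetical landmark characteristics and measurements.
chars = {"ostium": 0.20, "valve": 0.55}
meas = [0.21, 0.90, 0.54]
pos = [(0, 0, 0), (1, 0, 0), (2, 1, 0)]
cloud = match_landmarks(meas, pos, chars)
```

Unmatched samples keep a `None` label, so the downstream step can still use their geometry while anchoring the cloud on the identified landmarks.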

SYSTEM FOR GENERATION OF USER-CUSTOMIZED IMAGE IDENTIFICATION DEEP LEARNING MODEL THROUGH OBJECT LABELING AND OPERATION METHOD THEREOF
20230215149 · 2023-07-06

A deep learning system establishes a simple process for generating a deep learning model and provides intuitive, natural and easy interaction for the image input, manual labeling, automated labeling and feedback steps that this process requires. As a result, a user without expertise in deep learning can directly generate and use a user-customized image identification deep learning model for identifying a desired object.

Deep network lung texture recognition method combined with multi-scale attention

The invention discloses a deep network lung texture recognition method combined with multi-scale attention, which belongs to the field of image processing and computer vision. In order to accurately recognize the typical texture of diffuse lung disease in computed tomography (CT) images of the lung, a unique attention mechanism module and a multi-scale feature fusion module were designed to construct a deep convolutional neural network combining multi-scale features and attention, which achieves high-precision automatic recognition of typical textures of diffuse lung diseases. In addition, the proposed network structure is clear, easy to construct, and easy to implement.
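The two building blocks the abstract names can be sketched in miniature: a channel-attention step that reweights feature maps by a softmax over their global average activations, and a 2x average pooling that produces the coarser scale used in multi-scale fusion. This is a conceptual, pure-Python illustration, not the paper's network.

```python
import math

def channel_attention(feature_maps):
    """feature_maps: list of 2-D maps (lists of rows). Each map is scaled by
    a softmax weight derived from its global average activation."""
    means = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
             for fm in feature_maps]
    exps = [math.exp(m) for m in means]
    total = sum(exps)
    weights = [e / total for e in exps]
    scaled = [[[v * w for v in row] for row in fm]
              for fm, w in zip(feature_maps, weights)]
    return scaled, weights

def downsample(fm):
    """2x average pooling -- one coarser 'scale' for multi-scale fusion."""
    return [[(fm[i][j] + fm[i][j + 1] + fm[i + 1][j] + fm[i + 1][j + 1]) / 4
             for j in range(0, len(fm[0]), 2)]
            for i in range(0, len(fm), 2)]
```

In a real network both operations act on learned convolutional features; here they act on raw nested lists purely to show the data flow.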

Anatomy-aware motion estimation

Described herein are neural network-based systems, methods and instrumentalities associated with estimating the motion of an anatomical structure. The motion estimation may be performed utilizing pre-learned knowledge of the anatomy of the anatomical structure. The anatomical knowledge may be learned via a variational autoencoder, which may then be used to optimize the parameters of a motion estimation neural network system such that, when performing motion estimation for the anatomical structure, the motion estimation neural network system produces results that conform to the underlying anatomy of the anatomical structure.
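One way to read this is as a regularized loss: the usual image-dissimilarity term plus a penalty when the warped anatomy falls outside the shape distribution the variational autoencoder has learned (a plausible shape reconstructs well through the VAE; an implausible one does not). The sketch below is a hedged illustration of that idea; `vae_reconstruct`, the weighting `lam`, and the toy template VAE are all assumptions.

```python
def anatomy_aware_loss(image_dissimilarity, warped_shape, vae_reconstruct, lam=0.1):
    """Combine an image-matching term with a VAE-based anatomy prior."""
    recon = vae_reconstruct(warped_shape)
    # Mean squared reconstruction error: small for anatomically plausible shapes.
    prior_penalty = sum((a - b) ** 2
                        for a, b in zip(warped_shape, recon)) / len(warped_shape)
    return image_dissimilarity + lam * prior_penalty

# Toy VAE: projects any shape onto a fixed "plausible" template.
template = [0.0, 1.0, 1.0, 0.0]
toy_vae = lambda shape: template

plausible = anatomy_aware_loss(0.5, [0.0, 1.0, 1.0, 0.0], toy_vae)
implausible = anatomy_aware_loss(0.5, [1.0, 0.0, 0.0, 1.0], toy_vae)
```

With the same image term, the anatomically implausible warp is penalized more, which is the behaviour the abstract attributes to the anatomy prior.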

Systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking

The disclosure herein relates to systems, methods, and devices for medical image analysis, diagnosis, risk stratification, decision making and/or disease tracking. In some embodiments, the systems, devices, and methods described herein are configured to analyze non-invasive medical images of a subject to automatically and/or dynamically identify one or more features, such as plaque and vessels, and/or derive one or more quantified plaque parameters, such as radiodensity, radiodensity composition, volume, radiodensity heterogeneity, geometry, location, and/or the like. In some embodiments, the systems, devices, and methods described herein are further configured to generate one or more assessments of plaque-based diseases from raw medical images using one or more of the identified features and/or quantified parameters.
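Two of the quantified plaque parameters the abstract lists, volume and mean radiodensity, can be computed directly from a labelled CT volume. The sketch below uses a flat voxel list; the voxel volume, label convention, and Hounsfield values are illustrative assumptions, not from the disclosure.

```python
def plaque_parameters(hu_values, labels, voxel_volume_mm3=0.5, plaque_label=1):
    """hu_values: flat list of Hounsfield units, one per voxel.
    labels: matching list of segmentation labels per voxel."""
    plaque_hu = [hu for hu, lab in zip(hu_values, labels) if lab == plaque_label]
    if not plaque_hu:
        return {"volume_mm3": 0.0, "mean_radiodensity_hu": None}
    return {
        "volume_mm3": len(plaque_hu) * voxel_volume_mm3,   # voxel count x spacing
        "mean_radiodensity_hu": sum(plaque_hu) / len(plaque_hu),
    }

# Toy 4-voxel volume: two vessel voxels (label 0), two plaque voxels (label 1).
params = plaque_parameters([300, 50, 420, 80], [0, 1, 0, 1])
```

Parameters such as radiodensity heterogeneity or geometry would extend this with per-voxel statistics and spatial information.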

System and method for generating point cloud data for electro-anatomical mapping
11544847 · 2023-01-03

Disclosed is a method for generating high resolution point cloud data for electro-anatomical mapping comprising receiving sparsely measured point cloud data having a plurality of data points. Surface mesh data comprising mesh points defining triangles on a myocardial surface is generated. The point cloud data is mapped to the surface mesh data. For each point of the surface mesh data that cannot be mapped to the point cloud data because there is a missing data point in the point cloud data, an interpolation operation is performed based on the point cloud data within the neighbourhood of the point to generate a value for the missing data point. The interpolation operation is repeated up to N times: at each repetition, the difference between the value for the missing data point generated in the current iteration and the value generated in the immediately preceding iteration is computed, and the process stops once the difference is below a threshold.
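The iterate-until-converged interpolation can be shown on a toy 1-D ring of mesh points: each missing value is repeatedly replaced by the mean of its neighbours until successive iterations change by less than a threshold. The neighbourhood rule (two adjacent points) and all names are illustrative simplifications of the mesh-based operation in the abstract.

```python
def fill_missing(values, missing, threshold=1e-6, max_iter=1000):
    """values: list of floats with initial guesses at the `missing` indices.
    missing: set of indices that had no measured point-cloud datum.
    Treats the list as a ring, i.e. index 0 neighbours index -1."""
    vals = list(values)
    n = len(vals)
    for _ in range(max_iter):
        max_delta = 0.0
        for i in missing:
            new = (vals[(i - 1) % n] + vals[(i + 1) % n]) / 2  # neighbourhood mean
            max_delta = max(max_delta, abs(new - vals[i]))
            vals[i] = new
        if max_delta < threshold:  # per-iteration difference below threshold
            break
    return vals

filled = fill_missing([1.0, 0.0, 3.0, 4.0], missing={1})
```

On a triangulated myocardial surface the neighbourhood would be the mesh points sharing a triangle with the missing point, but the convergence test is the same.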

MEDICAL IMAGE SYNTHESIS FOR MOTION CORRECTION USING GENERATIVE ADVERSARIAL NETWORKS

A computer system is configured to remove motion artifacts in medical images using a generative adversarial network (GAN). The computer system instantiates the GAN having one or more generative network(s) and one or more discriminative network(s) that are pitted against each other to train a generative model and a discriminative model. The training uses a training dataset including a plurality of medical images that are previously classified as without significant motion artifacts for diagnostic purposes. The discriminative model is trained to classify medical images as real or artificial. The generative model is trained to enhance the quality of a medical image and remove motion artifacts by producing a medical image directly from a post-contrast image, without using a pre-contrast mask.
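The adversarial setup described above pairs two objectives: the discriminator is rewarded for scoring real images high and generated ones low, while the generator is rewarded for fooling the discriminator. The toy below replaces both networks with scalar stand-ins so the losses stay runnable; the non-saturating generator loss is a common choice and an assumption here, not taken from the abstract.

```python
import math

def discriminator(x, w):
    """Sigmoid 'probability that x is a real image' with scalar weight w."""
    return 1.0 / (1.0 + math.exp(-w * x))

def generator_loss(fake, w):
    """Non-saturating GAN generator loss: -log D(G(z))."""
    return -math.log(discriminator(fake, w))

def discriminator_loss(real, fake, w):
    """Binary cross-entropy over one real and one artificial sample."""
    return -math.log(discriminator(real, w)) - math.log(1 - discriminator(fake, w))
```

A fake sample that scores as "more real" yields a lower generator loss, which is the pressure that drives the generator toward artifact-free, realistic images.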

METHOD FOR AUTOMATIC SEGMENTATION OF CORONARY SINUS

Method, executed by a computer, for identifying a coronary sinus of a patient, comprising: receiving a 3D image of a body region of the patient; extracting 2D axial images of the 3D image taken along respective axial planes, 2D sagittal images of the 3D image taken along respective sagittal planes, and 2D coronal images of the 3D image taken along respective coronal planes; applying an axial neural network to each 2D axial image to generate a respective 2D axial probability map, a sagittal neural network to each 2D sagittal image to generate a respective 2D sagittal probability map, and a coronal neural network to each 2D coronal image to generate a respective 2D coronal probability map; generating, based on the 2D probability maps, a 3D mask of the coronary sinus of the patient.
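The final step, combining the three per-plane probability maps into one 3-D mask, can be sketched once the 2-D maps are re-assembled into volumes of matching shape. Averaging the three volumes and thresholding at 0.5 is an assumed fusion rule for illustration; the claim only states that the mask is generated based on the probability maps.

```python
def fuse_probability_maps(p_axial, p_sagittal, p_coronal, threshold=0.5):
    """Each argument is a probability volume as nested lists [z][y][x],
    re-assembled from the axial, sagittal and coronal 2-D maps.
    Returns a binary 3-D mask of the coronary sinus."""
    Z, Y, X = len(p_axial), len(p_axial[0]), len(p_axial[0][0])
    return [[[1 if (p_axial[z][y][x] + p_sagittal[z][y][x]
                    + p_coronal[z][y][x]) / 3 >= threshold else 0
              for x in range(X)]
             for y in range(Y)]
            for z in range(Z)]

# Toy 1x1x2 volumes: one voxel all three networks agree on, one they reject.
pa = [[[0.9, 0.1]]]
ps = [[[0.8, 0.2]]]
pc = [[[0.7, 0.3]]]
mask = fuse_probability_maps(pa, ps, pc)
```

Averaging across the three orthogonal views suppresses single-plane false positives, which is the usual motivation for this 2.5-D design.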

AUTOMATIC LOCALIZED EVALUATION OF CONTOURS WITH VISUAL FEEDBACK
20220414402 · 2022-12-29

A localized evaluation network incorporates a discriminator acting as a classifier, which may be included within a generative adversarial network (GAN). The GAN may include a generative network, such as a U-Net, for creating segmentations. The localized evaluation network is trained on image pairs including medical images of organs of interest and segmentation (mask) images. The network is trained to distinguish whether an image pair does or does not represent the ground truth. The GAN examines interior layers of the discriminator and evaluates how much each localized image region contributes to the final classification. The discriminator may analyze regions of the image pair that contribute to a classification by analyzing layer weights of the machine learning model. Disclosed embodiments include a visual attribute, such as a heat map, that represents contributions of localized regions of a contour to an overall confidence score. These localized regions may be highlighted and reported for quality assurance review.
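Per-region contributions like those in the heat map can be approximated with an occlusion-style probe: mask one region at a time, re-run the classifier, and record how much the confidence drops. This is a stand-in technique for the layer-weight analysis in the abstract; `classify`, the 1-D "image", and the zero-fill occlusion are all hypothetical.

```python
def region_heatmap(image, classify, region_size=1):
    """Return the confidence drop caused by occluding each region in turn."""
    base = classify(image)
    heat = []
    for i in range(0, len(image), region_size):
        probe = list(image)
        for j in range(i, min(i + region_size, len(image))):
            probe[j] = 0.0                       # occlude this region
        heat.append(base - classify(probe))      # region's contribution
    return heat

# Toy classifier: confidence is simply the mean intensity.
clf = lambda img: sum(img) / len(img)
heat = region_heatmap([1.0, 0.0, 2.0], clf)
```

Regions whose occlusion barely moves the score contribute little to the classification; high-drop regions are the ones a quality-assurance view would highlight.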

CLASSIFICATION OF ORGAN OF INTEREST SHAPES FOR AUTOSEGMENTATION QUALITY ASSURANCE
20220414867 · 2022-12-29

Embodiments described herein provide for receiving a second image comprising an overlay depicting organ-at-risk (OAR) segmentations. The overlay is generated by a first machine learning model based on a first image depicting the anatomical region of a current patient. A second machine learning model receives the second image and a set of third images depicting prior patient OAR segmentations on which the second machine learning model was trained. The second machine learning model classifies the second image as one of a set of class names and characterizes the extent to which the second image is similar to, or dissimilar to, images with the same class name in the set of third images. The characterization may be based on outputs of internal layers of the second machine learning model. Dimensionality reduction may be performed on the outputs of the internal layers to present the outputs in a form comprehensible by humans.
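The dimensionality-reduction step can be sketched with a one-component PCA via power iteration: high-dimensional internal-layer outputs are projected onto their direction of greatest variance, giving each image a single human-readable coordinate. PCA is an assumed choice here; embodiments might equally use t-SNE or UMAP.

```python
def pca_1d(vectors, iters=100):
    """Project each vector onto the first principal component (power iteration)."""
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[k] for v in vectors) / n for k in range(d)]
    centred = [[v[k] - mean[k] for k in range(d)] for v in vectors]
    w = [1.0] * d                                 # power-iteration start vector
    for _ in range(iters):
        # Multiply w by the covariance structure X^T X in two steps.
        proj = [sum(c[k] * w[k] for k in range(d)) for c in centred]
        w = [sum(p * c[k] for p, c in zip(proj, centred)) for k in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]
    return [sum(c[k] * w[k] for k in range(d)) for c in centred]

# Toy internal-layer outputs lying along a diagonal in 2-D.
coords = pca_1d([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
```

Images of the same class should cluster along this axis, so an outlying coordinate flags a segmentation that is dissimilar to its training-set peers.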