G06T2207/30064

Monitoring computed tomography (CT) scan image

Disclosed is a system and a method for monitoring a CT scan image. A CT scan image may be resampled into a plurality of slices using bilinear interpolation. A region of interest may be identified on each slice using an image processing technique. The region of interest may be masked on each slice using deep learning. Subsequently, a nodule may be detected as the region of interest using the deep learning. Further, a plurality of characteristics associated with the nodule may be identified. Furthermore, emphysema may be detected in the region of interest on each slice. A malignancy risk score for the patient may be computed. A progress of the nodule may be monitored across subsequent CT scan images. Finally, a report of the patient may be generated.
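
The abstract does not disclose an implementation; as a minimal sketch of the resampling step only, bilinear interpolation of each axial slice onto a common grid might look as follows (function names and the numpy-based approach are illustrative assumptions, not the patented method):

```python
import numpy as np

def bilinear_resize(slice_2d, out_h, out_w):
    """Resample one CT slice to (out_h, out_w) with bilinear interpolation."""
    in_h, in_w = slice_2d.shape
    # Map output pixel centers back into input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Weighted average of the four surrounding pixels.
    top = slice_2d[y0][:, x0] * (1 - wx) + slice_2d[y0][:, x1] * wx
    bot = slice_2d[y1][:, x0] * (1 - wx) + slice_2d[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def resample_volume(volume, out_h, out_w):
    """Resample every axial slice of a CT volume to a common grid."""
    return np.stack([bilinear_resize(s, out_h, out_w) for s in volume])
```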

Deformable capsules for object detection

An improved method of performing object segmentation and classification that reduces the memory required to perform these tasks, while increasing predictive accuracy. The improved method utilizes a capsule network with dynamic routing. Capsule networks allow for the preservation of information about the input by replacing max-pooling layers with convolutional strides and dynamic routing, allowing for the reconstruction of an input image from output capsule vectors. The present invention expands the use of capsule networks to the task of object segmentation and medical image-based cancer diagnosis for the first time in the literature; extends the idea of convolutional capsules with locally-connected routing and proposes the concept of deconvolutional capsules; extends the masked reconstruction to reconstruct the positive input class; and proposes a capsule-based pooling operation for diagnosis. The convolutional-deconvolutional capsule network shows strong results for the tasks of object segmentation and classification with a substantial decrease in parameter space.
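
For readers unfamiliar with the routing-by-agreement mechanism the abstract refers to, a toy numpy version of dynamic routing (not the patent's locally-connected or deconvolutional variant; all names here are illustrative) can be sketched as:

```python
import numpy as np

def squash(v, axis=-1, eps=1e-9):
    """Capsule non-linearity: shrink short vectors toward zero, keep direction."""
    norm2 = np.sum(v * v, axis=axis, keepdims=True)
    return (norm2 / (1.0 + norm2)) * v / np.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iterations=3):
    """Routing-by-agreement over prediction vectors.

    u_hat: (num_in, num_out, dim) predictions from lower-level capsules.
    Returns (num_out, dim) output capsule vectors.
    """
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                           # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax over outputs
        s = (c[:, :, None] * u_hat).sum(axis=0)               # weighted sum of votes
        v = squash(s)                                         # (num_out, dim)
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement update
    return v
```

The agreement term increases the coupling of a lower capsule to whichever output capsule its prediction already points toward, which is the behavior the patent's locally-connected routing specializes to spatial neighborhoods.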

Augmented inspector interface with targeted, context-driven algorithms

Systems and techniques that facilitate an augmented inspector interface with targeted, context-driven algorithms are provided. In various embodiments, a magnification component can magnify a portion of a medical image. In various embodiments, a recognition component can recognize an anatomical structure depicted in the portion of the medical image. In various embodiments, a recommendation component can recommend one or more sets of computing algorithms or computing operations related to the anatomical structure. In various embodiments, a menu component can display the one or more recommended sets of computing algorithms or computing operations in a drop-down menu.
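The recommendation step can be thought of as a lookup from a recognized structure to candidate algorithm sets; the mapping below is purely hypothetical (the patent does not enumerate structures or algorithms):

```python
# Hypothetical mapping from a recognized anatomical structure to the
# algorithm sets a menu component might surface in a drop-down.
ALGORITHMS_BY_STRUCTURE = {
    "lung": ["nodule detection", "emphysema quantification"],
    "liver": ["lesion segmentation", "fat fraction estimation"],
    "heart": ["chamber segmentation", "calcium scoring"],
}

def recommend_algorithms(structure, fallback=("generic measurement",)):
    """Return the recommended algorithm set for a recognized structure."""
    return ALGORITHMS_BY_STRUCTURE.get(structure, list(fallback))
```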

LEARNING DEVICE, LEARNING METHOD, LEARNING PROGRAM, INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
20230054096 · 2023-02-23 · ·

A processor derives a first feature amount for an object included in an image by a first neural network, structures a sentence including a description of the object included in the image to derive structured information for the sentence, and derives a second feature amount for the sentence from the structured information by a second neural network. The processor trains the first neural network and the second neural network such that, in a feature space to which the first feature amount and the second feature amount belong, a distance between the derived first feature amount and second feature amount is reduced in a case in which the object included in the image and the object described in the sentence correspond to each other.
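
The training criterion described here is a distance-reduction objective on a shared feature space. A standard contrastive hinge loss, shown below as a stand-in (the patent does not specify its loss; the margin term for non-matching pairs is an assumption), captures the same idea:

```python
import numpy as np

def paired_distance_loss(img_feats, txt_feats, matched, margin=1.0):
    """Contrastive objective on a shared feature space.

    Pull matched image/sentence feature pairs together; push unmatched
    pairs at least `margin` apart. `matched` is 1.0 for corresponding
    pairs and 0.0 otherwise.
    """
    d = np.linalg.norm(img_feats - txt_feats, axis=1)        # per-pair distance
    pos = matched * d ** 2                                   # shrink matched distances
    neg = (1 - matched) * np.maximum(0.0, margin - d) ** 2   # separate non-matches
    return float(np.mean(pos + neg))
```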

Diagnosis support apparatus and X-ray CT apparatus

In one embodiment, a diagnosis support apparatus includes: an input circuit configured to acquire a first medical image; and processing circuitry configured to generate a second medical image from the first medical image such that the information included in the second medical image is reduced relative to the information included in the first medical image, extract auxiliary information from the first medical image, and perform inference of a disease by using the second medical image and the auxiliary information.

INTERACTIVE ENDOSCOPY FOR INTRAOPERATIVE VIRTUAL ANNOTATION IN VATS AND MINIMALLY INVASIVE SURGERY

A controller (522) for live annotation of interventional imagery includes a memory (52220) that stores software instructions and a processor (52210) that executes the software instructions. When executed by the processor (52210), the software instructions cause the controller (522) to implement a process that includes receiving (S210) interventional imagery during an intraoperative intervention and automatically analyzing (S220) the interventional imagery for detectable features. The process executed when the processor (52210) executes the software instructions also includes detecting (S230) a detectable feature and determining (S240) to add an annotation to the interventional imagery for the detectable feature. The process further includes identifying (S250) a location for the annotation as an identified location in the interventional imagery and adding (S260) the annotation to the interventional imagery at the identified location to correspond to the detectable feature. During the intraoperative intervention, a video is output (S270) as video output based on the interventional imagery and the annotation, including the annotation overlaid on the interventional imagery at the identified location.

Systems and methods for processing real-time video from a medical image device and detecting objects in the video

The present disclosure relates to computer-implemented systems and methods for detecting a feature-of-interest in a video. In one implementation, a computer-implemented system may include a discriminator network and a generative network. The discriminator network may include a perception branch and an adversarial branch, the perception branch being configured to output detections of the feature-of-interest in the video. The generative network may be configured to receive detections of the feature-of-interest from the perception branch of the discriminator network and generate artificial representations of the feature-of-interest based on the detections from the perception branch. Further, the adversarial branch may be configured to provide an output identifying differences between the false representations and true representations of the feature-of-interest, and the perception branch may be further configured to be trained by the output of the adversarial branch so that false representations are not detected by the perception branch as true representations.
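
The data flow between the three components can be sketched with toy stand-ins (every function here is a deliberately simplified placeholder, not the networks the disclosure describes):

```python
import numpy as np

rng = np.random.default_rng(0)

def perception_branch(frame, thresh=0.5):
    """Toy stand-in: 'detect' pixels whose intensity exceeds a threshold."""
    return frame > thresh

def generator(detections, frame):
    """Toy stand-in: synthesize an artificial representation by keeping
    detected regions and replacing everything else with noise."""
    fake = rng.uniform(size=frame.shape)
    fake[detections] = frame[detections]
    return fake

def adversarial_branch(real_frame, fake_frame):
    """Toy stand-in: score how different the artificial representation is
    from the true one (0 = indistinguishable); this score is the feedback
    used to further train the perception branch."""
    return float(np.mean(np.abs(real_frame - fake_frame)))

frame = rng.uniform(size=(8, 8))     # one video frame
det = perception_branch(frame)       # detections of the feature-of-interest
fake = generator(det, frame)         # artificial representation
score = adversarial_branch(frame, fake)
```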

Method for filtering normal medical image, method for interpreting medical image, and computing device implementing the methods
11574727 · 2023-02-07

A method of reading a medical image by a computing device operated by at least one processor is provided. The method includes obtaining an abnormality score of the input image using an abnormality prediction model; filtering the input image out of subsequent analysis when the abnormality score is less than or equal to a cut-off score, the cut-off score being set to achieve a specific reading sensitivity; and obtaining an analysis result of the input image, using a classification model that sorts the input image into classification classes, when the abnormality score is greater than the cut-off score.
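
One plausible way to realize the cut-off (the patent does not give a formula; the quantile-based choice below is an assumption) is to pick the largest threshold that still keeps the desired fraction of known-abnormal validation cases above it:

```python
import numpy as np

def cutoff_for_sensitivity(abnormal_scores, sensitivity):
    """Pick the largest cut-off that still leaves at least `sensitivity`
    of known abnormal cases above it, so they are not filtered out."""
    return float(np.quantile(abnormal_scores, 1.0 - sensitivity))

def should_analyze(score, cutoff):
    """Filter rule from the abstract: skip analysis at or below the cut-off."""
    return score > cutoff
```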

X-ray image synthesis from CT images for training nodule detection systems

Systems and methods for generating synthesized medical images for training a machine learning based network. An input medical image in a first modality is received comprising a nodule region for each of one or more nodules, a remaining region, and an annotation for each of the nodules. A synthesized medical image in a second modality is generated from the input medical image comprising the annotation for each of the nodules. A synthesized nodule image of each of the nodule regions and a synthesized remaining image of the remaining region are generated in the second modality. It is determined whether a particular nodule is visible in the synthesized medical image based on the synthesized nodule image for the particular nodule and the synthesized remaining image. If at least one nodule is not visible in the synthesized medical image, the annotation for the non-visible nodule is removed from the synthesized nodule image.
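
The visibility test and annotation pruning could be sketched as follows, assuming a simple mean-contrast criterion between the synthesized nodule image and the synthesized remaining image (the patent does not specify the criterion; the threshold and function names are hypothetical):

```python
import numpy as np

def nodule_visible(nodule_patch, background_patch, min_contrast=0.05):
    """Hypothetical visibility test: the nodule counts as visible in the
    synthesized X-ray if its mean intensity stands out from the synthesized
    background by at least `min_contrast`."""
    return abs(float(nodule_patch.mean()) - float(background_patch.mean())) >= min_contrast

def prune_annotations(annotations, nodule_patches, background_patch):
    """Drop annotations whose nodule is not visible in the synthesized image."""
    return [a for a, p in zip(annotations, nodule_patches)
            if nodule_visible(p, background_patch)]
```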

Object recognition method and device, and storage medium

An object recognition method is performed at an electronic device. The method includes: pre-processing a target image to obtain a pre-processed image, the pre-processed image including three-dimensional image information of a target region of a to-be-detected object; processing the pre-processed image by using a target data model to obtain a target probability, the target probability representing a probability that an abnormality appears in a target object in the target region of the to-be-detected object; and determining a recognition result of the target region of the to-be-detected object according to the target probability, the recognition result indicating the probability that the abnormality appears in the target region of the to-be-detected object. The object recognition method can effectively improve accuracy of object recognition and avoid cases of incorrect recognition.