Patent classifications
G06T2207/30064
GENERATING MODIFIED MEDICAL IMAGES AND DETECTING ABNORMAL STRUCTURES
A method for generating modified medical images is provided. An embodiment of the method includes receiving a first medical image displaying an abnormal structure within a patient, and applying a trained inpainting function to the first medical image to generate a modified first medical image, the trained inpainting function being trained to inpaint abnormal structures within a medical image. The method further includes determining an abnormality patch based on the first medical image and the modified first medical image; receiving a second medical image of the same type as the first medical image; and inserting the abnormality patch into the second medical image to generate a modified second medical image. A method for detecting abnormal structures using a trained detection function, trained on such modified second medical images, is also provided. Systems, computer programs, and computer-readable media related to these methods are also disclosed.
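One way to read the patch steps above is as a difference-and-add operation: subtract the inpainted image from the original to isolate the abnormality, then add that patch into a second image. A minimal numpy sketch, with toy arrays standing in for real medical images and for the output of a hypothetical trained inpainting function:

```python
import numpy as np

def extract_abnormality_patch(original, inpainted):
    # Abnormality patch: voxel-wise difference between the image containing
    # the abnormal structure and its inpainted (abnormality-free) version.
    return original.astype(np.float32) - inpainted.astype(np.float32)

def insert_patch(target, patch):
    # Include the abnormality patch in a second image of the same type.
    return target.astype(np.float32) + patch

# Toy 2D arrays standing in for medical images (illustrative values only).
original = np.full((4, 4), 100.0)
original[1:3, 1:3] = 180.0          # bright "abnormal structure"
inpainted = np.full((4, 4), 100.0)  # inpainting removed the structure
patch = extract_abnormality_patch(original, inpainted)

target = np.full((4, 4), 90.0)      # second image, same modality
modified = insert_patch(target, patch)
```

The resulting `modified` image carries the abnormality intensity offset at the patched location and is unchanged elsewhere, which is the property a detection function would be trained on.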
Class-aware adversarial pulmonary nodule synthesis
Systems and methods are provided for generating a synthesized medical image patch of a nodule. An initial medical image patch and a class label associated with a nodule to be synthesized are received. The initial medical image patch has a masked portion and an unmasked portion. A synthesized medical image patch is generated using a trained generative adversarial network. The synthesized medical image patch includes the unmasked portion of the initial medical image patch and a synthesized nodule replacing the masked portion of the initial medical image patch. The synthesized nodule is synthesized according to the class label. The synthesized medical image patch is output.
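The masked/unmasked composition described above can be sketched in a few lines. The generator below is a hypothetical stand-in for the trained class-conditional GAN (here it simply fills the masked region with a class-dependent intensity); only the keep-unmasked / replace-masked logic mirrors the abstract:

```python
import numpy as np

def stand_in_generator(patch, class_label):
    # Hypothetical stand-in for a trained conditional GAN generator:
    # fills the masked region with a class-dependent intensity.
    fill = {"benign": 120.0, "malignant": 200.0}[class_label]
    return np.full_like(patch, fill)

def synthesize_patch(initial_patch, mask, class_label):
    # Unmasked voxels are kept from the initial patch; masked voxels are
    # replaced by the synthesized nodule, conditioned on the class label.
    synthesized = stand_in_generator(initial_patch, class_label)
    return np.where(mask, synthesized, initial_patch)

initial = np.full((8, 8), 100.0)
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True               # masked portion to be synthesized
out = synthesize_patch(initial, mask, "malignant")
```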
DEFORMABLE CAPSULES FOR OBJECT DETECTION
An improved method of performing object segmentation and classification is disclosed that reduces the memory required for these tasks while increasing predictive accuracy. The improved method utilizes a capsule network with dynamic routing. Capsule networks preserve information about the input by replacing max-pooling layers with convolutional strides and dynamic routing, allowing an input image to be reconstructed from the output capsule vectors. The present invention expands the use of capsule networks to the tasks of object segmentation and medical image-based cancer diagnosis for the first time in the literature; extends the idea of convolutional capsules with locally-connected routing and proposes the concept of deconvolutional capsules; extends masked reconstruction to reconstruct the positive input class; and proposes a capsule-based pooling operation for diagnosis. The convolutional-deconvolutional capsule network shows strong results for object segmentation and classification with a substantial decrease in parameter space.
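The dynamic routing referred to above can be illustrated with a toy numpy version of routing-by-agreement and the standard capsule squashing nonlinearity (this is the generic mechanism, not the patent's locally-connected or deconvolutional variants):

```python
import numpy as np

def squash(v, eps=1e-8):
    # Capsule nonlinearity: preserves each vector's orientation while
    # mapping its length into [0, 1).
    sq = np.sum(v ** 2, axis=-1, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def dynamic_routing(u_hat, iterations=3):
    # u_hat: predictions from n_in input capsules for n_out output
    # capsules, shape (n_in, n_out, dim). Routing coefficients are
    # refined by how well each prediction agrees with the current output.
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # softmax
        s = np.einsum('io,iod->od', c, u_hat)                 # weighted sum
        v = squash(s)                                         # output capsules
        b = b + np.einsum('iod,od->io', u_hat, v)             # agreement
    return v

rng = np.random.default_rng(0)
v_out = dynamic_routing(rng.normal(size=(6, 3, 4)))
```

Because the squash maps lengths into [0, 1), the norm of each output capsule can be read as the probability that the entity it represents is present.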
Image recognition method and device based on deep convolutional neural network
A method includes the following steps: pre-processing chest X-ray films to obtain initial X-ray film images that meet format requirements; screening the initial X-ray film images to detect whether they are posteroanterior chest images; inputting the posteroanterior chest images into a binary classification model of the deep convolutional neural network for negative and positive classification; inputting the images with positive results into a detection model of the deep convolutional neural network to detect a disease type and label the outline of the lesion area in each image; and displaying the disease type and lesion area corresponding to each image.
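The staged pipeline above maps naturally onto a small dispatch function. The model callables below are hypothetical stand-ins for the trained networks; only the control flow (screen, then binary classify, then detect only on positives) mirrors the claimed steps:

```python
def analyze_chest_xray(image, is_pa_view, classify, detect):
    # Mirrors the claimed stages: screen for a posteroanterior view,
    # run binary negative/positive classification, and run lesion
    # detection only on images classified as positive.
    if not is_pa_view(image):
        return {"status": "rejected", "reason": "not a posteroanterior view"}
    if classify(image) == "negative":
        return {"status": "negative"}
    # detect() returns (disease_type, lesion_outline) pairs for display
    return {"status": "positive", "findings": detect(image)}

result = analyze_chest_xray(
    image="xray.png",  # placeholder for a pre-processed image
    is_pa_view=lambda img: True,
    classify=lambda img: "positive",
    detect=lambda img: [("nodule", [(10, 10), (20, 20)])],
)
```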
SYSTEMS AND METHODS FOR PROCESSING REAL-TIME VIDEO FROM A MEDICAL IMAGE DEVICE AND DETECTING OBJECTS IN THE VIDEO
The present disclosure relates to systems and methods for processing real-time video and detecting objects in the video. In one implementation, a system is provided that includes an input port for receiving real-time video obtained from a medical image device, a first bus for transferring the received real-time video, and at least one processor configured to receive the real-time video from the first bus, perform object detection by applying a trained neural network on frames of the received real-time video, and overlay a border indicating a location of at least one detected object in the frames. The system also includes a second bus for receiving the video with the overlaid border, an output port for outputting the video with the overlaid border from the second bus to an external display, and a third bus for directly transmitting the received real-time video to the output port.
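The per-frame overlay step can be illustrated with a small numpy sketch: drawing a rectangular border at a detected object's bounding box without mutating the original frame (coordinates, values, and the box convention are illustrative assumptions, not the patent's):

```python
import numpy as np

def overlay_border(frame, box, value=255):
    # Draw a rectangular border indicating a detected object's location.
    # box = (top, left, bottom, right) in pixels, bottom/right exclusive.
    # The input frame is copied so the raw video can still pass through.
    out = frame.copy()
    t, l, b, r = box
    out[t, l:r] = value          # top edge
    out[b - 1, l:r] = value      # bottom edge
    out[t:b, l] = value          # left edge
    out[t:b, r - 1] = value      # right edge
    return out

frame = np.zeros((10, 10), dtype=np.uint8)   # toy grayscale frame
marked = overlay_border(frame, (2, 2, 7, 7))
```

Copying the frame matches the disclosed architecture, where the unmodified video is transmitted directly to the output port on a separate bus while the annotated video travels on its own bus.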
Capsules for image analysis
An improved method of performing object segmentation and classification is disclosed that reduces the memory required for these tasks while increasing predictive accuracy. The improved method utilizes a capsule network with dynamic routing. Capsule networks preserve information about the input by replacing max-pooling layers with convolutional strides and dynamic routing, allowing an input image to be reconstructed from the output capsule vectors. The present invention expands the use of capsule networks to the tasks of object segmentation and medical image-based cancer diagnosis for the first time in the literature; extends the idea of convolutional capsules with locally-connected routing and proposes the concept of deconvolutional capsules; extends masked reconstruction to reconstruct the positive input class; and proposes a capsule-based pooling operation for diagnosis. The convolutional-deconvolutional capsule network shows strong results for object segmentation and classification with a substantial decrease in parameter space.
SYSTEMS AND METHODS FOR LUNG NODULE EVALUATION
A method for lung nodule evaluation is provided. The method may include obtaining a target image including at least a portion of a lung of a subject. The method may also include segmenting, from the target image, at least one target region, each of which corresponds to a lung nodule of the subject. The method may further include generating an evaluation result with respect to the at least one lung nodule based on the at least one target region.
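A toy sketch of the last step, generating an evaluation result from a segmented target region. The volume-based metrics below are an assumed example of such a result (the abstract does not specify which metrics the evaluation produces):

```python
import numpy as np

def evaluate_nodule(region_mask, voxel_volume_mm3=1.0):
    # Evaluate a segmented lung-nodule region: voxel count, physical
    # volume, and the diameter of a sphere of equivalent volume.
    n = int(region_mask.sum())
    volume = n * voxel_volume_mm3
    diameter = 2.0 * (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
    return {"voxel_count": n, "volume_mm3": volume,
            "equiv_diameter_mm": diameter}

mask = np.zeros((10, 10, 10), dtype=bool)
mask[2:5, 2:5, 2:5] = True          # 27-voxel segmented target region
report = evaluate_nodule(mask, voxel_volume_mm3=1.0)
```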
DYNAMIC 3D LUNG MAP VIEW FOR TOOL NAVIGATION INSIDE THE LUNG
A method for implementing a dynamic three-dimensional lung map view for navigating a probe inside a patient's lungs includes loading a navigation plan into a navigation system, the navigation plan including a planned pathway shown in a 3D model generated from a plurality of CT images, inserting the probe into a patient's airways, registering a sensed location of the probe with the planned pathway, selecting a target in the navigation plan, presenting a view of the 3D model showing the planned pathway and indicating the sensed location of the probe, navigating the probe through the airways of the patient's lungs toward the target, iteratively adjusting the presented view of the 3D model showing the planned pathway based on the sensed location of the probe, and updating the presented view by removing at least a part of an object forming part of the 3D model.
TISSUE NODULE DETECTION AND TISSUE NODULE DETECTION MODEL TRAINING METHOD, APPARATUS, DEVICE, AND SYSTEM
This application relates to a tissue nodule detection and tissue nodule detection model training method, apparatus, device, storage medium, and system. The method for training a tissue nodule detection model includes: obtaining source domain data and target domain data, the source domain data comprising a source domain image and an image annotation indicating location information of a tissue nodule in the source domain image, and the target domain data comprising a target image; performing feature extraction on the source domain image using a neural network model to obtain a source domain sampling feature, performing feature extraction on the target image using the same neural network model to obtain a target sampling feature, and determining a model result from the source domain sampling feature using the neural network model; determining, according to the source domain sampling feature and the target sampling feature, a distance parameter describing the magnitude of the data difference between the source domain data and the target domain data; determining, according to the model result and the image annotation, a loss function value corresponding to the source domain image; and training the neural network model into a tissue nodule detection model by iteratively reducing a combination of the loss function value and the distance parameter. In this way, detection accuracy can be improved.
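The combined objective can be sketched concretely. Here the distance parameter is instantiated as a simple linear-kernel maximum mean discrepancy between batch feature means; that is one common choice for such a domain-distance term, assumed for illustration rather than taken from the patent:

```python
import numpy as np

def linear_mmd(src_feat, tgt_feat):
    # Distance parameter between domains: squared distance between the
    # mean feature vectors of a source batch and a target batch.
    delta = src_feat.mean(axis=0) - tgt_feat.mean(axis=0)
    return float(delta @ delta)

def combined_objective(task_loss, src_feat, tgt_feat, lam=0.1):
    # Quantity reduced iteratively during training: detection loss on the
    # annotated source images plus the weighted domain-distance term.
    return task_loss + lam * linear_mmd(src_feat, tgt_feat)

src = np.array([[1.0, 2.0], [3.0, 4.0]])   # source-domain features
tgt = np.array([[2.0, 3.0], [2.0, 3.0]])   # target-domain features
loss = combined_objective(0.5, src, tgt, lam=0.1)
```

When the two domains' feature distributions match (as in this toy batch, where the means coincide), the distance term vanishes and only the source-domain detection loss remains.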