Patent classifications
G06T2207/30032
Medical image displaying apparatus and a medical image diagnostic apparatus
A medical image displaying apparatus is provided that improves the workability of making a definitive diagnosis in capsule endoscopy. A medical image displaying apparatus of an embodiment is capable of displaying a virtual endoscopic image of the interior of a tubular body, based on a viewpoint set inside the tube, by using a three-dimensional image of the tubular body. The apparatus comprises a capsule endoscopic image storage and a display controller: the capsule endoscopic image storage stores capsule endoscopic images acquired by a capsule endoscope passing through the tube, and the display controller displays the capsule endoscopic images based on the location of the viewpoint.
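The viewpoint-based display described above can be sketched as a nearest-neighbor lookup, under the assumption (not stated in the abstract) that both the virtual-endoscope viewpoint and the capsule images are parameterized by arc-length along the lumen centerline:

```python
import numpy as np

def select_capsule_image(viewpoint_pos, capture_positions):
    """Return the index of the capsule image captured nearest to the
    virtual-endoscope viewpoint; both are expressed as arc-length (mm)
    along the lumen centerline (a hypothetical parameterization)."""
    capture_positions = np.asarray(capture_positions, dtype=float)
    return int(np.argmin(np.abs(capture_positions - viewpoint_pos)))

# Capsule images captured at these centerline positions (mm):
positions = [0.0, 12.5, 30.0, 47.5, 66.0]
print(select_capsule_image(31.0, positions))  # -> 2 (image taken at 30.0 mm)
```

As the user moves the viewpoint along the virtual endoscopy fly-through, the displayed capsule image would track it via this lookup.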
ENDOSCOPIC EXAMINATION SUPPORT APPARATUS, ENDOSCOPIC EXAMINATION SUPPORT METHOD, AND RECORDING MEDIUM
In the endoscopic examination support apparatus, the three-dimensional model generation means generates a three-dimensional model of a luminal organ in which an endoscope camera is placed, based on endoscopic images acquired by imaging the interior of the luminal organ with the endoscope camera. The unobserved area detection means detects an area estimated not to have been observed by the endoscope camera as an unobserved area, based on the three-dimensional model. The display image generation means generates a display image including information indicating an observation achievement degree for each of a plurality of sites of the luminal organ, based on the detection result for the unobserved area. The endoscopic examination support apparatus may be used to support the user's decision making.
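A minimal sketch of the per-site "observation achievement degree", assuming the 3D model is a set of surface elements labeled by anatomical site, with a boolean flag from the unobserved-area detection (the site names and data layout are illustrative):

```python
import numpy as np

def achievement_degree(site_labels, unobserved_mask, sites):
    """For each site, the fraction of its surface elements on the
    3D model that were observed (1 - unobserved fraction)."""
    site_labels = np.asarray(site_labels)
    unobserved_mask = np.asarray(unobserved_mask, dtype=bool)
    degrees = {}
    for s in sites:
        in_site = site_labels == s
        n = int(in_site.sum())
        degrees[s] = 1.0 - float(unobserved_mask[in_site].sum()) / n if n else 0.0
    return degrees

# Four surface elements per site; True marks an unobserved element.
labels = ["cecum"] * 4 + ["ascending_colon"] * 4
mask = [True, False, False, False, True, True, False, False]
print(achievement_degree(labels, mask, ["cecum", "ascending_colon"]))
# -> {'cecum': 0.75, 'ascending_colon': 0.5}
```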
Techniques for segmentation of lymph nodes, lung lesions and other solid or part-solid objects
Techniques for segmentation include determining an edge of voxels in a range associated with a target object. A center voxel is determined. Target size is determined based on the center voxel. In some embodiments, edges near the center are suppressed, markers are determined based on the center, and an initial boundary is determined using a watershed transform. Some embodiments include determining multiple rays originating at the center in 3D, and determining adjacent rays for each. In some embodiments, a 2D field of amplitudes is determined on a first dimension for distance along a ray and a second dimension for successive rays in order. An initial boundary is determined based on a path of minimum cost to connect each ray. In some embodiments, active contouring is performed using a novel term to refine the initial boundary. In some embodiments, boundaries of part-solid target objects are refined using Markov models.
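The minimum-cost path over the ray-indexed amplitude field can be sketched with dynamic programming; the smoothness constraint (adjacent rays differ by at most one radius step) is an assumption for illustration, not a detail taken from the abstract:

```python
import numpy as np

def min_cost_boundary(cost, max_jump=1):
    """cost[r, k] is the edge cost at radius index r on ray k.
    Pick one radius per ray so that the total cost is minimal and
    adjacent rays differ by at most max_jump radius steps."""
    n_r, n_k = cost.shape
    acc = np.full((n_r, n_k), np.inf)   # accumulated path cost
    acc[:, 0] = cost[:, 0]
    back = np.zeros((n_r, n_k), dtype=int)
    for k in range(1, n_k):
        for r in range(n_r):
            lo, hi = max(0, r - max_jump), min(n_r, r + max_jump + 1)
            prev = lo + int(np.argmin(acc[lo:hi, k - 1]))
            acc[r, k] = cost[r, k] + acc[prev, k - 1]
            back[r, k] = prev
    # Trace back from the cheapest radius on the last ray.
    path = [int(np.argmin(acc[:, -1]))]
    for k in range(n_k - 1, 0, -1):
        path.append(int(back[path[-1], k]))
    return path[::-1]

cost = np.ones((5, 4))
cost[2, :] = 0.0                     # a low-cost valley at radius index 2
print(min_cost_boundary(cost))       # -> [2, 2, 2, 2]
```

In the technique described above, the returned radius indices (one per ray) would be mapped back to 3D to form the initial boundary, before refinement by active contouring.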
ARTIFICIAL INTELLIGENCE-BASED IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
An artificial intelligence-based image processing method includes: obtaining a first sample image of a source domain and a second sample image of a target domain, the first sample image of the source domain carrying a corresponding target processing result; converting the first sample image into a target sample image, the target sample image carrying a corresponding target processing result; training a first image processing model based on the target sample image and the target processing result corresponding to the target sample image, to obtain a second image processing model; and inputting, in response to obtaining a human tissue image of the target domain, the human tissue image into the second image processing model, positioning, by the second image processing model, a target human tissue in the human tissue image, and outputting position information of the target human tissue in the human tissue image.
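The key idea above is that the converted image keeps the source image's annotation while taking on the target domain's appearance. The abstract does not specify the converter; as a deliberately simple stand-in, the sketch below matches first- and second-order intensity statistics:

```python
import numpy as np

def convert_to_target_domain(src_img, tgt_img):
    """Stand-in for the source->target conversion: give the source
    image the target domain's mean/std. (Illustrative only; the
    patent does not specify the conversion model.)"""
    src = np.asarray(src_img, dtype=float)
    tgt = np.asarray(tgt_img, dtype=float)
    z = (src - src.mean()) / (src.std() + 1e-8)
    return z * tgt.std() + tgt.mean()

rng = np.random.default_rng(0)
src = rng.normal(100.0, 10.0, (8, 8))   # source-domain sample image
tgt = rng.normal(60.0, 25.0, (8, 8))    # target-domain sample image
conv = convert_to_target_domain(src, tgt)
# conv now carries target-domain statistics, while the target processing
# result (annotation) of src is reused unchanged for training.
```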
Reconstruction with Object Detection for Images Captured from a Capsule Camera
A method of processing images captured using a capsule camera is disclosed. According to one embodiment, two images designated as a reference image and a float image are received, where the float image corresponds to a captured capsule image and the reference image corresponds to a previously composited image or another capsule image captured prior to the float image. Automatic segmentation is applied to the float image and the reference image to detect any non-GI (non-gastrointestinal) regions. The non-GI regions are excluded from the match measure between the reference image and the deformed float image during the registration process. The two images are then stitched together by rendering them in a common coordinate system. In another embodiment, large areas of non-GI content are removed directly from the input image, and the remaining portions are stitched together to form a new image without performing image registration.
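Excluding non-GI regions from the match measure can be sketched as a masked comparison; mean squared difference is used here as one common choice of match measure (the abstract does not name a specific one):

```python
import numpy as np

def masked_match_measure(ref, flt, non_gi_mask):
    """Mean squared difference computed only over pixels that lie
    outside the segmented non-GI regions of either image."""
    ref = np.asarray(ref, dtype=float)
    flt = np.asarray(flt, dtype=float)
    valid = ~np.asarray(non_gi_mask, dtype=bool)
    if not valid.any():
        return float("inf")          # nothing comparable
    return float(((ref[valid] - flt[valid]) ** 2).mean())

ref = np.zeros((4, 4))
flt = np.zeros((4, 4))
flt[0, 0] = 9.0                      # e.g. a bubble or debris pixel
mask = np.zeros((4, 4), dtype=bool)
mask[0, 0] = True                    # flagged as non-GI by segmentation
print(masked_match_measure(ref, flt, mask))  # -> 0.0 (artifact ignored)
```

Without the mask, the artifact would dominate the measure and could pull the registration toward a wrong deformation.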
SYSTEM AND METHOD FOR DETECTING POLYPS FROM LEARNED BOUNDARIES
A system and method for automated polyp detection in optical colonoscopy images is provided. In one embodiment, the system and method for polyp detection is based on an observation that image appearance around polyp boundaries differs from that of other boundaries in colonoscopy images. To reduce vulnerability against misleading objects, the image processing method localizes polyps by detecting polyp boundaries, while filtering out irrelevant boundaries, with a generative-discriminative model. To filter out irrelevant boundaries, a boundary removal mechanism is provided that captures changes in image appearance across polyp boundaries. Thus, in this embodiment the boundary removal mechanism is minimally affected by texture visibility limitations. In addition, a vote accumulation scheme is applied that enables polyp localization from fragmented edge segmentation maps without identification of whole polyp boundaries.
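The vote accumulation idea can be sketched as follows: each retained edge fragment votes along its inward normal, and the accumulator peak localizes the polyp even from a fragmented edge map. This is a simplification; the patent's scheme also filters boundaries with a generative-discriminative model before voting:

```python
import numpy as np

def accumulate_votes(shape, edge_fragments, radius):
    """Each edge fragment (x, y, nx, ny) casts one vote `radius`
    pixels along its inward normal (nx, ny); return the peak cell."""
    acc = np.zeros(shape)
    for x, y, nx, ny in edge_fragments:
        vx = int(round(x + radius * nx))
        vy = int(round(y + radius * ny))
        if 0 <= vx < shape[0] and 0 <= vy < shape[1]:
            acc[vx, vy] += 1.0
    r, c = np.unravel_index(np.argmax(acc), shape)
    return (int(r), int(c))

# Fragments on a circle of radius 5 around (10, 10), normals inward;
# half the boundary missing would still leave the peak at the center.
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
frags = [(10 + 5 * np.cos(a), 10 + 5 * np.sin(a), -np.cos(a), -np.sin(a))
         for a in angles]
print(accumulate_votes((21, 21), frags, radius=5))  # -> (10, 10)
```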
Systems and methods for training generative adversarial networks and use of trained generative adversarial networks
The present disclosure relates to computer-implemented systems and methods for training and using generative adversarial networks. In one implementation, a system for training a generative adversarial network may include at least one processor that may provide a first plurality of images including representations of a feature-of-interest and indicators of locations of the feature-of-interest, and use the first plurality and indicators to train an object detection network. Further, the processor(s) may provide a second plurality of images including representations of the feature-of-interest, and apply the trained object detection network to the second plurality to produce a plurality of detections of the feature-of-interest. Additionally, the processor(s) may provide manually set verifications of true positives and false positives with respect to the plurality of detections, use the verifications to train a generative adversarial network, and retrain the generative adversarial network using at least one further set of images, further detections, and further manually set verifications.
Autonomous navigation and intervention in the gastrointestinal tract
Included herein are implementations of visual navigation strategies and systems for lumen center tracking, comprising a high-level state machine for gross (i.e., left/right/center) region prediction and curvature estimation, and multiple state-dependent controllers for center tracking, wall avoidance, and curve following. This structure allows a navigation system to operate even in the presence of the significant occlusion that occurs during turn navigation, and to robustly recover from mistakes and disturbances that may occur while attempting to track the lumen center. The system comprises a high-level state machine for gross region prediction, a turn estimator for anticipating sharp turns, and several lower-level controllers for heading adjustment.
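The state-dependent dispatch described above can be sketched as a simple selector; the region names, the curvature threshold, and the controller names are illustrative placeholders, not values from the patent:

```python
def select_controller(lumen_region, curvature):
    """Map the gross region estimate ('left'/'right'/'center'/'occluded')
    and the turn estimator's curvature to a lower-level controller."""
    if lumen_region == "occluded":
        return "wall_avoidance"       # recover before resuming tracking
    if curvature > 0.5:               # turn estimator flags a sharp turn
        return "curve_following"
    if lumen_region in ("left", "right"):
        return "heading_adjustment"   # steer back toward the lumen center
    return "center_tracking"

print(select_controller("center", 0.1))   # -> center_tracking
print(select_controller("left", 0.7))     # -> curve_following
```

Checking occlusion first is what lets the system recover from lost-lumen situations before attempting any center tracking.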
Portable Edge AI-Assisted Diagnosis and Quality Control System for Gastrointestinal Endoscopy
In a decision-support system for gastrointestinal (GI) endoscopy, convolutional neural networks (CNNs) are set up to perform decision-support tasks according to endoscopic images. Each learnable kernel used in the CNNs is modeled as a linear combination of a set of fixed kernels, simplifying kernel learning and giving a lightweight kernel model that reduces the required computation resources. Further computation-resource reduction can be made by CNN model compression via knowledge distillation and by using multi-task CNNs. This enables the decision-support system to be realized as an edge computing system near the site where endoscopic examinations are performed. The system can be automatically configured for esophagogastroduodenoscopy (EGD) or colonoscopy. In the system, lesion-detection results and quality-control results can be seamlessly integrated to provide value-added results that are more valuable to the endoscopist than either set of results considered separately.
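The lightweight kernel model can be sketched directly: the kernel applied at inference is a weighted sum of fixed basis kernels, so only the mixing coefficients are learned. The particular basis below (identity plus two gradient kernels) is a hypothetical choice for illustration:

```python
import numpy as np

def build_kernel(coeffs, basis):
    """Learnable kernel = linear combination of fixed basis kernels:
    len(coeffs) scalars are learned instead of k*k free weights."""
    return np.tensordot(coeffs, basis, axes=1)

# Hypothetical fixed basis: identity, horizontal and vertical gradient.
basis = np.array([
    [[0, 0, 0], [0, 1, 0], [0, 0, 0]],
    [[0, 0, 0], [-1, 0, 1], [0, 0, 0]],
    [[0, -1, 0], [0, 0, 0], [0, 1, 0]],
], dtype=float)
coeffs = np.array([0.5, 1.0, -2.0])   # the only learnable parameters
kernel = build_kernel(coeffs, basis)  # a full 3x3 convolution kernel
```

Here 3 scalars replace 9 free weights per kernel; across a whole CNN this is where the computation/parameter savings come from.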
Object detection model training method and apparatus, object detection method and apparatus, computer device, and storage medium
An object detection model training method includes: inputting an unannotated first sample image into an initial detection model of a current round and outputting a first prediction result for a target object; transforming the first sample image and a first prediction position region within the first prediction result to obtain a second sample image and a prediction transformation result in the second sample image; inputting the second sample image into the initial detection model and outputting a second prediction result for the target object; obtaining a loss value for unsupervised learning according to the difference between the second prediction result and the prediction transformation result; and adjusting the model parameters of the initial detection model according to the loss value and returning to the operation of inputting the first sample image into the initial detection model of the current round, to perform iterative training and obtain an object detection model.
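The consistency loss above can be sketched for a single box with a horizontal flip as the transformation; the L1 gap and the flip are illustrative choices standing in for the unspecified transformation and loss:

```python
import numpy as np

def hflip_box(box, width):
    """Transform a predicted box [x1, y1, x2, y2] under a horizontal
    flip of an image of the given width."""
    x1, y1, x2, y2 = box
    return [width - x2, y1, width - x1, y2]

def consistency_loss(pred_on_transformed, transformed_pred):
    """Unsupervised loss: L1 gap between the model's prediction on the
    transformed image and the transformed original prediction."""
    a = np.asarray(pred_on_transformed, dtype=float)
    b = np.asarray(transformed_pred, dtype=float)
    return float(np.abs(a - b).mean())

first_pred = [10, 20, 50, 60]              # box on the first sample image
target = hflip_box(first_pred, width=100)  # prediction transformation result
second_pred = [48, 20, 90, 61]             # model output on the flipped image
print(consistency_loss(second_pred, target))  # -> 0.75
```

Driving this loss to zero makes the detector's outputs equivariant to the transformation, which is what lets the unannotated images contribute a training signal.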