Patent classifications
G06V2201/032
Medical object detection and identification
An approach for improving the determination of a significant slice associated with a tumor in a volume of medical images is disclosed. The approach is based on annotations of the tumor range and of the slice index at which the tumor appears to have its largest area. The approach runs inference with a tumor growth classifier over a sliding window of the volume slices and creates a discrete integral function from the classifier predictions. The approach applies post-processing to the discrete integral function, which can include a smoothing function and a bias correction. The approach then selects the slice index of maximum value from the post-processing step.
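The pipeline in this abstract (sliding-window classifier scores, discrete integral, smoothing, bias correction, argmax) can be sketched roughly as follows. The function name, the moving-average smoother, and the score convention (positive means the tumor is still growing slice-to-slice) are illustrative assumptions, not details from the patent:

```python
import numpy as np

def significant_slice_index(growth_predictions, smooth_width=5, bias=None):
    """Pick the slice with the largest tumor area from per-window
    tumor-growth classifier scores (hypothetical interface).

    growth_predictions: 1D sequence of classifier outputs, one per
    sliding window over the volume slices; positive values are assumed
    to indicate growth, negative values shrinkage.
    """
    preds = np.asarray(growth_predictions, dtype=float)
    # Discrete integral of the predictions: the running sum peaks where
    # growth turns into shrinkage, i.e. near the largest-area slice.
    integral = np.cumsum(preds)
    # Post-processing step 1: smooth with a simple moving average
    # (one possible choice of smoothing function).
    kernel = np.ones(smooth_width) / smooth_width
    smoothed = np.convolve(integral, kernel, mode="same")
    # Post-processing step 2: optional bias correction, e.g. a trend
    # estimated on validation data and subtracted here.
    if bias is not None:
        smoothed = smoothed - bias
    # Select the slice index of maximum post-processed value.
    return int(np.argmax(smoothed))
```

With scores that flip from growth to shrinkage, e.g. `[1, 1, 1, -1, -1, -1]`, the cumulative sum peaks at index 2, which the argmax then returns.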
DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
Methods and apparatus distinguish invasive adenocarcinoma (IA) from in situ adenocarcinoma (AIS). One example apparatus includes a set of circuits, and a data store that stores three dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image, a training circuit that trains the classification circuit to identify a texture feature associated with IA, an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and that provides the diagnostic 3D radiological image to the classification circuit, and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.
APPARATUS, METHOD AND COMPUTER-READABLE STORAGE MEDIUM FOR DETECTING OBJECTS IN A VIDEO SIGNAL BASED ON VISUAL EVIDENCE USING AN OUTPUT OF A MACHINE LEARNING MODEL
Detections in video frames of a video signal, which are output from a machine learning model, are associated to generate a detection chain. Display of a detection in the video signal is caused based on a position of the detection in the detection chain, the confidence value of the detection and the location of the detection.
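One way to realize the association and display logic this abstract describes is sketched below. The IoU-based chaining rule, the thresholds, and all names are illustrative assumptions; the patent does not prescribe a specific association method:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int
    box: tuple           # (x, y, w, h)
    confidence: float

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def build_chains(detections, iou_threshold=0.3):
    """Associate per-frame model outputs into detection chains by
    spatial overlap between consecutive frames."""
    chains = []
    for det in sorted(detections, key=lambda d: d.frame):
        for chain in chains:
            last = chain[-1]
            if det.frame == last.frame + 1 and iou(det.box, last.box) >= iou_threshold:
                chain.append(det)
                break
        else:
            chains.append([det])
    return chains

def should_display(det, chain, min_chain_pos=2, min_confidence=0.5):
    """Gate display on the detection's position in its chain (visual
    evidence accumulated over frames) and its confidence value."""
    return chain.index(det) + 1 >= min_chain_pos and det.confidence >= min_confidence
```

Gating on chain position suppresses one-frame false positives: a detection is only drawn once the same object has been re-detected in a later frame.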
Fast 3D Radiography with Multiple Pulsed X-ray Sources by Deflecting Tube Electron Beam using Electro-Magnetic Field
An X-ray imaging system using multiple pulsed X-ray sources to perform highly efficient and ultrafast 3D radiography is presented. Multiple pulsed X-ray sources are mounted on a moving structure to form an array of sources. The X-ray sources move simultaneously relative to an object along a pre-defined arc track at a constant speed as a group. The electron beam inside each individual X-ray tube is deflected by a magnetic or electric field to move the focal spot a small distance. When the focal spot of an X-ray tube has a speed equal to the group speed but in the opposite direction, the X-ray source and X-ray flat-panel detector are activated through an external exposure control unit, so that the source tube is effectively stationary for that moment. A 3D scan can cover a much wider sweep angle in a much shorter time, and image analysis can also be done in real time.
SYSTEM AND METHOD FOR PREDICTING THE RISK OF FUTURE LUNG CANCER
Risk prediction models are trained and deployed to analyze images, such as computed tomography scans, for predicting future risk of lung cancer for one or more subjects. Individual risk prediction models are separately trained on nodule-specific and non-nodule specific features such that each risk prediction model can predict future risk of lung cancer across different time periods (e.g., 1 year, 3 years, or 5 years). Such risk prediction models are useful for developing preventive therapies for lung cancer by enabling clinical trial enrichment.
SYSTEMS AND METHODS OF DEEP LEARNING FOR COLORECTAL POLYP SCREENING
Disclosed are various embodiments of systems and methods of deep learning for colorectal polyp screening and providing a prediction of neoplasticity of a polyp. A video of a colonoscopy procedure can be captured. Frames from the video or images associated with the colonoscopy procedure can be extracted. A model for classifying objects that appear in the frames or the images can be obtained. A classification can be determined for a polyp that appears in at least one of the frames or images based on applying the frames or images to an input layer of the model.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus comprising at least one processor, wherein the at least one processor is configured to: determine a property of a predetermined property item from at least one image; generate a plurality of character strings related to the at least one image from the at least one image; and derive, for each of the plurality of character strings, a recommended score indicating a degree of recommendation for describing that character string in a document, based on a result of the determination.
Image classification using a mask image and neural networks
Image classification using a generated mask image is performed by generating a mask image that extracts a target area from an input image, extracting an image feature map of the input image by inputting the input image in a first neural network including at least one image feature extracting layer, masking the image feature map by using the mask image, and classifying the input image by inputting the masked image feature map to a second neural network including at least one classification layer.
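The masking step between the two networks can be illustrated with plain array operations. The `(H, W, C)` feature-map layout, the binary mask, and the global-average-pooling step are assumptions for the sketch; the abstract only specifies that the feature map is masked before the classification layers:

```python
import numpy as np

def masked_classification_input(image_feature_map, mask, pool=True):
    """Mask an image feature map before classification.

    image_feature_map: (H, W, C) array from the first network's image
    feature extracting layer(s); mask: (H, W) binary array marking the
    target area extracted from the input image.
    """
    # Broadcast the mask across the channel dimension so only feature
    # responses inside the target area survive.
    masked = image_feature_map * mask[..., None]
    if pool:
        # Global average pooling over the spatial dimensions yields a
        # vector suitable as input to the second (classification) network.
        return masked.mean(axis=(0, 1))
    return masked
```

Because the mask zeroes everything outside the target area, the second network classifies only the region of interest rather than the whole scene.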
Systems and methods for processing real-time video from a medical image device and detecting objects in the video
The present disclosure relates to computer-implemented systems and methods for detecting a feature-of-interest in a video. In one implementation, a computer-implemented system may include a discriminator network and a generative network. The discriminator network may include a perception branch and an adversarial branch, the perception branch being configured to output detections of the feature-of-interest in the video. The generative network may be configured to receive detections of the feature-of-interest from the perception branch of the discriminator network and generate artificial representations of the feature-of-interest based on the detections from the perception branch. Further, the adversarial branch may be configured to provide an output identifying differences between the false representations and true representations of the feature-of-interest, and the perception branch may be further configured to be trained by the output of the adversarial branch so that false representations are not detected by the perception branch as true representations.
IMAGE ANNOTATION USING ONE OR MORE NEURAL NETWORKS
Apparatuses, systems, and techniques are presented to predict annotations for objects in images. In at least one embodiment, boundaries of an object within an image can be identified based, at least in part, on a user-generated outline of only a portion of this object or information about a size of this object provided by a user.