Patent classifications
G06T7/0014
Quality Control of Automated Whole-slide Analyses
The subject disclosure presents systems and methods for automatically selecting meaningful regions, or fields of view (FOVs), on a whole-slide image and performing quality control on the resulting collection of FOVs. Density maps may be generated that quantify the local density of detection results. These density (heat) maps, as well as combinations of maps (such as a local sum, ratio, etc.), may be provided as input to an automated FOV selection operation. The selection operation may select, based on one or more rules, regions of each heat map that represent extreme and average representative regions. One or more rules may be defined to generate the list of candidate FOVs. The rules may generally be formulated such that the FOVs chosen for quality control are those that require the most scrutiny and will benefit most from assessment by an expert observer.
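The rule-based selection described above can be sketched as follows, assuming a tiled density map where one illustrative rule set flags the highest-density tile, the lowest-density tile, and the tile closest to the global mean as candidate FOVs; the function name and tiling scheme are hypothetical, not taken from the disclosure.

```python
import numpy as np

def select_candidate_fovs(density_map, tile=4):
    """Pick extreme and average-representative tiles from a density map.

    Illustrative rules: the tile with the highest local density, the tile
    with the lowest, and the tile closest to the global mean are flagged
    for expert quality-control review.
    """
    h, w = density_map.shape
    scores = {}
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            scores[(r, c)] = density_map[r:r + tile, c:c + tile].mean()
    mean_density = np.mean(list(scores.values()))
    highest = max(scores, key=scores.get)
    lowest = min(scores, key=scores.get)
    typical = min(scores, key=lambda k: abs(scores[k] - mean_density))
    return {"max": highest, "min": lowest, "mean": typical}

# Toy 8x8 map: one "hot" quadrant of dense detections.
density = np.zeros((8, 8))
density[0:4, 0:4] = 10.0
fovs = select_candidate_fovs(density, tile=4)
```

A real implementation would combine several maps (local sums, ratios) before scoring, but the tile-ranking step would look much the same.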
ANALYZING POSTURE-BASED IMAGE DATA
Various embodiments are directed to systems and methods for determining whether an individual uses proper posture to perform a job duty/task. For example, systems may determine whether an individual utilizes proper posture when lifting a heavy item from a floor. Accordingly, various embodiments comprise an image capture device and a central computing entity configured to receive item information/data for an item to be moved by an individual and to determine whether the item information/data satisfies one or more image collection criteria. Upon determining that the item information/data satisfies one or more of the image collection criteria, the computing entity may activate an image capture device to collect image information/data of individuals performing the job duty/task, compare the collected image information/data against a plurality of reference images, and determine whether the collected image information/data is indicative of the individual performing the job duty/task according to proper posture considerations.
Systems and methods for processing electronic images of slides for a digital pathology workflow
A computer-implemented method of using a machine learning model to categorize a sample in digital pathology may include receiving one or more cases, each associated with digital images of a pathology specimen; identifying, using the machine learning model, a case as ready to view; receiving a selection of the case, the case comprising a plurality of parts; determining, using the machine learning model, whether the plurality of parts are suspicious or non-suspicious; receiving a selection of a part of the plurality of parts; determining whether a plurality of slides associated with the part are suspicious or non-suspicious; determining, using the machine learning model, a collection of suspicious slides, of the plurality of slides, the machine learning model having been trained by processing a plurality of training images; and annotating the collection of suspicious slides and/or generating a report based on the collection of suspicious slides.
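The hierarchical triage in this method (slides roll up to parts, parts to a case) can be illustrated with a small aggregation sketch; the slide scores stand in for a trained model's outputs, and the threshold, function name, and data layout are assumptions for illustration only.

```python
def triage_case(slide_scores, threshold=0.5):
    """Roll slide-level suspicion scores up to part-level labels.

    slide_scores: {part_id: {slide_id: score}}, where each score is a
    hypothetical model output in [0, 1].
    Returns per-part labels and the collection of suspicious slides.
    """
    part_labels = {}
    suspicious_slides = []
    for part_id, slides in slide_scores.items():
        flagged = [s for s, score in slides.items() if score >= threshold]
        part_labels[part_id] = "suspicious" if flagged else "non-suspicious"
        suspicious_slides.extend(flagged)
    return part_labels, suspicious_slides

labels, flagged = triage_case({
    "A": {"A1": 0.9, "A2": 0.1},
    "B": {"B1": 0.2},
})
```

The returned collection of suspicious slides is what the claimed method would then annotate or summarize in a report.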
SYSTEM OF JOINT BRAIN TUMOR AND CORTEX RECONSTRUCTION
System for performing fully automatic brain tumor and tumor-aware cortex reconstructions upon receiving multi-modal MRI data (T1, T1c, T2, T2-Flair). The system outputs imaging which delineates distinctions between tumors (including tumor edema, and tumor active core), from white matter and gray matter surfaces. In cases where existing MRI model data is insufficient then the model is trained on-the-fly for tumor segmentation and classification. A tumor-aware cortex segmentation that is adaptive to the presence of the tumor is performed using labels, from which the system reconstructs and visualizes both tumor and cortical surfaces for diagnostic and surgical guidance. The technology has been validated using a publicly-available challenge dataset.
Method and apparatus for mammographic multi-view mass identification
A method, applied to an apparatus for mammographic multi-view mass identification, includes receiving a main image, a first auxiliary image, and a second auxiliary image. The main image and the first auxiliary image are images of a breast of a person, and the second auxiliary image is an image of another breast of the person. The method further includes detecting the nipple location based on the main image and the first auxiliary image; generating a first probability map of the main image based on the main image, the first auxiliary image, and the nipple location; generating a second probability map of the main image based on the main image, the second auxiliary image, and the nipple location; and generating and outputting a fused probability map based on the first probability map and the second probability map.
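The final fusion step can be sketched as below. The patent leaves the exact combination rule open, so a weighted average of the two single-view probability maps is used purely as an illustration; the function name and weight are hypothetical.

```python
import numpy as np

def fuse_probability_maps(p_ipsilateral, p_contralateral, w=0.5):
    """Fuse two single-view probability maps of the main image.

    p_ipsilateral:  map built from the main image and first auxiliary view.
    p_contralateral: map built from the main image and the other breast.
    A weighted average is one simple fusion rule among many.
    """
    assert p_ipsilateral.shape == p_contralateral.shape
    fused = w * p_ipsilateral + (1 - w) * p_contralateral
    return np.clip(fused, 0.0, 1.0)

p1 = np.array([[0.8, 0.2], [0.0, 1.0]])
p2 = np.array([[0.6, 0.4], [0.0, 1.0]])
fused = fuse_probability_maps(p1, p2)
```

A mass candidate supported by both the ipsilateral and contralateral comparisons keeps a high fused probability, while a finding visible in only one map is attenuated.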
DISEASE CHARACTERIZATION FROM FUSED PATHOLOGY AND RADIOLOGY DATA
Methods and apparatus distinguish invasive adenocarcinoma (IA) from adenocarcinoma in situ (AIS). One example apparatus includes a set of circuits and a data store that stores a set of three-dimensional (3D) radiological images of tissue demonstrating IA or AIS. The set of circuits includes a classification circuit that generates an invasiveness classification for a diagnostic 3D radiological image, a training circuit that trains the classification circuit to identify a texture feature associated with IA, an image acquisition circuit that acquires a diagnostic 3D radiological image of a region of tissue demonstrating cancerous pathology and provides it to the classification circuit, and a prediction circuit that generates an invasiveness score based on the diagnostic 3D radiological image and the invasiveness classification. The training circuit trains the classification circuit using a set of 3D histological reconstructions combined with the set of 3D radiological images.
SYSTEMS AND METHODS FOR IMAGE SEGMENTATION
Systems and methods for image segmentation are provided. The systems may obtain a target image and a template image relating to the target image. The template image may correspond to an initial mask reflecting initial segmentations of the template image. The systems may determine a first transformation and an intermediate template image by preliminarily registering the template image to the target image and generate an intermediate mask based on the initial mask and the first transformation. The systems may determine, based on the intermediate mask, one or more first regions from the target image and one or more second regions from the intermediate template image. The systems may determine a second transformation by registering each of the one or more second regions to a corresponding first region. The systems may determine a target mask according to which the target image can be segmented based on one or more second transformations.
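The coarse-then-fine structure of this method can be sketched with simple integer shifts standing in for the first and second transformations; a real system would estimate full registrations, so the shift-based transforms, function name, and label scheme here are assumptions for illustration.

```python
import numpy as np

def coarse_then_fine_mask(initial_mask, coarse_shift, fine_shifts):
    """Two-stage mask transfer: a global (coarse) shift, then a
    per-region (fine) shift for each labelled region.

    initial_mask: integer label image (the template's initial mask).
    coarse_shift: (dr, dc) standing in for the first transformation.
    fine_shifts:  {label: (dr, dc)} standing in for the per-region
                  second transformations.
    """
    # Stage 1: apply the coarse transformation to get the intermediate mask.
    intermediate = np.roll(initial_mask, coarse_shift, axis=(0, 1))
    # Stage 2: refine each labelled region with its own transformation.
    target_mask = np.zeros_like(intermediate)
    for label, shift in fine_shifts.items():
        region = (intermediate == label).astype(intermediate.dtype)
        region = np.roll(region, shift, axis=(0, 1))
        target_mask[region > 0] = label
    return target_mask

mask = np.zeros((6, 6), dtype=int)
mask[1:3, 1:3] = 1                      # one labelled structure
result = coarse_then_fine_mask(mask, (1, 1), {1: (0, 1)})
```

The point of the second stage is that each region gets its own correction on top of the global alignment, which is what lets the target mask follow local anatomy the coarse registration misses.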
METHODS AND APPARATUS FOR DETECTING INJURY USING MULTIPLE TYPES OF MAGNETIC RESONANCE IMAGING DATA
Methods and apparatus are provided for predicting performance of an individual on a task. The method comprises receiving brain imaging data for the individual, wherein the brain imaging data comprises structural brain data; determining values for at least one characteristic of the structural brain data within regions of interest defined for a population of individuals having different performance levels; and predicting, based on the determined values, a performance potential of the individual.
IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND PROGRAM
An image processing method performed by a processor and including: a step of acquiring a choroidal vascular image; a step of detecting a vortex vein position from the choroidal vascular image; a step of identifying a choroidal vessel related to the vortex vein position; and a step of finding a size of the choroidal vessel.
SYSTEMS, METHODS, AND COMPUTER-READABLE MEDIA FOR DETECTING IMAGE DEGRADATION DURING SURGICAL PROCEDURES
Methods, systems, and computer-readable media for detecting image degradation during a surgical procedure are provided. A method includes receiving images of a surgical instrument; obtaining baseline images of an edge of the surgical instrument; comparing a characteristic of the images of the surgical instrument to a characteristic of the baseline images of the edge of the surgical instrument, the images of the surgical instrument being received subsequent to obtaining the baseline images and while the surgical instrument is disposed at a surgical site in a patient; determining, based on the comparison, whether the images of the surgical instrument are degraded; and generating an image degradation notification in response to a determination that the images of the surgical instrument are degraded.
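One plausible "characteristic" to compare is edge sharpness. The sketch below uses the variance of a discrete Laplacian as a sharpness proxy and flags degradation when the live value drops below a fraction of the baseline; the metric choice, ratio, and function names are assumptions, not the patent's specified characteristic.

```python
import numpy as np

def laplacian_variance(image):
    """Edge-sharpness proxy: variance of a discrete Laplacian response.

    A blurred (degraded) image yields a lower value than a sharp one.
    """
    lap = (-4.0 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

def is_degraded(current, baseline, ratio=0.5):
    """Flag degradation when current sharpness falls below a fraction of
    the baseline sharpness measured on the instrument's edge."""
    return laplacian_variance(current) < ratio * laplacian_variance(baseline)

# Synthetic demo: noise image vs. a box-blurred copy of it.
rng = np.random.default_rng(0)
sharp = rng.random((32, 32))
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)
           + np.roll(sharp, (1, 1), axis=(0, 1))) / 4.0
```

Comparing against a baseline captured from the same instrument edge, rather than an absolute threshold, makes the check robust to lighting and scene differences between procedures.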