G06V10/449

AUTOMATED IDENTIFICATION OF NECROTIC REGIONS IN DIGITAL IMAGES OF MULTIPLEX IMMUNOFLUORESCENCE STAINED TISSUE

Embodiments disclosed herein generally relate to identifying necrotic tissue in a multiplex immunofluorescence image of a slice of a specimen. Particularly, aspects of the present disclosure are directed to accessing a multiplex immunofluorescence image of a slice of a specimen comprising a first channel for a nuclei marker and a second channel for an epithelial tumor marker, wherein the slice of the specimen comprises one or more necrotic tissue regions; providing the multiplex immunofluorescence image to a machine-learning model; receiving an output of the machine-learning model corresponding to a prediction that the multiplex immunofluorescence image includes one or more necrotic tissue regions at one or more particular portions of the multiplex immunofluorescence image; generating a mask for subsequent image processing of the multiplex immunofluorescence image based on the output of the machine-learning model; and outputting the mask for the subsequent image processing.
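The claimed flow (image in, model prediction, binary mask out) lends itself to a compact sketch. Everything below is illustrative: `predict_necrosis` is a toy stand-in for the trained machine-learning model, and the 0.5 threshold is an assumed default; neither is specified by the abstract.

```python
def predict_necrosis(nuclei_channel, tumor_channel):
    """Stand-in for the trained ML model: returns a per-pixel
    necrosis probability map (here, a trivial heuristic)."""
    h, w = len(nuclei_channel), len(nuclei_channel[0])
    probs = []
    for y in range(h):
        row = []
        for x in range(w):
            # Toy rule: weak nuclei staining with residual epithelial
            # signal is scored as likely necrosis.
            weak_nuclei = nuclei_channel[y][x] < 0.2
            some_tumor = tumor_channel[y][x] > 0.3
            row.append(0.9 if (weak_nuclei and some_tumor) else 0.1)
        probs.append(row)
    return probs

def necrosis_mask(nuclei_channel, tumor_channel, threshold=0.5):
    """Convert the model output into a binary mask for subsequent
    image processing (1 = predicted necrotic region)."""
    probs = predict_necrosis(nuclei_channel, tumor_channel)
    return [[1 if p >= threshold else 0 for p in row] for row in probs]
```

In a real pipeline the mask would typically be used to exclude the flagged pixels from downstream analysis of the multiplex immunofluorescence image.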

Downscaler and method of downscaling

A hardware downscaling module and downscaling methods for downscaling a two-dimensional array of values. The hardware downscaling module comprises a first group of one-dimensional downscalers; and a second group of one-dimensional downscalers; wherein the first group of one-dimensional downscalers is arranged to receive a two-dimensional array of values and to perform downscaling in series in a first dimension; and wherein the second group of one-dimensional downscalers is arranged to receive an output from the first group of one-dimensional downscalers and to perform downscaling in series in a second dimension.
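The separable, two-stage structure can be illustrated in software. This is a sketch only (the patent describes a hardware module); the 2:1 averaging stage and the per-dimension stage counts are assumptions.

```python
def downscale_1d(values, factor=2):
    """One 1-D downscaler stage: averages each group of `factor`
    samples; trailing samples that do not fill a group are dropped."""
    return [sum(values[i:i + factor]) / factor
            for i in range(0, len(values) - factor + 1, factor)]

def downscale_2d(array, stages_y=1, stages_x=1):
    """Two groups of 1-D downscalers applied in series: first in the
    first (vertical) dimension, then in the second (horizontal)."""
    for _ in range(stages_y):
        cols = list(zip(*array))                     # work column-wise
        cols = [downscale_1d(list(c)) for c in cols]
        array = [list(r) for r in zip(*cols)]
    for _ in range(stages_x):
        array = [downscale_1d(row) for row in array]  # work row-wise
    return array
```

Cascading several stages in series, as the claim describes, allows large downscaling factors to be built from small, identical 1-D units.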

Lingually constrained tracking of visual objects

A computer-implemented method for tracking with visual object constraints includes receiving a lingual constraint and a video. A word embedding is generated based on the lingual constraint. A set of features is extracted for one or more frames of the video. The word embedding is cross-correlated to the set of features for the one or more frames of the video. A prediction indicating whether the lingual constraint is in the one or more frames of the video is generated based on the cross-correlation.
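The cross-correlation step can be sketched with plain vectors. Representing the word embedding and the per-location frame features as lists, the dot product at each position, and the 0.5 decision threshold, are all illustrative assumptions rather than details from the abstract.

```python
def cross_correlate(embedding, feature_map):
    """Slide the word embedding over per-location frame features,
    taking a dot product at each spatial position."""
    return [sum(e * f for e, f in zip(embedding, feats))
            for feats in feature_map]

def constraint_present(embedding, feature_map, threshold=0.5):
    """Predict whether the lingual constraint appears in the frame:
    true if any position correlates strongly with the embedding."""
    scores = cross_correlate(embedding, feature_map)
    return max(scores) >= threshold
```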

LOW-POWER ALWAYS-ON FACE DETECTION, TRACKING, RECOGNITION AND/OR ANALYSIS USING EVENTS-BASED VISION SENSOR

Techniques disclosed herein utilize a vision sensor that integrates a special-purpose camera with dedicated computer vision (CV) computation hardware and a dedicated low-power microprocessor for the purposes of detecting, tracking, recognizing, and/or analyzing subjects, objects, and scenes in the view of the camera. The vision sensor processes the information retrieved from the camera using the included low-power microprocessor and sends events (or indications that one or more reference occurrences have occurred, together with any associated data) to the main processor only when needed, or as defined and configured by the application. This allows the general-purpose microprocessor (which is typically relatively high-speed and high-power in order to support a variety of applications) to stay in a low-power state (e.g., a sleep mode) most of the time, as is conventional, becoming active only when events are received from the vision sensor.
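The division of labor can be sketched as an event queue between the two processors. The `sum(frame) > 10` test below is a placeholder for the dedicated CV computation hardware, and all names are illustrative assumptions.

```python
from collections import deque

class VisionSensor:
    """Toy stand-in for the always-on, low-power sensor pipeline: runs
    cheap CV on every frame and emits an event only on a reference
    occurrence (here, a crude brightness test standing in for face
    detection)."""
    def __init__(self):
        self.events = deque()

    def process_frame(self, frame):
        face_like = sum(frame) > 10   # placeholder for dedicated CV hardware
        if face_like:
            self.events.append({"type": "face_detected", "data": frame})

def main_processor(sensor, frames):
    """The general-purpose processor stays 'asleep' and counts only the
    occasions on which a sensor event forces it to wake."""
    wakeups = 0
    for frame in frames:
        sensor.process_frame(frame)   # always-on, low-power path
        while sensor.events:          # main CPU activates only on events
            sensor.events.popleft()
            wakeups += 1
    return wakeups
```

The power saving comes from the asymmetry: the cheap path runs on every frame, while the expensive processor runs only on the (rare) event path.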

OBJECT DETECTION AND CLASSIFICATION
20170372164 · 2017-12-28 ·

Object detection and classification across disparate fields of view are provided. A first image generated by a first recording device with a first field of view, and a second image generated by a second recording device with a second field of view can be obtained. An object detection component can detect a first object within the first field of view, and a second object within the second field of view. An object classification component can determine first and second level classification categories of the first object. A data processing system can create a data structure indicating a probability identifier for a descriptor of the first object. An object matching component can correlate the first object with the second object based on the descriptor of the first object, the probability identifier for the descriptor of the first object, or a descriptor of the second object.
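The data structure and matching steps can be sketched as follows. The field names `descriptor` and `probability`, and the 0.5 score floor, are illustrative assumptions, not details from the patent.

```python
def make_record(descriptor, probability):
    """Data structure pairing a descriptor of a detected object with a
    probability identifier for that descriptor."""
    return {"descriptor": descriptor, "probability": probability}

def match_objects(obj_a, obj_b, min_score=0.5):
    """Correlate a detection from the first field of view with one from
    the second, using the first object's descriptor and the probability
    identifier for that descriptor."""
    if obj_a["descriptor"] != obj_b["descriptor"]:
        return False
    return obj_a["probability"] >= min_score
```

A match across disparate fields of view thus requires both agreement of descriptors and sufficient confidence in the first object's descriptor.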

RETINAL ENCODER FOR MACHINE VISION
20170255837 · 2017-09-07 ·

A method is disclosed including: receiving raw image data corresponding to a series of raw images; processing the raw image data with an encoder to generate encoded data, where the encoder is characterized by an input/output transformation that substantially mimics the input/output transformation of one or more retinal cells of a vertebrate retina; and applying a first machine vision algorithm to data generated based at least in part on the encoded data.
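One common way to model a retinal cell's input/output transformation is a linear-nonlinear cascade: a temporal filter over recent stimulus intensities followed by a saturating nonlinearity. The sketch below assumes that form; the kernel values and sigmoid are illustrative, not taken from the disclosure.

```python
import math

def retinal_encode(frames, kernel=(0.5, 0.3, 0.2)):
    """Linear-nonlinear sketch of one retinal cell: a temporal filter
    over recent frame intensities (newest weighted most) followed by a
    sigmoid, yielding a firing-rate-like value per time step."""
    rates = []
    history = [0.0] * len(kernel)
    for intensity in frames:
        history = [intensity] + history[:-1]            # shift in newest frame
        drive = sum(k * h for k, h in zip(kernel, history))
        rates.append(1.0 / (1.0 + math.exp(-drive)))    # saturating nonlinearity
    return rates
```

The encoded rate sequence, rather than the raw pixels, would then feed the downstream machine vision algorithm.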

Spike domain convolution circuit

A convolution circuit includes: a plurality of input oscillators, each configured to receive a corresponding analog input signal of a plurality of analog input signals and to output a corresponding spiking signal of a plurality of spiking signals, the corresponding spiking signal having a spiking rate in accordance with a magnitude of the corresponding analog input signal; a plurality of 1-bit DACs, each configured to receive the corresponding spiking signal from a corresponding one of the input oscillators, to receive a corresponding weight of a convolution kernel comprising a plurality of weights, and to output a corresponding weighted output of a plurality of weighted outputs in accordance with the corresponding spiking signal and the corresponding weight; and an output oscillator configured to generate an output spike signal in accordance with the plurality of weighted outputs from the plurality of 1-bit DACs.
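The rate-coding idea can be simulated in a few lines: each input becomes a spike train whose rate tracks its magnitude, each 1-bit DAC emits its weight whenever its train spikes, and the accumulated output approximates the dot product of inputs and weights. The fixed 10-sample window and deterministic spike placement are simulation conveniences, not circuit details.

```python
def to_spikes(value, window=10):
    """Input oscillator: a spike train whose rate is proportional to the
    input magnitude (value assumed in [0, 1])."""
    n = round(value * window)
    return [1] * n + [0] * (window - n)

def spike_convolve(inputs, weights, window=10):
    """Each 1-bit DAC gates its kernel weight by its spike train; the
    output oscillator accumulates the weighted spikes over the window,
    approximating dot(inputs, weights)."""
    trains = [to_spikes(v, window) for v in inputs]
    total = 0.0
    for t in range(window):
        for train, w in zip(trains, weights):
            total += train[t] * w   # 1-bit DAC: weight emitted on a spike
    return total / window
```

A longer window trades latency for a finer-grained approximation of the analog magnitudes.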

FIXATION GENERATION FOR MACHINE LEARNING
20170206440 · 2017-07-20 ·

The disclosure extends to methods, systems, and apparatuses for automated fixation generation, and more particularly relates to the generation of synthetic saliency maps. A method for generating saliency information includes receiving a first image and an indication of one or more sub-regions within the first image corresponding to one or more objects of interest. The method includes generating and storing a label image by creating an intermediate image having one or more random points. The random points have a first color in regions corresponding to the sub-regions, while the remainder of the intermediate image has a second color. Generating and storing the label image further includes applying a Gaussian blur to the intermediate image.
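The label-image construction can be sketched directly: scatter white points inside the object sub-regions of a black image, then blur. The 3-tap blur kernel, point count, and `(y0, x0, y1, x1)` sub-region convention are illustrative assumptions.

```python
import random

def gaussian_blur(img, kernel=(0.25, 0.5, 0.25)):
    """Separable 3-tap Gaussian approximation with edge replication."""
    h, w = len(img), len(img[0])
    k = len(kernel) // 2
    # horizontal pass
    tmp = [[sum(kernel[i + k] * img[y][min(max(x + i, 0), w - 1)]
                for i in range(-k, k + 1)) for x in range(w)] for y in range(h)]
    # vertical pass
    return [[sum(kernel[i + k] * tmp[min(max(y + i, 0), h - 1)][x]
                 for i in range(-k, k + 1)) for x in range(w)] for y in range(h)]

def make_label_image(h, w, subregions, n_points=50, seed=0):
    """Intermediate image: white (1.0) random points inside the object
    sub-regions, black (0.0) elsewhere; then a Gaussian blur turns the
    points into a smooth synthetic saliency map."""
    rng = random.Random(seed)
    img = [[0.0] * w for _ in range(h)]
    for _ in range(n_points):
        y0, x0, y1, x1 = subregions[rng.randrange(len(subregions))]
        img[rng.randrange(y0, y1)][rng.randrange(x0, x1)] = 1.0
    return gaussian_blur(img)
```

The blur is what makes the synthetic label resemble a measured fixation density rather than a set of isolated clicks.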

Multimodality Mineralogy Segmentation System and Method
20170200290 · 2017-07-13 ·

A multimodality imaging system and method for mineralogy segmentation is disclosed. Image datasets of a sample are generated for one or more modalities, including x-ray and focused ion beam scanning electron microscope (FIB-SEM) modalities. Mineral maps are then created using Energy Dispersive X-ray spectroscopy (EDX) from at least part of the sample covered by the image datasets. The EDX mineral maps are applied as a mask to the image datasets to identify and label regions of minerals within the sample. Feature vectors are then extracted from the labeled regions via feature generators such as Gabor filters. Finally, machine learning training and classification algorithms such as Random Forest are applied to the extracted feature vectors to construct a segmented image representation of the sample that classifies the minerals within the sample.
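The mask-then-learn pipeline can be sketched with deliberately simple substitutes: mean/variance statistics stand in for the Gabor-filter feature generators, and a nearest-centroid rule stands in for the Random Forest classifier. All names and the mask convention are illustrative.

```python
def extract_features(image, label_mask, label):
    """Mean/variance features over the pixels the EDX-derived mask
    assigns to `label` (a stand-in for Gabor-filter feature vectors)."""
    vals = [image[y][x] for y in range(len(image))
            for x in range(len(image[0])) if label_mask[y][x] == label]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return (mean, var)

def nearest_centroid_classify(feature, centroids):
    """Toy stand-in for the trained classifier: assign the mineral whose
    centroid is closest to the feature vector."""
    return min(centroids,
               key=lambda m: sum((a - b) ** 2
                                 for a, b in zip(feature, centroids[m])))
```

In the disclosed system the labeled regions train the classifier, which then segments and labels the full image, including areas outside the EDX coverage.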

APPARATUS AND METHOD FOR PROCESSING TEXTURED IMAGE

Disclosed herein are an apparatus and method for processing a textured image. The apparatus includes a filter unit for detecting an edge of an input image and transforming the input image into an image in which the density of the edge is represented; a smoothing unit for removing noise from the transformed image and smoothing the image; a clustering unit for varying the number of regions into which the smoothed image is to be segmented and clustering the smoothed image a preset number of times; and a cluster optimization unit for setting a final number of clusters for the input image by optimizing the number of clusters based on a previously learned ground truth, selecting an image corresponding to the final number of clusters from the results of clustering by which the image is segmented into different numbers of regions, and outputting the selected image.
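The cluster-optimization step can be sketched as a selection against the learned ground truth: given several candidate clusterings at different cluster counts, pick the one that disagrees least. The candidate representation (flat label lists under a `"k"`/`"segments"` scheme) and the pixelwise error measure are assumptions for illustration.

```python
def segmentation_error(segments, ground_truth):
    """Pixelwise disagreement between a candidate segmentation and the
    previously learned ground truth labels."""
    return sum(1 for s, g in zip(segments, ground_truth) if s != g)

def optimize_clusters(candidates, ground_truth):
    """From clusterings at different cluster counts, set the final number
    of clusters and return the corresponding segmented image."""
    best = min(candidates,
               key=lambda c: segmentation_error(c["segments"], ground_truth))
    return best["k"], best["segments"]
```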