Patent classifications
G06F18/2414
Method of Diagnosis
The invention relates to methods for determining the stage of a disease, particularly a neurodegenerative disease with ocular involvement, such as Alzheimer's disease, Parkinson's disease, Huntington's disease, or glaucoma, comprising the steps of identifying the status of microglial cells in the retina and relating that status to disease stage. Methods for identifying cells in the eye are also provided, as are labelled markers and the use thereof.
Dynamic quantization for deep neural network inference system and method
A method for dynamically quantizing feature maps of a received image. The method includes convolving the image data based on a predicted maximum value, a predicted minimum value, and trained kernel weights. The input data is quantized based on the predicted minimum value and predicted maximum value. The output of the convolution is accumulated into an accumulator and re-quantized, and the re-quantized value is output to an external memory. The predicted minimum and maximum values are computed from previous minimum and maximum values using a weighted average or a predetermined formula. Initial minimum and maximum values are computed using known quantization methods and are used to initialize the predicted minimum and maximum values in the quantization process.
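As a rough illustration of the range prediction described above, the following Python sketch quantizes each feature map with the predicted range and then refines that range with a weighted (exponential moving) average; the 8-bit affine scheme, the `momentum` parameter, and all function names are assumptions, not taken from the patent.

```python
import numpy as np

def quantize(x, x_min, x_max, num_bits=8):
    """Affine-quantize x into num_bits-wide integers using the predicted range."""
    qmax = 2 ** num_bits - 1
    scale = (x_max - x_min) / qmax if x_max > x_min else 1.0
    q = np.clip(np.round((x - x_min) / scale), 0, qmax).astype(np.uint8)
    return q, scale

def update_predicted_range(pred_min, pred_max, obs_min, obs_max, momentum=0.9):
    """Weighted-average update of the predicted range from observed values."""
    return (momentum * pred_min + (1 - momentum) * obs_min,
            momentum * pred_max + (1 - momentum) * obs_max)

# Initialize the predicted range from the first feature map (a calibration step),
# then reuse and refine it on each subsequent input.
fmap = np.random.randn(1, 16, 32, 32).astype(np.float32)
pred_min, pred_max = float(fmap.min()), float(fmap.max())

for _ in range(10):
    fmap = np.random.randn(1, 16, 32, 32).astype(np.float32)
    q, scale = quantize(fmap, pred_min, pred_max)   # quantize with the *predicted* range
    pred_min, pred_max = update_predicted_range(    # refine prediction for the next input
        pred_min, pred_max, float(fmap.min()), float(fmap.max()))
```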
UTILIZING PREDICTION THRESHOLDS TO FACILITATE SPECTROSCOPIC CLASSIFICATION
In some implementations, a device may obtain a spectroscopic measurement associated with a sample. The device may generate, based on the spectroscopic measurement and a global classification model, a local classification model that includes a plurality of classes. The device may identify, based on the spectroscopic measurement, a particular class of the plurality of classes of the local classification model. The device may identify a prediction threshold associated with the particular class. The device may classify, based on the particular class and the prediction threshold, the spectroscopic measurement. The device may provide, based on classifying the spectroscopic measurement, information indicating whether the sample belongs to the particular class.
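The per-class threshold step might look like the following sketch, where the local model's class probabilities are assumed to be available as a dictionary; the class names and threshold values are purely illustrative.

```python
# Hypothetical per-class prediction thresholds; in practice these would be
# tuned per class on validation data.
PREDICTION_THRESHOLDS = {"aspirin": 0.90, "lactose": 0.80, "cellulose": 0.85}

def classify_with_threshold(class_probs: dict) -> tuple:
    """Pick the most likely class, then accept it only if its probability
    clears that class's own prediction threshold."""
    best_class = max(class_probs, key=class_probs.get)
    accepted = class_probs[best_class] >= PREDICTION_THRESHOLDS[best_class]
    return best_class, accepted

probs = {"aspirin": 0.87, "lactose": 0.10, "cellulose": 0.03}  # local model output
label, in_class = classify_with_threshold(probs)
print(f"predicted {label}; sample {'belongs' if in_class else 'does not belong'} to it")
```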
Methods and apparatus to improve accuracy of edge and/or a fog-based classification
Methods, apparatus, systems, and articles of manufacture to improve the accuracy of a fog/edge-based classifier system are disclosed. An example apparatus includes a transducer to be mounted on a tracked object, the transducer to generate first data samples corresponding to the tracked object; a discriminator to: generate a first classification using a first model based on a first calculated feature of the first data samples from the transducer, the first model corresponding to calculated features determined from second data samples, the second data samples obtained prior to the first data samples; generate an offset based on a difference between a first model feature of the first model and a second model feature of a second model, the second model being different from the first model; and adjust the first calculated feature using the offset to generate an adjusted feature; a pattern matching engine to generate a second classification using vectors corresponding to the second model based on the adjusted feature; and a counter to, when the first classification matches the second classification, increment a count.
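A simplified sketch of the offset correction between the two models' feature spaces, using a nearest-centroid stand-in for both the discriminator and the pattern matching engine; the two-dimensional features and all numeric values are illustrative assumptions.

```python
import numpy as np

def classify(feature, centroids):
    """Nearest-centroid stand-in for a classifier."""
    dists = {label: np.linalg.norm(feature - c) for label, c in centroids.items()}
    return min(dists, key=dists.get)

centroids = {"normal": np.array([0.0, 0.0]), "fault": np.array([3.0, 3.0])}

old_model_feature = np.array([0.2, 0.1])   # feature statistic of the first (older) model
new_model_feature = np.array([0.5, 0.4])   # feature statistic of the second model
offset = new_model_feature - old_model_feature

raw = np.array([2.6, 2.7])                  # feature calculated from fresh sensor samples
first = classify(raw, centroids)            # first classification (first model's space)
second = classify(raw + offset, centroids)  # second classification on the adjusted feature

count = 0
if first == second:                         # the counter increments only on agreement
    count += 1
```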
Domain adaptation and fusion using weakly supervised target-irrelevant data
Aspects include receiving a request to perform an image classification task in a target domain. The image classification task includes identifying a feature in images in the target domain. Classification information related to the feature is transferred from a source domain to the target domain. The transferring includes receiving a plurality of pairs of task-irrelevant images, each pair including a task-irrelevant image in the source domain and a task-irrelevant image in the target domain. The task-irrelevant image in the source domain has a fixed correspondence to the task-irrelevant image in the target domain. A target neural network is trained to perform the image classification task in the target domain. The training is based on the plurality of pairs of task-irrelevant images. The image classification task is performed in the target domain and includes applying the target neural network to an image in the target domain and outputting an identified feature.
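One plausible reading of the training step, sketched in PyTorch with random tensors standing in for the task-irrelevant image pairs: the target encoder is pulled toward a frozen source encoder on corresponding images. The tiny encoders, the MSE alignment loss, and all hyperparameters are assumptions, not the patent's method.

```python
import torch
import torch.nn as nn

# Tiny illustrative encoders; real networks would be CNNs.
source_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
target_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
optimizer = torch.optim.Adam(target_encoder.parameters(), lr=1e-3)

# A batch of task-irrelevant image pairs with fixed correspondence
# (e.g. the same scene captured in both domains).
src_imgs = torch.randn(8, 3, 32, 32)   # source-domain images
tgt_imgs = torch.randn(8, 3, 32, 32)   # corresponding target-domain images

for step in range(100):
    with torch.no_grad():
        src_feat = source_encoder(src_imgs)   # frozen source-domain features
    tgt_feat = target_encoder(tgt_imgs)
    # Pull target features toward their paired source features, so source-domain
    # classification knowledge transfers to the target domain.
    loss = nn.functional.mse_loss(tgt_feat, src_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```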
IMAGE CLASSIFICATION METHOD, ELECTRONIC DEVICE, AND STORAGE MEDIUM
An image classification method is provided. The method includes: inputting a to-be-classified image into a plurality of neural network models; obtaining data output by multiple non-input layers specified by each neural network model to generate a plurality of image features corresponding to the plurality of neural network models; respectively inputting the plurality of corresponding image features into linear classifiers, each of the linear classifiers being trained on features from one of the plurality of neural network models to determine whether an image belongs to a preset class; obtaining, using each neural network model, a corresponding probability that the to-be-classified image comprises an object image of the preset class; and determining, according to each obtained probability, whether the to-be-classified image includes the object image of the preset class.
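A compact sketch of the inference path in PyTorch: each model exposes a feature from a specified non-input layer, a per-model linear classifier scores class membership, and the per-model probabilities are fused. The averaging fusion rule and the tiny architectures are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Illustrative 'neural network model' exposing a non-input-layer feature."""
    def __init__(self, hidden):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, 10)

    def forward(self, x):
        feat = self.backbone(x)        # feature from a specified non-input layer
        return self.head(feat), feat

models = [TinyNet(64), TinyNet(128)]
# One linear classifier per model, each trained (elsewhere) on that model's
# features to score membership in the preset class.
linear_clfs = [nn.Linear(64, 1), nn.Linear(128, 1)]

image = torch.randn(1, 3, 32, 32)
probs = []
for model, clf in zip(models, linear_clfs):
    _, feat = model(image)
    probs.append(torch.sigmoid(clf(feat)).item())  # per-model membership probability

# Simple fusion rule (an assumption): average the probabilities and threshold.
belongs = sum(probs) / len(probs) > 0.5
```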
Computing systems with modularized infrastructure for training generative adversarial networks
Computing systems that provide a modularized infrastructure for training Generative Adversarial Networks (GANs) are provided herein. For example, the modularized infrastructure can include a lightweight library designed to make it easy to train and evaluate GANs. A user can interact with and/or build upon the modularized infrastructure to easily train GANs. The modularized infrastructure can include a number of distinct sets of code that handle various stages of and operations within the GAN training process. The sets of code can be modular. That is, the sets of code can be designed to exist independently yet be easily and intuitively combinable. Thus, the user can employ some or all of the sets of code or can replace a certain set of code with a set of custom-code while still generating a workable combination.
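The modular pattern might be sketched as below, with the generator, discriminator, loss, and training step each defined as an independent, swappable piece a user could replace with custom code; this is a generic GAN training loop in PyTorch, not the library's actual code.

```python
import torch
import torch.nn as nn

# Each stage of GAN training lives in its own small, swappable piece of code.
def make_generator():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

def make_discriminator():
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

def gan_loss(d_real, d_fake):
    """Standard non-saturating GAN losses; a custom loss module could be swapped in."""
    bce = nn.functional.binary_cross_entropy_with_logits
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    g_loss = bce(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss

def train_step(g, d, g_opt, d_opt, real):
    noise = torch.randn(real.size(0), 16)
    fake = g(noise)
    d_loss, _ = gan_loss(d(real), d(fake.detach()))  # update discriminator first
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    _, g_loss = gan_loss(d(real), d(fake))           # then update generator
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

g, d = make_generator(), make_discriminator()
g_opt = torch.optim.Adam(g.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(d.parameters(), lr=2e-4)
for _ in range(10):
    train_step(g, d, g_opt, d_opt, torch.randn(64, 2))  # stand-in for real data
```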
Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code
The present invention includes computer vision based driver assistance devices, systems, methods, and associated computer executable code (hereinafter collectively referred to as "ADAS"). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by fields of view of different dimensions. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within it. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess the mental condition/alertness of the driver and the direction of the driver's gaze. These may be used to modify ADAS operation/thresholds.
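A toy sketch of the gaze/alertness-driven threshold modification, assuming a forward-collision warning keyed to time-to-collision (TTC); the scaling factors and the alertness score are invented for illustration.

```python
def fcw_threshold(base_ttc: float, alertness: float, eyes_on_road: bool) -> float:
    """Scale the time-to-collision warning threshold: a drowsy or distracted
    driver gets earlier warnings (a larger TTC threshold)."""
    threshold = base_ttc
    if alertness < 0.5:       # alertness in [0, 1] from an interior-camera model
        threshold *= 1.5
    if not eyes_on_road:      # driver gaze directed off the road ahead
        threshold *= 1.3
    return threshold

# e.g. warn at 2.0 s normally, but at 3.9 s for a drowsy driver looking away
print(fcw_threshold(2.0, alertness=0.3, eyes_on_road=False))
```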
METHOD, SYSTEM, AND IMAGE PROCESSING DEVICE FOR CAPTURING AND/OR PROCESSING ELECTROLUMINESCENCE IMAGES, AND AN AERIAL VEHICLE
A method (400) of capturing and processing electroluminescence (EL) images (1910) of a PV array (40) is disclosed herein. In a described embodiment, the method (400) includes: controlling the aerial vehicle (20) to fly along a flight path to capture EL images (1910) of corresponding PV array subsections (512b) of the PV array (40); deriving respective image quality parameters from at least some of the captured EL images; dynamically adjusting a flight speed of the aerial vehicle along the flight path, based on the respective image quality parameters, for capturing the EL images (1910) of the PV array subsections (512b); extracting a plurality of frames (1500) of the PV array subsection (512b) from the EL images (1910); determining a reference frame having the highest image quality of the PV array subsection (512b) from among the extracted frames (2100); performing image alignment of the extracted frames (2100) to the reference frame to generate image-aligned frames (2130); and processing the image-aligned frames (2130) to produce an enhanced image (2140) of the PV array subsection (512b) having a higher resolution than the reference frame. A system, an image processing device, and an aerial vehicle for the method are also disclosed.
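The processing half of the pipeline (reference-frame selection, alignment, fusion) could be sketched as follows in NumPy, with gradient variance as the image-quality score, FFT phase correlation for translation-only alignment, and frame averaging as the enhancement step; all three choices are illustrative stand-ins for the patent's unspecified methods.

```python
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the image gradient as a simple image-quality score."""
    gy, gx = np.gradient(frame.astype(np.float64))
    return float((gx ** 2 + gy ** 2).var())

def align_to_reference(frame, ref):
    """Integer-pixel translation alignment via FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    if dy > ref.shape[0] // 2:   # wrap shifts into the signed range
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return np.roll(frame, (dy, dx), axis=(0, 1))

frames = [np.random.rand(64, 64) for _ in range(8)]    # extracted EL frames of one subsection
ref = max(frames, key=sharpness)                       # reference = highest-quality frame
aligned = [align_to_reference(f, ref) for f in frames] # register every frame to it
enhanced = np.mean(aligned, axis=0)                    # fusing frames raises SNR / detail
```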
METHOD FOR VIDEO RECOGNITION AND RELATED PRODUCTS
A method for video recognition and related products are provided. The method includes the following. An original set of clip descriptors is obtained by providing multiple clips of a video as input to a 3D CNN of a neural network, where the neural network includes the 3D CNN and at least one first fully connected layer, and each of the multiple clips includes at least one frame. An attention vector corresponding to the original set of clip descriptors is determined. An enhanced set of clip descriptors is obtained based on the original set of clip descriptors and the attention vector. The enhanced set of clip descriptors is input into the at least one first fully connected layer, and video recognition is performed based on an output of the at least one first fully connected layer.
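A minimal PyTorch sketch of the attention step: one scalar attention weight per clip descriptor re-weights the original set to form the enhanced set, which the fully connected layer then consumes. The linear scoring function and the sum-pooling before the FC layer are assumptions.

```python
import torch
import torch.nn as nn

num_clips, feat_dim, num_classes = 8, 512, 174

# Stand-in for the 3D-CNN output: one descriptor per clip.
clip_descriptors = torch.randn(1, num_clips, feat_dim)

# Attention vector over clips: one weight per clip descriptor
# (the scoring function here is an illustrative choice).
attn_scorer = nn.Linear(feat_dim, 1)
attn = torch.softmax(attn_scorer(clip_descriptors), dim=1)  # (1, num_clips, 1)

# Enhanced descriptors: the original set re-weighted by the attention vector.
enhanced = clip_descriptors * attn

# The fully connected layer consumes the enhanced set for video recognition.
fc = nn.Linear(feat_dim, num_classes)
logits = fc(enhanced.sum(dim=1))                            # (1, num_classes)
pred = logits.argmax(dim=-1)
```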