Patent classifications
G06V10/451
Method and apparatus for acquiring feature data from low-bit image
A processor-implemented method of generating feature data includes: receiving an input image; generating, based on a pixel value of the input image, at least one low-bit image having a number of bits per pixel lower than a number of bits per pixel of the input image; and generating, using at least one neural network, feature data corresponding to the input image from the at least one low-bit image.
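The quantization step this abstract describes can be sketched in a few lines; the function name and the bit widths chosen below are illustrative assumptions, and the neural-network stage that consumes the low-bit images is omitted:

```python
def to_low_bit(pixels, bits=1, src_bits=8):
    """Quantize pixel values from src_bits per pixel down to `bits` per pixel,
    yielding a low-bit image derived from the input image's pixel values."""
    levels = 1 << bits                  # number of representable values
    step = (1 << src_bits) // levels    # width of each quantization bin
    return [min(p // step, levels - 1) for p in pixels]

# One row of an 8-bit input image, reduced to 1-bit and 2-bit versions.
row = [0, 37, 128, 200, 255]
one_bit = to_low_bit(row, bits=1)   # each pixel becomes 0 or 1
two_bit = to_low_bit(row, bits=2)   # each pixel becomes 0..3
```

In the patent's pipeline, one or more such low-bit images would then be fed to a neural network to produce feature data for the original input image.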
DEVICE AND METHOD FOR RECOGNIZING IMAGE USING BRAIN-INSPIRED SPIKING NEURAL NETWORK AND COMPUTER READABLE PROGRAM FOR THE SAME
Disclosed are an image recognition device and method using a brain-inspired spiking neural network and a computer-readable program for the same. The image recognition device using a brain-inspired spiking neural network according to the present disclosure includes an input unit configured to receive an input image made up of at least one pixel, and a spiking neural network unit configured to recognize the input image, the spiking neural network unit including a plurality of neurons, corresponding to the pixels of the image, that generate spike signals when a membrane potential state value exceeds a preset threshold, and synapses connecting the plurality of neurons.
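The membrane-potential mechanism in this abstract can be illustrated with a simple leaky integrate-and-fire neuron; the leak factor, threshold, and reset-to-zero behavior below are common modeling assumptions, not values from the disclosure:

```python
def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential accumulates
    input (e.g., pixel intensity), decays by a leak factor each step, and a
    spike is emitted whenever the potential exceeds the preset threshold."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x
        if v > threshold:
            spikes.append(1)
            v = 0.0          # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

# A constant sub-threshold input accumulates until the neuron fires once.
train = simulate_neuron([0.4, 0.4, 0.4, 0.4])
```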
METHOD AND APPARATUS FOR TRACKING TARGET
A target tracking method and apparatus are provided. The target tracking apparatus includes a memory configured to store a neural network, and a processor configured to: extract feature information of each of a target included in a target region in a first input image, a background included in the target region, and a searching region in a second input image, using the neural network; obtain similarity information of the target and the searching region and similarity information of the background and the searching region based on the extracted feature information; obtain a score matrix including activated feature values based on the obtained similarity information; and estimate a position of the target in the searching region from the score matrix.
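A minimal sketch of the similarity scoring, assuming cosine similarity as the measure (the abstract does not fix one) and flattening the score matrix into a list of per-location scores; background-like regions get their scores suppressed:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def score_map(target_f, background_f, search_feats):
    """Per-location score: similarity to the target minus similarity to the
    background, so that background-like locations score low."""
    return [cosine(target_f, s) - cosine(background_f, s) for s in search_feats]

# Toy features: location 0 looks like the target, 1 like the background.
scores = score_map([1.0, 0.0], [0.0, 1.0],
                   [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
best = max(range(len(scores)), key=scores.__getitem__)  # estimated position
```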
Location processor for inferencing and learning based on sensorimotor input data
An inference system performs inference, such as object recognition, based on sensory inputs generated by sensors and control information associated with the sensory inputs. The sensory inputs describe one or more features of the objects. The control information describes movement of the sensors or known locations of the sensors relative to a reference point. For a particular object, an inference system learns a set of object-location representations of the object. An object-location representation is a unique characterization of an object-centric location relative to the particular object. The inference system also learns a set of feature-location representations associated with the object-location representation that indicate presence of features at the corresponding object-location pair. The inference system can perform inference on an unknown object by identifying candidate object-location representations consistent with feature-location representations observed from the sensory input data and control information.
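The candidate-filtering inference can be sketched with a toy learned model; the objects, locations, and feature names below are invented for illustration, and sensor movement is simplified to already-resolved object-centric locations:

```python
# Hypothetical learned model: for each object, a map from object-centric
# location to the feature present there (the feature-location pairs).
objects = {
    "mug":  {(0, 0): "handle", (1, 0): "rim"},
    "bowl": {(0, 0): "rim",    (1, 0): "rim"},
}

def infer(observations):
    """Keep only the candidate objects whose learned feature-location pairs
    are consistent with every (location, feature) observation so far."""
    candidates = set(objects)
    for loc, feat in observations:
        candidates = {o for o in candidates if objects[o].get(loc) == feat}
    return candidates
```

A distinctive feature resolves the object immediately, while an ambiguous one leaves several candidates until further sensorimotor observations arrive.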
SYSTEMS AND METHODS FOR LEARNING RICH NEAREST NEIGHBOR REPRESENTATIONS FROM SELF-SUPERVISED ENSEMBLES
Embodiments described herein provide a system and method for extracting information. The system receives, via a communication interface, a dataset of a plurality of data samples. The system determines, in response to an input data sample from the dataset, a set of feature vectors via a plurality of pre-trained feature extractors, respectively. The system retrieves a set of memory bank vectors that correspond to the input data sample. The system generates, via a plurality of Multi-Layer Perceptrons (MLPs), a mapped set of representations in response to an input of the set of memory bank vectors, respectively. The system determines a loss objective between the set of feature vectors and the combination of the mapped set of representations and a network of layers in the MLP. The system updates the parameters of the plurality of MLPs and the parameters of the memory bank vectors by minimizing the computed loss objective.
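A simplified stand-in for the loss objective, assuming a squared-error distance between each extractor's feature vector and the corresponding MLP-mapped memory-bank representation (the abstract does not specify the distance, and the MLPs themselves are elided here):

```python
def ensemble_loss(feature_vecs, mapped_vecs):
    """Sum of squared errors between each pre-trained extractor's feature
    vector and the mapped memory-bank representation for that extractor."""
    total = 0.0
    for f, m in zip(feature_vecs, mapped_vecs):
        total += sum((fi - mi) ** 2 for fi, mi in zip(f, m))
    return total

# Two extractors, two mapped memory-bank representations (toy values).
features = [[1.0, 0.0], [0.0, 2.0]]
mapped = [[0.5, 0.0], [0.0, 1.0]]
loss_value = ensemble_loss(features, mapped)  # 0.25 + 1.0 = 1.25
```

Minimizing this quantity with gradient descent over both the MLP parameters and the memory-bank vectors corresponds to the update step the abstract describes.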
Medical evaluation machine learning workflows and processes
Systems and methods for processing electronic imaging data obtained from medical imaging procedures are disclosed herein. Some embodiments relate to data processing mechanisms for medical imaging and diagnostic workflows involving the use of machine learning techniques such as deep learning, artificial neural networks, and related algorithms that perform machine recognition of specific features and conditions in imaging data. In an example, a deep learning model is selected for automated image recognition of a particular medical condition on image data, and applied to the image data to recognize characteristics of the particular medical condition. Based on the characteristics recognized by the automated image recognition on the image data, an electronic workflow for performing a diagnostic evaluation of the medical imaging study may be modified, updated, or prioritized.
METHOD AND APPARATUS FOR SEARCHING A DATABASE OF 3D ITEMS USING DESCRIPTORS
A method and apparatus for searching a database of 3D items using descriptors created for each item, whether a 3D computer-generated model or a 3D physical object. The descriptor comprises a vector string comprising each of the feature vectors extracted from the images, and dimension-related features each representing a dimensional feature. The descriptor is created by obtaining 2D rendered images of the model by rotating the model about each different axis independently, and extracting a feature vector representing features of each image. The model is rotated about the axes in angular steps, which may be the same for each axis. The axes may comprise three mutually perpendicular axes, and each angular step may be in the range from 30° to 45°.
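The view generation and descriptor concatenation can be sketched as follows; `extract` stands in for the unspecified render-and-extract stage, and 36° is one choice inside the stated 30° to 45° range:

```python
def view_angles(step_deg=36):
    """Rotation angles about each of three mutually perpendicular axes,
    using the same angular step per axis (36 degrees gives 10 views per axis)."""
    return [("xyz"[axis], a) for axis in range(3)
            for a in range(0, 360, step_deg)]

def make_descriptor(extract, dims, step_deg=36):
    """Concatenate per-view feature vectors into a vector string, then append
    the dimension-related features; `extract(axis, angle)` is a hypothetical
    render-and-feature-extraction stage returning one feature vector."""
    vec = []
    for axis, angle in view_angles(step_deg):
        vec.extend(extract(axis, angle))
    return vec + list(dims)

# Toy extractor: a one-element feature per view, plus three dimension features.
d = make_descriptor(lambda axis, angle: [angle / 360.0], (2.0, 1.0, 0.5))
```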
Artificial intelligence-based generation of anthropomorphic signatures and use thereof
The technology disclosed relates to authenticating users using a plurality of non-deterministic registration biometric inputs. During registration, a plurality of non-deterministic biometric inputs are given as input to a trained machine learning model to generate sets of feature vectors. The non-deterministic biometric inputs can include a plurality of face images and a plurality of voice samples of a user. A characteristic identity vector for the user can be determined by averaging the feature vectors. During authentication, a plurality of non-deterministic biometric inputs are given as input to a trained machine learning model to generate a set of authentication feature vectors. The sets of feature vectors are projected onto the surface of a hypersphere. The system can authenticate the user when the cosine distance between an authentication feature vector and the characteristic identity vector for the user is less than a pre-determined threshold.
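The registration averaging, hypersphere projection, and cosine-distance check can be sketched as follows; the threshold value and two-dimensional toy vectors are illustrative assumptions:

```python
import math

def normalize(v):
    """Project a vector onto the surface of the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def identity_vector(feature_vecs):
    """Average the registration feature vectors, then normalize the mean to
    obtain the user's characteristic identity vector."""
    dim = len(feature_vecs[0])
    mean = [sum(v[i] for v in feature_vecs) / len(feature_vecs)
            for i in range(dim)]
    return normalize(mean)

def authenticate(auth_vec, id_vec, threshold=0.3):
    """Accept when the cosine distance (1 - cosine similarity) between the
    authentication vector and the identity vector is under the threshold."""
    cos = sum(a * b for a, b in zip(normalize(auth_vec), id_vec))
    return (1.0 - cos) < threshold

# Toy registration from two noisy samples of the same user.
id_vec = identity_vector([[1.0, 0.0], [0.9, 0.1]])
```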
Reducing image resolution in deep convolutional networks
A method of reducing image resolution in a deep convolutional network (DCN) includes dynamically selecting a reduction factor to be applied to an input image. The reduction factor can be selected at each layer of the DCN. The method also includes adjusting the DCN based on the reduction factor selected for each layer.
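One way to realize a dynamically selected per-layer reduction factor, assuming a simple pixel-budget selection rule and strided downsampling (the abstract commits to neither):

```python
def choose_factor(h, w, budget):
    """Dynamically pick the smallest integer reduction factor under which the
    reduced feature map fits within a per-layer pixel budget."""
    f = 1
    while (h // f) * (w // f) > budget:
        f += 1
    return f

def reduce_map(fmap, f):
    """Downsample a 2-D feature map by striding with the selected factor;
    subsequent DCN layers would be adjusted to the reduced resolution."""
    return [row[::f] for row in fmap[::f]]
```

Calling `choose_factor` once per layer, with a different budget at each depth, mirrors the per-layer selection the abstract describes.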
System and method for detection of objects of interest in imagery
Described is a system for detecting objects of interest in imagery. The system is configured to receive an input video and generate an attention map. The attention map represents features found in the input video that represent potential objects-of-interest (OI). An eye-fixation map is generated based on a subject's eye fixations. The eye-fixation map also represents features found in the input video that are potential OI. A brain-enhanced synergistic attention map is generated by fusing the attention map with the eye-fixation map. The potential OI in the brain-enhanced synergistic attention map are scored, with scores that cross a predetermined threshold being used to designate potential OI as actual or final OI.
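The fusion and thresholding steps can be sketched as element-wise weighted averaging of the two maps; the fusion weight and score threshold are assumed values, and scoring is reduced to the fused value itself:

```python
def fuse(attn, fix, w=0.5):
    """Brain-enhanced synergistic attention map: element-wise weighted fusion
    of the algorithmic attention map and the eye-fixation map."""
    return [[w * a + (1 - w) * f for a, f in zip(ra, rf)]
            for ra, rf in zip(attn, fix)]

def detect(fused, threshold=0.5):
    """Designate locations whose fused score crosses the threshold as final
    objects of interest, returned as (row, col) coordinates."""
    return [(i, j) for i, row in enumerate(fused)
            for j, s in enumerate(row) if s > threshold]

# Toy 2x2 maps: both sources agree on two salient locations.
hits = detect(fuse([[0.9, 0.1], [0.2, 0.8]], [[0.7, 0.0], [0.1, 0.9]]))
```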