Patent classifications
G06F18/21355
Object detection and representation in images
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for object detection and representation in images. In one aspect, a method includes detecting occurrences of objects of a particular type in images captured within a first duration of time, and iteratively training an image embedding function to produce as output representations of features of the input images depicting occurrences of objects of the particular type, where similar representations of features are generated for images that depict the same instance of an object of a particular type captured within a specified duration of time, and dissimilar representations of features are generated for images that depict different instances of objects of the particular type.
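The training objective described above — similar representations for images of the same object instance, dissimilar representations for different instances — can be sketched as a contrastive loss on a toy linear embedding function. The data, margin, and learning rate below are illustrative assumptions, not values from the patent.

```python
import math

def embed(W, x):
    """Apply the (toy) linear image embedding function W to feature vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def train(same_pairs, diff_pairs, dim=2, margin=2.0, lr=0.05, iters=500):
    """Iteratively update W: same-instance pairs are pulled together,
    different-instance pairs are pushed out toward the margin."""
    W = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(dim)]
        for x1, x2 in same_pairs:          # same instance: shrink distance
            e1, e2 = embed(W, x1), embed(W, x2)
            for i in range(dim):
                for j in range(dim):
                    grad[i][j] += 2 * (e1[i] - e2[i]) * (x1[j] - x2[j])
        for x1, x2 in diff_pairs:          # different instances: grow distance
            e1, e2 = embed(W, x1), embed(W, x2)
            d = dist(e1, e2)
            if 0 < d < margin:             # hinge: only push while inside margin
                c = -2 * (margin - d) / d
                for i in range(dim):
                    for j in range(dim):
                        grad[i][j] += c * (e1[i] - e2[i]) * (x1[j] - x2[j])
        for i in range(dim):
            for j in range(dim):
                W[i][j] -= lr * grad[i][j]
    return W

# Two crops each of instance A and instance B (hand-made feature vectors).
A1, A2 = [1.0, 0.1], [0.9, 0.0]
B1, B2 = [0.1, 1.0], [0.0, 0.9]
W = train(same_pairs=[(A1, A2), (B1, B2)], diff_pairs=[(A1, B1), (A2, B2)])
same_d = dist(embed(W, A1), embed(W, A2))
diff_d = dist(embed(W, A1), embed(W, B1))
```

After training, `same_d` is driven far below `diff_d`, which settles near the margin.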
SYSTEM AND METHOD FOR ENHANCING ENTITY PERFORMANCE
Systems and methods for enhancing entity performance include a resource manager in communication with a plurality of entities, the entities including one or more source acquirers and one or more resource issuers. The resource manager includes a processor and a memory storing an analyzer having computer readable instructions that, when executed by the processor, operate to perform the following steps: organize the plurality of entities into a plurality of segments based on one or more parameters of the plurality of entities, differentiate each segment from other segments based on one or more differentiators, compare practices of an entity within a given segment to those of other entities in the segment to identify an action to enhance performance of the entity, and communicate the action to the entity. The parameters may include primary parameters extracted from a dataset and revised parameters extrapolated from the dataset; the parameters are then iteratively reduced until accurate segments are generated.
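The organize-then-compare flow can be sketched with a toy k-means segmentation over two illustrative entity parameters (transaction volume and approval rate are assumed names, not from the patent), followed by a within-segment peer comparison that yields a suggested action.

```python
def kmeans(points, k, iters=10):
    """Toy k-means: segment entities by their parameter vectors."""
    centroids = [list(p) for p in points[:k]]   # deterministic init
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k),
                      key=lambda c: sum((x - y) ** 2
                                        for x, y in zip(p, centroids[c])))
                  for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:                          # keep old centroid if empty
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    return labels, centroids

# Entities described by (volume, approval_rate) -- illustrative parameters.
entities = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.0), (5.1, 4.8)]
labels, _ = kmeans(entities, k=2)

def peer_action(idx):
    """Compare an entity to the best approval rate within its own segment."""
    seg = [e for e, l in zip(entities, labels) if l == labels[idx]]
    best = max(r for _, r in seg)
    gap = best - entities[idx][1]
    return "raise approval rate" if gap > 0.05 else "maintain"
```

Each entity is only ever compared to peers in its own segment, mirroring the within-segment practice comparison described above.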
SYSTEMS AND METHODS FOR PRIVACY-ENABLED BIOMETRIC PROCESSING
In one embodiment, a set of feature vectors can be derived from any biometric data, and a deep neural network (DNN) operating on those one-way homomorphic encryptions (i.e., each biometric's feature vector) can determine matches or execute searches on encrypted data. Each biometric's feature vector can then be stored and/or used in conjunction with respective classifications, for use in subsequent comparisons without fear of compromising the original biometric data. In various embodiments, the original biometric data is discarded responsive to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on cipher text without decryption. This improves security over conventional approaches: searching biometrics in the clear on any system represents a significant security vulnerability. In various examples described herein, only the one-way encrypted biometric data is available on a given device. Various embodiments restrict execution to occur on encrypted biometrics for any matching or searching.
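The match-on-encoded-data idea can be sketched with a stand-in one-way transform: a fixed secret random projection plays the role of the DNN/homomorphic encoding (a real embodiment would use those instead), so that only encoded templates are stored and all matching runs on encoded vectors. Function names and dimensions here are invented for illustration.

```python
import random

def make_one_way_transform(in_dim, out_dim, seed=7):
    """Stand-in for the one-way encoding step: a fixed secret random
    projection (a real system would use a DNN / homomorphic scheme)."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(in_dim)] for _ in range(out_dim)]

def encode(P, x):
    return [sum(p * xi for p, xi in zip(row, x)) for row in P]

def dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

P = make_one_way_transform(8, 6)
# Enroll: only the encoded templates are kept; raw vectors are discarded.
gallery = {
    "alice": encode(P, [1, 0, 1, 0, 1, 0, 1, 0]),
    "bob":   encode(P, [0, 1, 0, 1, 0, 1, 0, 1]),
}

def identify(probe_raw):
    """Match a fresh capture against the encoded gallery only."""
    e = encode(P, probe_raw)
    return min(gallery, key=lambda name: dist(e, gallery[name]))

probe = [1.01, 0.0, 0.99, 0.02, 1.0, 0.01, 0.98, 0.0]  # noisy Alice capture
```

Because the projection roughly preserves distances, the noisy probe still lands nearest its own encoded template, while the raw biometric never needs to be stored.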
Polynomial convolutional neural network with late fan-out
The invention proposes a method of training a convolutional neural network in which only the weights of a single seed convolutional filter per convolutional layer are updated during each training iteration. All other convolutional filters are polynomial transformations of the seed filter, or, alternatively, all response maps are polynomial transformations of the response map generated by the seed filter.
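A minimal sketch of the filter fan-out: one seed filter per layer, with the remaining filters generated as elementwise polynomial transformations of the seed. The polynomial coefficients and the seed values below are illustrative; in training, only the seed would receive gradient updates and the derived bank would be recomputed from it each iteration.

```python
def poly_filter(seed, coeffs):
    """Derive a filter as an elementwise polynomial of the seed filter:
    f[i][j] = c0 + c1*w[i][j] + c2*w[i][j]**2 + ..."""
    return [[sum(c * (w ** p) for p, c in enumerate(coeffs)) for w in row]
            for row in seed]

def conv2d_valid(img, filt):
    """Plain valid-mode 2-D convolution (no padding)."""
    fh, fw = len(filt), len(filt[0])
    out_h, out_w = len(img) - fh + 1, len(img[0]) - fw + 1
    return [[sum(filt[a][b] * img[i + a][j + b]
                 for a in range(fh) for b in range(fw))
             for j in range(out_w)] for i in range(out_h)]

# One trainable seed filter; the rest of the bank fans out from it.
seed = [[0.0,  1.0, 0.0],
        [1.0, -4.0, 1.0],
        [0.0,  1.0, 0.0]]
bank_coeffs = [(0.0, 1.0), (0.0, 0.5, 0.1), (1.0, 0.0, 0.2)]  # illustrative
bank = [poly_filter(seed, c) for c in bank_coeffs]

img = [[float(i * j % 5) for j in range(6)] for i in range(6)]
responses = [conv2d_valid(img, f) for f in bank]
```

The first coefficient tuple `(0.0, 1.0)` is the identity polynomial, so the first bank entry reproduces the seed itself.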
Systems and methods for processing video streams
Embodiments of a method and system described herein enable capture of video data streams from multiple, different video data source devices and the processing of the video data streams. The video data streams are merged such that various data protocols can all be processed with the same worker processors on different types of operating systems, which are typically distributed.
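The merge step can be sketched as protocol normalization: frames from differently shaped source protocols are mapped onto one common record schema so a single worker implementation services every stream. The "rtsp"/"mjpeg" record shapes here are invented stand-ins for the various data protocols.

```python
from collections import deque

def normalize(frame):
    """Map protocol-specific frame records onto one common schema so the
    same worker code can handle every source."""
    if frame["proto"] == "rtsp":        # illustrative protocol A
        return {"src": frame["cam"], "ts": frame["pts"], "data": frame["payload"]}
    if frame["proto"] == "mjpeg":       # illustrative protocol B
        return {"src": frame["device"], "ts": frame["time"], "data": frame["jpeg"]}
    raise ValueError("unknown protocol")

def worker(q, processed):
    """One worker loop drains the merged queue regardless of origin protocol."""
    while q:
        rec = q.popleft()
        processed.append((rec["src"], len(rec["data"])))

merged = deque(normalize(f) for f in [
    {"proto": "rtsp", "cam": "c1", "pts": 1, "payload": b"abc"},
    {"proto": "mjpeg", "device": "c2", "time": 2, "jpeg": b"defg"},
])
processed = []
worker(merged, processed)
```

In a distributed deployment the `deque` would be replaced by a shared message queue, with identical worker processes consuming from it on different hosts and operating systems.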
METHODS, SYSTEMS AND MEDIA FOR JOINT MANIFOLD LEARNING BASED HETEROGENEOUS SENSOR DATA FUSION
The present disclosure provides a method for joint manifold learning based heterogeneous sensor data fusion, comprising: obtaining learning heterogeneous sensor data from a plurality of sensors to form a joint manifold, wherein the plurality of sensors includes different types of sensors that detect different characteristics of target objects; performing, using a hardware processor, a plurality of manifold learning algorithms to process the joint manifold to obtain raw manifold learning results, wherein a dimension of the manifold learning results is less than a dimension of the joint manifold; processing the raw manifold learning results to obtain intrinsic parameters of the target objects; evaluating the plurality of manifold learning algorithms based on the raw manifold learning results and the intrinsic parameters to determine one or more optimum manifold learning algorithms; and applying the one or more optimum manifold learning algorithms to fuse heterogeneous sensor data generated by the plurality of sensors.
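The pipeline can be sketched with stand-ins: per-sensor features are z-normalized and concatenated into a joint representation, two toy "manifold learning algorithms" (a fixed-axis projection and power-iteration PCA) reduce its dimension, and the algorithm retaining the most variance is selected. Real embodiments would use true manifold learners (Isomap, LLE, etc.); the sensor names and data are invented.

```python
def znorm(col):
    m = sum(col) / len(col)
    s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5 or 1.0
    return [(v - m) / s for v in col]

def joint_manifold(*sensor_feats):
    """Concatenate normalized features from heterogeneous sensors."""
    cols = [znorm(c) for feats in sensor_feats for c in zip(*feats)]
    return [list(row) for row in zip(*cols)]

def project_axis0(X):
    """Trivial reducer: keep only the first joint coordinate."""
    return [row[0] for row in X]

def project_pca1(X, iters=50):
    """Top principal component via power iteration (stand-in learner)."""
    d = len(X[0])
    cov = [[sum(r[i] * r[j] for r in X) / len(X) for j in range(d)]
           for i in range(d)]
    v = [1.0] + [0.0] * (d - 1)
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        n = sum(x * x for x in w) ** 0.5
        v = [x / n for x in w]
    return [sum(r[i] * v[i] for i in range(d)) for r in X]

def variance(y):
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y) / len(y)

# Radar-like and camera-like features of the same targets (toy numbers).
radar = [[t, 2 * t] for t in range(8)]
camera = [[2 * t, t] for t in range(8)]
X = joint_manifold(radar, camera)
algos = {"axis0": project_axis0, "pca": project_pca1}
best = max(algos, key=lambda name: variance(algos[name](X)))
```

Here the evaluation step correctly prefers the PCA-style reducer, which captures the shared structure across both sensors rather than a single coordinate.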
Systems and methods employing cooperative optimization-based dimensionality reduction
Dimensionality reduction systems and methods facilitate visualization, understanding, and interpretation of high-dimensionality data sets, so long as the essential information of the data set is preserved during the dimensionality reduction process. In some of the disclosed embodiments, dimensionality reduction is accomplished using clustering, evolutionary computation of low-dimensionality coordinates for cluster kernels, particle swarm optimization of kernel positions, and training of neural networks based on the kernel mapping. The fitness function chosen for the evolutionary computation and particle swarm optimization is designed to preserve kernel distances and any other information deemed useful to the current application of the disclosed techniques, such as linear correlation with a variable that is to be predicted from future measurements. Various error measures are suitable and can be used.
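The particle-swarm stage can be sketched directly: given pairwise distances between cluster kernels in the high-dimensional space, PSO searches for low-dimensionality kernel coordinates minimizing a stress-style fitness (squared distance mismatch). Swarm size, coefficients, and the three-kernel distances are illustrative assumptions.

```python
import math, random

def stress(flat, d_high):
    """Fitness: squared mismatch between low-D and original kernel distances."""
    pts = [flat[i:i + 2] for i in range(0, len(flat), 2)]
    return sum((math.dist(pts[i], pts[j]) - dh) ** 2
               for (i, j), dh in d_high.items())

def pso(d_high, n_kernels, particles=20, iters=200, seed=3):
    rng = random.Random(seed)
    dim = 2 * n_kernels                       # 2-D coordinates per kernel
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [stress(p, d_high) for p in pos]
    g = min(range(particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    init_f = gbest_f                          # best fitness before optimization
    for _ in range(iters):
        for i in range(particles):
            for k in range(dim):
                vel[i][k] = (0.7 * vel[i][k]
                             + 1.5 * rng.random() * (pbest[i][k] - pos[i][k])
                             + 1.5 * rng.random() * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            f = stress(pos[i], d_high)
            if f < pbest_f[i]:                # personal best update
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:               # global best update
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f, init_f

# High-dimensional pairwise distances between three cluster kernels.
d_high = {(0, 1): 3.0, (0, 2): 4.0, (1, 2): 5.0}
coords, final_stress, init_stress = pso(d_high, n_kernels=3)
```

The global-best fitness is monotone non-increasing, so the returned layout is never worse than the best random initialization; a neural network could then be trained on the resulting kernel mapping as the abstract describes.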
Fine-grained object recognition in robotic systems
A method for fine-grained object recognition in a robotic system is disclosed that includes obtaining an image of an object from an imaging device. Based on the image, a deep category-level detection neural network is used to detect pre-defined categories of objects. A feature map is generated for each pre-defined category of object detected by the deep category-level detection neural network. Embedded features are generated, based on the feature map, using a deep instance-level detection neural network corresponding to the pre-defined category of the object, wherein each pre-defined category of object has its own corresponding instance-level detection neural network. An instance level of the object is determined based on classification of the embedded features.
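The two-stage routing can be sketched with stubs: a category detector picks a pre-defined category, and the corresponding per-category instance classifier labels the instance. The "mug"/"bottle" categories, the feature extractor, and the centroid classifiers are all invented stand-ins for the deep networks.

```python
def category_detector(image):
    """Stand-in for the deep category-level detection network."""
    return "mug" if sum(image) > 10 else "bottle"

def feature_map(image):
    """Stand-in feature map: pairwise sums of the raw values."""
    return [image[i] + image[i + 1] for i in range(len(image) - 1)]

def nearest_centroid(centroids):
    """Stand-in instance-level classifier for one category."""
    def classify(feats):
        def d2(name):
            return sum((a - b) ** 2 for a, b in zip(feats, centroids[name]))
        return min(centroids, key=d2)
    return classify

# One instance-level classifier per pre-defined category.
instance_models = {
    "mug":    nearest_centroid({"mug_A": [8.0, 9.0], "mug_B": [3.0, 2.0]}),
    "bottle": nearest_centroid({"bottle_A": [1.0, 1.0], "bottle_B": [6.0, 5.0]}),
}

def recognize(image):
    cat = category_detector(image)            # stage 1: category-level
    feats = feature_map(image)                # feature map for that category
    return cat, instance_models[cat](feats)   # stage 2: instance-level
```

The key structural point is the dictionary lookup: each category owns a different instance-level model, and the category decision selects which one runs.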
INTER-CLUSTER INTENSITY VARIATION CORRECTION AND BASE CALLING
The technology disclosed corrects inter-cluster intensity profile variation for improved base calling on a cluster-by-cluster basis. The technology disclosed accesses current intensity data and historic intensity data of a target cluster, where the current intensity data is for a current sequencing cycle and the historic intensity data is for one or more preceding sequencing cycles. A first accumulated intensity correction parameter is determined by accumulating distribution intensities measured for the target cluster at the current and preceding sequencing cycles. A second accumulated intensity correction parameter is determined by accumulating intensity errors measured for the target cluster at the current and preceding sequencing cycles. Based on the first and second accumulated intensity correction parameters, next intensity data for a next sequencing cycle is corrected to generate corrected next intensity data, which is used to base call the target cluster at the next sequencing cycle.
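The accumulation-and-correct loop can be sketched with a simplified per-cluster intensity model, observed = gain * signal + offset, where per-cycle "distribution intensity" and "intensity error" accumulators recover the gain and offset used to correct the next cycle. The two-channel on/off model and the correction formula are stand-ins for the disclosed parameters, not the patent's exact math.

```python
class ClusterCorrector:
    """Accumulate per-cluster correction parameters across sequencing cycles."""

    def __init__(self):
        self.scale_acc = 0.0   # accumulated distribution intensities
        self.error_acc = 0.0   # accumulated intensity errors
        self.cycles = 0

    def observe(self, on_intensity, off_intensity):
        """Record one sequencing cycle's measurements for this cluster."""
        self.scale_acc += on_intensity - off_intensity   # estimates the gain
        self.error_acc += off_intensity                  # estimates the offset
        self.cycles += 1

    def correct(self, raw):
        """Correct a next-cycle raw intensity using the accumulated parameters."""
        gain = self.scale_acc / self.cycles
        offset = self.error_acc / self.cycles
        return (raw - offset) / gain

# Cluster whose true 0/1 signals are distorted by gain 1.8 and offset 0.3.
gain, offset = 1.8, 0.3
cc = ClusterCorrector()
for _ in range(5):                           # five preceding cycles
    cc.observe(gain * 1 + offset, gain * 0 + offset)

corrected_on = cc.correct(gain * 1 + offset)   # next-cycle observations
corrected_off = cc.correct(gain * 0 + offset)
```

Because the correction parameters are maintained per cluster, each cluster's own intensity profile variation is removed before base calling the next cycle.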
GENERATING OBJECT EMBEDDINGS FROM IMAGES
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training an object embedding system. In one aspect, a method comprises providing selected images as input to the object embedding system and generating corresponding embeddings, wherein the object embedding system comprises a thumbnailing neural network and an embedding neural network. The method further comprises backpropagating gradients based on a loss function to reduce the distance between embeddings for same instances of objects, and to increase the distance between embeddings for different instances of objects.
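The two-network structure and the loss above can be sketched with stand-ins: a "thumbnailing" step that crops the object box and average-pools to a fixed grid, an "embedding" step that L2-normalizes the pooled features, and a contrastive loss whose gradient would shrink same-instance distances and grow different-instance distances. The pooling size, margin, and box format are illustrative assumptions.

```python
import math

def thumbnail(image, box, out=2):
    """Stand-in thumbnailing network: crop the object box, then
    average-pool the crop to a fixed out x out grid."""
    r0, c0, r1, c1 = box
    crop = [row[c0:c1] for row in image[r0:r1]]
    h, w = len(crop), len(crop[0])
    pooled = []
    for i in range(out):
        for j in range(out):
            cell = [crop[a][b]
                    for a in range(i * h // out, (i + 1) * h // out)
                    for b in range(j * w // out, (j + 1) * w // out)]
            pooled.append(sum(cell) / len(cell))
    return pooled

def embed(thumb):
    """Stand-in embedding network: L2-normalize the pooled features."""
    n = math.sqrt(sum(v * v for v in thumb)) or 1.0
    return [v / n for v in thumb]

def pair_loss(e1, e2, same, margin=1.0):
    """Contrastive objective: backpropagating its gradient pulls
    same-instance embeddings together and pushes different-instance
    embeddings beyond the margin."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(e1, e2)))
    return d ** 2 if same else max(0.0, margin - d) ** 2

img = [[float(i + j) for j in range(6)] for i in range(6)]
tA = thumbnail(img, (0, 0, 4, 4))   # 4x4 object crop pooled to 2x2
eA = embed(tA)
```

An identical same-instance pair contributes zero loss, while an identical different-instance pair sits at the full margin penalty, so gradients flow exactly when the embedding disagrees with the instance labels.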