Patent classification: G06F18/21355
Palette coding for color compression of point clouds
A method of compressing the color data of point clouds is described herein. A palette of colors that best represents the colors present in the cloud is generated by clustering. Once the palette is generated, a palette index is found for each point in the cloud, and the indexes are then coded with an entropy coder. A decoding process can subsequently reconstruct the point cloud.
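As a sketch of the pipeline (the toy colors and cluster count are illustrative assumptions), k-means can build the palette and assign an index to every point; the indexes, not the raw colors, are what would then be entropy coded:

```python
import numpy as np

def build_palette(colors, k, iters=10):
    # Farthest-point initialization keeps this toy k-means deterministic.
    centers = [colors[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(colors - c, axis=1) for c in centers], axis=0)
        centers.append(colors[d.argmax()])
    palette = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(colors[:, None, :] - palette[None, :, :], axis=2)
        idx = d.argmin(axis=1)                    # palette index per point
        for j in range(k):
            if np.any(idx == j):
                palette[j] = colors[idx == j].mean(axis=0)
    return palette, idx

# Toy point-cloud colors: two reds, two greens, two blues.
colors = np.array([[255, 0, 0], [250, 5, 5],
                   [0, 255, 0], [5, 250, 0],
                   [0, 0, 255], [0, 5, 250]], dtype=float)
palette, indices = build_palette(colors, k=3)
# The decoder reconstructs each point's color as a palette lookup.
reconstructed = palette[indices]
```

A real coder would feed `indices` to an arithmetic or range coder; with a small palette the index stream compresses far better than raw per-point color triples.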
Machine learning system for workload failover in a converged infrastructure
Systems and methods for analyzing a customer deployment in a converged or hyper-converged infrastructure are disclosed. A machine learning model is trained on historical usage data from other customer deployments. K-means clustering is performed to predict whether a deployment is configured for optimal failover. Recommendations to improve failover performance can also be generated.
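A minimal sketch of the clustering step (the utilization features and the "least headroom" heuristic are assumptions; a plain k-means stands in for the trained model):

```python
import numpy as np

def kmeans(X, k, iters=10):
    # Deterministic farthest-point init, then standard Lloyd iterations.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    C = np.array(centers)
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - C[None], axis=2).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C, labels

# Historical deployments: [cpu_utilization, memory_utilization].
usage = np.array([[0.30, 0.25], [0.35, 0.30], [0.28, 0.33],   # ample failover headroom
                  [0.90, 0.85], [0.92, 0.88], [0.88, 0.91]])  # little failover headroom
centers, labels = kmeans(usage, k=2)
risky = int(centers.sum(axis=1).argmax())    # cluster with least spare capacity

# Predict for a new deployment: the nearest centroid decides.
new = np.array([0.89, 0.90])
pred = int(np.linalg.norm(centers - new, axis=1).argmin())
needs_rebalancing = (pred == risky)
```

A deployment landing in the high-utilization cluster would trigger a recommendation, e.g. to migrate workloads before a failover event would exhaust the surviving nodes.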
KERNEL LEARNING APPARATUS USING TRANSFORMED CONVEX OPTIMIZATION PROBLEM
In a kernel learning apparatus, data preprocessing circuitry preprocesses and represents each data example as a collection of feature representations to be interpreted. Explicit feature mapping circuitry designs a kernel function with an explicit feature map, embedding the feature representations of the data into a nonlinear feature space and producing the explicit feature map used to train a predictive model. Convex problem formulating circuitry transforms the non-convex problem of training the predictive model into a convex optimization problem based on the explicit feature map. Optimal solution solving circuitry solves the convex optimization problem to obtain a globally optimal solution for training an interpretable predictive model.
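One way to read this (a sketch under assumed details: a 1-D degree-2 polynomial kernel and ridge regression as the predictive model) is that the explicit feature map turns kernel training into ordinary convex least squares, whose closed-form solution is the global optimum:

```python
import numpy as np

def phi(x):
    # Explicit feature map for the polynomial kernel k(x, x') = (1 + x*x')**2,
    # since (1 + x*x')**2 = 1 + 2*x*x' + (x**2)*(x'**2) = phi(x) @ phi(x').
    x = np.asarray(x, dtype=float)
    return np.stack([np.ones_like(x), np.sqrt(2.0) * x, x ** 2], axis=1)

X = np.linspace(-2.0, 2.0, 21)
y = X ** 2                                   # target function lies in the feature space
F = phi(X)

# Ridge regression in the explicit feature space: a convex problem whose
# normal equations yield the globally optimal weights.
lam = 1e-6
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

pred = phi(np.array([1.5])) @ w              # interpretable: one weight per named feature
```

Because each coordinate of `phi` is a named feature rather than an implicit kernel evaluation, the learned weight vector `w` is directly inspectable, which is the interpretability the abstract claims.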
POLYNOMIAL CONVOLUTIONAL NEURAL NETWORK WITH EARLY FAN-OUT
The invention proposes a method of training a convolutional neural network in which, at each convolutional layer, weights for one seed convolutional filter per layer are updated during each training iteration. All other convolutional filters are polynomial transformations of the seed filter, or, alternatively, all response maps are polynomial transformations of the response map generated by the seed filter.
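A toy 1-D sketch of the early fan-out case (the seed filter values, polynomial degrees, and input signal are illustrative): only the seed carries trainable weights, and the remaining filters are elementwise polynomials of it, so gradient updates touch the seed alone:

```python
import numpy as np

def conv1d_valid(x, w):
    # Plain 'valid' 1-D correlation.
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

seed = np.array([0.5, -1.0, 0.5])            # the only filter whose weights are updated
filters = [seed, seed ** 2, seed ** 3]       # early fan-out: polynomials of the seed filter
x = np.arange(8, dtype=float)                # toy input signal
response_maps = [conv1d_valid(x, f) for f in filters]
```

Each layer thus stores one filter's worth of parameters regardless of how many output channels the polynomial fan-out produces.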
POLYNOMIAL CONVOLUTIONAL NEURAL NETWORK WITH LATE FAN-OUT
The invention proposes a method of training a convolutional neural network in which, at each convolutional layer, weights for one seed convolutional filter per layer are updated during each training iteration. All other convolutional filters are polynomial transformations of the seed filter, or, alternatively, all response maps are polynomial transformations of the response map generated by the seed filter.
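By contrast with early fan-out, a late fan-out sketch (same caveats: toy seed filter and input) convolves once with the seed and applies the polynomials to the resulting response map, saving the extra convolutions:

```python
import numpy as np

def conv1d_valid(x, w):
    # Plain 'valid' 1-D correlation.
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

seed = np.array([1.0, 1.0])                  # the only filter whose weights are updated
x = np.array([0.0, 1.0, 2.0, 3.0])
r = conv1d_valid(x, seed)                    # single convolution with the seed filter
response_maps = [r, r ** 2, r ** 3]          # late fan-out: polynomials of the response map
```

Late fan-out trades the per-filter convolutions of the early variant for cheap elementwise powers of one response map.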
Name and face matching
Described are methods, systems, and computer-program product embodiments for selecting a face image based on a name. In some embodiments, a method includes receiving the name. Based on the name, a name vector is selected from a plurality of name vectors in a dataset that maps a plurality of names to a plurality of corresponding name vectors in a vector space, where each name vector includes representations associated with a plurality of words associated with each name. A plurality of face vectors corresponding to a plurality of face images is received. A face vector is selected from the plurality of face vectors based on a plurality of similarity scores, where a similarity score is calculated for each face vector based on the name vector and that face vector. The face image is output based on the selected face vector.
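The selection step reduces to a nearest-neighbor search by similarity score; a sketch with cosine similarity and entirely hypothetical embeddings (real name vectors would come from a word-embedding dataset and real face vectors from a face-recognition network):

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings, aligned in a shared 3-D vector space for illustration.
name_vectors = {
    "alice": np.array([0.9, 0.1, 0.1]),
    "bob":   np.array([0.1, 0.9, 0.2]),
}
face_vectors = np.array([[0.8, 0.2, 0.1],    # face image 0
                         [0.1, 0.8, 0.3]])   # face image 1

name_vec = name_vectors["alice"]             # vector for the received name
scores = [cosine(name_vec, f) for f in face_vectors]
best_face = int(np.argmax(scores))           # index of the face image to output
```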
SYSTEM AND METHOD FOR TREATMENT OPTIMIZATION
A sequence of stimuli produced by an electric frac pump can be generated by a treatment optimization system. Well environment responses to the sequence of stimuli may be measured by sensors and respective sensor data may be received. The sensor data may be used to select a representative system model which can then be used to control the electric frac pump. The representative system model may be used to achieve well stage objectives such as particular cluster efficiencies, complexity factors, or proximity indices.
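The model-selection step can be sketched as scoring candidate system models against the sensor data and keeping the best explainer (the gain-model family, stimuli, and readings below are toy assumptions, not the patented models):

```python
import numpy as np

# Hypothetical pump-rate stimuli and measured well response; units are illustrative.
stimuli = np.array([1.0, 2.0, 3.0, 4.0])            # sequence produced by the frac pump
observed = np.array([2.1, 3.9, 6.0, 8.1])           # sensor data from the well environment

# Candidate system models: here, simple gains mapping pump rate to response.
candidate_gains = [0.5, 1.0, 2.0, 3.0]
errors = {g: float(np.sum((g * stimuli - observed) ** 2)) for g in candidate_gains}

# The representative system model is the one that best explains the sensor data;
# it would then drive the pump toward the well-stage objectives.
best_gain = min(errors, key=errors.get)
```

With the representative model in hand, the controller can invert it to choose pump settings that target a desired cluster efficiency or complexity factor.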
Inter-cluster intensity variation correction and base calling
The technology disclosed corrects inter-cluster intensity profile variation for improved base calling on a cluster-by-cluster basis. The technology disclosed accesses current intensity data and historic intensity data of a target cluster, where the current intensity data is for a current sequencing cycle and the historic intensity data is for one or more preceding sequencing cycles. A first accumulated intensity correction parameter is determined by accumulating distribution intensities measured for the target cluster at the current and preceding sequencing cycles. A second accumulated intensity correction parameter is determined by accumulating intensity errors measured for the target cluster at the current and preceding sequencing cycles. Based on the first and second accumulated intensity correction parameters, next intensity data for a next sequencing cycle is corrected to generate corrected next intensity data, which is used to base call the target cluster at the next sequencing cycle.
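The accumulate-then-correct step can be sketched as follows (the ideal intensity, the scale/offset formulation, and the measured values are toy assumptions; the actual method accumulates its two parameters per cluster across sequencing cycles):

```python
import numpy as np

ideal = 100.0                                  # expected intensity for the called base
history = np.array([118.0, 122.0, 120.0])      # target cluster's intensities, past cycles

# First accumulated correction parameter: a scale from accumulated intensities.
scale = float(np.mean(history / ideal))
# Second accumulated correction parameter: an offset from accumulated intensity errors.
offset = float(np.mean(history - ideal * scale))

raw_next = 126.0                               # intensity read at the next sequencing cycle
corrected_next = (raw_next - offset) / scale   # used to base call the target cluster
```

Because each cluster accumulates its own scale and offset, a consistently bright or dim cluster is normalized toward the ideal intensity before the next base call, which is the cluster-by-cluster correction the abstract describes.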
Abnormality detection device, learning device, abnormality detection method, and learning method
An abnormality detection device of an embodiment includes an encoder, a first identifier, a decoder, and a second identifier. The encoder is configured to compress input data using a compression parameter to generate compressed data. The first identifier is configured to determine whether data input from the encoder follows the distribution of the compressed data or a prior distribution prepared in advance, and outputs a first identification result to the encoder. The decoder is configured to decode the compressed data using the compression parameter to generate reconstructed data. The second identifier is configured to determine whether data input from the decoder is the reconstructed data or the original input data, and outputs a second identification result to the encoder and the decoder.
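A minimal sketch of the encode/decode/score path (the two adversarial identifiers are replaced here by a PCA-derived compression parameter, standing in for the adversarially trained one; the data and dimensions are toy assumptions):

```python
import numpy as np

# Normal data lies near the direction [1, 1]; learn a 1-D compression of it.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 1.0]]) + 0.01 * rng.normal(size=(200, 2))

# Compression parameter W shared by encoder and decoder (PCA here; the abstract's
# identifiers would instead shape the parameters adversarially during training).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:1]                                   # 2-D input -> 1-D compressed data

def anomaly_score(x):
    z = W @ x                                # encoder: compress the input data
    x_hat = W.T @ z                          # decoder: reconstruct the data
    return float(np.linalg.norm(x - x_hat)) # large residual -> abnormal input

normal_score = anomaly_score(np.array([2.0, 2.0]))
abnormal_score = anomaly_score(np.array([2.0, -2.0]))
```

Inputs resembling the training distribution reconstruct almost perfectly, while off-manifold inputs leave a large residual, which serves as the abnormality signal.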