Patent classifications
G06N7/046
Intelligent control with hierarchical stacked neural networks
A neural network method, comprising: modeling an environment; implementing a policy based on the modeled environment, to perform an action by an agent within the environment, having at least one estimated dynamic parameter; receiving an observation and a temporally-associated cost or reward based on operation of the agent in the environment controlled according to the policy; and updating the policy, dependent on the received observation and the temporally-associated cost or reward, to improve the policy to optimize an expected future cumulative cost or reward. The policy may represent a set of parameters defining an artificial neural network having a plurality of hierarchical layers and having at least one layer which receives inputs representing aspects of the received observation indirectly from other neurons, and produces outputs to other neurons which indirectly implement the policy, the plurality of hierarchical layers being trained according to respectively distinct training criteria.
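The update step described here — improving the policy from an observation and a temporally-associated reward so as to optimize expected future cumulative reward — can be illustrated with a minimal tabular Q-learning sketch. The abstract does not name a specific algorithm, so this is one standard instantiation, and the state/action names below are hypothetical:

```python
# Minimal tabular Q-learning sketch: one way to update a policy from an
# observation and a temporally-associated reward. Not the patent's own
# algorithm; states and actions here are hypothetical.
def q_learning_step(q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One temporal-difference step toward expected future cumulative reward."""
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    # Nudge the stored value toward reward + discounted best next-state value.
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]
```

Each call moves the stored estimate a fraction `alpha` toward the observed reward plus the discounted value of the best follow-up action, which is the sense in which the policy is "updated ... to optimize an expected future cumulative cost or reward".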
Convolution streaming engine for deep neural networks
A method, an electronic device, and computer readable medium are provided. The method includes receiving an input into a neural network that includes a kernel. The method also includes generating, during a convolution operation of the neural network, multiple panel matrices based on different portions of the input. The method additionally includes successively combining each of the multiple panel matrices with the kernel to generate an output. Generating the multiple panel matrices can include mapping elements within a moving window of the input onto columns of an indexing matrix, where a size of the window corresponds to the size of the kernel.
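The "panel matrix" construction described here — mapping elements within a moving window of the input onto columns of an indexing matrix sized to the kernel — is commonly known as im2col. A minimal sketch (stride 1, no padding, hypothetical function names; the patent's streaming variant builds these panels incrementally rather than all at once):

```python
import numpy as np

# im2col-style sketch: windows of the input are mapped onto columns,
# so convolution becomes one matrix product with the flattened kernel.
def im2col(x, k):
    """Each column holds one flattened k-by-k window of x."""
    h, w = x.shape
    cols = [x[i:i + k, j:j + k].reshape(-1)
            for i in range(h - k + 1)
            for j in range(w - k + 1)]
    return np.stack(cols, axis=1)        # shape: (k*k, number of windows)

def conv2d(x, kernel):
    k = kernel.shape[0]
    panel = im2col(x, k)                 # one "panel matrix" for the input
    out = kernel.reshape(1, -1) @ panel  # combine the panel with the kernel
    return out.reshape(x.shape[0] - k + 1, -1)
```

The window size matching the kernel size is what lets the per-window dot products collapse into a single matrix multiply.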
Electronic device
An electronic device includes a camera to capture an image, and a processor to input an image acquired by photographing a detergent container into a trained model to acquire detergent information corresponding to the detergent container, and to guide an amount of detergent dispensed based on washing information corresponding to the detergent information. The trained model is a neural network trained using images of a plurality of detergent containers.
Apparatus and method for training deep neural network
A method for training a deep neural network according to an embodiment includes training a deep neural network model using a first data set including a plurality of labeled data and a second data set including a plurality of unlabeled data, assigning a ground-truth label value to some of the plurality of unlabeled data, updating the first data set and the second data set such that the data to which the ground-truth label value is assigned is included in the first data set, and further training the deep neural network model using the updated first data set and the updated second data set.
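The label-assignment-and-update loop described above can be sketched as pseudo-labeling: train on the labeled set, assign labels to unlabeled examples the model is confident about, move them into the labeled set, and train again. The model and its confidence API below are hypothetical stand-ins, not the patent's model:

```python
# Toy stand-in model (hypothetical): labels x positive if x > boundary.
class ThresholdModel:
    def __init__(self):
        self.boundary = 0.0

    def fit(self, labeled):
        pos = [x for x, y in labeled if y == 1]
        neg = [x for x, y in labeled if y == 0]
        self.boundary = (min(pos) + max(neg)) / 2 if pos and neg else 0.0

    def predict_with_confidence(self, x):
        label = 1 if x > self.boundary else 0
        return label, min(1.0, abs(x - self.boundary))

# One round of the loop from the abstract: assign labels to confident
# unlabeled data, update both data sets, train the model further.
def pseudo_label_round(model, labeled, unlabeled, threshold=0.9):
    model.fit(labeled)
    still_unlabeled, newly_labeled = [], []
    for x in unlabeled:
        label, confidence = model.predict_with_confidence(x)
        if confidence >= threshold:
            newly_labeled.append((x, label))  # treated as ground truth now
        else:
            still_unlabeled.append(x)
    labeled = labeled + newly_labeled         # updated first data set
    model.fit(labeled)                        # further training
    return model, labeled, still_unlabeled
```

Repeating the round gradually drains the unlabeled (second) data set into the labeled (first) one as the model's confident region grows.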
Compiler for implementing memory shutdown for neural network implementation configuration
Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). The compiler of some embodiments receives a specification of a machine-trained network including multiple layers of computation nodes and generates a graph representing options for implementing the machine-trained network in the IC. In some embodiments, the graph includes nodes representing options for implementing each layer of the machine-trained network and edges between nodes for different layers representing different implementations that are compatible. The compiler of some embodiments is also responsible for generating instructions relating to shutting down (and waking up) memory units of cores. In some embodiments, the memory units to shut down are determined by the compiler based on the data that is stored or will be stored in the particular memory units.
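The shutdown decision described here — driven by which data is or will be stored in each memory unit — can be sketched as a liveness-based schedule: the compiler emits a shutdown instruction when a unit holds no live data and a wake instruction before it is needed again. The instruction names and data layout below are hypothetical:

```python
# Sketch of compiler-driven memory power scheduling. For each memory
# unit, live_intervals lists the (start, end) step ranges during which
# the unit holds live data; the compiler emits SHUTDOWN/WAKE
# instructions around the idle gaps. Names are hypothetical.
def memory_power_schedule(live_intervals, num_steps):
    instructions = []
    for unit, intervals in sorted(live_intervals.items()):
        busy = set()
        for start, end in intervals:
            busy.update(range(start, end + 1))
        powered = True                      # units start powered on
        for step in range(num_steps):
            if powered and step not in busy:
                instructions.append((step, unit, "SHUTDOWN"))
                powered = False
            elif not powered and step in busy:
                instructions.append((step, unit, "WAKE"))
                powered = True
    return instructions
```

Because the schedule is computed at compile time from stored-data knowledge, no runtime bookkeeping is needed to decide when a unit can safely power down.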
Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same
Systems and methods for generating motion forecast data for actors with respect to an autonomous vehicle and training a machine learned model for the same are disclosed. The computing system can include an object detection model and a graph neural network including a plurality of nodes and a plurality of edges. The computing system can be configured to input sensor data into the object detection model; receive object detection data describing the location of the plurality of the actors relative to the autonomous vehicle as an output of the object detection model; input the object detection data into the graph neural network; iteratively update a plurality of node states respectively associated with the plurality of nodes; and receive, as an output of the graph neural network, the motion forecast data with respect to the plurality of actors.
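The iterative node-state update over the graph of detected actors can be sketched as message passing: each round, every node refreshes its state from the states of its in-neighbors. The update rule below (equal mix of self state and mean neighbor message) is a hypothetical simplification of whatever learned update the patent's network uses:

```python
import numpy as np

# Sketch of iterative node-state updates in a graph neural network.
# node_states: (n, d) array, one state per detected actor;
# edges: list of (src, dst) pairs. The blend rule is hypothetical.
def propagate(node_states, edges, num_rounds=3):
    n, d = node_states.shape
    states = node_states.copy()
    for _ in range(num_rounds):
        messages = np.zeros((n, d))
        counts = np.zeros(n)
        for src, dst in edges:              # aggregate along edges
            messages[dst] += states[src]
            counts[dst] += 1
        counts = np.maximum(counts, 1)      # avoid divide-by-zero
        states = 0.5 * states + 0.5 * (messages / counts[:, None])
    return states
```

After the configured number of rounds, each node's state reflects its neighborhood, and a readout head (not shown) would map states to the per-actor motion forecasts.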
Data center self-healing
Systems and methods for data center operational monitoring are disclosed. In at least one embodiment, a root cause for one or more data center component failures is determined based, at least in part, upon data from one or more sensors.
Validation of models and data for compliance with laws
The present disclosure provides computing systems and techniques for validating a decision model against a canon of regulation. A server can deconstruct a decision model into a number of branching decisions and also generate a Markov chain comprising a number of sequences from a canon of regulation. The server can compare the branching decisions to the sequences and can validate the decision model against the canon of regulation based on the comparison.
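One minimal reading of this comparison: the sequences drawn from the canon define the allowed transitions of a Markov chain, and each branching decision in the model is checked against that transition set. A sketch under that assumption, with hypothetical state names:

```python
# Sketch: sequences from the canon of regulation define allowed
# state transitions; branching decisions not covered by any sequence
# are flagged as violations. State names are hypothetical.
def allowed_transitions(sequences):
    transitions = set()
    for seq in sequences:
        transitions.update(zip(seq, seq[1:]))  # consecutive-state pairs
    return transitions

def validate(branching_decisions, sequences):
    """Each decision is a (from_state, to_state) pair; returns violations."""
    allowed = allowed_transitions(sequences)
    return [d for d in branching_decisions if d not in allowed]

canon = [["apply", "review", "approve"], ["apply", "review", "deny"]]
model = [("apply", "review"), ("review", "approve"), ("apply", "approve")]
```

Here the decision `("apply", "approve")` would be flagged, since no sequence in the canon ever moves from `apply` directly to `approve`.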
Compiler for optimizing filter sparsity for neural network implementation configuration
Some embodiments provide a compiler for optimizing the implementation of a machine-trained network (e.g., a neural network) on an integrated circuit (IC). In some embodiments, the compiler determines whether sparsity requirements of channels implemented on individual cores are met on each core. If the sparsity requirement is not met, the compiler, in some embodiments, determines whether the channels of the filter can be rearranged to meet the sparsity requirements on each core and, based on the determination, either rearranges the filter channels or implements a solution to non-sparsity.
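The per-core sparsity check and channel rearrangement can be sketched as a packing problem: each core's assigned channels must contain at least a required fraction of zero weights, and a greedy reassignment spreads zero-heavy channels across cores. The threshold, layout, and greedy rule below are hypothetical simplifications:

```python
# Sketch of the per-core sparsity check and channel rearrangement.
# Each channel is a flat list of weights; the required zero fraction
# and the greedy assignment rule are hypothetical.
def sparsity(channels):
    weights = [w for ch in channels for w in ch]
    return sum(1 for w in weights if w == 0) / len(weights)

def assign_channels(channels, num_cores, required=0.5):
    # Hand out channels richest-in-zeros first, always to the core
    # currently poorest in zeros, to balance sparsity across cores.
    order = sorted(channels, key=lambda ch: ch.count(0), reverse=True)
    cores = [[] for _ in range(num_cores)]
    zeros = [0] * num_cores
    for ch in order:
        i = zeros.index(min(zeros))
        cores[i].append(ch)
        zeros[i] += ch.count(0)
    ok = all(sparsity(core) >= required for core in cores)
    return cores, ok
```

When `ok` comes back `False`, no rearrangement of these channels meets the requirement under this greedy rule, corresponding to the case where the compiler must instead implement a solution to non-sparsity.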
Low entropy browsing history for content quasi-personalization
The present disclosure provides systems and methods for content quasi-personalization or anonymized content retrieval via aggregated browsing history of a large plurality of devices, such as millions or billions of devices. A sparse matrix may be constructed from the aggregated browsing history, and dimensionally reduced, reducing entropy and providing anonymity for individual devices. Relevant content may be selected via quasi-personalized clusters representing similar browsing histories, without exposing individual device details to content providers.
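The pipeline described here — a sparse matrix of aggregated browsing history, dimensionally reduced, then grouped into quasi-personalized clusters — can be sketched with a truncated SVD followed by plain k-means. The matrix layout (devices by content categories), rank, and cluster count below are hypothetical, and a real deployment would use sparse factorization at far larger scale:

```python
import numpy as np

# Sketch of quasi-personalization: reduce a (device x category)
# count matrix to a low-rank embedding, then cluster devices so
# content is selected per cluster, never per device. All data and
# parameters here are hypothetical.
def reduce_and_cluster(matrix, rank, num_clusters, iters=10):
    u, s, _ = np.linalg.svd(matrix, full_matrices=False)
    embedding = u[:, :rank] * s[:rank]     # low-entropy device embedding
    idx = np.linspace(0, len(embedding) - 1, num_clusters).astype(int)
    centers = embedding[idx].copy()        # deterministic k-means init
    for _ in range(iters):                 # plain k-means refinement
        dists = np.linalg.norm(embedding[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(num_clusters):
            if (labels == k).any():
                centers[k] = embedding[labels == k].mean(axis=0)
    return labels
```

Only the cluster label (shared by many devices with similar histories) would be exposed downstream, which is the anonymity mechanism the abstract describes: the reduction discards the per-device detail that would otherwise identify an individual history.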