Patent classifications
G06F17/16
INPUT CIRCUITRY FOR ANALOG NEURAL MEMORY IN A DEEP LEARNING ARTIFICIAL NEURAL NETWORK
Numerous embodiments of input circuitry for an analog neural memory in a deep learning artificial neural network are disclosed.
INFORMATION PROCESSING METHOD
An information processing system according to the present invention sets a weight matrix. The weight matrix is generated by learning, is multiplied by a target matrix, and includes, as its elements, weights corresponding to intersections of items; the target matrix includes, as the elements of each column, an action status on an item in each of a plurality of setting statuses. The information processing system includes: a similarity degree calculating unit configured to extract, from each column of the target matrix, some elements from among all the elements of the column, and to calculate a degree of similarity between items based on those extracted elements; and a weight matrix setting unit configured to set the weight matrix as a sparse matrix whose nonzero elements are determined based on the degree of similarity.
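One way to read the abstract above is: similarity between item columns is estimated from a sampled subset of each column's elements, and the learned weight matrix is kept sparse by zeroing entries for dissimilar item pairs. The following is a minimal sketch under that reading; the function name, the use of cosine similarity, and the thresholding rule are all assumptions not stated in the abstract.

```python
import math

def sparse_weight_matrix(target, sample_rows, threshold):
    """Sketch: build a sparse item-by-item weight matrix.

    target: list of rows; each row holds the action status per item in one
            setting status (so the columns correspond to items).
    sample_rows: indices of the row subset ("some elements of each column")
                 used to estimate similarity.
    threshold: similarity cutoff below which the weight is forced to zero.
    """
    n_items = len(target[0])
    # Extract the sampled sub-columns, one per item.
    cols = [[target[r][j] for r in sample_rows] for j in range(n_items)]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    # Nonzero weight only where the sampled similarity clears the threshold.
    weight = [[0.0] * n_items for _ in range(n_items)]
    for i in range(n_items):
        for j in range(n_items):
            s = cosine(cols[i], cols[j])
            if s >= threshold:
                weight[i][j] = s
    return weight
```

Because only sampled elements enter the similarity computation, the cost of building the sparsity pattern scales with the sample size rather than the full column length.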
MOVEMENT OF TENSOR DATA DURING RESHAPE OPERATION
A method of performing a reshape operation specified in a reshape layer of a neural network model is described. The reshape operation reshapes an input tensor with an input tensor shape to an output tensor with an output tensor shape. The tensor data that has to be reshaped is directly routed between tile memories of the hardware accelerator in an efficient manner. This advantageously optimizes usage of memory space and allows any number and type of neural network models to be run on the hardware accelerator.
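Functionally, a reshape preserves the flat element order and only regroups it under the output shape; the abstract's contribution is doing that routing directly between tile memories. A minimal software sketch of the functional behavior (the hardware routing itself is not modeled, and the function name is an assumption):

```python
def reshape_tensor(data, in_shape, out_shape):
    """Sketch: reshape nested-list tensor data from in_shape to out_shape.
    The flat element order is preserved, which is what lets a hardware
    accelerator route elements directly between tile memories."""
    def flatten(t):
        if isinstance(t, list):
            out = []
            for x in t:
                out.extend(flatten(x))
            return out
        return [t]

    def build(flat, shape, offset=0):
        # Recursively regroup the flat data under the output shape.
        if len(shape) == 1:
            return flat[offset:offset + shape[0]], offset + shape[0]
        result = []
        for _ in range(shape[0]):
            piece, offset = build(flat, shape[1:], offset)
            result.append(piece)
        return result, offset

    flat = flatten(data)
    size = 1
    for d in in_shape:
        size *= d
    assert len(flat) == size, "input does not match in_shape"
    out, _ = build(flat, list(out_shape))
    return out
```

For example, a (2, 3) tensor reshaped to (3, 2) keeps the row-major order 1..6 and only changes the grouping.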
METHOD AND APPARATUS FOR OPERATING IMAGE DATA
The disclosure relates to a method and apparatus for operating on image data. The method includes: reading matrix data from the image data based on a matrix size, M rows and N columns, of an image operator (220); calculating column data in the matrix data with a single calculation instruction corresponding to the image operator, to obtain an intermediate calculation result (240); multiplexing and rearranging the intermediate calculation result into N rows of cached data (260); calculating matrix elements of a target column in the N rows of cached data with the single calculation instruction, to obtain a calculation result of the matrix data under the single calculation instruction (280); and outputting the calculation result as an image processing result of the matrix data by the image operator (300).
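The column-then-row scheme in the abstract resembles how a separable window operator is split into passes. The sketch below illustrates the idea for an M x N summation operator: one "instruction" (here, a sum) is applied down each M-tall column, the column results are cached, and the same instruction is applied across N cached values to finish the window. The operator choice and function name are assumptions for illustration only.

```python
def window_sum_column_first(image, M=3, N=3):
    """Sketch: apply an M x N summation operator in two passes.
    Pass 1 computes per-column intermediate results; pass 2 applies the
    same operation across N cached column results per output pixel."""
    h, w = len(image), len(image[0])
    # Pass 1: vertical sums for every column position (intermediate result).
    col_sums = [[sum(image[r + k][c] for k in range(M)) for c in range(w)]
                for r in range(h - M + 1)]
    # Pass 2: horizontal pass over the cached column sums.
    return [[sum(row[c + k] for k in range(N)) for c in range(w - N + 1)]
            for row in col_sums]
```

Splitting the M x N window this way reduces the work per output pixel from M*N additions to roughly M + N, which is why reusing one instruction over cached column data pays off.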
HIERARCHICAL REDUCED-ORDER MATRIX GENERATION DEVICE
During model-based development, a processing target is sometimes broken down into partial structures. In such cases, a long calculation time and a large quantity of computer resources are required if each partial structure has a large number of degrees of freedom. The present invention is a hierarchical reduced-order matrix generation device 600 that generates a hierarchical reduced-order matrix for performing numerical analysis of a physical object, and has: a storage unit 62 that stores physical object data indicating properties of the physical object; and a computation unit 61 that generates a hierarchical reduced-order matrix for a model of the physical object data. The computation unit 61 divides the overall structure into a plurality of partial structures, and calculates the reduced-order matrix using an eigenmode (natural mode) and a static mode of each of the divided partial structures.
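Reducing a partial structure with its eigenmodes plus static modes is the pattern of component mode synthesis; Craig-Bampton reduction is one common instance and is used below purely as an illustrative sketch (the abstract does not name a specific method, and the function name and partitioning are assumptions). Interior degrees of freedom are condensed onto a few fixed-interface eigenmodes plus static constraint modes at the boundary.

```python
import numpy as np

def craig_bampton_reduce(K, M, boundary, n_modes):
    """Sketch: reduce one partial structure with static (constraint) modes
    and fixed-interface eigenmodes, Craig-Bampton style.
    K, M: substructure stiffness and mass matrices.
    boundary: interface DOF indices; n_modes: kept eigenmodes."""
    n = K.shape[0]
    interior = [i for i in range(n) if i not in boundary]
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # Static modes: interior response to unit boundary displacements.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface eigenmodes of the interior problem (a generalized
    # symmetric eigensolver would be preferred in practice).
    evals, evecs = np.linalg.eig(np.linalg.solve(Mii, Kii))
    keep = np.argsort(evals.real)[:n_modes]
    Phi = evecs[:, keep].real
    # Assemble the reduction basis T and project K and M onto it.
    nb = len(boundary)
    T = np.zeros((n, n_modes + nb))
    T[np.ix_(interior, range(n_modes))] = Phi
    T[np.ix_(interior, range(n_modes, n_modes + nb))] = Psi
    T[np.ix_(boundary, range(n_modes, n_modes + nb))] = np.eye(nb)
    return T.T @ K @ T, T.T @ M @ T
```

The reduced matrices have dimension n_modes + len(boundary), so each partial structure can be shrunk drastically before the hierarchical assembly step.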
ACCELERATOR TO REDUCE DATA DIMENSIONALITY AND ASSOCIATED SYSTEMS AND METHODS
A device is disclosed. The device includes a first buffer to store a query data point and a second buffer to store a matrix of candidate data points. A processing element may process the query data point and the matrix of candidate data points to identify candidate data points in the matrix that are nearest to the query data point.
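The processing element's task described above amounts to a nearest-neighbor search of the query against the candidate matrix. A minimal software sketch, assuming squared Euclidean distance as the metric and a k-nearest result (neither is specified in the abstract):

```python
def nearest_candidates(query, candidates, k=1):
    """Sketch: score each candidate row against the query and return the
    indices of the k nearest by squared Euclidean distance."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    ranked = sorted(range(len(candidates)),
                    key=lambda i: sq_dist(query, candidates[i]))
    return ranked[:k]
```

A hardware accelerator would evaluate the per-candidate distances in parallel from the two buffers rather than sorting sequentially as the sketch does.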