G06F7/78

Least recently used (LRU) cache replacement implementation using a FIFO storing indications of whether a way of the cache was most recently accessed

A method and apparatus for calculating a victim way that is always the least recently used way. More specifically, in an m-set, n-way set associative cache, each way in a cache set comprises a valid bit that indicates that the way contains valid data. The valid bit is set when a way is written and cleared upon the way being invalidated, e.g., via a snoop address. The cache system comprises a cache LRU circuit which comprises an LRU logic unit associated with each cache set. The LRU logic unit comprises a FIFO of n-depth (in certain embodiments, the depth corresponds to the number of ways in the cache) and m-width. The FIFO performs push, pop, and collapse functions. Each entry in the FIFO contains the encoded way number that was last accessed.
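
As a software sketch of the scheme described above (class and method names are illustrative, not from the patent): each access collapses out the accessed way's old FIFO entry and pushes its way number at the tail, so the head of the FIFO always holds the least recently used way.

```python
class FifoLru:
    """FIFO-based LRU for one cache set: head = LRU way, tail = MRU way."""

    def __init__(self, n_ways):
        # FIFO depth equals the number of ways; start in an arbitrary order
        self.fifo = list(range(n_ways))

    def access(self, way):
        # collapse: drop the old entry for this way, shifting later entries up
        self.fifo.remove(way)
        # push: record this way number at the most-recently-used end
        self.fifo.append(way)

    def victim(self):
        # the head entry is, by construction, the least recently used way
        return self.fifo[0]

lru = FifoLru(4)
for w in (0, 1, 2, 3, 1, 0):
    lru.access(w)
print(lru.victim())  # way 2 is now the least recently used
```

Because every access both removes and appends exactly one entry, the FIFO always holds each way number exactly once, so the victim lookup is a constant-time read of the head.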

Methods and systems for providing enhancements to a business networking feed
09817637 · 2017-11-14

Methods, systems, and apparatus facilitate social and business networking in a multi-tenant database. An application can provide each user with the ability to view targeted data of interest. The data of interest can be supplied in a feed associated with a user-created list, which compiles the feed items, e.g., comments, posts, stories, etc., of the object feeds subscribed to by the list. Lists can include entity feeds of objects on the database, as well as child records associated with those objects. Accordingly, a user can create tailored feeds and can organize related information into a feed for that list. In further embodiments, applications are provided which allow users to view filtered selections of other users and objects on the database system. In one embodiment, a connector application allows users to modify subscriptions to other users and objects, a dashboard application allows users to view user profiles and analytics regarding the user profile, and a search application allows users to perform field-based searches on the records of the users and objects. Additional applications which allow users to navigate and view records on the database system are also provided.

Efficient matrix data format applicable for artificial neural network

Many computing systems process data organized in a matrix format. For example, artificial neural networks (ANNs) perform numerous computations on data organized into matrices using conventional matrix arithmetic operations. One such operation, which is commonly performed, is the transpose operation. Additionally, many such systems need to process many matrices and/or matrices that are large in size. For sparse matrices that hold few significant values and many values that can be ignored, transmitting and processing all the values in such matrices is wasteful. Thus, techniques are introduced for storing a sparse matrix in a compressed format that allows for a matrix transpose operation to be performed on the compressed matrix without having to first decompress the compressed matrix. By utilizing the introduced techniques, more matrix operations can be performed than in conventional systems.
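
As an illustration of transposing a compressed sparse matrix without decompressing it, here is a plain-Python sketch using the widely used CSR layout (the patent's own compressed format differs; the function and variable names here are assumptions). A counting pass over the column indices builds the row pointers of the transpose, then each nonzero is scattered directly into place:

```python
def csr_transpose(n_rows, n_cols, data, indices, indptr):
    """Transpose a CSR matrix in compressed form, never expanding to dense."""
    # count nonzeros per column to size the transposed rows
    t_indptr = [0] * (n_cols + 1)
    for c in indices:
        t_indptr[c + 1] += 1
    # prefix sums turn the counts into row pointers of the transpose
    for c in range(n_cols):
        t_indptr[c + 1] += t_indptr[c]
    t_data = [0] * len(data)
    t_indices = [0] * len(data)
    fill = t_indptr[:]  # next free slot in each transposed row
    for r in range(n_rows):
        for k in range(indptr[r], indptr[r + 1]):
            c = indices[k]
            t_data[fill[c]] = data[k]
            t_indices[fill[c]] = r
            fill[c] += 1
    return t_data, t_indices, t_indptr

# A = [[0, 5], [7, 0]] in CSR; its transpose is [[0, 7], [5, 0]]
td, ti, tp = csr_transpose(2, 2, [5, 7], [1, 0], [0, 1, 2])
```

The cost is proportional to the number of nonzeros, so for a sparse matrix the transpose touches only the significant values, which is the point of operating on the compressed form.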

MATRIX TRANSPOSE AND MULTIPLY

Embodiments for a matrix transpose and multiply operation are disclosed. In an embodiment, a processor includes a decoder and execution circuitry. The decoder is to decode an instruction having a format including an opcode field to specify an opcode, a first destination operand field to specify a destination matrix location, a first source operand field to specify a first source matrix location, and a second source operand field to specify a second source matrix location. The execution circuitry is to, in response to the decoded instruction, transpose the first source matrix to generate a transposed first source matrix, perform a matrix multiplication using the transposed first source matrix and the second source matrix to generate a result, and store the result in the destination matrix location.
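
The semantics of the fused operation can be sketched in software as follows (a plain-Python analogue, not the ISA encoding; the function name is an assumption). The transpose is folded into the index order of the multiply, so no transposed copy of the first source matrix is ever materialized:

```python
def transpose_multiply(a, b):
    """Compute A-transpose times B in a single fused pass."""
    k = len(a)        # A is k x m, so A-transpose is m x k
    m = len(a[0])
    n = len(b[0])
    result = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            # (A^T)[i][p] is just a[p][i]; swapping indices replaces the
            # explicit transpose step
            result[i][j] = sum(a[p][i] * b[p][j] for p in range(k))
    return result

# A = [[1, 2], [3, 4]], B = [[5, 6], [7, 8]]
c = transpose_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
# c == [[26, 30], [38, 44]]
```

Fusing the two steps in hardware avoids writing the intermediate transposed matrix out and reading it back, which is the motivation for a single instruction covering both.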

EIGENVALUE DECOMPOSITION WITH STOCHASTIC OPTIMIZATION

A computer-implemented method for eigenpair computation is provided. The method includes computing an eigenvector and the respective eigenvalues of the eigenvector by using a stochastic optimization process. The computing step includes storing the input matrix in a Resistive Processing Unit (RPU) crossbar array.
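
A simplified software analogue of computing a dominant eigenpair by iterative optimization is sketched below, using Oja-style updates on the Rayleigh quotient with random initialization. This is a generic stand-in for the stochastic optimization named above; the patented method performs such updates with the matrix stored in an analog RPU crossbar, which this sketch does not model, and all names here are assumptions.

```python
import random

def dominant_eigenpair(a, steps=5000, eta=0.01, seed=0):
    """Estimate the dominant eigenvalue/eigenvector of a symmetric matrix a."""
    random.seed(seed)
    n = len(a)
    w = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(steps):
        aw = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
        lam = sum(w[i] * aw[i] for i in range(n))  # Rayleigh quotient estimate
        # gradient-style update toward the dominant eigendirection
        w = [w[i] + eta * (aw[i] - lam * w[i]) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        w = [x / norm for x in w]  # keep the iterate on the unit sphere
    aw = [sum(a[i][j] * w[j] for j in range(n)) for i in range(n)]
    eigenvalue = sum(w[i] * aw[i] for i in range(n))
    return eigenvalue, w

# symmetric 2x2 with eigenvalues 3 and 1; the iterate converges toward 3
lam, vec = dominant_eigenpair([[2, 1], [1, 2]])
```

Each iteration needs only matrix-vector products, which is what makes an in-memory crossbar implementation attractive: the product is computed in the array where the matrix is stored.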

Device and method for transposing matrix, and display device
11204741 · 2021-12-21

The present disclosure relates to a matrix transposition device and method, and a display device. The matrix transposition device includes a first counting unit, an input module, second counting units, and a first data selection unit. The first counting unit numbers a matrix element and outputs a first signal. The input module is coupled to the first counting unit, and is input with the matrix element after receiving the first signal. Each column of matrix elements corresponds to one of the second counting units, each of the second counting units outputs a set of second signals, and each set of the second signals includes the number information of the matrix elements in the column corresponding to that second counting unit. The first data selection unit receives the second signals in the order of the columns of the matrix, and outputs the column elements of the matrix, in order, as the row elements of the transposed matrix.
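
A software sketch of the counter-and-selector scheme described above: a first counter numbers the incoming matrix elements in stream (row-major) order, per-column counters record which numbered elements belong to each column, and a data selector then emits each column's elements, in column order, as the rows of the transpose. The names and structure here are illustrative, not the circuit itself.

```python
def streaming_transpose(elements, n_rows, n_cols):
    """Transpose a matrix streamed in row-major order, via element numbering."""
    assert len(elements) == n_rows * n_cols
    # first counting unit: number each element as it arrives
    numbered = list(enumerate(elements))
    # second counting units: one per column, each records the numbers
    # of the elements belonging to that column
    column_numbers = [[] for _ in range(n_cols)]
    for number, _ in numbered:
        column_numbers[number % n_cols].append(number)
    # data selection unit: read out each column, in column order, as a
    # row of the transposed matrix
    return [[elements[number] for number in col] for col in column_numbers]

# a 2x3 matrix streamed row-major becomes its 3x2 transpose
out = streaming_transpose([1, 2, 3, 4, 5, 6], 2, 3)
# out == [[1, 4], [2, 5], [3, 6]]
```

Because an element's column is fully determined by its arrival number (number mod column count), the device never needs to buffer the matrix in two-dimensional form; the counters alone carry the routing information.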