Patent classifications
G06F12/0871
Memory management based on read-miss events
Aspects of the present disclosure relate to asynchronous memory management. In embodiments, an input/output (IO) workload is received at a storage array. Further, one or more read-miss events corresponding to the IO workload are identified. Additionally, at least one of the storage array's cache slots is bound to a track identifier (TID) corresponding to the read-miss events, based on one or more two-dimensional metrics of the read-miss events.
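The binding described above can be sketched as follows. This is a minimal toy model, not the patented implementation: the two per-TID metrics (miss count and most recent miss time) and all names are illustrative assumptions.

```python
from collections import defaultdict

class ReadMissCacheBinder:
    """Toy model: bind free cache slots to track identifiers (TIDs)
    based on two read-miss metrics, frequency and recency."""

    def __init__(self, num_slots):
        self.free_slots = list(range(num_slots))
        self.bound = {}                      # TID -> cache slot
        self.miss_count = defaultdict(int)   # metric 1: miss frequency
        self.last_miss = {}                  # metric 2: miss recency

    def record_read_miss(self, tid, now):
        """Record one read-miss event for a TID at time `now`."""
        self.miss_count[tid] += 1
        self.last_miss[tid] = now

    def bind_hottest(self):
        """Bind a free slot to the unbound TID with the best
        (frequency, recency) score; return (tid, slot) or None."""
        candidates = [t for t in self.miss_count if t not in self.bound]
        if not candidates or not self.free_slots:
            return None
        tid = max(candidates,
                  key=lambda t: (self.miss_count[t], self.last_miss[t]))
        slot = self.free_slots.pop()
        self.bound[tid] = slot
        return tid, slot
```

Binding asynchronously (after the miss is recorded, rather than on the IO path) is what makes the scheme compatible with the disclosure's emphasis on asynchronous memory management.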
Managing client devices associated with storage nodes in a scale-out storage system
Client devices associated with scale-out storage nodes can be managed based on which scale-out storage nodes have backup power supplies. For example, a management node of a scale-out storage system can determine, from among a plurality of storage nodes of the scale-out storage system, that a first storage node is not coupled to a backup power supply and that a second storage node is coupled to the backup power supply. The management node can receive device characteristics describing a type of workload and a configuration for a client device associated with the first storage node. The management node can determine that the client device satisfies a migration policy based on the device characteristics. The management node can migrate the client device to the second storage node based on the client device satisfying the migration policy.
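The management-node logic above can be sketched in a few lines. This is a hedged illustration, assuming a simple data model (dicts for nodes and client characteristics) and a policy expressed as a predicate; none of these names come from the patent.

```python
def plan_migrations(nodes, clients, policy):
    """nodes: {node_name: has_backup_power (bool)}
    clients: list of dicts with 'name', 'node', 'workload', 'config'
    policy: predicate over a client's device characteristics.
    Returns {client_name: target_node} for clients on nodes without
    backup power whose characteristics satisfy the migration policy."""
    protected = [n for n, backed in nodes.items() if backed]
    moves = {}
    for client in clients:
        on_unprotected = not nodes[client["node"]]
        if on_unprotected and protected and policy(client):
            moves[client["name"]] = protected[0]  # simplest placement choice
    return moves
```

A real management node would also weigh capacity and load on the target node; the sketch only captures the backup-power and policy checks described in the abstract.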
USE OF PREDEFINED BLOCK POINTERS TO REDUCE DUPLICATE STORAGE OF CERTAIN DATA IN A STORAGE SUBSYSTEM OF A STORAGE SERVER
A method and system for eliminating the redundant allocation and deallocation of special data on disk by providing an innovative technique for specially allocating special data of a storage system. Specially allocated data is data that is pre-allocated on disk and stored in memory of the storage system. “Special data” may include any pre-decided data, one or more portions of data that exceed a pre-defined sharing threshold, and/or one or more portions of data that have been identified by a user as special. For example, in some embodiments, a zero-filled data block is specially allocated by a storage system. As another example, in some embodiments, a data block whose contents correspond to a particular type of document header is specially allocated.
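The zero-filled-block example can be sketched as follows. This is a minimal illustration, assuming a single predefined pointer for the zero block; the constant names and block size are hypothetical, not taken from the patent.

```python
BLOCK_SIZE = 4096
ZERO_BLOCK_PTR = -1   # predefined block pointer; never backed by a disk allocation

class SpecialAllocator:
    """Sketch: every zero-filled block maps to one predefined pointer
    instead of each write allocating its own on-disk block."""

    def __init__(self):
        self.next_ptr = 0
        self.disk = {}   # ptr -> block contents (stand-in for on-disk storage)

    def write_block(self, data):
        """Return a block pointer for `data`, allocating only if needed."""
        if data == b"\x00" * BLOCK_SIZE:
            return ZERO_BLOCK_PTR            # no allocation, no duplicate storage
        ptr = self.next_ptr
        self.next_ptr += 1
        self.disk[ptr] = data
        return ptr

    def read_block(self, ptr):
        """Reads of the predefined pointer are synthesized in memory."""
        if ptr == ZERO_BLOCK_PTR:
            return b"\x00" * BLOCK_SIZE
        return self.disk[ptr]
```

Because every zero-filled write returns the same pointer, no disk space or allocation/deallocation work is spent on them, which is the redundancy the abstract says is eliminated.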
Computing device and method
The present disclosure provides a computation device. The computation device is configured to perform a machine learning computation, and includes an operation unit, a controller unit, and a storage unit. The storage unit is configured to obtain input data and a computation instruction. The controller unit is configured to extract and parse the computation instruction from the storage unit to obtain one or more operation instructions, and to send the one or more operation instructions and the input data to the operation unit. The operation unit is configured to perform operations on the input data according to the one or more operation instructions to obtain a computation result of the computation instruction. In the examples of the present disclosure, the input data involved in machine learning computations is represented by fixed-point data, thereby improving the processing speed and efficiency of training operations.
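The fixed-point representation mentioned above can be illustrated with a minimal quantize/compute/dequantize round trip. This is a generic fixed-point sketch, not the device's actual number format; the choice of 8 fractional bits is an assumption.

```python
FRAC_BITS = 8  # illustrative: 8 fractional bits, i.e. scale factor 2**8

def to_fixed(x, frac_bits=FRAC_BITS):
    """Quantize a float to a signed fixed-point integer."""
    return int(round(x * (1 << frac_bits)))

def from_fixed(x, frac_bits=FRAC_BITS):
    """Convert a fixed-point integer back to a float."""
    return x / (1 << frac_bits)

def fixed_mul(a, b, frac_bits=FRAC_BITS):
    """Multiply two fixed-point values; the product carries a doubled
    scale factor, so shift right to restore the original scale."""
    return (a * b) >> frac_bits
```

Integer multiply-and-shift is cheaper in hardware than floating-point multiply, which is the speed/efficiency benefit the abstract claims for training operations.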
Graphics processors and graphics processing units having dot product accumulate instruction for hybrid floating point format
Described herein is a graphics processing unit (GPU) comprising a first processing cluster to perform parallel processing operations, the parallel processing operations including a ray tracing operation and a matrix multiply operation; and a second processing cluster coupled to the first processing cluster, wherein the first processing cluster includes a floating-point unit to perform floating point operations, the floating-point unit is configured to process an instruction using a bfloat16 (BF16) format with a multiplier to multiply second and third source operands while an accumulator adds a first source operand with output from the multiplier.
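The multiplier/accumulator split described above (d = src0 + src1 × src2, with BF16 multiply inputs) can be emulated in software. This sketch is an assumption-laden illustration: it models bfloat16 by truncating a float32 to its top 16 bits and keeps the accumulation in full float32 precision, which is one common reading of a "hybrid" format.

```python
import struct

def to_bf16(x):
    """Truncate a float32 value to bfloat16 precision: bfloat16 is the
    upper 16 bits of the IEEE 754 binary32 encoding (same exponent
    range, 7 mantissa bits)."""
    bits, = struct.unpack("<I", struct.pack("<f", x))
    bf, = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return bf

def bf16_fma(src0, src1, src2):
    """d = src0 + src1 * src2: the multiplier takes the second and
    third source operands (truncated to BF16), and the accumulator
    adds the first source operand to the multiplier output."""
    return src0 + to_bf16(src1) * to_bf16(src2)
```

Truncation (rather than round-to-nearest) is the simplest float32-to-BF16 conversion; real hardware may round, so treat the low-order behavior here as approximate.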
DATA PROCESSING METHOD AND APPARATUS BASED ON BLOCKCHAIN NETWORK
This disclosure relates to a data processing method and apparatus based on a blockchain network. The method may include receiving a data acquisition request transmitted by a target service node. The data acquisition request may carry a data type of data requested by the target service node and a data identifier set. The method may further include determining a target node set from the nodes in the blockchain network according to the data type, the data identifier set, and recorded data storage information of the nodes. The method may further include transmitting feedback information carrying the node information in the target node set to the target service node. The feedback information is for instructing the target service node to acquire the requested data from a node according to the node information in the target node set.
CAT AWARE LOADS AND SOFTWARE PREFETCHES
In one embodiment, a method of selectively reserving portions of a last level cache (LLC) for a multi-core processor, the method comprising: allocating, by an executive system, plural classes of service to the portions of the LLC, wherein the portions comprise ways, and wherein each of the plural classes of service is allocated to one or more of the ways; assigning, by the executive system, one of the plural classes of service to an application as a default class of service, wherein the assignment controls which of the ways the application can allocate into; and overriding, by the application, the default class of service to enable allocation by the application to the one or more of the ways associated with a non-default class of service.
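The class-of-service (COS) mechanics above can be modeled with way bitmasks, in the style of Intel's Cache Allocation Technology. This is a toy simulation, not the hardware MSR interface; the class and method names are illustrative.

```python
class CatLLC:
    """Toy model of CAT-style LLC partitioning: each class of service
    (COS) maps to a bitmask of cache ways; each application has a
    default COS but may override it for a given access."""

    def __init__(self, cos_way_masks):
        self.cos_way_masks = cos_way_masks   # {cos_id: way bitmask}
        self.default_cos = {}                # app name -> default cos_id

    def assign_default(self, app, cos_id):
        """Executive-system step: give an application a default COS."""
        self.default_cos[app] = cos_id

    def allowed_ways(self, app, override_cos=None):
        """Ways the app may allocate into: its default COS mask, or
        the mask of a non-default COS when the app overrides."""
        cos = override_cos if override_cos is not None else self.default_cos[app]
        return self.cos_way_masks[cos]
```

The override path is what lets, say, a streaming load or software prefetch land in a throwaway partition of the LLC without evicting the application's default working set.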