Patent classifications
G06F2212/2515
Handling cache and non-volatile storage (NVS) out of sync writes
Provided are techniques for handling cache and Non-Volatile Storage (NVS) out-of-sync writes. At the end of a write to a cache track, a cache node uses its cache write statistics for that track and the NVS write statistics for the corresponding NVS track of an NVS node to determine that writes to the two tracks are out of sync. The cache node sets an out-of-sync indicator in a cache data control block for the cache track and sends a message to the NVS node to set an out-of-sync indicator in an NVS data control block for the corresponding NVS track. The cache node sets the cache track as pinned non-retryable because the write is out of sync and reports possible data loss to error logs.
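The end-of-write check described above can be sketched as follows. This is a minimal behavioral model, not the patented implementation; the names (`TrackStats`, `end_of_write_check`, `pinned_non_retryable`) and the single `write_count` statistic are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class TrackStats:
    write_count: int = 0              # completed writes recorded for the track

@dataclass
class CacheTrack:
    stats: TrackStats = field(default_factory=TrackStats)
    out_of_sync: bool = False         # indicator in the cache data control block
    pinned_non_retryable: bool = False

@dataclass
class NVSTrack:
    stats: TrackStats = field(default_factory=TrackStats)
    out_of_sync: bool = False         # indicator in the NVS data control block

def end_of_write_check(cache: CacheTrack, nvs: NVSTrack, error_log: list) -> None:
    """At the end of a write, compare write statistics and flag any divergence."""
    if cache.stats.write_count != nvs.stats.write_count:
        cache.out_of_sync = True            # set indicator in the cache control block
        nvs.out_of_sync = True              # stands in for the message to the NVS node
        cache.pinned_non_retryable = True   # pin the track: write is out of sync
        error_log.append("possible data loss: cache/NVS writes out of sync")
```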
Technologies for efficiently performing scatter-gather operations
Technologies for efficiently performing scatter-gather operations include a device with circuitry configured to associate, with a template identifier, a set of non-contiguous memory locations of a memory having a cross point architecture. The circuitry is additionally configured to access, in response to a request that identifies the non-contiguous memory locations by the template identifier, the memory locations.
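The template mechanism above can be illustrated with a toy software model: a template identifier names a fixed set of non-contiguous addresses, and later scatter/gather requests reference the whole set by that identifier. The class and method names are hypothetical, and a real device would do this in circuitry over cross-point memory, not Python lists.

```python
class ScatterGatherMemory:
    """Toy model: templates map an ID to a set of non-contiguous locations."""

    def __init__(self, size: int):
        self.cells = [0] * size
        self.templates: dict[int, list[int]] = {}

    def set_template(self, template_id: int, addresses: list[int]) -> None:
        # Associate the non-contiguous locations with the template identifier.
        self.templates[template_id] = list(addresses)

    def gather(self, template_id: int) -> list[int]:
        # Access all locations of the set, identified only by the template ID.
        return [self.cells[a] for a in self.templates[template_id]]

    def scatter(self, template_id: int, values: list[int]) -> None:
        # Write one value to each location in the template's set.
        for a, v in zip(self.templates[template_id], values):
            self.cells[a] = v
```

A caller registers the address set once, then issues compact requests such as `gather(1)` without re-listing the addresses.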
DYNAMIC LOG LEVEL WITH AUTOMATIC RESET
Aspects define a dynamic threshold filter data structure that pairs an override log level value with a key value. In response to an incoming processing request, a user identification value linked to the request is identified; within a thread context map, the user identification value is associated with a default logging level, different from the override log level, for logging data associated with executing processes that satisfy the request. In response to determining that the user identification value matches the key value, data associated with those executing processes is logged at the override log level.
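The key-matching logic can be sketched in a few lines; `should_log`, the `overrides` mapping, and the argument names are illustrative, not terms from the claims, and the standard `logging` level constants stand in for the log level values.

```python
import logging

def should_log(record_level: int, user_id: str,
               default_level: int, overrides: dict) -> bool:
    """Decide whether to emit a log record.

    overrides pairs a key value (here, a user id) with an override log level;
    a matching user id is logged at the override level instead of the default.
    """
    threshold = overrides.get(user_id, default_level)
    return record_level >= threshold
```

With a default of `ERROR` and an override of `DEBUG` for one user, that user's debug records pass the filter while everyone else's are suppressed.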
Fine Grain Data Migration to or from Borrowed Memory
Systems, methods and apparatuses of fine grain data migration using Memory as a Service (MaaS) are described. For example, a memory status map can be used to identify the cache availability of sub-regions (e.g., cache lines) of a borrowed memory region (e.g., a borrowed remote memory page). Before accessing a virtual memory address in a sub-region, the memory status map is checked. If the sub-region has cache availability in the local memory, the memory management unit uses a physical memory address converted from the virtual memory address to access memory. Otherwise, the sub-region is cached from the borrowed memory region to the local memory before the physical memory address is used.
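The status-map check can be modeled in software as follows. The sub-region size, class name, and use of plain byte buffers for "remote" and "local" memory are assumptions for illustration; the real mechanism operates at the MMU level.

```python
SUBREGION = 64  # bytes per sub-region, e.g. one cache line (illustrative)

class BorrowedRegion:
    """Toy model of a borrowed remote page with a per-sub-region status map."""

    def __init__(self, remote: bytes):
        self.remote = remote                                # borrowed memory region
        self.local = bytearray(len(remote))                 # local cache of the region
        self.status = [False] * (len(remote) // SUBREGION)  # memory status map

    def read(self, addr: int) -> int:
        idx = addr // SUBREGION
        if not self.status[idx]:
            # Sub-region not cached locally: migrate it before access.
            start = idx * SUBREGION
            self.local[start:start + SUBREGION] = self.remote[start:start + SUBREGION]
            self.status[idx] = True
        # Now the converted (local) address can be used directly.
        return self.local[addr]
```

Note that only the touched sub-region is fetched, which is the "fine grain" aspect: a miss costs one cache-line transfer, not a whole-page migration.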
ACCELERATED DATA ACCESS FOR TRAINING
Methods and systems for storing and accessing training example data for a machine learning procedure. The systems and methods described pre-process data to store it in a non-transient memory in a random order. During training, a set of the data is retrieved and stored in a random access memory. One or more subsets of the data may then be retrieved from the random access memory and used to train a machine learning model.
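The staged access pattern can be sketched as below: examples are shuffled once and written to durable storage, a contiguous block is later read into RAM, and mini-batches are drawn randomly from that block. The function names, `pickle` serialization, and block/batch logic are illustrative assumptions.

```python
import pickle
import random

def preprocess(examples: list, path: str) -> None:
    """Shuffle once and store the examples in non-transient memory in random order."""
    random.shuffle(examples)
    with open(path, "wb") as f:
        pickle.dump(examples, f)

def load_block(path: str, start: int, count: int) -> list:
    """Retrieve a set of the stored data into random access memory."""
    with open(path, "rb") as f:
        data = pickle.load(f)
    return data[start:start + count]

def minibatches(block: list, batch_size: int):
    """Yield random subsets of the in-RAM block for training."""
    idx = list(range(len(block)))
    random.shuffle(idx)
    for i in range(0, len(idx), batch_size):
        yield [block[j] for j in idx[i:i + batch_size]]
```

Because the on-disk order is already random, sequential block reads still deliver an unbiased sample, avoiding expensive random seeks during training.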
Combined transparent/non-transparent cache
In one embodiment, a memory is delineated into transparent and non-transparent portions. The transparent portion may be controlled by a control unit coupled to the memory, along with a corresponding tag memory. The non-transparent portion may be software controlled by directly accessing the non-transparent portion via an input address. In an embodiment, the memory may include a decoder configured to decode the address and select a location in either the transparent or non-transparent portion. Each request may include a non-transparent attribute identifying the request as either transparent or non-transparent. In an embodiment, the size of the transparent portion may be programmable. When the non-transparent attribute indicates a transparent request, the decoder may selectively mask bits of the address based on the size to ensure that the decoder only selects a location in the transparent portion.
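The decoder's masking behavior can be sketched as a pure function; the assumption here (not stated in the abstract) is that the programmable transparent size is a power of two, so masking reduces to a bitwise AND with `size - 1`.

```python
def decode(addr: int, non_transparent: bool, transparent_size: int) -> int:
    """Select a memory location from a request address.

    Non-transparent requests use the input address directly (software managed).
    Transparent requests have their high bits masked so the selected location
    always falls inside the programmably sized transparent portion.
    """
    if non_transparent:
        return addr
    # transparent_size assumed to be a power of two
    return addr & (transparent_size - 1)
```

For example, with a 256-byte transparent portion, a transparent request to `0x1234` is masked down to an in-portion offset, while a non-transparent request passes through unchanged.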
Data processing system and data processing method
First type metadata is associated with unstructured data included in an unstructured data source. A data processing system performs an extraction process. This extraction process includes: (a) creating, for each of a plurality of selected pieces of unstructured data in the unstructured data source, second type metadata, which is metadata including content information representing one or more content attributes of the piece of unstructured data; and (b) associating the created second type metadata with the first type metadata of the piece of unstructured data.
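Steps (a) and (b) of the extraction process can be sketched as below. The dictionary layout, the `derive_content` analyzer callback, and the `linked_meta2` association key are illustrative assumptions about how the two metadata types might be represented.

```python
def extract(items: dict, selected_ids: list, derive_content) -> dict:
    """Run the extraction process over selected pieces of unstructured data.

    items maps an id to {"data": <unstructured data>, "meta1": <first type metadata>};
    derive_content computes content attributes from a piece of unstructured data.
    """
    for item_id in selected_ids:
        item = items[item_id]
        # (a) create second type metadata with content information
        meta2 = {"content": derive_content(item["data"])}
        # (b) associate it with the piece's first type metadata
        item["meta1"]["linked_meta2"] = meta2
    return items
```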
MEMORY HAVING A STATIC CACHE AND A DYNAMIC CACHE
The present disclosure includes memory having a static cache and a dynamic cache. A number of embodiments include a memory, wherein the memory includes a first portion configured to operate as a static single level cell (SLC) cache and a second portion configured to operate as a dynamic SLC cache when the entire first portion of the memory has data stored therein.
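The allocation policy implied above can be modeled with a small counter-based sketch: writes land in the static SLC cache until every block of the first portion holds data, and only then does the second portion begin operating as a dynamic SLC cache. Class and method names are illustrative.

```python
class SLCCache:
    """Toy model of a static-then-dynamic SLC cache allocation policy."""

    def __init__(self, static_blocks: int, dynamic_blocks: int):
        self.static_blocks = static_blocks    # first portion (static SLC cache)
        self.dynamic_blocks = dynamic_blocks  # second portion (dynamic SLC cache)
        self.static_used = 0
        self.dynamic_used = 0

    def allocate(self) -> str:
        if self.static_used < self.static_blocks:
            self.static_used += 1
            return "static"       # first portion still has free blocks
        if self.dynamic_used < self.dynamic_blocks:
            self.dynamic_used += 1
            return "dynamic"      # entire first portion full: use dynamic cache
        return "full"
```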
SEMICONDUCTOR DEVICE
A semiconductor device is provided. The semiconductor device comprises a first memory unit including a first memory area, and a first logic area electrically connected to the first memory area, the first logic area including a cache memory and a first interface port. The first memory unit executes data transmission and reception operations with a memory unit adjacent to the first memory unit via the first interface port and the cache memory.
Processor with memory array operable as either cache memory or neural network unit memory
A processor comprises a mode indicator, a plurality of processing cores, and a neural network unit (NNU) that includes a memory array, a plurality of neural processing units (NPUs), cache control logic, and selection logic that selectively couples the NPUs and the cache control logic to the memory array. When the mode indicator indicates a first mode, the selection logic enables the NPUs to read neural network weights from the memory array to perform computations using the weights. When the mode indicator indicates a second mode, the selection logic enables the processing cores to access the memory array through the cache control logic as a cache memory.
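The mode-dependent coupling can be expressed as a small behavioral model: the selection logic admits NPU weight reads only in the first mode and cache-style core accesses only in the second. This is a software analogy for hardware selection logic; the class, method names, and mode strings are invented for illustration.

```python
class NNUMemoryArray:
    """Toy model of a memory array shared between NPUs and core cache accesses."""

    def __init__(self, size: int):
        self.array = [0.0] * size
        self.mode = "cache"   # mode indicator: "nnu" (first mode) or "cache" (second)

    def npu_read_weights(self, start: int, count: int) -> list:
        if self.mode != "nnu":
            raise RuntimeError("NPUs are only coupled to the array in NNU mode")
        return self.array[start:start + count]   # weights read for NPU computation

    def core_cache_access(self, index: int) -> float:
        if self.mode != "cache":
            raise RuntimeError("cache control logic is only coupled in cache mode")
        return self.array[index]                 # cores reach the array as a cache
```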