G06F2212/281

Supporting fault information delivery

A processor implementing techniques to support fault information delivery is disclosed. In one embodiment, the processor includes a memory controller unit to access an enclave page cache (EPC) and a processor core coupled to the memory controller unit. The processor core is to detect a fault associated with accessing the EPC and to generate an error code associated with the fault. The error code reflects an EPC-related fault cause. The processor core is further to encode the error code into a data structure associated with the processor core. The data structure is for monitoring a hardware state related to the processor core.
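
A minimal sketch of the encoding step, assuming a hypothetical bit layout: the EPC-related fault cause is packed into a field of a per-core status structure. The type names, fault causes, and bit positions below are illustrative, not taken from the disclosure.

/* Sketch: encode an EPC-related fault cause into a per-core status
 * structure. epc_fault_t, core_state_t, and the bit layout are
 * hypothetical stand-ins for the patented data structure. */
#include <stdint.h>
#include <stdio.h>

typedef enum {                  /* hypothetical EPC fault causes */
    EPC_FAULT_NONE        = 0,
    EPC_FAULT_NOT_PRESENT = 1,
    EPC_FAULT_PERMISSION  = 2,
    EPC_FAULT_VERSION     = 3,
} epc_fault_t;

typedef struct {                /* hypothetical per-core hardware state */
    uint64_t status;            /* bits [7:0] hold the EPC fault code   */
} core_state_t;

static void encode_epc_fault(core_state_t *cs, epc_fault_t cause)
{
    cs->status = (cs->status & ~0xFFull) | (uint64_t)cause;
}

int main(void)
{
    core_state_t cs = { 0 };
    encode_epc_fault(&cs, EPC_FAULT_PERMISSION);
    printf("status = 0x%llx\n", (unsigned long long)cs.status);
    return 0;
}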

Mitigating busy time in a high performance cache

Various embodiments mitigate busy time in a hierarchical store-through memory cache structure including a cache directory associated with a memory cache. The cache directory is divided into a plurality of portions, each associated with a portion of the memory cache. A determination is made that a first subpipe of a shared cache pipeline comprises a non-store request. The shared cache pipeline is communicatively coupled to the plurality of portions of the cache directory. A store command is prevented from being placed in a second subpipe of the shared cache pipeline based on determining that the first subpipe comprises a non-store request. Cache lookup operations in the plurality of portions of the cache directory are supported simultaneously with cache write operations. Two or more store commands are simultaneously processed in the shared cache pipeline communicatively coupled to the plurality of portions of the cache directory.
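
A sketch of the arbitration rule under assumed types: a store is kept out of the second subpipe whenever the first subpipe holds a non-store request, while two stores may share the pipeline. The request encoding and two-subpipe model are illustrative only.

/* Sketch of the subpipe arbitration rule: block a store from the
 * second subpipe when the first subpipe carries a non-store request. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { REQ_NONE, REQ_STORE, REQ_LOOKUP } req_t;

typedef struct {
    req_t subpipe[2];          /* two subpipes of one shared pipeline */
} cache_pipe_t;

/* Returns true if a store may be placed in subpipe 1 this cycle. */
static bool may_issue_store(const cache_pipe_t *p)
{
    /* Block the store if subpipe 0 carries a non-store request. */
    return !(p->subpipe[0] != REQ_NONE && p->subpipe[0] != REQ_STORE);
}

int main(void)
{
    cache_pipe_t p = { { REQ_LOOKUP, REQ_NONE } };
    printf("store allowed: %s\n", may_issue_store(&p) ? "yes" : "no");
    p.subpipe[0] = REQ_STORE;  /* two stores may proceed together */
    printf("store allowed: %s\n", may_issue_store(&p) ? "yes" : "no");
    return 0;
}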

Dynamically-Adjusted Host Memory Buffer
20170293562 · 2017-10-12

A host memory buffer is dynamically adjusted based on performance. As memory pages are accessed, one or more counts of the memory pages are maintained. If the counts indicate that some of the memory pages are accessed repeatedly, then a portion of the host system memory allocated to the buffer cache may be reduced or decremented in response to the repetitive access. However, if the counts indicate that different memory pages are accessed, then the host system memory allocated to the buffer cache may be increased or incremented.
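
A sketch of the count-based adjustment, with invented thresholds and step sizes: per-page access counts shrink the buffer-cache allocation when accesses concentrate on the same pages and grow it when they spread across distinct pages.

/* Sketch: adjust the buffer-cache allocation from per-page access
 * counts. Page count, thresholds, and step size are illustrative. */
#include <stdio.h>

#define NPAGES 8

static unsigned count[NPAGES];
static unsigned buffer_pages = 4;     /* current buffer-cache allocation */

static void on_access(unsigned page)
{
    count[page % NPAGES]++;
}

static void adjust(void)
{
    unsigned distinct = 0;
    for (unsigned i = 0; i < NPAGES; i++)
        if (count[i]) distinct++;

    if (distinct <= 2 && buffer_pages > 1)
        buffer_pages--;               /* repetitive access: decrement */
    else if (distinct >= NPAGES / 2)
        buffer_pages++;               /* varied access: increment */
}

int main(void)
{
    on_access(3); on_access(3); on_access(3);   /* same page repeatedly */
    adjust();
    printf("buffer pages after repetitive access: %u\n", buffer_pages);
    return 0;
}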

Mechanical shock mitigation for data storage

A device adapted to capture vehicle data or surveillance data includes a disk and a Non-Volatile Solid-State Memory (NVSM). The vehicle or surveillance data is received in a buffer of the device for storage on the disk, and an input is received indicating a level of mechanical shock. It is determined whether the input indicates the level of mechanical shock exceeds a first threshold indicative of an impact. If the input indicates the level of mechanical shock exceeds the first threshold, the vehicle or surveillance data is stored in the NVSM from the buffer and a status is determined for storing data on the disk.
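
A sketch of the shock-handling path, with an assumed threshold value and stubbed storage routines: above the impact threshold, the buffer is flushed to NVSM instead of the disk.

/* Sketch: compare a shock-sensor input against an impact threshold;
 * above it, store the buffered data in NVSM rather than on the disk.
 * The threshold value and the I/O stubs are assumptions. */
#include <stdio.h>

#define IMPACT_THRESHOLD 50    /* hypothetical sensor units */

static void store_to_disk(const char *buf) { printf("disk <- %s\n", buf); }
static void store_to_nvsm(const char *buf) { printf("NVSM <- %s\n", buf); }

static void handle_buffer(const char *buf, int shock_level)
{
    if (shock_level > IMPACT_THRESHOLD) {
        store_to_nvsm(buf);    /* impact detected: bypass the disk */
        /* ...then probe the disk and record its status for storing */
    } else {
        store_to_disk(buf);    /* normal path */
    }
}

int main(void)
{
    handle_buffer("frame-0001", 12);   /* below threshold */
    handle_buffer("frame-0002", 80);   /* impact          */
    return 0;
}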

Read command processing for data storage system based on previous writes

A read command is received from a host requesting data from a portion of a first memory of a data storage system, and it is determined whether one or more sections of the first memory including the portion have previously been written to by the host. If it is determined that the one or more sections have not previously been written to by the host, predetermined data is sent to the host in response to the read command without reading the portion of the first memory. According to another aspect, the requested data from the read command is cached in a second memory of the data storage system based on whether the one or more sections of the first memory have previously been written to by the host.
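
A sketch of the written-section check, assuming a simple one-bit-per-section bitmap: reads to unwritten sections return predetermined data (zeros here) without touching the first memory. Section count and data values are invented.

/* Sketch: a bitmap marks sections the host has written; reads to
 * unwritten sections return predetermined data without a media read. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define NSECTIONS 64           /* sections tracked by the 64-bit bitmap */

static uint64_t written;       /* one bit per section */

static void on_write(unsigned section)
{
    written |= 1ull << section;
}

static void on_read(unsigned section, uint8_t *out, size_t len)
{
    if (written & (1ull << section)) {
        /* read the media (stubbed) and optionally cache the result */
        memset(out, 0xAB, len);
    } else {
        memset(out, 0, len);   /* predetermined data, no media read */
    }
}

int main(void)
{
    uint8_t buf[4];
    on_read(5, buf, sizeof buf);
    printf("unwritten section read -> %02x\n", buf[0]);
    on_write(5);
    on_read(5, buf, sizeof buf);
    printf("written section read   -> %02x\n", buf[0]);
    return 0;
}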

Optimized Read Cache For Persistent Cache On Solid State Devices

Systems and methods for a content addressable cache that is optimized for SSD use are disclosed. In some embodiments, the cache utilizes an identifier array where identification information is stored for each entry in the cache. However, the size of the bit field used for the identification information is not sufficient to uniquely identify the data stored at the associated entry in the cache. A smaller bit field increases the likelihood of a “false positive”, where the identification information indicates a cache hit when the actual data does not match the digest. A larger bit field decreases the probability of a “false positive”, at the expense of increased metadata memory space. Thus, the architecture allows for a compromise between metadata memory size and processing cycles.
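
A sketch of the truncated-identifier trade-off, using a toy hash and an assumed 8-bit field: a match on the truncated identifier is only a probable hit and must be verified against the full digest.

/* Sketch: each cache entry keeps only the low ID_BITS of a content
 * digest, so a lookup match may be a false positive. The hash and
 * sizes are stand-ins, not the patented design. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define ID_BITS 8              /* small field -> more false positives */

static uint32_t digest(const char *s)   /* toy content digest (FNV-1a) */
{
    uint32_t h = 2166136261u;
    while (*s) h = (h ^ (uint8_t)*s++) * 16777619u;
    return h;
}

static bool id_matches(uint32_t full, uint32_t stored_id)
{
    return (full & ((1u << ID_BITS) - 1)) == stored_id;
}

int main(void)
{
    uint32_t d = digest("block-A");
    uint32_t stored = d & ((1u << ID_BITS) - 1);  /* what the array keeps */
    /* A lookup for different content can still match the truncated id: */
    printf("possible hit for block-A: %d\n", id_matches(digest("block-A"), stored));
    printf("possible hit for block-B: %d\n", id_matches(digest("block-B"), stored));
    return 0;
}

With an 8-bit field, roughly 1 in 256 unrelated blocks would pass the truncated comparison, so each apparent hit still needs verification; widening the field lowers that rate at the cost of metadata memory, which is the compromise the abstract describes.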

Write reordering in a hybrid disk drive

A hybrid drive and associated methods increase the rate at which data are transferred to a nonvolatile storage medium in the hybrid drive. By using a large nonvolatile solid state memory device as cache memory for a magnetic disk drive, a very large number of write commands can be cached and subsequently reordered and executed in an efficient manner. In addition, strategic selection and reordering of only a portion of the write commands stored in the nonvolatile solid state memory device increases efficiency of the reordering process.
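
A sketch of reordering only a portion of the cached writes, with an invented window size and an LBA sort standing in for the drive's reordering policy: a window of commands is selected from the cache and sorted so the disk services them with less seeking.

/* Sketch: reorder the first WINDOW cached write commands by LBA
 * before committing them to the disk. WINDOW and the sort key are
 * illustrative choices, not the disclosed algorithm. */
#include <stdlib.h>
#include <stdio.h>

#define WINDOW 4

typedef struct { unsigned lba; } write_cmd_t;

static int by_lba(const void *a, const void *b)
{
    unsigned la = ((const write_cmd_t *)a)->lba;
    unsigned lb = ((const write_cmd_t *)b)->lba;
    return (la > lb) - (la < lb);
}

int main(void)
{
    write_cmd_t cached[] = { {900}, {12}, {407}, {3}, {555}, {88} };
    /* Reorder only the first WINDOW commands, not the whole cache. */
    qsort(cached, WINDOW, sizeof cached[0], by_lba);
    for (size_t i = 0; i < sizeof cached / sizeof cached[0]; i++)
        printf("%u ", cached[i].lba);
    printf("\n");
    return 0;
}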

PROVIDING SCALABLE DYNAMIC RANDOM ACCESS MEMORY (DRAM) CACHE MANAGEMENT USING DRAM CACHE INDICATOR CACHES

Providing scalable dynamic random access memory (DRAM) cache management using DRAM cache indicator caches is provided. In one aspect, a DRAM cache management circuit is provided to manage access to a DRAM cache in high-bandwidth memory. The DRAM cache management circuit comprises a DRAM cache indicator cache, which stores master table entries that are read from a master table in a system memory DRAM and that contain DRAM cache indicators. The DRAM cache indicators enable the DRAM cache management circuit to determine whether a memory line in the system memory DRAM is cached in the DRAM cache of high-bandwidth memory, and, if so, in which way of the DRAM cache the memory line is stored. Based on the DRAM cache indicator cache, the DRAM cache management circuit may determine whether to employ the DRAM cache and/or the system memory DRAM to perform a memory access operation in an optimal manner.
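
A sketch of a DRAM cache indicator cache lookup, with an assumed entry layout and table size: a small cache of master-table entries answers whether a memory line is in the DRAM cache and, if so, in which way.

/* Sketch: a direct-mapped indicator cache over master-table entries.
 * The entry layout, sizes, and indexing are assumptions. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define DCI_ENTRIES 16

typedef struct {
    uint64_t line_addr;   /* tag: which memory line        */
    bool     cached;      /* present in the DRAM cache?    */
    uint8_t  way;         /* which way, if cached          */
    bool     valid;
} dci_entry_t;

static dci_entry_t dci[DCI_ENTRIES];

/* Returns true and sets *way if the line is known to be cached. */
static bool dci_lookup(uint64_t line, uint8_t *way)
{
    dci_entry_t *e = &dci[line % DCI_ENTRIES];
    if (e->valid && e->line_addr == line && e->cached) {
        *way = e->way;
        return true;
    }
    return false;         /* miss: fall back to the master table in DRAM */
}

int main(void)
{
    dci[3] = (dci_entry_t){ .line_addr = 3, .cached = true, .way = 2, .valid = true };
    uint8_t way;
    if (dci_lookup(3, &way))
        printf("line 3 cached in way %u\n", way);
    return 0;
}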

Hybrid exclusive multi-level memory architecture with memory management

Hybrid multi-level memory architecture technologies are described. A System on Chip (SOC) includes multiple functional units and a multi-level memory controller (MLMC) coupled to the functional units. The MLMC is coupled to a hybrid multi-level memory architecture including a first-level dynamic random access memory (DRAM) (near memory) that is located on-package of the SOC and a second-level DRAM (far memory) that is located off-package of the SOC. The MLMC presents the first-level DRAM and the second-level DRAM as a contiguous addressable memory space and provides the first-level DRAM to software as additional memory capacity to a memory capacity of the second-level DRAM. The first-level DRAM does not store a copy of contents of the second-level DRAM.
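
A sketch of the exclusive, contiguous address presentation, with a made-up near-memory capacity: addresses below the near-memory size route to on-package DRAM and the remainder to off-package DRAM, so no line exists in both tiers.

/* Sketch: route a contiguous address space across two DRAM tiers with
 * no duplication. NEAR_SIZE is an illustrative capacity. */
#include <stdint.h>
#include <stdio.h>

#define NEAR_SIZE (1ull << 30)   /* 1 GiB near memory (illustrative) */

typedef enum { TIER_NEAR, TIER_FAR } tier_t;

static tier_t route(uint64_t addr, uint64_t *local)
{
    if (addr < NEAR_SIZE) {      /* near memory: extra capacity, not a copy */
        *local = addr;
        return TIER_NEAR;
    }
    *local = addr - NEAR_SIZE;   /* far memory starts after near memory */
    return TIER_FAR;
}

int main(void)
{
    uint64_t local;
    tier_t t = route(NEAR_SIZE + 4096, &local);
    printf("tier=%s local=0x%llx\n", t == TIER_NEAR ? "near" : "far",
           (unsigned long long)local);
    return 0;
}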

Caching systems and methods for hard disk drives and hybrid drives
09733841 · 2017-08-15

A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device. Data is stored on the first portion of the recording surface of the rotating storage device at a first density. The caching module is configured to cache data on a second portion of the recording surface of the rotating storage device at a second density. The second portion of the recording surface of the rotating storage device is separate from the first portion of the recording surface of the rotating storage device. The second density is less than the first density.
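
A sketch of the two-density layout, with illustrative region bounds and densities: host data is written to a high-density region of the recording surface, while cached data goes to a separate region written at a lower density.

/* Sketch: two disjoint regions of one recording surface, the second
 * (cache) region recorded at a lower density than the first. The
 * track ranges and bits-per-track figures are invented. */
#include <stdio.h>

typedef struct {
    const char *name;
    unsigned    start_track, end_track;
    unsigned    bits_per_track;       /* recording density */
} region_t;

static const region_t primary = { "primary", 0,    9000,  2000 };
static const region_t cache   = { "cache",   9000, 10000, 1200 }; /* lower */

static void write_data(const region_t *r, unsigned track, const char *what)
{
    printf("%s region, track %u (%u bits/track): %s\n",
           r->name, r->start_track + track, r->bits_per_track, what);
}

int main(void)
{
    write_data(&primary, 42, "host data at the first density");
    write_data(&cache,   7,  "cached data at the second, lower density");
    return 0;
}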