G06F2212/1041

APPLICATION PROGRAMMING INTERFACE TO DISASSOCIATE A VIRTUAL ADDRESS

Apparatuses, systems, and techniques to manage memory arrays. In at least one embodiment, an application programming interface (API) is performed to disassociate a virtual address indicated by the API from a corresponding physical address.
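
The abstract does not name the concrete API. As a hedged analogy in portable C, POSIX madvise(MADV_DONTNEED) produces the same observable effect: the physical pages behind a virtual range are released while the virtual addresses themselves remain valid and reserved.

```c
/* Minimal sketch, assuming POSIX madvise(MADV_DONTNEED) as a stand-in
 * for the patented API: it drops the physical backing of an anonymous
 * mapping while keeping the virtual range reserved. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 16 * 4096;
    /* Reserve virtual addresses; physical pages attach lazily on touch. */
    char *va = mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (va == MAP_FAILED) { perror("mmap"); return 1; }

    memset(va, 0xAB, len);  /* touch: physical pages are now associated */

    /* "Disassociate": the virtual range stays valid, but its physical
     * pages are released; the next touch faults in fresh zero pages. */
    if (madvise(va, len, MADV_DONTNEED) != 0) { perror("madvise"); return 1; }

    printf("first byte after disassociation: %#x\n", va[0]);  /* prints 0 */
    munmap(va, len);
    return 0;
}
```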

Direct map memory extension for storage class memory

An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to receive a first request to allocate, on a persistent storage medium, a direct swap file associated with an application stored in a system memory, and to map a linear and contiguous space of the persistent storage medium to the direct swap file associated with the application in response to the first request. Other embodiments are disclosed and claimed.
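
A hedged sketch of the idea follows: reserve a contiguous file extent up front and map it into the application's address space, so loads and stores reach the persistent medium directly rather than going through a conventional swap device. The file path and size are illustrative, not from the patent; on a real system the file would live on the persistent medium's mount point.

```c
/* Hedged sketch of a "direct swap file": reserve a linear, contiguous
 * extent and map it into the application's address space. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const char *path = "./app.swapfile";  /* stand-in for a PM-backed path */
    size_t len = 64UL << 20;              /* 64 MiB extent (illustrative)  */

    int fd = open(path, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("open"); return 1; }

    /* Ask the filesystem for the whole extent up front so backing space
     * is allocated (ideally contiguously) before any access. */
    int rc = posix_fallocate(fd, 0, (off_t)len);
    if (rc != 0) { fprintf(stderr, "posix_fallocate: %s\n", strerror(rc)); return 1; }

    /* Map the extent; the application's memory is now directly backed
     * by the allocated file space. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    ((char *)p)[0] = 1;   /* demand-fault the first page */
    munmap(p, len);
    close(fd);
    return 0;
}
```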

DATA RECOVERY BASED ON PARITY DATA IN A MEMORY SUB-SYSTEM
20230214298 · 2023-07-06

An error associated with host data written to a page of a storage area of a memory sub-system is detected. A determination is made that parity data corresponding to the host data is stored in a cache memory of the memory sub-system. A data recovery operation is performed based on the parity data stored in the cache memory.
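
The abstract does not fix the parity scheme; a common choice is XOR parity over a stripe of pages, sketched below. The parity page held in cache memory is folded with the surviving pages of the stripe to rebuild the failed page. Page and stripe sizes are toy values.

```c
/* Minimal sketch of parity-based recovery, assuming simple XOR parity
 * over a stripe (the patent does not name the exact scheme). */
#include <stdio.h>
#include <string.h>

#define PAGE   8   /* toy page size in bytes   */
#define STRIPE 4   /* data pages per parity page */

static void rebuild(unsigned char pages[STRIPE][PAGE],
                    const unsigned char parity[PAGE], int lost) {
    memcpy(pages[lost], parity, PAGE);          /* start from cached parity */
    for (int i = 0; i < STRIPE; i++)
        if (i != lost)
            for (int j = 0; j < PAGE; j++)
                pages[lost][j] ^= pages[i][j];  /* fold in surviving pages */
}

int main(void) {
    unsigned char pages[STRIPE][PAGE], parity[PAGE] = {0};
    for (int i = 0; i < STRIPE; i++)
        for (int j = 0; j < PAGE; j++) {
            pages[i][j] = (unsigned char)(i * 17 + j);
            parity[j] ^= pages[i][j];           /* parity kept in cache */
        }

    unsigned char saved = pages[2][3];
    memset(pages[2], 0xFF, PAGE);               /* simulate a page error */
    rebuild(pages, parity, 2);
    printf("recovered byte %#x (expected %#x)\n", pages[2][3], saved);
    return 0;
}
```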

Method, apparatus, and system for memory bandwidth aware data prefetching

An apparatus, method, and system for memory-bandwidth-aware data prefetching are presented. The method may comprise monitoring the number of request responses received in an interval at the current prefetch request generation rate, comparing that number to at least a first threshold, and adjusting the current prefetch request generation rate to an updated rate selected from a plurality of prefetch request generation rates, based on the comparison. The request responses may be NACK or RETRY responses. The method may further comprise either retaining the current prefetch request generation rate or selecting a maximum prefetch request generation rate as the updated rate in response to an indication that prefetching is accurate.
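
A hedged sketch of the rate-selection step follows: per-interval NACK/RETRY counts are compared against thresholds and the generation rate is stepped through a fixed table, jumping to the maximum when prefetching is signaled as accurate. The threshold values and the rate table are illustrative, not from the patent.

```c
/* Sketch of bandwidth-aware prefetch throttling via a rate table. */
#include <stdio.h>

static const int rates[] = {1, 2, 4, 8, 16};   /* requests per interval */
#define NRATES (int)(sizeof rates / sizeof rates[0])

/* Returns the index of the updated rate given the current index, the
 * NACK/RETRY count for the interval, and an accuracy indication. */
static int update_rate(int cur, int nacks, int accurate) {
    if (accurate)
        return NRATES - 1;              /* accurate: select maximum rate */
    if (nacks > 8 && cur > 0)           /* heavy back-pressure: throttle */
        return cur - 1;
    if (nacks < 2 && cur < NRATES - 1)  /* little back-pressure: ramp up */
        return cur + 1;
    return cur;                         /* otherwise retain current rate */
}

int main(void) {
    int idx = 2;
    int samples[][2] = {{12, 0}, {1, 0}, {0, 1}};  /* {nacks, accurate} */
    for (int i = 0; i < 3; i++) {
        idx = update_rate(idx, samples[i][0], samples[i][1]);
        printf("interval %d -> rate %d\n", i, rates[idx]);
    }
    return 0;
}
```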

Technologies for assigning workloads to balance multiple resource allocation objectives

Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads that increases achievement of at least one of the resource allocation objectives without decreasing achievement of another, and to apply the adjustment to the assignment of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
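
The adjustment criterion amounts to accepting only Pareto-improving moves: a reassignment must help at least one objective without hurting any other. The sketch below illustrates this with two stand-in objectives (peak node load and active node count, a crude proxy for power); the objectives, demands, and node counts are illustrative, not from the patent.

```c
/* Hedged sketch: try single-workload moves, accept only Pareto-improving ones. */
#include <stdio.h>

#define NODES 3
#define WORK  5

static const int demand[WORK] = {4, 3, 2, 2, 1};

/* Objective 1: peak node load (lower is better). */
static int peak_load(const int asg[WORK]) {
    int load[NODES] = {0}, peak = 0;
    for (int w = 0; w < WORK; w++) load[asg[w]] += demand[w];
    for (int n = 0; n < NODES; n++) if (load[n] > peak) peak = load[n];
    return peak;
}

/* Objective 2: active node count, a stand-in for power (lower is better). */
static int active_nodes(const int asg[WORK]) {
    int load[NODES] = {0}, active = 0;
    for (int w = 0; w < WORK; w++) load[asg[w]] += demand[w];
    for (int n = 0; n < NODES; n++) if (load[n] > 0) active++;
    return active;
}

int main(void) {
    int asg[WORK] = {0, 0, 0, 0, 1};   /* initial assignment */
    int o1 = peak_load(asg), o2 = active_nodes(asg);

    for (int w = 0; w < WORK; w++)
        for (int n = 0; n < NODES; n++) {
            int old = asg[w];
            if (n == old) continue;
            asg[w] = n;
            int n1 = peak_load(asg), n2 = active_nodes(asg);
            if ((n1 < o1 || n2 < o2) && n1 <= o1 && n2 <= o2) {
                printf("move workload %d: node %d -> %d (peak %d -> %d)\n",
                       w, old, n, o1, n1);
                o1 = n1; o2 = n2;      /* accept Pareto-improving move */
            } else {
                asg[w] = old;          /* reject: revert the move */
            }
        }
    return 0;
}
```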

Victim cache that supports draining write-miss entries

A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes a set of cache lines, line type bits configured to store an indication that a corresponding cache line of the set of cache lines is configured to store write-miss data, and an eviction controller configured to flush stored write-miss data based on the line type bits.
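
A minimal sketch of the second sub-cache follows: each line carries a type bit marking it as write-miss data, and the eviction controller drains exactly those lines. The line count, fields, and the stand-in writeback are illustrative.

```c
/* Sketch of a victim-cache sub-cache with line type bits and a
 * drain path for write-miss entries. */
#include <stdint.h>
#include <stdio.h>

#define LINES 8

struct line {
    uint32_t addr;
    uint32_t data;
    unsigned valid      : 1;
    unsigned write_miss : 1;   /* the "line type bit" */
};

static struct line cache[LINES];

static void write_miss_allocate(int i, uint32_t addr, uint32_t data) {
    cache[i] = (struct line){ addr, data, 1, 1 };
}

/* Eviction controller: flush only lines whose type bit says write-miss. */
static void drain_write_misses(void) {
    for (int i = 0; i < LINES; i++)
        if (cache[i].valid && cache[i].write_miss) {
            printf("flush line %d -> addr %#x data %#x\n",
                   i, cache[i].addr, cache[i].data);  /* stand-in writeback */
            cache[i].valid = 0;
        }
}

int main(void) {
    cache[0] = (struct line){ 0x100, 11, 1, 0 };  /* ordinary victim line */
    write_miss_allocate(1, 0x200, 22);
    write_miss_allocate(2, 0x300, 33);
    drain_write_misses();                         /* flushes lines 1 and 2 */
    return 0;
}
```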

System and method for memory management

Embodiments of the disclosure provide methods and systems for memory management. The method can include: receiving a request to allocate target node data to a memory space, wherein the memory space includes a buffer and an external memory, and the target node data comprises property data and structural data and represents a target node of a graph having a plurality of nodes and edges; determining a node degree associated with the target node data; and allocating the target node data to the memory space based on the determined node degree.
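
The placement decision reduces to a tiering rule on node degree: data for high-degree nodes, which neighbors reach often, goes to the fast buffer, and the rest to external memory. A minimal sketch follows; the degree threshold is an illustrative assumption.

```c
/* Sketch of degree-based placement of graph node data across tiers. */
#include <stdio.h>

#define DEGREE_THRESHOLD 3   /* illustrative cutoff, not from the patent */

enum tier { BUFFER, EXTERNAL };

/* Decide the tier for a target node's property/structural data. */
static enum tier place(int node_degree) {
    return node_degree >= DEGREE_THRESHOLD ? BUFFER : EXTERNAL;
}

int main(void) {
    int degrees[] = {1, 5, 2, 9, 3};   /* degrees of five target nodes */
    for (int i = 0; i < 5; i++)
        printf("node %d (degree %d) -> %s\n", i, degrees[i],
               place(degrees[i]) == BUFFER ? "buffer" : "external memory");
    return 0;
}
```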

WRITE-BEHIND OPTIMIZATION OF COVERING CACHE

Database performance is improved using write-behind optimization of a covering cache. A non-volatile memory data cache includes a full copy of the stored data file(s). Data cache and storage writes, checkpoints, and recovery may be decoupled (e.g., with separate writes, checkpoints, and recoveries). A covering data cache improves performance by supporting database operation during storage delays or outages and/or by reducing I/O operations through aggregate writes of contiguous data pages (e.g., clean and dirty pages) to the stored data file(s). Aggregate writes reduce data file fragmentation and reduce the cost of snapshots. Performing write-behind operations in a background process with optimistic concurrency control may improve database performance, for example, by not interfering with write operations to the data cache. The data cache may store (e.g., in metadata) data cache checkpoint information and storage checkpoint information. A stored data file may store storage checkpoint information (e.g., in a file header).
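
The aggregate-write idea can be sketched as follows: contiguous runs of cached pages that contain at least one dirty page are written to the data file as a single I/O, clean neighbors included, which keeps the file unfragmented. The page states and the "write" below are illustrative stand-ins.

```c
/* Hedged sketch of write-behind aggregate writes over contiguous pages. */
#include <stdio.h>

#define PAGES 12

/* 1 = page dirty in the covering cache, 0 = clean, -1 = not cached. */
static int state[PAGES] = {1, 0, 1, 1, -1, -1, 0, 0, 1, -1, 1, 1};

int main(void) {
    int i = 0;
    while (i < PAGES) {
        if (state[i] < 0) { i++; continue; }
        int start = i, dirty = 0;
        while (i < PAGES && state[i] >= 0) {   /* extend contiguous run */
            dirty |= state[i];
            i++;
        }
        if (dirty)                             /* one aggregate write   */
            printf("write-behind: pages %d..%d in a single I/O\n",
                   start, i - 1);
    }
    return 0;
}
```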

Managing high performance storage systems with hybrid storage technologies

There is provided a method for managing a solid state storage system with hybrid storage technologies. The method includes monitoring one or more storage request streams to identify operating mode characteristics therein from among a set of possible operating mode characteristics. The set of possible operating mode characteristics correspond to a set of available operating modes of the hybrid storage technologies. The method further includes identifying a current operating mode from among the set of available operating modes responsive to the identified operating mode characteristics. The method also includes predicting a likely future operating mode responsive to variations in workload requirements to generate at least one future operating mode prediction. The method additionally includes controlling at least one of data placement, wear leveling, and garbage collection, responsive to the at least one future operating mode prediction.
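
A hedged sketch of the identify/predict/control loop follows: each interval's request stream is classified (sequential vs. random here), a small saturating predictor guesses the next mode with some hysteresis, and the prediction selects a placement/GC policy. The modes, predictor, and policies are illustrative stand-ins for whatever the patented method actually uses.

```c
/* Sketch: classify operating mode, predict the next one, pick a policy. */
#include <stdio.h>

enum mode { SEQUENTIAL, RANDOM };

/* Classify an interval from the fraction of sequential requests. */
static enum mode classify(int seq_requests, int total) {
    return (2 * seq_requests >= total) ? SEQUENTIAL : RANDOM;
}

/* 2-bit saturating predictor: 0..1 predicts RANDOM, 2..3 SEQUENTIAL. */
static int bias = 2;

static enum mode observe_and_predict(enum mode cur) {
    if (cur == SEQUENTIAL && bias < 3) bias++;
    else if (cur == RANDOM && bias > 0) bias--;
    return bias >= 2 ? SEQUENTIAL : RANDOM;
}

int main(void) {
    /* per-interval telemetry: {sequential requests, total requests} */
    int telemetry[][2] = {{9, 10}, {8, 10}, {2, 10}, {1, 10}, {1, 10}};
    for (int i = 0; i < 5; i++) {
        enum mode cur = classify(telemetry[i][0], telemetry[i][1]);
        enum mode nxt = observe_and_predict(cur);
        printf("interval %d: observed=%s predicted-next=%s policy=%s\n", i,
               cur == SEQUENTIAL ? "sequential" : "random",
               nxt == SEQUENTIAL ? "sequential" : "random",
               nxt == SEQUENTIAL ? "dense placement / lazy GC"
                                 : "log placement / eager wear leveling");
    }
    return 0;
}
```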

Reordering a descriptor queue while searching the queue of descriptors corresponding to map segments
11537515 · 2022-12-27

Provided herein may be a memory controller configured to control a memory device. The memory controller may include a map buffer, a descriptor queue, and a descriptor controller. The map buffer may sequentially store map segments from among a plurality of map segments stored in the memory device. The descriptor queue may store descriptors corresponding to the respective map segments, based on a plurality of addresses of the map buffer. The descriptor controller may search for a target descriptor among the stored descriptors based on a logical address received from a host, and reorder the stored descriptors while searching for the target descriptor.
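
One natural reading of "reorder while searching" is a move-to-front policy: the descriptor that matches the requested logical address is pulled to the head of the queue so later searches for hot map segments terminate sooner. The sketch below assumes that policy; the patent's exact reordering rule, segment size, and fields are illustrative assumptions.

```c
/* Hedged sketch: search the descriptor queue for the segment covering
 * a logical address, reordering (move-to-front) as it searches. */
#include <stdio.h>

#define SEG_SIZE 100   /* logical addresses per map segment (toy value) */
#define NDESC    4

struct descriptor {
    int first_lba;     /* first logical address of the map segment */
    int buf_addr;      /* map-buffer address holding the segment   */
};

static struct descriptor q[NDESC] = {
    {0, 0xA0}, {100, 0xB0}, {200, 0xC0}, {300, 0xD0}
};

/* Returns the map-buffer address for lba, reordering as it searches. */
static int search(int lba) {
    for (int i = 0; i < NDESC; i++)
        if (lba >= q[i].first_lba && lba < q[i].first_lba + SEG_SIZE) {
            struct descriptor hit = q[i];
            for (int j = i; j > 0; j--)   /* shift to make room at head */
                q[j] = q[j - 1];
            q[0] = hit;                   /* move the target descriptor up */
            return q[0].buf_addr;
        }
    return -1;                            /* segment not in the map buffer */
}

int main(void) {
    printf("lba 250 -> buffer %#x\n", search(250));   /* scans to slot 2 */
    printf("head is now segment starting at %d\n", q[0].first_lba);
    printf("lba 250 -> buffer %#x (found at head)\n", search(250));
    return 0;
}
```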