STORAGE SYSTEM AND METHOD FOR ACCESSING SAME
20230049799 · 2023-02-16

A data access system including a processor and a storage system with a main memory and a cache module. The cache module includes a final level cache (FLC) controller and a cache configured as an FLC that is accessed prior to the main memory. The processor is coupled to levels of cache separate from the FLC. In response to data required by the processor not being in those levels of cache, the processor generates a physical address corresponding to a physical location in the storage system. The FLC controller generates a virtual address based on the physical address; the virtual address corresponds to a physical location within the FLC or the main memory. In response to the virtual address not corresponding to a physical location within the FLC, the cache module causes the data required by the processor to be retrieved from the main memory.
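A minimal Python sketch of the lookup path the abstract describes: translate the CPU's physical address through the FLC controller's map, serve from the FLC on a hit, and fall back to main memory on a miss. All class, field, and allocation details are illustrative assumptions, not from the patent.

```python
class FLCController:
    """Illustrative final-level-cache controller (names are assumptions)."""

    def __init__(self):
        self.addr_map = {}     # physical address -> FLC slot ("virtual address")
        self.flc = {}          # FLC storage: slot -> data
        self.main_memory = {}  # backing main memory

    def read(self, phys_addr):
        """Serve a read from the FLC if mapped, else retrieve from main memory."""
        slot = self.addr_map.get(phys_addr)
        if slot is not None and slot in self.flc:
            return self.flc[slot]                 # FLC hit
        data = self.main_memory.get(phys_addr)    # FLC miss: go to main memory
        slot = len(self.flc)                      # naive slot allocation, for illustration
        self.addr_map[phys_addr] = slot
        self.flc[slot] = data                     # fill the FLC for future hits
        return data
```

The sketch omits eviction and write-back entirely; it only shows the hit/miss decision and the fill on a miss.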

Mapping table management method, memory control circuit unit and memory storage device
11573908 · 2023-02-07

A mapping table management method, a memory control circuit unit, and a memory storage device are provided. The method includes: receiving a read command from a host system, wherein the read command indicates reading first data stored in at least one first logical address; and searching whether relation management information reflects that a first group static mapping table recording the first logical address is related to a dynamic mapping table. In response to a search result reflecting that the first group static mapping table is related to the dynamic mapping table, the dynamic mapping table is searched to obtain a first physical address mapped to the first logical address. If not related, the first group static mapping table among the group static mapping tables is searched to obtain a second physical address mapped to the first logical address.
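A small Python sketch of the two-step lookup described above: check the relation management information first, then search either the dynamic table or the per-group static table. All parameter names and the set-based encoding of the relation information are illustrative assumptions.

```python
def lookup_physical(logical_addr, relation_info, dynamic_table, static_tables, group_of):
    """Resolve a logical address to a physical address (names are illustrative).

    relation_info: set of group ids whose static table is related to the dynamic table.
    dynamic_table: logical address -> physical address, for recently updated mappings.
    static_tables: group id -> {logical address -> physical address}.
    group_of:      function mapping a logical address to its group id.
    """
    group = group_of(logical_addr)
    if group in relation_info and logical_addr in dynamic_table:
        return dynamic_table[logical_addr]       # the "first physical address"
    return static_tables[group][logical_addr]    # the "second physical address"
```

The point of the relation check is to skip the dynamic-table search entirely when a group's mappings are known not to have been updated.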

HASH INDEX

Example implementations disclosed herein can be used to build, maintain, and use a hash table distributed across multiple nodes in a multi-node computing system. The hash table can include data pages associated by corresponding pointers according to a tree data structure. The data pages include leaf data pages. Each leaf data page can be associated with a corresponding hash value and include a tag bitmap. When a transaction associated with a key is executed, a hash value and a tag value are generated based on the key. The leaf data pages can be searched using the hash value. A probability that a leaf data page includes the key can be determined based on a comparison of the tag value with the tag bitmap.
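A Python sketch of the hash-and-tag scheme described above: one portion of a key digest locates the leaf page, another selects a bit in the page's tag bitmap, and a clear bit proves the key is absent (a set bit only makes presence probable). The digest split and the bitmap width are assumptions for illustration.

```python
import hashlib

TAG_BITS = 64  # illustrative tag bitmap width per leaf page

def hash_and_tag(key: bytes):
    """Derive a page-locating hash value and a one-bit tag from the key."""
    digest = hashlib.sha256(key).digest()
    h = int.from_bytes(digest[:8], "big")   # hash value: used to find the leaf page
    tag = 1 << (digest[8] % TAG_BITS)       # tag value: a single bit position
    return h, tag

def may_contain(tag_bitmap: int, tag: int) -> bool:
    """Probabilistic membership: a clear bit means the key is definitely absent."""
    return (tag_bitmap & tag) != 0
```

On insert, the leaf page ORs the key's tag into its bitmap; on lookup, `may_contain` lets the search skip pages that cannot hold the key, the same filtering idea as a Bloom filter with one hash function.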

Multi-level wear leveling for non-volatile memory
11704024 · 2023-07-18

A memory sub-system performs a first media management operation among a plurality of individual data units of a memory device after a first interval, the first media management operation comprising a first algebraic mapping function. The memory sub-system also performs a second media management operation among a first plurality of groups of data units of the memory device after a second interval, the second media management operation comprising a second algebraic mapping function, wherein a first group of the first plurality of groups comprises the plurality of individual data units.
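A Python sketch of one plausible reading of the two-level scheme: each "algebraic mapping function" is taken here to be a simple rotation (`(index + offset) mod N`), applied at the unit level inside a group and at the group level across groups. The rotation form is an assumption; the patent does not specify the functions.

```python
def intra_group_map(unit_index, rotation, units_per_group):
    """First algebraic mapping (assumed): rotate units within a group."""
    return (unit_index + rotation) % units_per_group

def inter_group_map(group_index, rotation, num_groups):
    """Second algebraic mapping (assumed): rotate whole groups of units."""
    return (group_index + rotation) % num_groups

def physical_unit(logical_unit, r1, r2, units_per_group, num_groups):
    """Compose both levels: remap the group, then the unit inside it."""
    g, u = divmod(logical_unit, units_per_group)
    return (inter_group_map(g, r2, num_groups) * units_per_group
            + intra_group_map(u, r1, units_per_group))
```

Because each level is a permutation, the composition is a bijection over all units, so every logical unit always lands on exactly one physical unit regardless of the two rotation counters.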

Coalescing read commands by location from a host queue

Method and apparatus for managing data in a storage device, such as a solid-state drive (SSD). A non-volatile memory (NVM) is arranged into multiple garbage collection units (GCUs) each separately erasable and allocatable as a unit. Read circuitry applies read voltages to memory cells in the GCUs to sense a programmed state of the memory cells. Calibration circuitry groups different memory cells from different GCUs into calibration groups that share a selected set of read voltages. A read command queue accumulates pending read commands to transfer data from the NVM to a local read buffer. Read command coalescing circuitry coalesces selected read commands from the queue into a combined command for execution as a single batch command. The combined batch command may include read voltages for use in retrieval of the requested data.
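A Python sketch of the coalescing step described above: pending reads are bucketed by location (here, by calibration group), and each bucket becomes one batch command carrying the group's shared read voltages. The queue-entry shape and helper names are illustrative assumptions.

```python
def coalesce_reads(queue, group_of, read_voltages_for):
    """Coalesce pending read commands that share a calibration group.

    queue:             iterable of pending read commands.
    group_of:          maps a command to its calibration group id.
    read_voltages_for: maps a group id to that group's shared read voltages.
    """
    batches = {}
    for cmd in queue:
        batches.setdefault(group_of(cmd), []).append(cmd)
    # Each batch is issued as a single combined command with its voltages attached.
    return [
        {"commands": cmds, "read_voltages": read_voltages_for(group)}
        for group, cmds in batches.items()
    ]
```

Batching by calibration group means the read voltages are looked up once per batch rather than once per command, which is the benefit the abstract's combined command provides.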

Technologies for assigning workloads to balance multiple resource allocation objectives

Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustment to the assignment of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
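The acceptance rule above (improve at least one objective, worsen none) is a Pareto-improvement test. A greedy Python sketch, with all names and the tuple-of-scores representation assumed for illustration:

```python
def pareto_improving_move(assignment, candidates, score):
    """Return the first candidate reassignment that Pareto-dominates the
    current one, else keep the current assignment.

    score: maps an assignment to a tuple of per-objective scores (higher is better).
    """
    base = score(assignment)
    for cand in candidates:
        s = score(cand)
        improves_one = any(a > b for a, b in zip(s, base))
        worsens_none = all(a >= b for a, b in zip(s, base))
        if improves_one and worsens_none:
            return cand       # adjustment increases one objective, decreases none
    return assignment         # no acceptable adjustment found
```

In the abstract's terms, `score` would be computed from the telemetry data against the resource allocation objective data, and the accepted candidate is the adjustment applied while the workloads run.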

STORAGE DEVICE AND OPERATING METHOD THEREOF
20220391318 · 2022-12-08

Disclosed is a method of operating a storage device which includes a non-volatile memory device. The method includes informing a host that a designation functionality for designating a data criticality and a priority for namespaces of the non-volatile memory device is available, enabling the designation functionality in response to receiving an approval of the designation functionality, receiving, from the host, a first request for designating a data criticality and a priority for a first namespace of the namespaces, and generating a namespace mapping table in response to the first request.

SYSTEMS, METHODS, AND APPARATUS FOR PAGE MIGRATION IN MEMORY SYSTEMS
20220382478 · 2022-12-01

A method for managing a memory system may include monitoring a page of a first memory of a first type, determining a usage of the page based on the monitoring, and migrating the page to a second memory of a second type based on the usage of the page. Monitoring the page may include monitoring a mapping of the page. Monitoring the mapping of the page may include monitoring a mapping of the page from a logical address to a physical address. Determining the usage of the page may include determining an update frequency of the page. Determining the usage of the page may include comparing the update frequency of the page to a threshold. Migrating the page may include sending an interrupt to a device driver. Migrating the page may include setting a write protection status for the page.
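A Python sketch of the decision loop described above: compare a page's mapping-update count to a threshold, and on migration set write protection around the move. The callback-based shape and all names are illustrative assumptions.

```python
def maybe_migrate(page, update_counts, threshold, migrate, set_write_protect):
    """Migrate a page between memory tiers based on its update frequency.

    update_counts:     page -> number of logical-to-physical mapping updates observed.
    migrate:           callback performing the move (e.g. via a driver interrupt).
    set_write_protect: callback toggling the page's write protection status.
    """
    if update_counts.get(page, 0) >= threshold:
        set_write_protect(page, True)   # freeze writes while the page moves
        migrate(page)
        set_write_protect(page, False)
        return True
    return False
```

Counting mapping updates is a cheap proxy for write frequency: a frequently remapped page is write-hot and is the one worth moving to the second memory type.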

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.

SYSTEM AND METHOD TO EXTEND NVME QUEUES TO USER SPACE
20230054866 · 2023-02-23

An embodiment includes a system comprising a processor configured to read a stride parameter from a device coupled to the processor and to map registers associated with the device into virtual memory based on the stride parameter. The stride parameter indicates a stride between the registers associated with the device, and the processor is configured to map at least one of the registers to user space virtual memory in response to the stride parameter.
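For context, NVMe itself exposes such a stride: the doorbell registers are spaced by `4 << CAP.DSTRD` bytes starting at controller offset 0x1000, per the NVMe specification. A Python sketch of that offset arithmetic (the function name is ours; the formula follows the spec):

```python
def doorbell_offset(queue_id, is_completion, dstrd, base=0x1000):
    """Byte offset of an NVMe submission/completion doorbell register.

    dstrd: the CAP.DSTRD field; the stride between doorbells is 4 << dstrd bytes.
    Submission doorbell for queue y sits at slot 2*y, completion at 2*y + 1.
    """
    stride = 4 << dstrd
    return base + (2 * queue_id + (1 if is_completion else 0)) * stride
```

A nonzero stride can space each queue's doorbells onto separate pages, which is what makes it possible to map an individual queue's registers into one user-space process without exposing the others.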