
PROBE FILTER RETENTION BASED LOW POWER STATE
20230039289 · 2023-02-09

A data fabric routes requests between a plurality of requestors and a plurality of responders. The data fabric includes a crossbar router, a coherent slave controller coupled to the crossbar router, and a probe filter coupled to the coherent slave controller and tracking the state of cached lines of memory. Power state control circuitry operates, responsive to detecting any of a plurality of designated conditions, to cause the probe filter to enter a retention low power state in which a clock signal to the probe filter is gated while power is maintained to the probe filter. Entering the retention low power state is performed when all in-process probe filter lookups are complete.
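The retention behavior above can be sketched as a small model: the clock is gated only once in-process lookups drain, and because power is retained the tracked coherence state survives. All names below (`ProbeFilter`, `try_enter_retention`, etc.) are illustrative, not taken from the patent.

```python
class ProbeFilter:
    """Toy model of a probe filter supporting a retention low power state.

    Clock gating is modeled as a boolean flag; power is never removed,
    so the tracked cache-line state is preserved across retention.
    """

    def __init__(self):
        self.clock_enabled = True
        self.powered = True
        self.in_process_lookups = 0
        self.tracked_lines = {}  # cache-line address -> coherence state

    def begin_lookup(self, addr):
        if not self.clock_enabled:
            raise RuntimeError("probe filter is in retention; wake it first")
        self.in_process_lookups += 1
        return self.tracked_lines.get(addr)

    def end_lookup(self):
        self.in_process_lookups -= 1

    def try_enter_retention(self, condition_detected):
        """Gate the clock only when a designated condition holds and all
        in-process lookups are complete; power stays on regardless."""
        if condition_detected and self.in_process_lookups == 0:
            self.clock_enabled = False
        return not self.clock_enabled

    def wake(self):
        self.clock_enabled = True
```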

DATA PROCESSING SYSTEM HAVING MASTERS THAT ADAPT TO AGENTS WITH DIFFERING RETRY BEHAVIORS

A data processing system includes a plurality of snoopers, a processing unit including a master, and a system fabric communicatively coupling the master and the plurality of snoopers. The master sets a retry operating mode for an interconnect operation in one of alternative first and second operating modes. The first operating mode is associated with a first type of snooper, and the second operating mode is associated with a different second type of snooper. The master issues a memory access request of the interconnect operation on the system fabric of the data processing system. Based on receipt of a combined response representing a systemwide coherence response to the request, the master delays an interval having a duration dependent on the retry operating mode and thereafter reissues the memory access request on the system fabric.
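A minimal sketch of the mode-dependent delay: the master picks a reissue interval keyed by the configured retry operating mode. The mode names and interval values here are assumptions for illustration only.

```python
# Illustrative backoff intervals per retry operating mode (e.g. in cycles);
# the actual durations and mode names are not specified by the abstract.
RETRY_INTERVALS = {"mode_a": 10, "mode_b": 100}

def reissue_delay(operating_mode, combined_response):
    """Return the delay before reissuing a memory access request, based on
    the systemwide combined response and the configured retry mode."""
    if combined_response != "retry":
        return 0  # operation succeeded; no reissue, no delay
    return RETRY_INTERVALS[operating_mode]
```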

SPECULATIVE DELIVERY OF DATA FROM A LOWER LEVEL OF A MEMORY HIERARCHY IN A DATA PROCESSING SYSTEM

A multiprocessor data processing system includes multiple vertical cache hierarchies supporting a plurality of processor cores, a system memory, and an interconnect fabric coupled to the system memory and the multiple vertical cache hierarchies. Based on a request of a requesting processor core among the plurality of processor cores, a master in the multiprocessor data processing system issues, via the interconnect fabric, a read-type memory access request. The master receives via the interconnect fabric at least one beat of conditional data issued speculatively on the interconnect fabric by a controller of the system memory prior to receipt by the controller of a systemwide coherence response for the read-type memory access request. The master forwards the at least one beat of conditional data to the requesting processor core.
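The flow above can be reduced to a simple sketch: data beats arrive speculatively before the coherence response, are forwarded conditionally, and are kept or discarded once the combined response arrives. This is a deliberate simplification (a failed response discards everything), not the patent's actual protocol.

```python
def deliver_read(beats, combined_response_ok):
    """Model of speculative delivery: the memory controller issues data
    beats before the systemwide coherence response is known; the master
    forwards them to the requesting core as conditional data, then keeps
    or discards them based on the eventual combined response."""
    forwarded = list(beats)   # forwarded speculatively, beat by beat
    if combined_response_ok:
        return forwarded      # conditional data confirmed good
    return []                 # conditional data discarded; request retried
```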

DETECTION OF MEMORY ACCESSES
20230044342 · 2023-02-09

Examples described herein relate to dynamically adjusting the manner of identifying hot pages in a remote memory pool based on adjustment of parameters of a data structure. In some examples, the parameters of the data structure include a range of access counts and a number of pages associated with the range.
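One way to picture the adjustable data structure is a per-page access counter bucketed by access-count ranges, where both the range boundaries and the hot threshold can be re-parameterized at runtime. All names and the bucketing scheme below are illustrative assumptions.

```python
class AccessHistogram:
    """Toy structure for spotting hot pages in a remote memory pool.

    bucket_bounds are ascending access-count boundaries; histogram()
    reports the number of pages falling in each range, mirroring the
    abstract's 'range of access counts' and 'number of pages associated
    with the range'. Parameters can be adjusted dynamically.
    """

    def __init__(self, bucket_bounds, hot_threshold):
        self.bucket_bounds = bucket_bounds
        self.hot_threshold = hot_threshold
        self.counts = {}  # page -> access count

    def record_access(self, page):
        self.counts[page] = self.counts.get(page, 0) + 1

    def adjust(self, bucket_bounds=None, hot_threshold=None):
        """Dynamically re-parameterize how hot pages are identified."""
        if bucket_bounds is not None:
            self.bucket_bounds = bucket_bounds
        if hot_threshold is not None:
            self.hot_threshold = hot_threshold

    def histogram(self):
        """Number of pages per access-count range."""
        hist = [0] * (len(self.bucket_bounds) + 1)
        for c in self.counts.values():
            i = 0
            while i < len(self.bucket_bounds) and c > self.bucket_bounds[i]:
                i += 1
            hist[i] += 1
        return hist

    def hot_pages(self):
        return [p for p, c in self.counts.items() if c >= self.hot_threshold]
```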

Maintaining an active track data structure to determine active tracks in cache to process

Provided is a computer program product for managing tracks of a storage in a cache. An active track data structure indicates tracks in the cache that have an active status. An active bit in a cache control block for a track is set to indicate active for the track indicated as active in the active track data structure. In response to processing the cache control block, a determination is made, from the cache control block for the track, whether the track is active or inactive to determine processing for the cache control block.
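A minimal sketch of the two structures working together: the active track data structure records which tracks are active, and the per-track cache control block mirrors that status so processing can be decided from the control block alone. Names are illustrative, not from the abstract.

```python
class Cache:
    """Sketch of an active-track data structure plus per-track cache
    control blocks carrying a mirrored active bit."""

    def __init__(self):
        self.active_tracks = set()   # the active track data structure
        self.control_blocks = {}     # track id -> {"active": bool}

    def add_track(self, track):
        self.control_blocks[track] = {"active": False}

    def mark_active(self, track):
        self.active_tracks.add(track)
        # set the active bit in the track's cache control block
        self.control_blocks[track]["active"] = True

    def process(self, track):
        """Determine processing from the control block alone, without
        consulting the active track data structure."""
        if self.control_blocks[track]["active"]:
            return "process-active"
        return "process-inactive"
```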

SYSTEM AND METHOD FOR DATA WAREHOUSE STORAGE CAPACITY OPTIMIZATION BASED ON USAGE FREQUENCY OF DATA OBJECTS

A system for optimizing memory utilization receives a query statement that indicates data objects to retrieve. For a first data object, the system determines whether the first data object is stored in a high-grade data repository or a low-grade data repository. The system determines whether a recent usage frequency of the first data object exceeds a usage frequency threshold. If the system determines that the first data object is stored in the high-grade data repository and that the recent usage frequency is less than the usage frequency threshold, the system moves the first data object to the low-grade data repository. If the system determines that the first data object is stored in the low-grade data repository and that the recent usage frequency is more than the usage frequency threshold, the system moves the first data object to the high-grade data repository. The system outputs the data objects.
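The promotion/demotion rule reduces to a small decision function. The threshold value and tier names are placeholders for illustration.

```python
THRESHOLD = 5  # illustrative usage-frequency threshold

def rebalance(tier, recent_uses):
    """Move a data object between hypothetical 'high'- and 'low'-grade
    repositories based on its recent usage frequency."""
    if tier == "high" and recent_uses < THRESHOLD:
        return "low"    # demote a cold object to the low-grade repository
    if tier == "low" and recent_uses > THRESHOLD:
        return "high"   # promote a hot object to the high-grade repository
    return tier         # otherwise leave the object where it is
```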

Dynamic data placement for collision avoidance among concurrent write streams
11573742 · 2023-02-07

A memory sub-system configured to dynamically generate a media layout to avoid media access collisions among concurrent streams. The memory sub-system can identify a plurality of media units that are available to write data concurrently, select commands from the plurality of streams for concurrent execution in the available media units, generate and store a portion of a media layout dynamically in response to the commands being selected for concurrent execution in the plurality of media units, and execute the selected commands concurrently by storing data into the media units according to physical addresses to which logical addresses used in the selected commands are mapped in the dynamically generated portion of the media layout.
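A rough sketch of the scheduling step: pick at most one pending command per available media unit and generate logical-to-physical mappings only for the commands actually selected, so concurrent streams never target the same unit. The data structures and placement policy here are illustrative simplifications.

```python
def schedule(streams, available_units):
    """Select one pending write per available media unit and dynamically
    lay out logical->physical mappings only for the selected commands.

    streams: {stream_id: [logical_addr, ...]} queues of pending writes.
    available_units: media units free to accept a write this round.
    Returns (selected commands, the generated portion of the layout).
    """
    layout = {}                            # logical addr -> (unit, offset)
    offsets = {u: 0 for u in available_units}
    selected = []
    units = iter(available_units)
    for sid, queue in streams.items():
        if not queue:
            continue
        try:
            unit = next(units)             # one unit per stream: no collision
        except StopIteration:
            break                          # no media units left this round
        laddr = queue.pop(0)
        layout[laddr] = (unit, offsets[unit])
        offsets[unit] += 1
        selected.append((sid, laddr, unit))
    return selected, layout
```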

Transparent self-replicating page tables in computing systems
11573904 · 2023-02-07

An example method of managing memory in a computer system implementing non-uniform memory access (NUMA) by a plurality of sockets each having a processor component and a memory component is described. The method includes replicating page tables for an application executing on a first socket of the plurality of sockets across each of the plurality of sockets; associating metadata for pages of the memory storing the replicated page tables in each of the plurality of sockets; and updating the replicated page tables using the metadata to locate the pages of the memory that store the replicated page tables.
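The replicate-then-update flow can be sketched as follows: each socket gets its own page-table copy, metadata records where every replica lives, and updates walk the metadata to keep all copies consistent. The representation (flat dicts keyed by virtual page number) is a deliberate simplification of real multi-level page tables.

```python
def replicate_page_tables(page_table, sockets):
    """Replicate a page table across NUMA sockets and build metadata that
    locates each replicated entry, so later updates can find every copy.
    page_table: {virtual page number -> physical frame number}."""
    replicas = {s: dict(page_table) for s in sockets}
    # metadata: for each virtual page, the per-socket location of its
    # replicated page-table entry (here just a (socket, vpn) label)
    metadata = {vpn: {s: (s, vpn) for s in sockets} for vpn in page_table}
    return replicas, metadata

def update_mapping(replicas, metadata, vpn, new_pfn):
    """Use the metadata to locate and update every replica consistently."""
    for socket in metadata[vpn]:
        replicas[socket][vpn] = new_pfn
```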

Garbage collection command scheduling

Systems and methods are disclosed for the intelligent scheduling of garbage collection operations on a solid state memory. In certain embodiments, a method may comprise initiating a garbage collection process for a solid state memory (SSM) having a multiple die architecture, determining an order of die access for the garbage collection process based on an activity table indicating a use of one or more die in the multiple die architecture, and performing the garbage collection process based on the determined order of die access. Garbage collection reads may be directed to idle die to avoid conflicts with die busy performing other operations, thereby improving system performance.
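The ordering step reduces to sorting dies by how busy they are, so garbage-collection reads hit idle dies first. The activity-table representation (pending-operation counts per die) is an assumption for illustration.

```python
def gc_die_order(activity_table):
    """Order dies for garbage-collection access: idle dies first, busiest
    dies last, so GC reads avoid conflicting with dies performing other
    operations. activity_table: {die id -> pending-operation count}."""
    return sorted(activity_table, key=lambda die: activity_table[die])
```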

TWO-LEVEL SYSTEM MAIN MEMORY
20180004432 · 2018-01-04

Embodiments of the invention describe a system main memory comprising two levels of memory that include cached subsets of system disk level storage. This main memory includes “near memory” comprising volatile memory, and “far memory” comprising volatile or nonvolatile memory storage that is larger and slower than the near memory.

The far memory is presented as “main memory” to the host OS while the near memory is a cache for the far memory that is transparent to the OS, thus appearing to the OS the same as prior art main memory solutions. The management of the two-level memory may be done by a combination of logic and modules executed via the host CPU. Near memory may be coupled to the host system CPU via high bandwidth, low latency means for efficient processing. Far memory may be coupled to the CPU via low bandwidth, high latency means.
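The two-level arrangement can be sketched as a small, fast near-memory cache fronting a larger, slower far memory that the OS addresses as main memory. The FIFO eviction policy and capacities below are assumptions; the abstract leaves management policy to a combination of logic and host-CPU modules.

```python
class TwoLevelMemory:
    """Toy model of two-level main memory: near memory acts as a cache
    for far memory and is transparent to the OS, which only ever sees
    the far-memory address space."""

    def __init__(self, near_capacity, far):
        self.near = {}                  # cached subset of far memory
        self.near_capacity = near_capacity
        self.far = far                  # address -> value ("main memory")
        self.order = []                 # FIFO eviction order (assumption)

    def read(self, addr):
        if addr in self.near:           # near-memory hit: fast path
            return self.near[addr]
        value = self.far[addr]          # miss: fetch from slower far memory
        if len(self.near) >= self.near_capacity:
            oldest = self.order.pop(0)  # evict the oldest cached line
            self.near.pop(oldest)
        self.near[addr] = value
        self.order.append(addr)
        return value
```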