G06F12/084

PRIORITY-BASED CACHE-LINE FITTING IN COMPRESSED MEMORY SYSTEMS OF PROCESSOR-BASED SYSTEMS

A compressed memory system includes a memory region that includes cache lines having priority levels. The compressed memory system also includes a compressed memory region that includes compressed cache lines. Each compressed cache line includes a first set of data bits configured to hold, in a first direction, either a portion of a first cache line or a portion of the first cache line after compression, the first cache line having a first priority level. Each compressed cache line also includes a second set of data bits configured to hold, in a second direction opposite to the first direction, either a portion of a second cache line or a portion of the second cache line after compression, the second cache line having a priority level lower than the first priority level. The first set of data bits includes a greater number of bits than the second set of data bits.
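
As a rough illustration of the bidirectional fitting described above, the C sketch below packs a higher-priority line forward from the start of a shared buffer and a lower-priority line backward from its end. The sizes (64-byte lines, a 96-byte packed buffer) and the trailing-zero "compressor" are assumptions made only to keep the sketch self-contained, not details taken from the abstract.

    #include <stdint.h>
    #include <string.h>

    #define LINE_BYTES   64   /* uncompressed cache-line size (assumed) */
    #define PACKED_BYTES 96   /* shared compressed-line buffer (assumed) */

    /* Placeholder "compressor": strips trailing zero bytes. A real design
     * would use a hardware compression algorithm; this stub only makes the
     * sketch self-contained. Returns the compressed length. */
    static size_t compress_line(const uint8_t in[LINE_BYTES],
                                uint8_t out[LINE_BYTES])
    {
        size_t n = LINE_BYTES;
        while (n > 0 && in[n - 1] == 0)
            n--;
        memcpy(out, in, n);
        return n;
    }

    /* Pack two cache lines into one compressed-line buffer: the
     * higher-priority line is written forward from offset 0 (first
     * direction), the lower-priority line backward from the end of the
     * buffer (second direction). */
    static int pack_lines(uint8_t packed[PACKED_BYTES],
                          const uint8_t hi[LINE_BYTES],
                          const uint8_t lo[LINE_BYTES])
    {
        uint8_t tmp[LINE_BYTES];

        size_t hi_len = compress_line(hi, tmp);
        memcpy(packed, tmp, hi_len);                /* grows forward */

        size_t lo_len = compress_line(lo, tmp);
        if (lo_len > (size_t)PACKED_BYTES - hi_len)
            return -1;                              /* low-priority spill */
        memcpy(packed + PACKED_BYTES - lo_len, tmp, lo_len); /* backward */
        return 0;
    }

Here the boundary between the two regions is implicit in hi_len; a hardware design would fix the split so the first set of data bits outnumbers the second, guaranteeing that a high-priority line always fits even uncompressed, while a poorly compressing low-priority line may fail to fit.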

System and method for optimizing DRAM bus switching using LLC
11567885 · 2023-01-31

The present disclosure relates to a system and method for optimizing switching of a DRAM bus using a last-level cache (LLC). An embodiment of the disclosure includes: sending a first type request from a first type queue to the second memory via the memory bus if a direction setting of the memory bus is in a first direction corresponding to the first type request; decrementing a current direction credit count by a first type transaction decrement value; if the decremented current direction credit count is greater than zero, sending another first type request to the second memory via the memory bus and again decrementing the current direction credit count by the first type transaction decrement value; and if the decremented current direction credit count is zero, switching the direction setting of the memory bus to a second direction and resetting the current direction credit count to a second type initial value.
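
A minimal C sketch of this credit-counting scheme follows. The struct fields, initial values, and the identification of the two request types with reads and writes are assumptions for illustration, not terms from the claims.

    enum bus_dir { DIR_FIRST, DIR_SECOND };   /* e.g. read vs. write */

    struct bus_state {
        enum bus_dir direction; /* current direction setting of the bus */
        int credits;            /* current direction credit count */
        int first_init;         /* first type initial value (assumed) */
        int second_init;        /* second type initial value (assumed) */
        int first_cost;         /* first type transaction decrement value */
        int second_cost;        /* second type transaction decrement value */
    };

    /* Issue one request whose type matches the current bus direction,
     * decrement the credit count by that type's decrement value, and
     * switch direction (reloading credits for the other type) once the
     * count reaches zero. */
    static void issue_matching_request(struct bus_state *s)
    {
        if (s->direction == DIR_FIRST) {
            /* send one first-type request from the first-type queue */
            s->credits -= s->first_cost;
            if (s->credits <= 0) {
                s->direction = DIR_SECOND;
                s->credits = s->second_init; /* second type initial value */
            }
        } else {
            /* send one second-type request from the second-type queue */
            s->credits -= s->second_cost;
            if (s->credits <= 0) {
                s->direction = DIR_FIRST;
                s->credits = s->first_init;
            }
        }
    }

Batching several same-type requests per direction amortizes the bus turnaround penalty, which is the point of counting down credits rather than switching on every request.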

COMPUTER-IMPLEMENTED METHOD FOR MANAGING CACHE UTILIZATION

A computer-implemented method for managing cache utilization of at least a first processor when sharing a cache with a further processor. The method includes: executing, during a first regulation interval, a first application on a first processor, wherein the first application causes at least one block to be mapped from an external memory to a shared cache according to a cache utilization policy associated with the first application; monitoring a utilization of the shared cache by the first processor during the first regulation interval; comparing the utilization of the shared cache by the first processor to a cache utilization condition associated with the first processor; and adjusting the cache utilization policy associated with the first application, when the utilization of the shared cache by the first processor exceeds the cache utilization condition associated with the first processor.
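
The regulation loop lends itself to a short sketch. In the C fragment below, the occupancy counter, byte threshold, and way-mask knob are hypothetical stand-ins for the abstract's monitored utilization, cache utilization condition, and cache utilization policy.

    /* Per-interval regulator state for one processor sharing the cache. */
    struct cache_regulator {
        unsigned long used_bytes;  /* shared-cache occupancy measured for
                                      the first processor this interval */
        unsigned long limit_bytes; /* cache utilization condition */
        unsigned int  way_mask;    /* policy knob: ways the first
                                      application may allocate into */
    };

    /* Called at the end of a regulation interval: compare the monitored
     * utilization against the condition and, if it is exceeded, tighten
     * the application's cache utilization policy (here, by halving its
     * way mask), then restart monitoring for the next interval. */
    static void end_of_regulation_interval(struct cache_regulator *r)
    {
        if (r->used_bytes > r->limit_bytes && r->way_mask > 1)
            r->way_mask >>= 1;    /* allow fewer ways next interval */
        r->used_bytes = 0;
    }

Shrinking the way mask is just one plausible policy adjustment; the method as claimed leaves the form of the adjustment open.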

ON-DEMAND SHARED DATA CACHING METHOD, COMPUTER PROGRAM, AND COMPUTER READABLE MEDIUM APPLICABLE FOR DISTRIBUTED DEEP LEARNING COMPUTING
20230236980 · 2023-07-27

Disclosed are an on-demand shared data caching method, a computer program, and a computer readable medium applicable to distributed deep learning computing. The method includes a step of dynamically building a distributed shared memory cache space, in which a distributed shared memory deployment and data file access management module is added to a deep learning framework to build the distributed shared memory cache space from the memories of multiple computing nodes of a cluster computer; and a distributed deep learning computing step, in which a computing node overrides a Dataset API of the deep learning framework to execute the distributed deep learning computing. When a data file is read, if it exists in the distributed shared memory cache space it is accessed directly; otherwise it is obtained from its original specified directory location and stored in the distributed shared memory cache space.
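
The read path reduces to a read-through cache. The C sketch below models the distributed shared memory cache space as a local in-memory table purely for illustration; in the disclosed method that space spans the memories of multiple computing nodes, and the lookup/insert interface shown here is hypothetical.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define MAX_ENTRIES 64

    /* Stand-in for the distributed shared memory cache space. */
    struct entry { char path[256]; void *buf; size_t len; };
    static struct entry cache[MAX_ENTRIES];
    static int n_entries;

    static void *dsm_lookup(const char *path, size_t *len)
    {
        for (int i = 0; i < n_entries; i++)
            if (strcmp(cache[i].path, path) == 0) {
                *len = cache[i].len;
                return cache[i].buf;
            }
        return NULL;
    }

    static void dsm_insert(const char *path, void *buf, size_t len)
    {
        if (n_entries < MAX_ENTRIES) {
            snprintf(cache[n_entries].path,
                     sizeof cache[n_entries].path, "%s", path);
            cache[n_entries].buf = buf;
            cache[n_entries].len = len;
            n_entries++;
        }
    }

    /* Read the whole file from its original specified directory location. */
    static void *load_file(const char *path, size_t *len)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return NULL;
        fseek(f, 0, SEEK_END);
        long sz = ftell(f);
        fseek(f, 0, SEEK_SET);
        void *buf = malloc((size_t)sz);
        if (!buf) { fclose(f); return NULL; }
        *len = fread(buf, 1, (size_t)sz, f);
        fclose(f);
        return buf;
    }

    /* Read-through access: serve from the shared cache when present,
     * otherwise load from the original location and publish it so later
     * readers (on any node, in the disclosed method) hit the cache. */
    void *read_data_file(const char *path, size_t *len)
    {
        void *buf = dsm_lookup(path, len);
        if (buf) return buf;
        buf = load_file(path, len);
        if (buf) dsm_insert(path, buf, *len);
        return buf;
    }

In the disclosed method this logic sits behind the overridden Dataset API, so training code reads files normally while repeated epochs are served from the cluster-wide cache.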

TECHNIQUES TO SHARE MEMORY ACROSS NODES IN A SYSTEM

Techniques to share system memory across nodes in a system. Circuitry is arranged to provide a mechanism to share a memory region of a memory maintained at a first host CPU at a first node with multiple other host CPUs at multiple other nodes, using links and protocols described in one or more revisions of the Compute Express Link (CXL) specification.
