Patent classifications
G06F12/0882
COMPUTER-IMPLEMENTED METHOD FOR MANAGING CACHE UTILIZATION
A computer-implemented method for managing cache utilization of at least a first processor when sharing a cache with a further processor. The method includes: executing, during a first regulation interval, a first application on a first processor, wherein the first application causes at least one block to be mapped from an external memory to a shared cache according to a cache utilization policy associated with the first application; monitoring a utilization of the shared cache by the first processor during the first regulation interval; comparing the utilization of the shared cache by the first processor to a cache utilization condition associated with the first processor; and adjusting the cache utilization policy associated with the first application, when the utilization of the shared cache by the first processor exceeds the cache utilization condition associated with the first processor.
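The regulation loop described above can be sketched as follows. This is an illustrative model, not the patented implementation: the policy is assumed to be a cap on cache ways the application may allocate, and the utilization condition is assumed to be a simple budget; all names are hypothetical.

```python
class CachePolicy:
    """Cache utilization policy: here, a cap on ways the app may allocate."""
    def __init__(self, max_ways):
        self.max_ways = max_ways

    def restrict(self):
        # Adjusting the policy: shrink the allocation by one way (floor of 1).
        self.max_ways = max(1, self.max_ways - 1)


def regulate(policy, utilization_samples, budget):
    """One regulation interval: compare the monitored utilization to the
    processor's cache utilization condition (a budget) and adjust."""
    utilization = max(utilization_samples)  # peak usage over the interval
    if utilization > budget:
        policy.restrict()
    return policy.max_ways


policy = CachePolicy(max_ways=8)
regulate(policy, [0.4, 0.9], budget=0.75)  # exceeds budget, so shrink to 7 ways
```

Repeating this per interval gives the feedback behavior the claim describes: a processor that overuses the shared cache is progressively confined.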
Firmware loading for a memory controller
Methods, systems, and devices that support efficient upload of firmware from memory are described. Multiple copies of a set of firmware may be stored across multiple planes of a memory device, such as with one respective copy within each of a set of planes. The copies may be staggered or otherwise offset in terms of page locations within the respective planes such that like-addressed pages within different planes store different subsets of the set of firmware. A controller may concurrently retrieve different subsets of the set of firmware, each of the different subsets included in a different copy, by concurrently retrieving the subsets stored at like-addressed pages within different memory planes. Upon loading the firmware code, the controller executes the firmware code to perform one or more further operations.
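The staggered layout can be modeled in a few lines. In this toy sketch (an illustration, not the patented layout), the copy in plane *p* is rotated by *p* pages, so a single multi-plane read at one page address returns as many distinct firmware subsets as there are planes.

```python
def build_planes(firmware_subsets, num_planes):
    """Store one rotated copy of the firmware in each plane."""
    n = len(firmware_subsets)
    planes = []
    for p in range(num_planes):
        # Copy p is offset by p pages within its plane.
        planes.append([firmware_subsets[(i - p) % n] for i in range(n)])
    return planes


def multiplane_read(planes, page):
    """Concurrently read the like-addressed page from every plane."""
    return [plane[page] for plane in planes]


fw = ["fw0", "fw1", "fw2", "fw3"]
planes = build_planes(fw, num_planes=4)
# One page address yields four different subsets, one per plane:
print(multiplane_read(planes, page=0))  # ['fw0', 'fw3', 'fw2', 'fw1']
```

Because like-addressed pages hold different subsets, the controller assembles the full firmware image in roughly 1/P of the sequential read time for P planes.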
Security aware prefetch mechanism
An apparatus to facilitate data cache security is disclosed. The apparatus includes a cache memory to store data; and prefetch hardware to prefetch data to be stored in the cache memory, including cache set monitor hardware to determine critical cache addresses to monitor in order to identify processes that retrieve data from the cache memory; and pattern monitor hardware to monitor cache access patterns to the critical cache addresses to detect potential side-channel cache attacks on the cache memory by an attacker process.
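The pattern-monitor idea can be sketched as a counter over accesses to the watched addresses. This is a minimal software analogy of the hardware monitor; the critical set list, the threshold, and all names are assumptions for illustration.

```python
from collections import Counter

CRITICAL_SETS = {0x40, 0x80, 0xC0}   # cache sets under watch (assumed)
PROBE_THRESHOLD = 100                # accesses per interval deemed suspicious


def detect_probing(access_log):
    """access_log: iterable of (process_id, cache_set) tuples.

    Returns the set of processes whose access pattern to the critical
    cache addresses looks like side-channel probing (e.g. prime+probe).
    """
    hits = Counter(pid for pid, cset in access_log if cset in CRITICAL_SETS)
    return {pid for pid, n in hits.items() if n >= PROBE_THRESHOLD}


# A process hammering a monitored set is flagged; normal traffic is not.
log = [("attacker", 0x40)] * 150 + [("victim", 0x10)] * 150
print(detect_probing(log))  # {'attacker'}
```

In hardware, the equivalent counters would sit beside the prefetcher, which could then suppress prefetches that would leak state to the flagged process.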
Extended cache for efficient object store access by a database
Disclosed herein are system, method, and computer program product embodiments for utilizing an extended cache to access an object store efficiently. An embodiment operates by executing a database transaction, thereby causing pages to be written from a buffer cache to an extended cache and to an object store. The embodiment determines a transaction type of the database transaction. The transaction type can be a read-only transaction or an update transaction. The embodiment determines a phase of the database transaction based on the determined transaction type. The phase can be an execution phase or a commit phase. The embodiment then applies a caching policy to the extended cache for the evicted pages based on the determined transaction type of the database transaction and the determined phase of the database transaction.
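The decision steps above reduce to a mapping from (transaction type, phase) to a policy. The sketch below follows that shape; the concrete policy names and the particular rule chosen are illustrative assumptions, not the patented policy table.

```python
def caching_policy(txn_type, phase):
    """Select an extended-cache policy from transaction type and phase."""
    if txn_type not in ("read-only", "update"):
        raise ValueError("unknown transaction type")
    if phase not in ("execution", "commit"):
        raise ValueError("unknown phase")
    # Assumed rule for illustration: pages written during an update's commit
    # phase are already durable in the object store, so the extended cache
    # may treat them as low priority and evict them early.
    if txn_type == "update" and phase == "commit":
        return "evict-early"
    return "retain"
```

A real implementation would consult this mapping on every eviction from the buffer cache to decide whether the page earns space in the extended cache.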
SGL PROCESSING ACCELERATION METHOD AND STORAGE DEVICE
Disclosed are an SGL processing acceleration method and a storage device. The disclosed SGL processing acceleration method includes: obtaining an SGL associated with an IO command; generating a host space descriptor list and a DTU descriptor list according to the SGL; obtaining one or more host space descriptors from the host space descriptor list according to a DTU descriptor of the DTU descriptor list; and initiating data transmission according to the obtained one or more host space descriptors.
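The descriptor-generation step can be sketched as follows. The structures here are simplified assumptions: an SGL entry is modeled as a (host address, length) pair, host space descriptors split entries at DTU boundaries, and each DTU descriptor records which slice of the host descriptor list covers its data, so a transfer can be launched per DTU.

```python
DTU_SIZE = 4096  # assumed data transfer unit size in bytes


def build_descriptors(sgl):
    """Generate the host space descriptor list and DTU descriptor list."""
    host_descs, dtu_descs = [], []
    offset, start = 0, 0  # bytes filled in the current DTU; first host desc
    for addr, length in sgl:
        while length:
            take = min(length, DTU_SIZE - offset)
            host_descs.append((addr, take))
            addr += take
            length -= take
            offset += take
            if offset == DTU_SIZE:
                # DTU descriptor: a slice (lo, hi) of the host descriptor list.
                dtu_descs.append((start, len(host_descs)))
                start, offset = len(host_descs), 0
    if offset:
        dtu_descs.append((start, len(host_descs)))  # trailing partial DTU
    return host_descs, dtu_descs


def descriptors_for_dtu(host_descs, dtu_desc):
    """Obtain the host space descriptors referenced by one DTU descriptor."""
    lo, hi = dtu_desc
    return host_descs[lo:hi]
```

With this split, the data path works one DTU descriptor at a time and never re-walks the original SGL, which is the acceleration the abstract points at.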
IN-MEMORY DATABASE (IMDB) ACCELERATION THROUGH NEAR DATA PROCESSING
An accelerator is disclosed. The accelerator may include an on-chip memory to store data from a database. The on-chip memory may include a first memory bank and a second memory bank. The first memory bank may store the data, which may include a first value and a second value. A computational engine may execute, in parallel, a command on the first value in the data and the command on the second value in the data in the on-chip memory. The on-chip memory may be configured to load second data from the database into the second memory bank in parallel with the computational engine executing the command on the first value in the data and executing the command on the second value in the data.
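The bank arrangement is a classic double-buffering scheme, which can be sketched as a ping-pong loop. This software model is only an analogy for the hardware flow: the "banks" are alternated by index, the next batch is staged into the idle bank while the engine works on the resident one, and the compute step applies one command to every value in the active bank.

```python
def run_accelerator(batches, command):
    """Model of double-buffered near-data processing over two memory banks."""
    banks = [None, None]
    results = []
    active = 0
    banks[active] = batches[0] if batches else None
    for i, _ in enumerate(batches):
        # Load the next batch into the idle bank "in parallel" with compute.
        if i + 1 < len(batches):
            banks[1 - active] = batches[i + 1]
        # Execute the command on each value in the active bank.
        results.extend(command(v) for v in banks[active])
        active = 1 - active  # ping-pong: the freshly loaded bank runs next
    return results


print(run_accelerator([[1, 2], [3, 4]], lambda v: v * 10))  # [10, 20, 30, 40]
```

In hardware, the load and the compute genuinely overlap, so the engine is never stalled waiting for database rows to arrive on chip.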