Patent classifications
G06F2212/1048
Hinting Mechanism for Efficient Accelerator Services
Solid State Drive (SSD) devices with hardware accelerators, and methods for apportioning storage resources in the SSD, are disclosed. SSDs typically comprise an array of non-volatile memory devices and a controller that manages access to those devices. The controller may also comprise one or more accelerators, either to improve the performance of the SSD itself or to offload specialized computation workloads from a host computing device. Different accelerators may be dynamically assigned portions of the non-volatile memory array according to the type of data being accessed and/or the throughput required. Provision is also made for the accelerators to access data directly, bypassing the controller, and for a hinting mechanism that improves accelerator performance.
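The hinting mechanism described above can be sketched as a simple dispatch table. This is a minimal illustration, not the patented implementation: the hint vocabulary, the accelerator names, and the partition ranges are all assumptions made for the example.

```python
# Hypothetical sketch of the hinting mechanism: the host tags each I/O
# request with a hint, and the controller maps the hint to an
# accelerator and to the portion of the NVM array assigned to it.
HINT_TO_ACCELERATOR = {            # assumed hint vocabulary
    "compress": "compression_engine",
    "encrypt":  "crypto_engine",
    "scan":     "pattern_match_engine",
}

# Portions of the NVM array (block ranges) dynamically assigned per
# accelerator; unassigned requests fall back to the controller path.
NVM_PARTITIONS = {
    "compression_engine":   range(0, 1024),
    "crypto_engine":        range(1024, 2048),
    "pattern_match_engine": range(2048, 4096),
}

def dispatch(request):
    """Route a hinted request to its accelerator and NVM partition."""
    engine = HINT_TO_ACCELERATOR.get(request.get("hint"), "controller")
    blocks = NVM_PARTITIONS.get(engine)   # None -> handled by controller
    return engine, blocks
```

A request hinted as "encrypt" would thus be steered to the crypto engine's partition without the controller inspecting the payload.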
Memory system
A memory system includes: a first memory module including first volatile memories; a second memory module including second volatile memories, non-volatile memories and a module controller; a memory controller controlling the first and second memory modules through second and third control buses, respectively; and a switch array electrically coupling the second and third control buses, wherein the module controller controls the switch array to electrically couple the second and third control buses in a backup operation for backing up data of the first volatile memories to the non-volatile memories, wherein the first and second memory modules include one or more first memory stacks and one or more second memory stacks, respectively, wherein the first volatile memories are stacked in the first memory stacks, and wherein the second volatile memories, the non-volatile memories and the module controller are stacked in the second memory stacks.
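The backup operation above can be summarized in a short sketch. The function and data names are hypothetical; the point is only the sequence: couple the control buses through the switch array, then copy the first module's volatile contents into the non-volatile memories.

```python
# Hypothetical sketch of the backup path: the module controller closes
# the switch array so the second and third control buses are coupled,
# then streams the first module's volatile data into the non-volatile
# memories of the second module.
def backup(switch_array, first_volatile, non_volatile):
    switch_array["coupled"] = True          # couple the control buses
    for addr, word in first_volatile.items():
        non_volatile[addr] = word           # persist each word
    return len(non_volatile)                # words backed up
```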
CACHE MEMORY ARCHITECTURE AND MANAGEMENT
Aspects of the present disclosure relate to data cache management. In embodiments, a storage array's memory is provisioned with cache memory, wherein the cache memory includes one or more sets of distinctly sized cache slots. Additionally, a logical storage volume (LSV) is established with at least one logical block address (LBA) group. Further, at least one of the LSV's LBA groups is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array.
CACHE MEMORY ARCHITECTURE AND MANAGEMENT
Aspects of the present disclosure relate to data cache management. In embodiments, a logical block address (LBA) bucket is established with at least one logical LBA group. Additionally, at least one LBA group is associated with two or more distinctly sized cache slots based on an input/output (IO) workload received by the storage array. Further, the association includes binding the two or more distinctly sized cache slots with at least one LBA group and mapping the bound distinctly sized cache slots in a searchable data structure. Furthermore, the searchable data structure identifies relationships between slot pointers and key metadata.
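The binding and lookup described above can be sketched with an ordinary dictionary standing in for the searchable data structure. The class name, slot sizes, and fit policy (smallest slot that holds the I/O) are assumptions for illustration only.

```python
# Sketch of binding LBA groups to distinctly sized cache slots and
# indexing the bindings in a searchable structure: group id -> a map
# from slot size to slot pointer.
from collections import defaultdict

class CacheSlotIndex:
    def __init__(self):
        self._index = defaultdict(dict)   # group_id -> {slot_size: slot_ptr}

    def bind(self, group_id, slot_size, slot_ptr):
        """Bind one distinctly sized slot to an LBA group."""
        self._index[group_id][slot_size] = slot_ptr

    def lookup(self, group_id, io_size):
        """Return the pointer of the smallest bound slot that fits io_size."""
        fitting = [s for s in self._index[group_id] if s >= io_size]
        if not fitting:
            return None
        return self._index[group_id][min(fitting)]
```

Binding both an 8 KB and a 128 KB slot to one group lets small and large I/Os to the same LBA range each land in an appropriately sized slot.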
Dynamic cache size management of multi-tenant caching systems
Cache memory requirements between normal and peak operation may vary by two orders of magnitude or more. A cache memory management system for multi-tenant computing environments monitors memory requests and uses a pattern matching classifier to generate patterns which are then delivered to a neural network. The neural network is trained to predict near-future cache memory performance based on the current memory access patterns. An optimizer allocates cache memory among the tenants to ensure that each tenant has sufficient memory to meet its required service levels while avoiding the need to provision the computing environment with worst-case scenario levels of cache memory. System resources are preserved while maintaining required performance levels.
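The optimizer's job can be illustrated with a proportional allocator. This is a deliberately simplified stand-in: the neural network's prediction is replaced by a given demand map, and the minimum-share guarantee is an assumed policy, not a claim about the patented optimizer.

```python
# Sketch of cache allocation among tenants: split the total cache in
# proportion to predicted near-future demand (standing in for the
# neural network's output), guaranteeing each tenant a minimum share.
def allocate_cache(predicted_demand, total_cache, min_share=0):
    demand_sum = sum(predicted_demand.values())
    alloc = {}
    for tenant, demand in predicted_demand.items():
        share = total_cache * demand / demand_sum if demand_sum else 0
        alloc[tenant] = max(min_share, int(share))
    return alloc
```

Because allocation tracks predicted rather than worst-case demand, the environment need not be provisioned for every tenant's peak simultaneously.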
METHOD AND SYSTEM FOR STORING MEASUREMENT DATA DETECTED BY A SENSOR DEVICE AND INDICATIVE OF AN ANALYTE IN A SAMPLE OF A BODILY FLUID
A method for storing measurement data detected by a sensor and indicative of an analyte in a body fluid sample, using a system having a processor and a memory. First and second measurement data, indicative of first and second measurement values measured by a sensor, are provided. A relative measurement value, indicative of the difference between the first and second measurement values, is determined by the processor. The first measurement value is stored in a first storage area of the memory having a first storage size. The relative measurement value is stored in a second storage area of the memory having a second storage size smaller than the first. An indicator, assigned to the stored relative measurement value and indicative of one of its characteristics, is also stored in the memory.
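The scheme above is essentially delta encoding with a per-delta indicator. A minimal sketch follows; the field widths (4-byte first value, 1-byte signed delta, 1-byte indicator) and the indicator's meaning are assumptions chosen for the example.

```python
import struct

# Sketch of the storage layout: the first measurement is stored
# full-width (4 bytes), each subsequent value as a 1-byte signed delta
# in the smaller second storage area plus a 1-byte indicator.
def encode(values, indicator=0x01):
    buf = struct.pack("<i", values[0])                 # first storage area
    for prev, cur in zip(values, values[1:]):
        delta = cur - prev
        buf += struct.pack("<bB", delta, indicator)    # second area + indicator
    return buf

def decode(buf):
    (first,) = struct.unpack_from("<i", buf, 0)
    out = [first]
    for off in range(4, len(buf), 2):
        delta, _flag = struct.unpack_from("<bB", buf, off)
        out.append(out[-1] + delta)
    return out
```

Three glucose-like readings of 120, 118, 121 occupy 8 bytes instead of 12, since the second and third values are stored only as deltas.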
Banked memory architecture for multiple parallel datapath channels in an accelerator
The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place the local variables in the proper memory bank.
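The address decoder's role can be shown in a few lines. The bank size and the flat layout are assumptions for illustration; the point is that each channel's local variables resolve into that channel's own bank and can never alias another channel's.

```python
# Sketch of the memory address decoder: each parallel datapath channel
# owns one register memory bank, and a (channel, local offset) pair is
# decoded to a flat address inside that channel's bank.
BANK_SIZE = 256   # words per bank (an assumption for the example)

def decode_address(channel, offset, bank_size=BANK_SIZE):
    if not 0 <= offset < bank_size:
        raise ValueError("offset outside the channel's bank")
    return channel * bank_size + offset
```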
Logical-to-physical data structures
An example apparatus comprises a controller coupled to a non-volatile memory (NVM) device. The controller may be configured to cause a logical block address (LBA) to be stored in a first logical-to-physical (L2P) data structure in the NVM device and a physical block address (PBA) to be stored in a second L2P data structure in the NVM device. The first L2P data structure and the second L2P data structure may have a same size associated therewith.
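The paired, same-size structures can be sketched as two parallel tables linked by index. The table size and the linear lookup are assumptions for the example; the essential point is that LBAs and PBAs live in two separate, equally sized structures rather than one table of pairs.

```python
# Sketch of the two same-size L2P data structures: an LBA table and a
# PBA table of equal length, where entry i of one corresponds to
# entry i of the other.
TABLE_ENTRIES = 1024

lba_table = [None] * TABLE_ENTRIES   # first L2P data structure
pba_table = [None] * TABLE_ENTRIES   # second L2P data structure

def map_lba(index, lba, pba):
    """Record an LBA and its PBA at the same index in both tables."""
    lba_table[index] = lba
    pba_table[index] = pba

def translate(lba):
    """Return the PBA stored at the same index as the given LBA."""
    idx = lba_table.index(lba)
    return pba_table[idx]
```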
Method and apparatus for using a storage system as main memory
A data access system including a processor, multiple cache modules serving as main memory, and a storage drive. Each cache module includes an FLC controller and a main memory cache. The processor sends read/write requests (with a physical address) to the cache modules. A cache module includes two or more stages, each stage comprising an FLC controller and DRAM (with an associated controller). If the first-stage FLC module does not contain the physical address, the request is forwarded to a second-stage FLC module. If the second-stage FLC module does not contain the physical address, the request is forwarded to a partition of the storage drive reserved for main memory. The first-stage FLC module offers high-speed, low-power operation, while the second-stage FLC is a low-cost implementation. Multiple FLC modules may connect to the processor in parallel.
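The staged lookup can be sketched as a cascade. The class and variable names are hypothetical, and the fill-on-miss policy (populating the fast first stage) is an assumption for the example.

```python
# Sketch of the two-stage lookup: a fast first-stage cache, a larger
# second-stage cache, and the drive's main-memory partition as the
# final backstop for a physical address.
class FLCStage:
    def __init__(self):
        self.cache = {}            # physical address -> data

    def lookup(self, addr):
        return self.cache.get(addr)

def read(addr, stage1, stage2, storage_partition):
    for stage in (stage1, stage2):
        data = stage.lookup(addr)
        if data is not None:
            return data            # hit in a cache stage
    data = storage_partition[addr] # miss in both stages: go to the drive
    stage1.cache[addr] = data      # fill the fast stage on miss
    return data
```

On a repeated access the request is satisfied from the first stage and never reaches the drive.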
Unified address translation for virtualization of input/output devices
Embodiments of apparatuses, methods, and systems for unified address translation for virtualization of input/output devices are described. In an embodiment, an apparatus includes first circuitry to use at least an identifier of a device to locate a context entry and second circuitry to use at least a process address space identifier (PASID) to locate a PASID-entry. The context entry is to include at least one of a page-table pointer to a page-table translation structure and a PASID. The PASID-entry is to include at least one of a first-level page-table pointer to a first-level translation structure and a second-level page-table pointer to a second-level translation structure. The PASID is to be supplied by the device. At least one of the apparatus, the context entry, and the PASID entry is to include one or more control fields to indicate whether the first-level page-table pointer or the second-level page-table pointer is to be used.
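The selection between translation paths can be illustrated in miniature. The table layouts, key shapes, and field names below are assumptions for the example; only the control flow (device id locates a context entry, the device-supplied PASID locates a PASID entry, and a control field picks the first-level or second-level pointer) mirrors the description above.

```python
# Sketch of the unified translation path: a control field in the
# PASID entry chooses between the first-level and second-level
# page-table pointers.
FIRST_LEVEL = 0
SECOND_LEVEL = 1

def locate_pasid_entry(context_entry, pasid_table, pasid):
    """Use the device-supplied PASID to locate its PASID entry."""
    return pasid_table[(context_entry["pasid_dir"], pasid)]

def select_page_table(pasid_entry):
    """Return the page-table pointer chosen by the control field."""
    if pasid_entry["control"] == FIRST_LEVEL:
        return pasid_entry["first_level_ptr"]
    return pasid_entry["second_level_ptr"]
```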