Patent classifications
G06F3/0679
DATA MANAGEMENT SYSTEM AND METHOD OF CONTROLLING
A storage system is disclosed. In some embodiments, the storage system includes a plurality of object stores and a plurality of data managers connected to the object stores. The plurality of data managers may include a plurality of processing circuits. A first processing circuit of the plurality of processing circuits may be configured to process primarily input-output operations, and a second processing circuit of the plurality of processing circuits may be configured to process primarily input-output completions.
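The division of labour between a processing circuit that primarily handles input-output operations and one that primarily handles input-output completions can be pictured as two cooperating workers fed by separate queues. A minimal Python sketch follows; the queue names and worker functions are illustrative assumptions, not part of the abstract.

    from queue import Queue
    from threading import Thread

    submissions = Queue()   # input-output operations waiting to be issued
    completions = Queue()   # finished operations waiting to be posted back

    def submission_worker():
        # Stands in for the first processing circuit: primarily processes I/O operations.
        while (op := submissions.get()) is not None:
            # ... issue op against an object store (elided) ...
            completions.put((op, "ok"))
        completions.put(None)

    def completion_worker():
        # Stands in for the second processing circuit: primarily processes I/O completions.
        while (done := completions.get()) is not None:
            print("completed:", done)

    workers = [Thread(target=submission_worker), Thread(target=completion_worker)]
    for w in workers:
        w.start()
    for i in range(3):
        submissions.put(f"write-{i}")
    submissions.put(None)
    for w in workers:
        w.join()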
Hardware Interconnect With Memory Coherence
Aspects of the disclosure are directed to hardware interconnects and corresponding devices and systems for non-coherently accessing data in shared memory devices. Devices implementing the hardware interconnect can read and write the data they produce and consume directly to a memory device shared by multiple devices, limiting coherent memory transactions to the relatively small flags and descriptors used to facilitate data transmission as described herein. Devices can communicate less data on input/output channels, and more data on memory and cache channels that are more efficient for data transmission. Aspects of the disclosure are also directed to devices configured to process data that is read from the shared memory device. Devices such as hardware accelerators can receive data indicating addresses of data buffers holding data for processing, and can non-coherently read or write the contents of those data buffers on a memory device shared between the accelerators and a host device.
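The split between small coherent transactions (flags and descriptors) and bulk non-coherent transfers can be illustrated with a toy producer/consumer. The shared buffer, descriptor fields, and doorbell flag below are assumptions made for the sketch, not the interconnect's actual interface.

    import threading

    shared_memory = bytearray(4096)                 # stands in for the shared memory device
    descriptor = {"offset": None, "length": 0}      # small descriptor kept coherent
    doorbell = threading.Event()                    # small flag kept coherent

    def produce(payload: bytes):
        # Bulk data is written directly to shared memory (the non-coherent path).
        shared_memory[0:len(payload)] = payload
        # Only the small descriptor and flag need coherent transactions.
        descriptor["offset"], descriptor["length"] = 0, len(payload)
        doorbell.set()

    def consume() -> bytes:
        doorbell.wait()
        off, length = descriptor["offset"], descriptor["length"]
        return bytes(shared_memory[off:off + length])

    produce(b"bulk payload written directly to shared memory")
    print(consume())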
ACCELERATOR TO REDUCE DATA DIMENSIONALITY AND ASSOCIATED SYSTEMS AND METHODS
A device is disclosed. The device may include a first buffer to store a query data point and a second buffer to store a matrix of candidate data points. A processing element may process the query data point and the matrix of candidate data points to identify the candidate data points in the matrix that are nearest to the query data point.
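Functionally, the processing element performs a nearest-neighbour search of the candidate matrix against the query point. A minimal Python sketch; the Euclidean metric, the value of k, and the sample data are illustrative assumptions.

    def nearest_candidates(query, candidates, k=2):
        # Distance from the query data point to each row of the candidate matrix.
        def dist(row):
            return sum((q - c) ** 2 for q, c in zip(query, row)) ** 0.5
        # Indices of the k candidate data points closest to the query data point.
        return sorted(range(len(candidates)), key=lambda i: dist(candidates[i]))[:k]

    query = [1.0, 2.0, 3.0]
    candidates = [[1.1, 2.1, 2.9], [9.0, 9.0, 9.0], [0.9, 1.8, 3.2]]
    print(nearest_candidates(query, candidates))   # [0, 2]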
TECHNIQUES FOR MANAGING TEMPORARILY RETIRED BLOCKS OF A MEMORY SYSTEM
Methods, systems, and devices for techniques for managing temporarily retired blocks of a memory system are described. In some examples, aspects of a memory system or memory device may be configured to determine an error for a block of memory cells. For example, a controller may determine the existence of the error and may temporarily retire the block. A media management operation may be performed on the temporarily retired block and, depending on one or more characteristics of the error, the temporarily retired block may be enabled or retired.
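The described flow (detect an error, temporarily retire the block, run a media management operation, then either enable or permanently retire the block) can be sketched as a small state machine. The severity check and the refresh placeholder below are illustrative assumptions.

    ENABLED, TEMP_RETIRED, RETIRED = "enabled", "temporarily_retired", "retired"

    def run_media_management(block):
        # Placeholder for a real media management operation (refresh, fold, etc.).
        return True

    def handle_block_error(block, error):
        block["state"] = TEMP_RETIRED                 # temporarily retire the block
        refreshed = run_media_management(block)
        # Depending on characteristics of the error, enable or retire the block.
        if refreshed and not error.get("uncorrectable"):
            block["state"] = ENABLED
        else:
            block["state"] = RETIRED
        return block["state"]

    print(handle_block_error({"id": 7, "state": ENABLED}, {"uncorrectable": False}))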
STORAGE DEVICE AND HOST DEVICE FOR OPTIMIZING MODEL FOR CALCULATING DELAY TIME OF THE STORAGE DEVICE
A storage device according to the present technology may include a memory device for storing data, a buffer memory configured to temporarily store data to be stored in the memory device, and a memory controller configured to determine a delay time based on a plurality of parameters upon receipt of a write request from a host, and transmit a data request to the host after the delay time has elapsed.
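The controller's behaviour amounts to: on a write request, evaluate a delay time from several parameters, wait that long, then request the data from the host. The parameter names and the linear model below are purely illustrative assumptions about what such a model could look like.

    import time

    def estimate_delay_ms(params):
        # Hypothetical linear model over a few device parameters (illustrative only).
        weights = {"buffer_occupancy": 0.5, "queue_depth": 0.2, "gc_active": 3.0}
        return sum(weights[name] * value for name, value in params.items())

    def on_write_request(params):
        delay_ms = estimate_delay_ms(params)
        time.sleep(delay_ms / 1000.0)    # wait until the delay time has elapsed
        return "DATA_REQUEST"            # then transmit the data request to the host

    print(on_write_request({"buffer_occupancy": 0.8, "queue_depth": 4, "gc_active": 1}))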
MEMORY SYSTEM AND METHOD OF OPERATING THE SAME
A memory controller, a memory system, and a method of operating a memory controller that controls a memory device are described. The memory controller may include a workload manager in communication with the memory device in which data is written and read. The workload manager may be configured to acquire an amount of write data written to the memory device during a preset reference time, calculate a workload parameter indicating a ratio of the amount of write data to a reference write amount, and store the workload parameter for the preset reference time. The memory controller may further include a performance manager configured to control, based on the workload parameter, a certain background operation performed by the memory device during a period corresponding to the workload parameter.
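Here the workload parameter is simply the ratio of the write amount observed during the reference time to a reference write amount, and the background operation is controlled according to it. A minimal sketch; the throttling policy is an assumption, not taken from the abstract.

    def workload_parameter(bytes_written, reference_write_amount):
        # Ratio of the observed write amount to the reference write amount.
        return bytes_written / reference_write_amount

    def background_budget(param, max_ops=100):
        # Illustrative policy: the higher the workload parameter, the less
        # background work (e.g. garbage collection) is scheduled in this period.
        return max(0, int(max_ops * (1.0 - min(param, 1.0))))

    p = workload_parameter(bytes_written=600 * 2**20, reference_write_amount=1024 * 2**20)
    print(round(p, 3), background_budget(p))   # 0.586 41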
MEMORY EXPANSION WITH PERSISTENT PREDICTIVE PREFETCHING
A memory device with non-volatile memory and persistent predictive prefetching provides high-speed storage to a computer system. The memory device uses a non-volatile memory to store data and a volatile memory to cache the data from the non-volatile memory. The computer system sends access requests to obtain data in the non-volatile memory. A prediction engine in the memory device receives the access requests. The prediction engine computes access histories based on the access requests and stores them in an access history table. Based on the stored access history table, the prediction engine predicts which non-volatile memory addresses will be accessed in the future. The prediction engine causes the data from the predicted addresses of the non-volatile memory to be stored in the volatile memory. The memory device stores the predictions in the non-volatile memory so that past predictions can be used after the computer system restarts.
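A minimal software analogue of the prediction engine: record, per address, the address that most recently followed it; on each access, prefetch the predicted successor from the slow store into the volatile cache; and serialise the history so predictions survive a restart. The table format and the last-successor predictor are illustrative assumptions.

    import json

    nvm = {addr: f"data@{addr}" for addr in range(0, 64, 8)}   # stands in for non-volatile memory
    cache = {}                                                 # stands in for the volatile cache
    history = {}                                               # access history table: addr -> predicted next addr
    last_addr = None

    def access(addr):
        global last_addr
        if last_addr is not None:
            history[last_addr] = addr            # update the access history table
        predicted = history.get(addr)
        if predicted in nvm:
            cache[predicted] = nvm[predicted]    # prefetch the predicted address
        last_addr = addr
        return cache.get(addr, nvm[addr])

    for a in (0, 8, 16, 0, 8, 16):
        access(a)
    print(history, sorted(cache))
    print(json.dumps(history))   # serialised form that could be persisted in non-volatile memory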
SYSTEM SUPPORTING VIRTUALIZATION OF SR-IOV CAPABLE DEVICES
An apparatus supports single root input/output virtualization (SR-IOV) capable devices. The apparatus includes input/output ports and SR-IOV capable PCIe devices. Each SR-IOV capable PCIe device has one or more namespaces or controller memory buffers, and provides one or more physical functions and virtual functions that can access those namespaces or controller memory buffers. A PCIe switch controller communicates with host devices coupled to the input/output ports, assigns one or more virtual functions to each host device, and enables the host devices to access one or more namespaces or controller memory buffers through the assigned virtual functions. The PCIe device is configured to attach one or more namespaces, or one or more partitions of one or more controller memory buffers, to each virtual function, set at least one namespace or controller memory buffer to a shared state, and allow different host devices to access the same namespace or controller memory buffer using their respective assigned virtual functions.
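The assignment described above is essentially bookkeeping: which virtual function is assigned to which host, which namespaces or controller memory buffer partitions are attached to each virtual function, and which of those are set to a shared state. The structures and names below are illustrative assumptions, not an actual PCIe/NVMe interface.

    # Hypothetical bookkeeping kept by a PCIe switch controller (illustrative only).
    vf_host = {"vf0": "hostA", "vf1": "hostB"}               # virtual function -> host device
    vf_namespaces = {"vf0": ["ns1", "ns2"], "vf1": ["ns2"]}  # virtual function -> attached namespaces
    shared = {"ns2"}                                         # namespaces set to a shared state

    def can_access(host, namespace):
        # A host reaches a namespace through one of its assigned virtual functions;
        # a shared namespace may be attached to virtual functions of different hosts.
        return any(vf_host[vf] == host and namespace in vf_namespaces[vf]
                   for vf in vf_host)

    print(can_access("hostA", "ns2"), can_access("hostB", "ns2"))   # True True (ns2 is shared)
    print(can_access("hostB", "ns1"))                               # False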
REDUCING WRITE AMPLIFICATION AND OVER-PROVISIONING USING FLASH TRANSLATION LAYER SYNCHRONIZATION
A host Flash Translation Layer (FTL) synchronizes host FTL operations with the drive FTL operations to reduce write amplification and over-provisioning. Embodiments of FTL synchronization map, at the host FTL software (SW) stack level, logical bands in which data is managed, referred to as host bands, to the physical bands on a drive where data is stored. The host FTL tracks validity levels of data managed in host bands to determine validity levels of data stored in corresponding physical bands, and optimizes defragmentation operations (such as garbage collection processes and trim operations) applied by the host FTL SW stack to the physical bands based on the tracked validity levels.
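The core of the scheme is that the host FTL tracks how much of the data in each host band is still valid, and because host bands map to physical bands, it can pick cheap defragmentation victims itself. A minimal sketch; the mapping table and the validity threshold are illustrative assumptions.

    # Host band -> (physical band on the drive, fraction of its data still valid).
    band_map = {"host0": ("phys3", 0.95), "host1": ("phys7", 0.20), "host2": ("phys1", 0.55)}

    def defrag_candidates(threshold=0.30):
        # Physical bands whose tracked validity is low enough that relocating the
        # remaining valid data (garbage collection, trim) is cheap.
        return [phys for phys, validity in band_map.values() if validity <= threshold]

    print(defrag_candidates())   # ['phys7']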
USING PER MEMORY BANK LOAD CACHES FOR REDUCING POWER USE IN A SYSTEM ON A CHIP
In various examples, a VPU and associated components may be optimized to improve VPU performance and throughput. For example, the VPU may include a min/max collector, automatic store predication functionality, a SIMD data path organization that allows for inter-lane sharing, a transposed load/store with stride parameter functionality, a load with permute and zero insertion functionality, hardware, logic, and memory layout functionality to allow for two-point and two-by-two point lookups, and per-memory-bank load caching capabilities. In addition, decoupled accelerators may be used to offload VPU processing tasks to increase throughput and performance, and a hardware sequencer may be included in a DMA system to reduce the programming complexity of the VPU and the DMA system. The DMA and VPU may execute a VPU configuration mode that allows the VPU and DMA to operate without a processing controller when performing dynamic, region-based data movement operations.
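Of the listed features, the per-memory-bank load cache is the simplest to picture in software: each bank keeps a tiny cache of the last line loaded from it, so repeated loads from the same line skip the bank access (and its power cost). The bank count, line size, and address mapping below are illustrative assumptions.

    NUM_BANKS, LINE = 4, 16
    memory = bytes(range(256)) * 4            # stands in for VPU-visible memory
    bank_cache = [None] * NUM_BANKS           # one single-line load cache per bank

    def load(addr):
        bank = (addr // LINE) % NUM_BANKS     # bank that holds this line
        line_base = addr - addr % LINE
        cached = bank_cache[bank]
        if cached is None or cached[0] != line_base:
            # Miss: read the whole line from the bank into that bank's load cache.
            bank_cache[bank] = (line_base, memory[line_base:line_base + LINE])
        return bank_cache[bank][1][addr % LINE]

    print(load(5), load(7), load(70))   # the load of addr 7 is served from the bank-0 cache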