Patent classifications
G06F12/00
Memory access techniques in memory devices with multiple partitions
Methods, systems, and devices for operating a memory array are described. A memory controller may be configured to provide enhanced bandwidth on a command/address (C/A) bus, which may have a relatively low pin count, through use of a next partition command that may repeat an array command from a current partition at a different partition indicated by the next partition command. Such a next partition command may use fewer clock cycles than a command that includes a complete instruction and memory location information.
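As a rough sketch of the bandwidth saving described above, a next-partition command repeats the prior array command at a new partition and so needs fewer C/A bus cycles than a complete command. The cycle counts, opcode names, and `Controller` class below are illustrative assumptions, not the patented encoding:

```python
# Hypothetical model: a full array command carries opcode + partition + address
# (more C/A cycles), while a next-partition command reuses the prior command's
# opcode and address and only names a new partition. Cycle counts are assumed.

FULL_CMD_CYCLES = 4   # assumed cycles to transfer opcode + partition + address
NEXT_PART_CYCLES = 1  # assumed cycles to transfer only the new partition id

class Controller:
    def __init__(self):
        self.last = None   # (opcode, row) of the most recent array command
        self.cycles = 0    # total C/A bus cycles consumed
        self.log = []      # issued (opcode, partition, row) tuples

    def array_command(self, opcode, partition, row):
        """Issue a complete command with opcode, partition, and row address."""
        self.last = (opcode, row)
        self.cycles += FULL_CMD_CYCLES
        self.log.append((opcode, partition, row))

    def next_partition(self, partition):
        """Repeat the previous array command at a different partition."""
        opcode, row = self.last
        self.cycles += NEXT_PART_CYCLES
        self.log.append((opcode, partition, row))

ctrl = Controller()
ctrl.array_command("READ", partition=0, row=0x1A2)
ctrl.next_partition(1)   # same READ at row 0x1A2, now at partition 1
ctrl.next_partition(2)
print(ctrl.cycles)       # 4 + 1 + 1 = 6, versus 12 for three full commands
```

Under these assumed cycle counts, repeating a read across three partitions costs half the C/A bus cycles of issuing three complete commands.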
Dynamic scheduling of distributed storage management tasks using predicted system characteristics
Systems and methods for scheduling storage management tasks around predicted user tasks in a distributed storage system. A method commences upon receiving a set of historical stimulus records that characterize management tasks run in the storage system. A corresponding set of historical response records, comprising system metrics associated with execution of those management tasks, is also received. A learning model is formed from the stimulus records and the response records and formatted for use as a predictor. A set of forecasted user tasks is input as new stimulus records to the predictor to determine the set of forecasted system metrics that would result from running the forecasted user tasks. Management tasks are then selected so as not to impact the forecasted user tasks, for example based on non-contentious resource usage between historical management task resource usage and predicted resource usage by the user tasks.
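The scheduling idea above can be sketched as a simple pipeline: learn a per-task-type resource response from historical records, forecast the resource use of upcoming user tasks, and admit only management tasks that fit in the remaining headroom. All function names, the averaging "model", and the capacity numbers are illustrative assumptions, not the patented learning model:

```python
# Illustrative sketch: historical stimulus (task types) and response (resource
# usage) records train a trivial predictor; management tasks are selected only
# if they do not contend with the forecasted user-task resource usage.
from collections import defaultdict

def fit_predictor(stimulus, response):
    """Average observed resource usage per task type -> simple learned model."""
    totals, counts = defaultdict(float), defaultdict(int)
    for task_type, usage in zip(stimulus, response):
        totals[task_type] += usage
        counts[task_type] += 1
    return {t: totals[t] / counts[t] for t in totals}

def forecast_usage(model, forecasted_user_tasks):
    """Forecasted system metric: total predicted resource use of user tasks."""
    return sum(model.get(t, 0.0) for t in forecasted_user_tasks)

def select_management_tasks(mgmt_tasks, model, forecasted_user_tasks, capacity):
    """Admit management tasks only while headroom above the forecast remains."""
    headroom = capacity - forecast_usage(model, forecasted_user_tasks)
    selected = []
    for name, usage in sorted(mgmt_tasks.items(), key=lambda kv: kv[1]):
        if usage <= headroom:
            selected.append(name)
            headroom -= usage
    return selected

model = fit_predictor(["scan", "scan", "compact"], [10.0, 14.0, 30.0])
picked = select_management_tasks(
    {"scrub": 20.0, "rebalance": 60.0}, model,
    forecasted_user_tasks=["scan", "compact"], capacity=100.0)
print(picked)  # ['scrub']: 'rebalance' would contend with the forecast
```

Here the forecast for the user tasks is 42 units, leaving 58 of headroom: the 20-unit scrub fits, while the 60-unit rebalance would contend and is deferred.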
Scalable cloud-based backup method
A computer-implemented system and method back up and restore a containerized or cloud-based application using a datamover service. The method includes determining a stateful set of services of the application to be backed up and identifying a persistent volume associated with that stateful set of services. A snapshot of the identified persistent volume is created, and a new persistent volume is created from the snapshot. The new persistent volume is attached to the datamover service, and data from the new persistent volume is copied to a network file system or storage system using the datamover service, thereby creating backup data stored in a storage system.
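The snapshot-clone-copy flow above can be mocked end to end in a few lines. The `Storage` class and method names (`snapshot_volume`, `clone_from_snapshot`, `datamover_copy`) are invented for illustration and do not correspond to a real Kubernetes or datamover API:

```python
# Minimal mock of the backup workflow: snapshot the identified persistent
# volume, create a new volume from the snapshot, and have a datamover copy
# the clone's contents to backup storage. Names are assumptions.

class Storage:
    def __init__(self):
        self.volumes = {"pv-app": b"app state"}  # persistent volumes by name
        self.snapshots = {}
        self.backups = {}

    def snapshot_volume(self, pv):
        snap = f"snap-{pv}"
        self.snapshots[snap] = self.volumes[pv]
        return snap

    def clone_from_snapshot(self, snap):
        new_pv = f"clone-{snap}"
        self.volumes[new_pv] = self.snapshots[snap]
        return new_pv

def datamover_copy(storage, pv, target):
    """Datamover service: copy a volume's contents to backup storage."""
    storage.backups[target] = storage.volumes[pv]

# End-to-end flow for the stateful set's volume:
s = Storage()
snap = s.snapshot_volume("pv-app")             # 1. snapshot the identified PV
clone = s.clone_from_snapshot(snap)            # 2. new PV from the snapshot
datamover_copy(s, clone, "nfs://backups/app")  # 3. attach + copy via datamover
print(s.backups["nfs://backups/app"])          # b'app state'
```

Copying from the clone rather than the live volume means the application's volume stays attached and writable while the datamover streams the backup out.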
System and method for indexing image backups
A backup manager for providing backup services includes persistent storage and a backup orchestrator. The persistent storage includes protection policies. The backup orchestrator generates a backup for a client based on the protection policies; identifies a portion of the backup that includes an allocation scheme; extracts system metadata from the backup using the allocation scheme; generates an index for the backup using the system metadata; and stores the backup and the index in backup storage.
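A toy version of the indexing step above: use the backup's allocation scheme to locate per-file metadata inside the raw image, then build an index that can be searched without re-reading the image. The image layout, the `(offset, length)` scheme, and all names are assumptions for illustration:

```python
# Hypothetical sketch: the allocation scheme maps names to byte ranges in the
# backup image; metadata is extracted through it and summarized into an index.

def extract_metadata(backup, allocation_scheme):
    """allocation_scheme maps file name -> (offset, length) into the image."""
    return {name: backup[off:off + ln].decode()
            for name, (off, ln) in allocation_scheme.items()}

def build_index(metadata):
    """Index the backup by name so lookups avoid scanning the raw image."""
    return {name: {"bytes": len(content), "preview": content[:8]}
            for name, content in metadata.items()}

backup_image = b"hello.txt contents|notes.md body"
scheme = {"hello.txt": (0, 18), "notes.md": (19, 13)}
index = build_index(extract_metadata(backup_image, scheme))
print(index["notes.md"]["bytes"])  # 13
```

Storing the index next to the backup, as the abstract describes, lets later restores or searches answer "what is in this image?" from the index alone.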
System, device and method for accessing device-attached memory
A device connected to a host processor via a bus includes: an accelerator circuit configured to operate based on a message received from the host processor; and a controller configured to control access to a memory connected to the device. The controller is further configured to, in response to a read request received from the accelerator circuit, provide a first message requesting resolution of coherence to the host processor and prefetch first data from the memory.
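The overlap described above, prefetching device memory while the host resolves coherence, can be modeled with a toy host and device controller. The class names, the dirty-line dictionary, and the re-read-on-change rule are assumptions, not the cited design or any real coherence protocol:

```python
# Toy model: on an accelerator read, the device controller speculatively
# prefetches from device-attached memory and, in the same step, asks the host
# to resolve coherence; if resolution changed the line, the prefetch is redone.

class Host:
    def __init__(self):
        self.dirty_lines = {0x40: b"host copy"}  # lines modified in host caches

    def resolve_coherence(self, addr, device_memory):
        """Flush any dirty host copy so device memory holds the current data."""
        if addr in self.dirty_lines:
            device_memory[addr] = self.dirty_lines.pop(addr)

class DeviceController:
    def __init__(self, host, memory):
        self.host, self.memory = host, memory

    def read(self, addr):
        prefetched = self.memory.get(addr)              # prefetch speculatively
        self.host.resolve_coherence(addr, self.memory)  # message to the host
        # If resolution changed the line, the prefetched copy is stale: re-read.
        if self.memory.get(addr) != prefetched:
            prefetched = self.memory[addr]
        return prefetched

mem = {0x40: b"stale device copy", 0x80: b"clean line"}
ctrl = DeviceController(Host(), mem)
print(ctrl.read(0x40))  # b'host copy' after coherence resolution
print(ctrl.read(0x80))  # b'clean line': no dirty host copy, prefetch suffices
```

In the common case where the host holds no dirty copy, the prefetched data is already correct by the time resolution completes, hiding the round-trip latency.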
Method, apparatus, and system for run-time checking of memory tags in a processor-based system
A data processing system includes a store datapath configured to perform tag checking in a store operation to a store address associated with a cache line in a memory. The store datapath includes a cache lookup circuit configured to pre-load a store cache line that is to be updated in the store operation, wherein the store cache line comprises the cache line in the memory to be updated in the store operation. The store datapath also includes a tag check circuit configured to compare a store address tag associated with the store address to a store operation tag associated with the store operation. The data processing system may include a load datapath configured to perform tag checking in a load operation from a load cache line in the memory by comparing a load address tag associated with a load address of the load operation to a load operation tag associated with the load operation.
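The tag check on both datapaths reduces to the same comparison: the tag stored for the addressed memory granule must match the tag carried by the operation, or the access faults. The sketch below illustrates that rule only; the granule size, tag width, and `TaggedMemory` class are assumptions, not the cited hardware design:

```python
# Illustrative run-time tag checking: each memory granule carries an address
# tag, each load/store carries an operation tag, and a mismatch raises a fault.

GRANULE = 16  # bytes per tagged granule (assumption)

class TaggedMemory:
    def __init__(self):
        self.data = {}
        self.tags = {}  # granule index -> address tag

    def set_tag(self, addr, tag):
        self.tags[addr // GRANULE] = tag

    def _check(self, addr, op_tag):
        if self.tags.get(addr // GRANULE) != op_tag:
            raise MemoryError(f"tag mismatch at {addr:#x}")

    def store(self, addr, op_tag, value):
        self._check(addr, op_tag)  # tag check circuit on the store datapath
        self.data[addr] = value

    def load(self, addr, op_tag):
        self._check(addr, op_tag)  # tag check circuit on the load datapath
        return self.data[addr]

mem = TaggedMemory()
mem.set_tag(0x100, tag=7)
mem.store(0x100, op_tag=7, value=42)
print(mem.load(0x100, op_tag=7))  # 42
try:
    mem.load(0x100, op_tag=3)     # stale or forged pointer tag -> fault
except MemoryError as e:
    print("faulted:", e)
```

Pre-loading the store cache line, as the abstract describes, lets the hardware perform this comparison without adding a separate memory access to fetch the tag at store time.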
Instruction caching scheme for memory devices
Methods, systems, and devices for an enhanced instruction caching scheme are described. A memory controller may include a first closely-coupled memory component that is associated with storing data and control information and a second closely-coupled memory component that is associated with storing control information. The memory controller may be configured to retrieve data from the first closely-coupled memory component and control information from the second closely-coupled memory component. Control information may be stored in the first closely-coupled memory component, and the memory controller may access it by transferring the control information from the first closely-coupled memory component into the second closely-coupled memory component. After the transfer, the memory controller may access the control information from the second closely-coupled memory component.
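The staging behavior above resembles a small cache: control information living in the larger data-and-control component is transferred once into the dedicated control component, and later accesses hit the dedicated component. The dictionary-backed model and all names below are assumptions for illustration:

```python
# Sketch: control information is transferred from the first closely-coupled
# component (data + control) into the second (control only) on first access;
# subsequent accesses are served from the second component.

class MemoryController:
    def __init__(self):
        # First component: stores both data and control information.
        self.data_and_ctrl = {"ucode_a": b"\x01\x02", "blob": b"\xff"}
        # Second component: dedicated to control information.
        self.ctrl_cache = {}
        self.hits = 0  # accesses served from the dedicated control component

    def access_control(self, key):
        if key not in self.ctrl_cache:
            # Transfer control information from the first component once.
            self.ctrl_cache[key] = self.data_and_ctrl[key]
        else:
            self.hits += 1
        return self.ctrl_cache[key]

mc = MemoryController()
mc.access_control("ucode_a")  # miss: transferred from the first component
mc.access_control("ucode_a")  # hit: served from the second component
print(mc.hits)                # 1
```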
Methods and apparatuses for addressing memory caches
A cache memory includes cache lines to store information. The stored information is associated with physical addresses that include first, second, and third distinct portions. The cache lines are indexed by the second portions of respective physical addresses associated with the stored information. The cache memory also includes one or more tables, each of which includes respective table entries that are indexed by the first portions of the respective physical addresses. The respective table entries in each of the one or more tables are to store indications of the second portions of respective physical addresses associated with the stored information.
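The three-portion addressing above can be made concrete with a bit-field split: the middle field indexes the cache lines, and a table indexed by the top field records which middle-field indices that region has resident. The field widths (4/4/4 bits) and the dictionary structures are arbitrary assumptions for illustration:

```python
# Simplified model: a physical address splits into first (tag), second (index),
# and third (offset) portions; cache lines are indexed by the second portion,
# and table entries, indexed by the first portion, store second portions.

TAG_BITS, INDEX_BITS, OFFSET_BITS = 4, 4, 4  # assumed field widths

def split(addr):
    offset = addr & ((1 << OFFSET_BITS) - 1)                 # third portion
    index = (addr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)  # second portion
    tag = addr >> (OFFSET_BITS + INDEX_BITS)                 # first portion
    return tag, index, offset

cache = {}  # index -> (tag, data): cache lines indexed by the second portion
table = {}  # tag -> set of indices: table entries indexed by the first portion

def fill(addr, data):
    tag, index, _ = split(addr)
    cache[index] = (tag, data)
    table.setdefault(tag, set()).add(index)  # record the second portion

fill(0x1A3, b"line")
tag, index, offset = split(0x1A3)
print(tag, index, offset)   # 1 10 3
print(index in table[tag])  # True
```

With the table in place, the question "which cache lines belong to this address region?" can be answered from the table entry alone, without sweeping every cache line.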
Thermal event prediction in hybrid memory modules
A controller of a non-volatile dual in-line memory module (NVDIMM) is configured to predict thermal events associated with save and restore operations before starting the save or restore operation. The controller includes a thermal event prediction circuit to predict whether a thermal event will occur in response to a request to perform a save or restore operation, and to cause the controller to perform an action in response to a determination that a thermal event is likely to occur. To predict the thermal event, the controller may be configured to predict a peak temperature of the save or restore operation based on a predicted temperature increase from an initial or starting temperature. The predicted temperature increase may be based on a rate of temperature change during the save or restore operation and the duration of that operation.
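The prediction rule stated above reduces to a simple model: predicted peak = starting temperature + (rate of temperature change × duration). The thermal limit and the example numbers below are assumptions, not values from the patent:

```python
# Go/no-go thermal check before a save/restore operation, per the rule above.

THERMAL_LIMIT_C = 85.0  # assumed maximum safe module temperature (degC)

def predict_peak(initial_c, rate_c_per_s, duration_s):
    """Predicted peak = initial temperature + rate-of-change * duration."""
    return initial_c + rate_c_per_s * duration_s

def thermal_event_likely(initial_c, rate_c_per_s, duration_s):
    """Controller's check: would the operation push the module past the limit?"""
    return predict_peak(initial_c, rate_c_per_s, duration_s) > THERMAL_LIMIT_C

print(predict_peak(60.0, 0.5, 40.0))          # 80.0 degC: below the limit
print(thermal_event_likely(60.0, 0.5, 40.0))  # False: safe to start
print(thermal_event_likely(70.0, 0.5, 40.0))  # True: 90.0 degC, take action
```

When the check returns True, the "action" the abstract mentions could be deferring the operation or throttling it until the starting temperature drops.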
Systems and methods for backing up volatile storage devices
A method for backing up data includes detecting, by volatile storage firmware, that data communication to a volatile storage component is degraded; initiating a direct memory access (DMA) engine to copy the data from the volatile storage component to a non-volatile storage device; and, in response to initiating the copying of the data, initiating a shutdown of the volatile storage component.
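The firmware flow above (detect degradation, DMA the contents out, then shut down) can be sketched as follows. The error-count detection heuristic, the threshold, and all names are assumptions, not the patented mechanism:

```python
# Minimal sketch of the backup-on-degradation flow: firmware detects degraded
# communication, triggers a DMA copy to non-volatile storage, then powers the
# volatile component down.

class VolatileStorage:
    def __init__(self, data):
        self.data = data
        self.link_errors = 0
        self.powered = True

def link_degraded(vol, threshold=3):
    """Assumed detection heuristic: too many link errors means degraded."""
    return vol.link_errors >= threshold

def dma_copy(vol, nonvolatile):
    """DMA engine streams the volatile contents to non-volatile storage."""
    nonvolatile.extend(vol.data)

def firmware_tick(vol, nonvolatile):
    if link_degraded(vol):
        dma_copy(vol, nonvolatile)  # 1. initiate the DMA copy
        vol.powered = False         # 2. then initiate shutdown
        return True
    return False

vol = VolatileStorage(bytearray(b"dram contents"))
flash = bytearray()
vol.link_errors = 5
print(firmware_tick(vol, flash))  # True: backup was triggered
print(bytes(flash))               # b'dram contents'
print(vol.powered)                # False: component shut down after the copy
```

Ordering matters here: the shutdown is initiated only after the copy has been initiated, so the volatile contents are preserved before power is removed.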