Patent classifications
G06F9/5016
SYSTEM AND METHODS FOR TRANSACTION-BASED PROCESS MANAGEMENT
Systems and methods for transaction/file-based management of a plurality of processes associated with various jobs are provided. Through the management of discrete applications, a file distribution manager/scheduler orchestrates automated execution of different types of jobs. The processes executed for the various jobs can vary based on job type or other parameters.
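The abstract's core idea, that the sequence of processes varies by job type, can be sketched roughly as below. This is an illustrative sketch only, not the patented implementation; the job types, step names, and `PIPELINES` table are all hypothetical.

```python
# Hypothetical per-job-type process sequences; a scheduler looks up the
# pipeline for a job's type and runs its processes in order.
PIPELINES = {
    "ingest": ["validate", "store"],
    "report": ["validate", "aggregate", "render"],
}

def run_job(job_type, payload):
    """Run the process sequence configured for this job type."""
    steps = PIPELINES.get(job_type, [])
    log = []
    for step in steps:
        # A real manager would launch a discrete application per step;
        # here we just record what would run.
        log.append(f"{step}:{payload}")
    return log

assert run_job("report", "q3.csv") == [
    "validate:q3.csv", "aggregate:q3.csv", "render:q3.csv"]
```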
MEMORY POOLING BETWEEN SELECTED MEMORY RESOURCES
Apparatuses, systems, and methods related to memory pooling between selected memory resources are described. A system using a memory pool formed as such may enable performance of functions, including automated functions critical for prevention of damage to a product, personnel safety, and/or reliable operation, based on increased access to data that may improve performance of a mission profile. For instance, one apparatus described herein includes a memory resource, a processing resource coupled to the memory resource, and a transceiver resource coupled to the processing resource. The memory resource, the processing resource, and the transceiver resource are configured to enable formation of a memory pool between the memory resource and another memory resource at another apparatus responsive to a request to access the other memory resource transmitted from the processing resource via the transceiver.
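The request/grant flow the abstract describes, where one apparatus asks to access another's memory resource and a pool is formed on success, might look like the following sketch. All class and field names here are assumptions for illustration, not the patent's terminology for its hardware resources.

```python
# Illustrative sketch: an apparatus requests access to another apparatus's
# memory resource; on grant, the two resources appear as one logical pool.

class Apparatus:
    def __init__(self, name, memory_bytes):
        self.name = name
        self.memory_bytes = memory_bytes

class MemoryPool:
    def __init__(self, members):
        self.members = members

    def capacity(self):
        # The pool's usable capacity spans all member memory resources.
        return sum(a.memory_bytes for a in self.members)

def request_pool(requester, target, granted=True):
    """Stand-in for the transceiver exchange: form a pool between two
    apparatuses only if the access request is granted."""
    if not granted:
        return None
    return MemoryPool([requester, target])

a = Apparatus("vehicle-ecu", 512)
b = Apparatus("sensor-hub", 256)
pool = request_pool(a, b)
assert pool.capacity() == 768
```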
ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF
Disclosed is an electronic apparatus including: a first memory; a second memory; and a processor configured to: load a plurality of processes of an application into the first memory, identify a process switched to an inactivated state among the plurality of processes loaded into the first memory, store data of the process switched to the inactivated state in an area of the second memory by a sequential access method, and load the data of the process stored in the area of the second memory into the first memory based on the process being restored from the inactivated state to an activated state.
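The swap flow in this abstract, serializing an inactive process's data into a sequentially written region of a second memory and restoring it on reactivation, can be sketched as below. This is a minimal simulation with Python dicts and a `bytearray`; the data structures and function names are assumptions, not the claimed apparatus.

```python
# Illustrative sketch: inactive process data is written to a second memory
# by sequential (append-only) access, then restored when reactivated.
import pickle

first_memory = {}            # fast memory: process id -> live data
second_memory = bytearray()  # slow memory, written by sequential append
swap_index = {}              # process id -> (offset, length) in second_memory

def deactivate(pid):
    """Move a process's data out of first memory via sequential writes."""
    blob = pickle.dumps(first_memory.pop(pid))
    swap_index[pid] = (len(second_memory), len(blob))
    second_memory.extend(blob)  # append-only, i.e. sequential access

def activate(pid):
    """Restore a process's data into first memory."""
    offset, length = swap_index.pop(pid)
    first_memory[pid] = pickle.loads(bytes(second_memory[offset:offset + length]))

first_memory["editor"] = {"cursor": 42}
deactivate("editor")
assert "editor" not in first_memory
activate("editor")
assert first_memory["editor"] == {"cursor": 42}
```

Writing swapped-out data append-only is what makes the second-memory access pattern sequential, which is the property the abstract emphasizes.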
MEMORY ALLOCATION METHOD, RELATED DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
This application provides a memory allocation method. The method includes: obtaining a computation graph corresponding to a neural network; and sequentially allocating memory space to M pieces of tensor data based on a sorting result of the M pieces of tensor data. If at least a part of the allocated memory space can be reused for one of the M pieces of tensor data, that reusable part is allocated to the tensor data, where the allocated memory space is memory space that has been allocated to the M pieces of tensor data before the tensor data. The sorting result indicates the sequence of allocating memory space to the M pieces of tensor data and is related to information about each of the M pieces of tensor data.
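A common way to realize this kind of reuse, sketched below under assumptions the abstract does not spell out, is a greedy pass over pre-sorted tensors that reuses an earlier block whenever the tensor fits and its lifetime does not overlap the lifetimes already assigned to that block. The lifetime representation and fit test here are illustrative, not the patented criteria.

```python
# Hedged sketch of greedy memory reuse for tensors in a fixed sorted order.

def allocate(tensors):
    """tensors: list of (name, size, start_op, end_op), already sorted.
    Returns (name -> block id, blocks); blocks are reused across
    non-overlapping lifetimes."""
    blocks = []  # each block: {"size": int, "users": [(start, end)]}
    assignment = {}
    for name, size, start, end in tensors:
        for bid, blk in enumerate(blocks):
            fits = size <= blk["size"]
            disjoint = all(end < s or start > e for s, e in blk["users"])
            if fits and disjoint:
                blk["users"].append((start, end))
                assignment[name] = bid
                break
        else:
            # No reusable block: allocate fresh memory space.
            blocks.append({"size": size, "users": [(start, end)]})
            assignment[name] = len(blocks) - 1
    return assignment, blocks

assignment, blocks = allocate([("a", 100, 0, 1), ("b", 100, 2, 3)])
assert assignment == {"a": 0, "b": 0} and len(blocks) == 1
```

Because allocation order is fixed by the sort, the sort key directly controls how much reuse the greedy pass finds, which matches the abstract's emphasis on the sorting result.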
RESOURCE ALLOCATION METHOD AND APPARATUS, AND STORAGE MEDIUM
The method includes: obtaining, in response to a write request for a target data block, a value of a first sub-counter corresponding to the target data block in an integrity tree, where the first sub-counter is a sub-counter of a first shared counter, and a first storage resource of the first sub-counter belongs to a storage resource of the first shared counter; and allocating a second storage resource to the first sub-counter when it is detected that a value obtained after a first value is added to the value of the first sub-counter is greater than a maximum storage value of the first storage resource. In this way, the adjusted storage resource of the first sub-counter is increased, thereby further preventing overflow of the first sub-counter and improving performance of data integrity verification of the integrity tree.
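The overflow-avoidance step can be sketched as follows: before an increment that would exceed the sub-counter's current storage, a wider storage resource is allocated. The bit widths and the doubling policy below are assumptions for illustration, not the patent's allocation scheme.

```python
# Illustrative sketch: a sub-counter of a shared counter is granted a wider
# storage resource when an increment would overflow its current one.

class SubCounter:
    def __init__(self, bits=4):
        self.bits = bits  # width of the first storage resource
        self.value = 0

    def max_value(self):
        return (1 << self.bits) - 1

    def add(self, delta=1):
        if self.value + delta > self.max_value():
            # Allocate a second, wider storage resource before updating,
            # so the sub-counter never overflows.
            self.bits *= 2
        self.value += delta

c = SubCounter(bits=4)
for _ in range(20):
    c.add()
assert c.value == 20 and c.bits == 8
```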
ALLOCATING MEMORY AND REDIRECTING MEMORY WRITES IN A CLOUD COMPUTING SYSTEM BASED ON TEMPERATURE OF MEMORY MODULES
Systems and methods for allocating memory and redirecting data writes based on the temperature of memory modules in a cloud computing system are described. A method includes maintaining temperature profiles for a first plurality of memory modules and a second plurality of memory modules. The method includes automatically redirecting a first request to write to memory, from a first compute entity being executed by the first processor, to a selected memory chip, whose temperature does not meet or exceed a temperature threshold, from among a first plurality of memory chips included in at least the first plurality of memory modules, and automatically redirecting a second request to write to memory, from a second compute entity being executed by the second processor, to a selected memory chip, whose temperature does not meet or exceed the temperature threshold, from among a second plurality of memory chips included in at least the second plurality of memory modules.
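The selection step, redirecting a write to a chip whose temperature stays below the threshold, reduces to a simple filter. In this sketch the chip identifiers, temperatures, and the threshold value are all made up for illustration.

```python
# Hedged sketch: pick the first memory chip whose reported temperature does
# not meet or exceed the threshold; return None if every chip is too hot.

TEMP_THRESHOLD_C = 85  # illustrative threshold

def select_chip(chips):
    """chips: list of (chip_id, temperature_c)."""
    for chip_id, temp in chips:
        if temp < TEMP_THRESHOLD_C:
            return chip_id
    return None

modules = [("dimm0-chip0", 91), ("dimm0-chip1", 78), ("dimm1-chip0", 60)]
assert select_chip(modules) == "dimm0-chip1"
```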
Distributed Processing System
A distributed processing system to which a plurality of distributed nodes are connected, each of the distributed nodes including a plurality of arithmetic devices and an interconnect device, wherein, in the interconnect device and/or the arithmetic devices of one of the distributed nodes, memory areas are assigned to each job to be processed by the distributed processing system, and direct memory access between memories for processing the job is executed at least between interconnect devices, between arithmetic devices or between an interconnect device and an arithmetic device.
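The per-job memory assignment and direct transfer between devices can be simulated as below. Plain buffers stand in for device memories and a copy stands in for DMA; the job and device identifiers are hypothetical.

```python
# Hedged sketch: each job is assigned memory areas on the devices that
# process it, and a "DMA" transfer copies directly between two areas.

areas = {}  # (job_id, device_id) -> bytearray

def assign_area(job_id, device_id, size):
    areas[(job_id, device_id)] = bytearray(size)

def dma_copy(job_id, src_device, dst_device, n):
    """Direct copy between two devices' areas for the same job, without
    staging through a host buffer (the DMA analogy)."""
    src = areas[(job_id, src_device)]
    dst = areas[(job_id, dst_device)]
    dst[:n] = src[:n]

assign_area("job-7", "interconnect-0", 8)
assign_area("job-7", "gpu-1", 8)
areas[("job-7", "interconnect-0")][:4] = b"data"
dma_copy("job-7", "interconnect-0", "gpu-1", 4)
assert bytes(areas[("job-7", "gpu-1")][:4]) == b"data"
```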
Memory management in graphics and compute application programming interfaces
Methods are provided for creating objects in a way that permits an API client to explicitly participate in memory management for an object created using the API. Methods for managing data object memory include requesting memory requirements for an object using an API and expressly allocating a memory location for the object based on the memory requirements. Methods are also provided for cloning objects such that a state of the object remains unchanged from the original object to the cloned object or can be explicitly specified.
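The query-then-bind pattern described here, where the client asks the API for an object's memory requirements and then explicitly supplies the allocation, resembles the style of explicit graphics APIs such as Vulkan. The sketch below uses entirely hypothetical class and method names, not the actual API in question.

```python
# Sketch of the query-then-allocate pattern: the API reports requirements,
# the client allocates and binds memory explicitly.

class MemoryRequirements:
    def __init__(self, size, alignment):
        self.size = size
        self.alignment = alignment

class Buffer:
    def __init__(self, size):
        self.size = size
        self.memory = None

    def get_memory_requirements(self):
        # The API reports how much backing memory the object needs,
        # padded to an (illustrative) alignment.
        align = 256
        padded = (self.size + align - 1) // align * align
        return MemoryRequirements(padded, align)

    def bind_memory(self, memory):
        # The client, not the API, chooses and supplies the allocation.
        self.memory = memory

buf = Buffer(1000)
req = buf.get_memory_requirements()
buf.bind_memory(bytearray(req.size))
assert req.size == 1024 and len(buf.memory) == 1024
```

Splitting the query from the bind is what lets the client participate in memory management, e.g. sub-allocating many objects from one large block.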
Deep learning heterogeneous computing method based on layer-wide memory allocation and system thereof
A deep learning heterogeneous computing method based on layer-wide memory allocation, at least comprising steps of: traversing a neural network model so as to acquire a training operational sequence and a number of layers L thereof; calculating a memory room R_i required by data involved in operation at the i-th layer of the neural network model under a double-buffer configuration, where 1≤i≤L; altering a layer structure of the i-th layer and updating the training operational sequence; distributing all the data across a memory room of the CPU and the memory room of the GPU according to a data placement method; and performing iterative computation at each said layer successively based on the training operational sequence so as to complete neural network training.
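Two of the listed steps, sizing each layer under double buffering and placing data across CPU and GPU memory, can be sketched as below. The doubling of the working set and the greedy spill-to-CPU placement are illustrative assumptions, not the patent's actual sizing formula or placement method.

```python
# Hedged sketch: double buffering holds two copies of each layer's working
# set so transfer and compute can overlap; placement greedily keeps layers
# on the GPU until its capacity is exhausted, then spills to CPU memory.

def layer_memory_double_buffered(layer_sizes):
    """Return the memory R_i needed per layer with double buffering."""
    return [2 * s for s in layer_sizes]

def place(layer_sizes, gpu_capacity):
    """Greedy stand-in for the data placement step."""
    reqs = layer_memory_double_buffered(layer_sizes)
    placement, used = [], 0
    for r in reqs:
        if used + r <= gpu_capacity:
            placement.append("gpu")
            used += r
        else:
            placement.append("cpu")
    return placement

assert place([10, 20, 30], gpu_capacity=70) == ["gpu", "gpu", "cpu"]
```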
Configurable NVM set to tradeoff between performance and user space
An embodiment of an electronic apparatus may include one or more substrates, and logic coupled to the one or more substrates, the logic to determine a set of requirements for a persistent storage media based on input from an agent, dedicate one or more banks of the persistent storage media to the agent based on the set of requirements, and configure at least one of the dedicated one or more banks of the persistent storage media at a program mode width which is narrower than a native maximum program mode width for the persistent storage media. Other embodiments are disclosed and claimed.
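The capacity side of the tradeoff can be illustrated with simple arithmetic: configuring some banks at a narrower program mode width (fewer bits per cell than the native maximum) shrinks user space in exchange for the performance benefits of the narrower mode. The cell counts and bit widths below are made-up numbers for illustration.

```python
# Hedged sketch: narrowing some banks from the native program mode width
# (e.g. 4 bits/cell) to a narrower mode (e.g. 1 bit/cell) trades capacity
# for performance.

NATIVE_BITS_PER_CELL = 4  # illustrative native maximum program mode width

def total_capacity_bits(n_banks, cells_per_bank, narrow_banks, narrow_bits=1):
    """Total capacity after configuring narrow_banks at narrow_bits/cell."""
    assert narrow_bits < NATIVE_BITS_PER_CELL
    wide = (n_banks - narrow_banks) * cells_per_bank * NATIVE_BITS_PER_CELL
    narrow = narrow_banks * cells_per_bank * narrow_bits
    return wide + narrow

# 8 banks of 1000 cells; dedicating 2 banks at 1 bit/cell costs
# 2 * 1000 * 3 = 6000 bits of user space relative to the native mode.
assert total_capacity_bits(8, 1000, narrow_banks=2) == 26000
```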