Patent classifications
G06F9/5061
Unbalanced partitioning of database for application data
Provided is a database system and method in which storage is partitioned in an unbalanced format for faster access. In one example, the method may include one or more of receiving a request to store a data record, identifying a partition from among a plurality of partitions of a database based on a shard identifier in the request, automatically determining a unique range of data identifiers designated to the identified partition based on an unbalanced partitioning, determining whether a data identifier of the data record is available within the unique range of data identifiers of the identified partition, and storing the data record at the identified partition in response to determining the data identifier is available. The unbalanced partitioning according to various embodiments reduces the number of partitions that need to be checked during a data insert/access operation of the database.
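The core of the abstract above can be sketched in Python: each partition owns a unique, differently sized identifier range, so an insert checks exactly one partition. All class and method names here are illustrative, not from the patent.

```python
# Minimal sketch of unbalanced partitioning: partitions own disjoint,
# unequally sized ranges of data identifiers, so only the single partition
# whose range covers an identifier needs to be checked on insert.
from bisect import bisect_right

class UnbalancedPartitions:
    def __init__(self, range_starts):
        # range_starts[i] is the first identifier owned by partition i;
        # the ranges may differ in size (the "unbalanced" format).
        # Identifiers are assumed to be >= range_starts[0].
        self.range_starts = sorted(range_starts)
        self.stores = [dict() for _ in range_starts]

    def partition_for(self, data_id):
        # Exactly one partition owns data_id, so one lookup suffices.
        return bisect_right(self.range_starts, data_id) - 1

    def insert(self, data_id, record):
        p = self.partition_for(data_id)
        if data_id in self.stores[p]:
            return False          # identifier not available in its range
        self.stores[p][data_id] = record
        return True

db = UnbalancedPartitions([0, 100, 1000])   # partition 2 is much larger
assert db.partition_for(50) == 0
assert db.partition_for(5000) == 2
assert db.insert(5000, "rec") is True
assert db.insert(5000, "dup") is False      # identifier already taken
```

Because ranges are disjoint, a lookup never fans out across partitions, which is the access-time saving the abstract claims.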
LOCKING AND SYNCHRONIZATION FOR HIERARCHICAL RESOURCE RESERVATION IN A DATA CENTER
An example method of reserving a resource of virtualized infrastructure in a data center on behalf of a client includes: obtaining, by a resource lock manager from a topology manager, a sub-topology for the resource from a resource topology of the virtualized infrastructure; setting, by the resource lock manager, an exclusive lock on the resource and on each of at least one descendant in the sub-topology for the resource, each exclusive lock disallowing any other lock on its respective resource; setting, by the resource lock manager, a limited lock on each ancestor in the sub-topology for the resource, each limited lock allowing any other limited lock on its respective resource; and notifying the client that a reservation of the resource is granted.
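The lock semantics described above can be illustrated with a small sketch: exclusive locks cover the reserved resource and its descendants, while "limited" locks (which coexist only with other limited locks) cover its ancestors. The topology encoding and all names are assumptions for illustration.

```python
# Illustrative hierarchical reservation: exclusive locks down the sub-tree,
# limited locks up the ancestor chain.
class ResourceLockManager:
    def __init__(self, children):
        # children maps a resource to its direct descendants in the topology.
        self.children = children
        self.parent = {c: p for p, cs in children.items() for c in cs}
        self.exclusive = set()
        self.limited = {}             # resource -> count of limited locks

    def _descendants(self, res):
        out, stack = [], [res]
        while stack:
            r = stack.pop()
            out.append(r)
            stack.extend(self.children.get(r, []))
        return out                    # res plus all descendants

    def _ancestors(self, res):
        out, r = [], self.parent.get(res)
        while r is not None:
            out.append(r)
            r = self.parent.get(r)
        return out

    def reserve(self, res):
        sub, anc = self._descendants(res), self._ancestors(res)
        # An exclusive lock disallows any other lock on its resource;
        # a limited lock coexists only with other limited locks.
        if any(r in self.exclusive or self.limited.get(r, 0) for r in sub):
            return False
        if any(r in self.exclusive for r in anc):
            return False
        self.exclusive.update(sub)
        for r in anc:
            self.limited[r] = self.limited.get(r, 0) + 1
        return True

topo = {"dc": ["cluster"], "cluster": ["host1", "host2"]}
mgr = ResourceLockManager(topo)
assert mgr.reserve("host1") is True
assert mgr.reserve("cluster") is False   # blocked by the lock below it
assert mgr.reserve("host2") is True      # limited ancestor locks coexist
```

The limited locks are what let two sibling reservations proceed concurrently while still preventing anyone from exclusively reserving a shared ancestor.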
Resource monitor for monitoring long-standing computing resources
Disclosed herein are system, apparatus, article of manufacture, method, and/or computer program product embodiments for monitoring long-standing computing resources. An apparatus may operate by receiving a cloud monitoring notification, where the cloud monitoring notification may indicate an occurrence of a monitored condition. The apparatus may then operate by scanning a cluster computing system for a resource having a client-assigned resource identifier and a computing resource attribute based on a resource identifier scan parameter and a resource attribute scan parameter. The apparatus may further operate by generating a resource notification request based on the scanning of the cluster computing system and transmitting the resource notification request to a communications system to notify a user that the resource has a computing resource attribute that matches the resource attribute scan parameter.
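The scan step above reduces to filtering cluster resources by the two scan parameters. The concrete parameter shapes below (an identifier regex and a minimum age marking a resource "long-standing") are assumptions for illustration.

```python
# Hypothetical scan: match resources by a resource-identifier scan parameter
# (regex on the client-assigned identifier) and a resource-attribute scan
# parameter (minimum age in hours).
import re

def scan_long_standing(resources, id_pattern, min_age_hours):
    # resources: dicts with a client-assigned "id" and an "age_hours" attribute.
    return [r for r in resources
            if re.match(id_pattern, r["id"]) and r["age_hours"] >= min_age_hours]

cluster = [
    {"id": "team-a-vm-1", "age_hours": 300},
    {"id": "team-a-vm-2", "age_hours": 2},
    {"id": "team-b-vm-1", "age_hours": 500},
]
matches = scan_long_standing(cluster, r"team-a-", 24)
assert [r["id"] for r in matches] == ["team-a-vm-1"]
```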
Method and apparatus for stateless parallel processing of tasks and workflows
In a method for parallel processing of a data stream, a processing task is received to process the data stream that includes a plurality of segments. A split operation is performed on the data stream to split the plurality of segments into N sub-streams, where N is a positive integer. Each of the N sub-streams includes one or more segments of the plurality of segments. N sub-processing tasks are performed on the N sub-streams to generate N processed sub-streams. A merge operation is performed on the N processed sub-streams based on a merge buffer to generate a merged output data stream. The merge buffer includes an output iFIFO buffer and N sub-output iFIFO buffers coupled to the output iFIFO buffer. The merged output data stream is identical to an output data stream that is generated when the processing task is applied directly to the data stream without the split operation.
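The split/merge contract above can be demonstrated with plain lists standing in for the iFIFO buffers (the buffer model and function names are simplifications, not the patent's structures): splitting, processing each sub-stream independently, and merging in segment order must equal processing the whole stream directly.

```python
# Sketch of the split/process/merge pipeline with order-preserving merge.
def split(stream, n):
    # Round-robin split; each sub-stream keeps (index, segment) pairs so the
    # merge buffer can restore the original segment order.
    subs = [[] for _ in range(n)]
    for i, seg in enumerate(stream):
        subs[i % n].append((i, seg))
    return subs

def merge(processed_subs):
    # Stand-in for the output iFIFO fed by N sub-output iFIFOs: drain the
    # sub-buffers and emit segments in their original index order.
    merged = [pair for sub in processed_subs for pair in sub]
    return [seg for _, seg in sorted(merged)]

task = str.upper                        # the per-segment processing task
stream = ["a", "b", "c", "d", "e"]
subs = split(stream, 3)
processed = [[(i, task(s)) for i, s in sub] for sub in subs]
# Identical to applying the task directly to the unsplit stream:
assert merge(processed) == [task(s) for s in stream]
```

Tagging each segment with its index is what makes the sub-processing stateless: no sub-task needs to know what the others produced.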
EDGE FUNCTION BURSTING
One example method includes determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site; invoking a client agent; in response to invoking the client agent, receiving an execution manifest; determining, by the client agent, where to execute the function, wherein the determining comprises identifying a target execution environment for the function and is based in part on information contained in the execution manifest; and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment, where the execution manifest facilitates execution of the function in the target execution environment.
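A minimal sketch of the client agent's placement decision follows. The manifest fields and the memory-based selection rule are assumptions for illustration; the patent does not specify the manifest contents.

```python
# Hypothetical client-agent decision: run locally if resources suffice,
# otherwise burst to the first candidate environment in the execution
# manifest that meets the function's requirement.
def choose_target(manifest, local_free_mem_mb):
    if local_free_mem_mb >= manifest["required_mem_mb"]:
        return "local"
    for env in manifest["candidate_environments"]:
        if env["free_mem_mb"] >= manifest["required_mem_mb"]:
            return env["name"]
    return None                       # nowhere to burst

manifest = {
    "required_mem_mb": 4096,
    "candidate_environments": [
        {"name": "nearby-edge", "free_mem_mb": 2048},
        {"name": "core-cloud", "free_mem_mb": 16384},
    ],
}
assert choose_target(manifest, 1024) == "core-cloud"
assert choose_target(manifest, 8192) == "local"
```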
System and method for low latency node local scheduling in distributed resource management
A system for allocation of resources and processing jobs within a distributed system includes a processor and a memory coupled to the processor. The memory includes at least one process and at least one resource allocator. The process is adapted for processing jobs within the distributed system, which receives jobs to be processed. The resource allocator is communicably coupled with the at least one process and is adapted to generate one or more sub-processes, within a limit of one or more resources allocated to the process, for processing jobs.
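The low-latency property comes from making the admission decision node-locally, inside the limit already granted to the process. A minimal sketch (all names invented) of such a node-local allocator:

```python
# Node-local scheduling sketch: sub-processes are started only within the
# resource limit already allocated to the parent process, with no round trip
# to a central cluster-wide scheduler.
class ResourceAllocator:
    def __init__(self, allocated_slots):
        self.allocated_slots = allocated_slots   # limit granted to the process
        self.running = []

    def try_start(self, job):
        # Purely local decision: no network call to the resource manager.
        if len(self.running) >= self.allocated_slots:
            return False
        self.running.append(job)
        return True

alloc = ResourceAllocator(allocated_slots=2)
assert alloc.try_start("job-1") is True
assert alloc.try_start("job-2") is True
assert alloc.try_start("job-3") is False   # over the allocated limit
```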
CITY MANAGEMENT SUPPORT APPARATUS, CITY MANAGEMENT SUPPORT METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
The city management support apparatus is an apparatus supporting management of a city in which a plurality of services sharing a physical resource are provided. The city management support apparatus receives an input of information on a provision status of the resource, and receives an input of a service definition for each of the plurality of services. The city management support apparatus calculates a time transition of dependency of the plurality of services on the resource based on the service definition for each of the plurality of services, and detects a competition for acquisition of the resource among the plurality of services based on the time transition of the dependency. The city management support apparatus generates a proposed amendment to the service definition for at least one of the plurality of services so as to resolve the competition for acquisition of the resource among the plurality of services.
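The detection step amounts to summing each service's time-varying demand on the shared resource and flagging the instants where total demand exceeds capacity. The data shapes and service names below are illustrative assumptions.

```python
# Sketch of competition detection from a time transition of dependency:
# flag every time step where aggregate demand exceeds provisioned capacity.
def detect_contention(service_demands, capacity):
    # service_demands: {service: [demand at t0, t1, ...]}, equal lengths.
    horizon = len(next(iter(service_demands.values())))
    totals = [sum(d[t] for d in service_demands.values()) for t in range(horizon)]
    return [t for t, total in enumerate(totals) if total > capacity]

demands = {
    "delivery-drones": [2, 5, 5, 1],
    "traffic-sensing": [3, 4, 2, 2],
}
assert detect_contention(demands, capacity=8) == [1]   # both services peak at t=1
```

A proposed amendment would then shift or shrink one service's demand profile so that no flagged time step remains.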
SYSTEM AND METHOD FOR ALLOCATION OF A SPECIALIZED WORKLOAD BASED ON AGGREGATION AND PARTITIONING INFORMATION
A method for managing specialized hardware resources includes obtaining, by a resource partitioning agent, a request for a specialized workload, in response to the request: obtaining aggregation capability information corresponding to the specialized hardware resources in an information handling system, obtaining partitioning capability information associated with the specialized hardware resources, and initiating allocation of a set of specialized hardware resources to the specialized workload based on the aggregation capability information and the partitioning capability information.
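A sketch of how the two kinds of capability information could drive allocation: prefer partitioning one large device, and fall back to aggregating smaller ones. The device schema and selection order are assumptions, not the patent's method.

```python
# Hypothetical allocation using partitioning capability (can one device be
# sliced?) and aggregation capability (can several devices be pooled?).
def allocate(devices, requested_units):
    # devices: dicts with "name", "units", "partitionable", "aggregatable".
    for d in devices:
        if d["partitionable"] and d["units"] >= requested_units:
            return [d["name"]]                # slice one large device
    pool, total = [], 0
    for d in devices:
        if d["aggregatable"]:
            pool.append(d["name"])
            total += d["units"]
            if total >= requested_units:
                return pool                   # aggregate smaller devices
    return None                               # request cannot be satisfied

devs = [
    {"name": "gpu0", "units": 2, "partitionable": False, "aggregatable": True},
    {"name": "gpu1", "units": 2, "partitionable": False, "aggregatable": True},
    {"name": "gpu2", "units": 8, "partitionable": True, "aggregatable": False},
]
assert allocate(devs, 6) == ["gpu2"]          # partition the large device
devs2 = [d for d in devs if d["name"] != "gpu2"]
assert allocate(devs2, 4) == ["gpu0", "gpu1"]  # pool via aggregation
```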
Allocation and placement of resources for network computation
Techniques for operating a computing system to perform neural network operations are disclosed. In one example, a method comprises receiving a neural network model, determining a sequence of neural network operations based on data dependency in the neural network model, and determining a set of instructions to map the sequence of neural network operations to the processing resources of the neural network processor. The method further comprises determining, based on a set of memory access operations included in the set of instructions, a first set of memory references associated with a first location of an external memory to store the input data and a second set of memory references associated with a second location of the external memory to store the output data, and generating an instruction file including the set of instructions, the first set of memory references and the second set of memory references.
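The first step above, determining a sequence of operations from data dependency, is a topological sort of the model graph. A minimal sketch (operation names invented):

```python
# Derive a valid execution sequence of neural-network operations from the
# model's data dependencies via depth-first topological sort.
def schedule(deps):
    # deps: op -> list of ops whose outputs it consumes.
    order, done = [], set()

    def visit(op):
        if op in done:
            return
        for d in deps.get(op, []):
            visit(d)                 # producers are scheduled first
        done.add(op)
        order.append(op)

    for op in deps:
        visit(op)
    return order

model = {"conv1": [], "relu1": ["conv1"], "conv2": ["relu1"], "fc": ["conv2"]}
seq = schedule(model)
assert seq.index("conv1") < seq.index("relu1") < seq.index("conv2") < seq.index("fc")
```

Mapping this sequence to processing resources and binding input/output memory references would then follow from walking `seq` in order.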
Communication optimizations for distributed machine learning
Embodiments described herein provide a system to configure distributed training of a neural network, the system comprising memory to store a library to facilitate data transmission during distributed training of the neural network; a network interface to enable transmission and receipt of configuration data associated with a set of worker nodes, the worker nodes configured to perform distributed training of the neural network; and a processor to execute instructions provided by the library. The instructions cause the processor to create one or more groups of the worker nodes, the one or more groups of worker nodes to be created based on a communication pattern for messages to be transmitted between the worker nodes during distributed training of the neural network. The processor can transparently adjust communication paths between worker nodes based on the communication pattern.
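The group-creation step can be sketched as clustering workers that exchange messages, so collectives stay within a group. The union-find grouping rule is an assumption for illustration; the patent does not fix a particular algorithm.

```python
# Sketch: create worker-node groups from a communication pattern by merging
# workers that communicate (union-find), so messages stay within a group.
def group_workers(workers, pattern):
    # pattern: pairs of workers that exchange messages during training.
    parent = {w: w for w in workers}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]    # path halving
            w = parent[w]
        return w

    for a, b in pattern:
        parent[find(a)] = find(b)
    groups = {}
    for w in workers:
        groups.setdefault(find(w), []).append(w)
    return sorted(sorted(g) for g in groups.values())

workers = ["w0", "w1", "w2", "w3"]
pattern = [("w0", "w1"), ("w2", "w3")]
assert group_workers(workers, pattern) == [["w0", "w1"], ["w2", "w3"]]
```

Adjusting communication paths transparently would then amount to recomputing these groups when the pattern changes, without the training script's involvement.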