G06F9/5011

METHOD AND SYSTEM FOR BUSINESS YIELD AWARE WORKLOAD DISTRIBUTION

A disclosed workload distribution method determines a yield index for each microservice associated with a containerized application executing on a potentially heterogeneous cluster of information handling systems. Each microservice is then assigned to one of N priority categories based on its yield index, where three is an acceptable, but not exclusive, value of N. Resource configuration profiles are maintained for each of the priority categories. Each resource configuration profile assigns a resource configuration to each microservice. An information handling resource associated with a particular microservice is configured in accordance with the resource configuration assigned to the particular microservice by a particular resource configuration profile corresponding to the yield index. In this manner, workloads can be assigned and resources configured in accordance with the containerized application's priorities as exposed by the yield indices.
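The category mapping and per-category profiles described above can be sketched as follows. This is a minimal illustration, not the claimed implementation; the threshold values, the N=3 category names, and the `cpu_shares`/`memory_mb` profile fields are all assumptions.

```python
def priority_category(yield_index, thresholds=(0.33, 0.66)):
    """Map a microservice's yield index (assumed 0..1) to one of N=3 categories."""
    if yield_index < thresholds[0]:
        return "low"
    if yield_index < thresholds[1]:
        return "medium"
    return "high"

# One resource configuration profile maintained per priority category
# (field names are illustrative).
PROFILES = {
    "low":    {"cpu_shares": 256,  "memory_mb": 512},
    "medium": {"cpu_shares": 512,  "memory_mb": 1024},
    "high":   {"cpu_shares": 1024, "memory_mb": 4096},
}

def configure(microservices):
    """Assign each microservice the resource configuration of its category.

    microservices: {name: yield_index}
    """
    return {name: PROFILES[priority_category(yi)]
            for name, yi in microservices.items()}
```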

CONFIGURING A RESOURCE FOR EXECUTING A COMPUTATIONAL OPERATION

A computing node is disclosed. The computing node comprises processing circuitry configured to cause the computing node to receive a message (102) comprising configuration information for a resource of a data object that is hosted at the computing node and is associated with a computational operation, which computational operation is executable by the computing node. The processing circuitry is further configured to cause the computing node to configure (104) the resource of the data object on the computing node in accordance with the received configuration information, and to execute (106) the computational operation in accordance with the configured resource. Also disclosed are a corresponding server node and methods of operating a computing node and a server node. The computing node may comprise a Lightweight Machine to Machine (LwM2M) client and the server node may comprise an LwM2M server.
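The receive/configure/execute sequence (steps 102, 104, 106) can be modeled with a toy node class. This is a sketch under assumptions: the message shape, the resource identifier format, and the class name are illustrative, and no actual LwM2M protocol handling is shown.

```python
class ComputingNode:
    """Toy model of the disclosed computing node (names are illustrative)."""

    def __init__(self):
        # resource of a data object -> its current configuration
        self.resources = {}

    def receive(self, message):
        # Step 102: the message carries configuration information
        # for a resource of a data object hosted at this node.
        self.resources[message["resource"]] = message["config"]

    def execute(self, resource, operation):
        # Step 104: the resource is configured per the received information;
        # Step 106: the operation executes in accordance with that configuration.
        config = self.resources.get(resource, {})
        return operation(config)
```

A usage example: configuring a (hypothetical) resource with a reporting interval, then executing an operation that reads it.

```python
node = ComputingNode()
node.receive({"resource": "5850/0", "config": {"interval": 10}})
result = node.execute("5850/0", lambda cfg: cfg["interval"] * 2)
```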

Resource dependency system and graphical user interface

A resource dependency system displays two dynamically interactive interfaces in a resource dependency user interface, a hierarchical resource repository and a dependency graph user interface. User interactions on each interface can dynamically update either interface. For example, a selection of a particular resource in the dependency graph user interface causes the system to update the dependency graph user interface to indicate the selection and also updates the hierarchical resource repository to navigate to the appropriate folder corresponding to the stored location of the selected resource. In another example, a selection of a particular resource in the hierarchical resource repository causes the system to update the hierarchical resource repository to indicate the selection and also updates the dependency graph user interface to display an updated graph, indicate the selection and, in some embodiments, focus on the selected resource by zooming into a portion of the graph.
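The bidirectional update between the two interfaces can be sketched as a pair of selection handlers that each update both views. The class and method names here are assumptions for illustration; the actual rendering, graph layout, and zooming are omitted.

```python
class ResourceDependencyUI:
    """Minimal sketch of the two linked views kept in sync."""

    def __init__(self, folder_of):
        # resource -> folder where it is stored in the hierarchical repository
        self.folder_of = folder_of
        self.graph_selection = None   # selection shown in the dependency graph
        self.repo_folder = None       # folder open in the hierarchical repository

    def select_in_graph(self, resource):
        # Indicate the selection in the graph...
        self.graph_selection = resource
        # ...and navigate the repository to the resource's stored location.
        self.repo_folder = self.folder_of[resource]

    def select_in_repo(self, resource):
        # Indicate the selection in the repository...
        self.repo_folder = self.folder_of[resource]
        # ...and update the dependency graph to indicate (and focus) it.
        self.graph_selection = resource
```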

ALLOCATING RESOURCES FOR NETWORK FUNCTION VIRTUALIZATION

Controlling allocation of resources in network function virtualization. Data defining a pool of available physical resources is maintained. Data defining one or more resource allocation rules is identified. An application request is received. Physical resources from the pool are allocated to virtual resources to implement the application request, on the basis of the maintained data, the identified data and the received application request.
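A rule-driven allocation from a maintained pool can be sketched as below. The data shapes (capacity map, predicate rules, per-virtual-resource demands) are assumptions chosen for brevity, not the claimed formats.

```python
def allocate(pool, rules, request):
    """Allocate physical resources to virtual resources for an application request.

    pool:    {physical_resource_id: remaining_capacity}  (maintained data)
    rules:   list of predicates rule(physical_id, virtual_id) -> bool
    request: {virtual_resource_id: required_capacity}
    Returns {virtual_id: physical_id}, or None if the request cannot be met.
    """
    allocation = {}
    for vres, need in request.items():
        candidates = [r for r, cap in pool.items()
                      if cap >= need and all(rule(r, vres) for rule in rules)]
        if not candidates:
            return None                 # no physical resource satisfies the rules
        chosen = candidates[0]          # simple first-fit policy (an assumption)
        pool[chosen] -= need            # consume capacity from the pool
        allocation[vres] = chosen
    return allocation
```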

SYSTEM AND METHOD FOR BALANCING CONTAINERIZED APPLICATION OFFLOADING AND BURST TRANSMISSION FOR THERMAL CONTROL

An information handling system executing a containerized application and burst transmission thermal balance system may comprise a processor executing containerized software applications, the processor executing code instructions to determine that a skin surface temperature of a portion of the chassis is approaching a preset limit, based on a temperature measured by one of a plurality of temperature sensors in the information handling system chassis at a first location, and to determine whether the first location is closer to the antenna or the processor to identify a causal heat source in the information handling system, and a load balancing driver to offload execution of the containerized software applications to an edge computing resource via an antenna when the processor is determined to be the causal heat source.
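The decision logic can be sketched as a small function: check whether the measured temperature approaches the limit, then compare the hot sensor's distance to the processor and to the antenna to pick the causal heat source. The 95% "approaching" margin, the Manhattan distance metric, and the action names are all assumptions.

```python
def thermal_action(sensor_temp, limit, sensor_loc, antenna_loc, cpu_loc):
    """Decide how to rebalance when a chassis skin temperature rises.

    Locations are (x, y) coordinates on the chassis (an illustrative model).
    """
    if sensor_temp < limit * 0.95:      # not yet approaching the preset limit
        return "none"

    def dist(a, b):                     # Manhattan distance (an assumption)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    # The component nearer the hot sensor is treated as the causal heat source.
    if dist(sensor_loc, cpu_loc) < dist(sensor_loc, antenna_loc):
        return "offload_to_edge"        # processor is the causal source
    return "throttle_transmission"      # antenna is the causal source
```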

Precisely tracking memory usage in multi-process computing environment

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for precisely tracking memory usage in a multi-process computing environment. One of the methods includes implementing an instance of a memory usage tracker (MUT) in each process running in a node of a computer system. A MUT can maintain an account of memory usage for each of multiple logical owners running on a process on which the MUT is running. The MUT can determine an actual memory quota for each owner, and enforce the actual memory quota of the owner. Enforcing the actual memory quota of the owner can include receiving each memory allocation request, checking each allocation request and a current state of the account against the actual quota, approving or rejecting each allocation request, communicating the approval or rejection to an underlying memory manager, and updating the owner account for each approved allocation request.
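The quota-enforcement loop, receiving each allocation request, checking it against the owner's account and quota, approving or rejecting, and updating the account, can be sketched as below. The class shape and the fixed per-owner quotas are assumptions; the real MUT derives actual quotas dynamically and talks to an underlying memory manager.

```python
class MemoryUsageTracker:
    """Per-process MUT sketch tracking an account per logical owner."""

    def __init__(self, quotas):
        self.quotas = dict(quotas)               # owner -> actual memory quota
        self.accounts = {o: 0 for o in quotas}   # owner -> bytes accounted

    def request(self, owner, nbytes):
        """Check one allocation request against the owner's quota.

        Returns True (approved) or False (rejected); in the disclosure this
        decision would be communicated to the underlying memory manager.
        """
        if self.accounts[owner] + nbytes > self.quotas[owner]:
            return False                         # reject: quota would be exceeded
        self.accounts[owner] += nbytes           # approve and update the account
        return True
```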

Storage allocation enhancement of microservices based on phases of a microservice run

A method and system are provided for storage allocation enhancement of microservices. A method, carried out at a microservice orchestrator, includes: identifying distinct phases of a run of a microservice container; categorizing the phases of a run of a microservice container, wherein the categorization defines a predicted storage behavior of the microservice container input/output operations in the phase of the microservice container; and providing the categorization in association with the microservice container input/output operations in the phase to a storage system for use in storage allocation of the input/output operations.
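One simple way to categorize phases by predicted I/O behavior is sketched below. The category names, the IOPS thresholds, and the phase/profile dictionary shapes are illustrative assumptions, not the orchestrator's actual scheme.

```python
def categorize_phase(io_profile):
    """Classify a run phase by its observed I/O mix (thresholds are assumed)."""
    reads, writes = io_profile["reads"], io_profile["writes"]
    if reads + writes < 10:
        return "idle"
    return "read_heavy" if reads >= writes else "write_heavy"

def annotate_phases(phases):
    """Attach a category to each phase, for the storage system to use
    when allocating storage for that phase's I/O operations."""
    return [(p["name"], categorize_phase(p["io"])) for p in phases]
```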

Method, system, and computer program product for dynamically scheduling machine learning inference jobs with different quality of services on a shared infrastructure

A method, system, and computer program product for dynamically scheduling machine learning inference jobs receive or determine a plurality of performance profiles associated with a plurality of system resources, wherein each performance profile is associated with a machine learning model; receive a request for system resources for an inference job associated with the machine learning model; determine a system resource of the plurality of system resources for processing the inference job associated with the machine learning model based on the plurality of performance profiles and a quality of service requirement associated with the inference job; assign the system resource to the inference job for processing the inference job; receive result data associated with processing of the inference job with the system resource; and update, based on the result data, a performance profile of the plurality of the performance profiles associated with the system resource and the machine learning model.
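The profile-driven selection and feedback update can be sketched as below. The QoS requirement is modeled as a latency bound and profiles as expected latencies per (resource, model) pair; both modeling choices, and the exponential-average update, are assumptions for illustration.

```python
class InferenceScheduler:
    """Sketch: choose the lowest-latency resource meeting a job's QoS bound."""

    def __init__(self, profiles):
        # (resource, model) -> expected latency in ms (assumed profile shape)
        self.profiles = dict(profiles)

    def schedule(self, model, max_latency_ms):
        """Determine a system resource based on profiles and the QoS requirement."""
        ok = [(lat, res) for (res, m), lat in self.profiles.items()
              if m == model and lat <= max_latency_ms]
        return min(ok)[1] if ok else None

    def update(self, resource, model, observed_ms, alpha=0.5):
        """Fold result data back into the profile (exponential average)."""
        key = (resource, model)
        self.profiles[key] = (1 - alpha) * self.profiles[key] + alpha * observed_ms
```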

Dynamic adjustment of response time

Examples described herein relate to a method and system for determining a response time for an action. A request for an action may be communicated from a source entity to a target entity. The action is generated by the source entity and is to be responded to by the target entity. Further, a response time corresponding to the action may be determined based on prior execution experience of one or more jobs associated with the action and a learning rate. Thereafter, the source entity may be allowed to wait for a response corresponding to a completion of the action from the target entity for at least a time duration corresponding to the response time.
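One plausible reading of "prior execution experience and a learning rate" is an exponentially weighted estimate, sketched below. The update rule and parameter names are assumptions; the disclosure does not fix a specific formula.

```python
def response_time(prior_durations, learning_rate, initial_estimate=0.0):
    """Estimate how long the source entity should wait for a response.

    Each observed job duration nudges the estimate by the learning rate:
        estimate = (1 - lr) * estimate + lr * duration
    """
    estimate = initial_estimate
    for duration in prior_durations:        # prior execution experience
        estimate = (1 - learning_rate) * estimate + learning_rate * duration
    return estimate
```

With this rule, a higher learning rate makes the wait time track recent job durations more closely, while a lower rate smooths over transient slowdowns.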

Signal processing coordination among digital voice assistant computing devices
11705127 · 2023-07-18

Coordinating signal processing among computing devices in a voice-driven computing environment is provided. A first and second digital assistant can detect an input audio signal, perform a signal quality check, and provide indications that the first and second digital assistants are operational to process the input audio signal. A system can select the first digital assistant for further processing. The system can receive, from the first digital assistant, data packets including a command. The system can generate, for a network connected device selected from a plurality of network connected devices, an action data structure based on the data packets, and transmit the action data structure to the selected network connected device.
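The selection step, choosing among assistants that passed their signal quality check, can be sketched as below. The tuple shape and the idea of ranking by a numeric quality score are assumptions; the disclosure only requires selecting one operational assistant for further processing.

```python
def select_assistant(indications):
    """Select one digital assistant for further processing of the input audio.

    indications: list of (assistant_id, operational, quality) tuples, where
    `operational` reflects the assistant's signal quality check and `quality`
    is an assumed numeric score. Returns the chosen assistant id, or None.
    """
    candidates = [(quality, assistant)
                  for assistant, operational, quality in indications
                  if operational]
    return max(candidates)[1] if candidates else None
```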