
SCHEDULING WORKLOADS ON PARTITIONED RESOURCES OF A HOST SYSTEM IN A CONTAINER-ORCHESTRATION SYSTEM
20220326986 · 2022-10-13

Techniques of scheduling workload(s) on partitioned resources of host systems are described. The techniques can be used, for example, in a container-orchestration system. One technique includes retrieving information characterizing at least one schedulable partition and determining an availability and a suitability of one or more of the schedulable partition(s) for executing a workload in view of the information. Each of the schedulable partition(s) includes resources of one or more host systems. The technique also includes selecting one or more of the schedulable partition(s) to execute the workload in view of the availability and the suitability.
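
The selection described above can be sketched as a simple filter-then-rank step. This is an illustrative reading only: the names `SchedulablePartition` and `select_partition`, and the choice of free CPUs as the ranking key, are assumptions, not terms from the patent.

```python
# Hypothetical sketch: filter partitions for availability and suitability,
# then pick one candidate. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class SchedulablePartition:
    name: str
    free_cpus: int        # available CPU cores across the host system(s)
    free_memory_gb: int   # available memory across the host system(s)
    labels: set           # capabilities, e.g. {"gpu", "ssd"}

def select_partition(partitions, cpu_req, mem_req, required_labels):
    """Availability: enough free resources. Suitability: required labels
    present. Among candidates, prefer the least-loaded partition."""
    candidates = [
        p for p in partitions
        if p.free_cpus >= cpu_req
        and p.free_memory_gb >= mem_req
        and required_labels <= p.labels
    ]
    if not candidates:
        return None
    # Spread load by choosing the partition with the most free CPUs.
    return max(candidates, key=lambda p: p.free_cpus)
```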

Allocation of accelerator resources based on job type
11663026 · 2023-05-30

A resource use method, an electronic device, and a computer program product are provided in embodiments of the present disclosure. The method includes determining a plurality of jobs requesting to use accelerator resources to accelerate data processing. The plurality of jobs are initiated by at least one virtual machine. The method further includes allocating available accelerator resources to the plurality of jobs based on job types of the plurality of jobs. The method further includes causing the plurality of jobs to be executed using the allocated accelerator resources. With the embodiments of the present disclosure, accelerator resources can be dynamically allocated, thereby improving the overall performance of a system.
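
One way to picture job-type-based allocation is a per-type accelerator pool with round-robin assignment within each pool. The type names and pool contents below are invented for illustration and do not come from the disclosure.

```python
# Illustrative sketch: map each job to an accelerator drawn from the
# pool for its job type, round-robin within that pool.
from collections import defaultdict

ACCELERATOR_POOLS = {"training": ["gpu0", "gpu1"], "inference": ["fpga0"]}

def allocate(jobs):
    """jobs is a list of (job_id, job_type); returns job_id -> accelerator,
    or None when no pool serves that job type."""
    counters = defaultdict(int)
    assignment = {}
    for job_id, job_type in jobs:
        pool = ACCELERATOR_POOLS.get(job_type)
        if not pool:
            assignment[job_id] = None
            continue
        assignment[job_id] = pool[counters[job_type] % len(pool)]
        counters[job_type] += 1
    return assignment
```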

PERFORMANCE TUNING IN A NETWORK SYSTEM

A container-based orchestration system includes a master node and a plurality of worker nodes. The master node can receive, from each agent executing on a corresponding worker node, node characteristics associated with the worker node. The master node can determine, for each worker node, one or more parameters corresponding to the node characteristics associated with the corresponding worker node and a node profile of the worker node and provide the parameters to the agent executing on the corresponding worker node. The agent configures the worker node in accordance with the parameters. In response to receiving a request to deploy a pod to a worker node, the master node can select a worker node to receive the pod based on the node characteristics and the pod characteristics. The agent can configure the selected worker node to execute workloads of the pod in accordance with the one or more parameters.
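
The master node's parameter determination can be sketched as a mapping from node characteristics and profile to tuning parameters. The profile name and parameter keys here are hypothetical stand-ins, not values from the abstract.

```python
# Hypothetical mapping from reported node characteristics to tuning
# parameters the agent would apply on its worker node.
def tune_parameters(node):
    """node is a dict of characteristics reported by the agent."""
    params = {}
    if node.get("profile") == "network-intensive":
        # Assumed example: widen the listen backlog on network-heavy nodes.
        params["net.core.somaxconn"] = 4096
    cpus = node.get("cpus", 1)
    # Oversubscribe worker threads only on larger nodes.
    params["worker_threads"] = cpus * 2 if cpus >= 16 else cpus
    return params
```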

PLATFORM RESOURCE SELECTION FOR UPSCALER OPERATIONS

Compound processing of an upscaler operation using platform resources includes: identifying a plurality of platform resources available to perform an upscaling operation, wherein the plurality of platform resources includes one or more graphics processor units (GPUs) and one or more accelerated processing units (APUs); dynamically assigning workloads of the upscaling operation to one or more of the platform resources based on a modality of the upscaling operation; and processing the workloads of the upscaling operation by the platform resources to which the workloads are assigned.
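
Modality-based assignment can be sketched as choosing a device pool per modality and distributing workloads within it. The modality names ("spatial", "temporal") and the routing policy are assumptions for illustration, not the patent's terms.

```python
# Illustrative routing: one device pool per modality, round-robin inside.
def assign_workloads(workloads, modality, gpus, apus):
    """Route the upscaling workloads to GPUs for one assumed modality
    ("spatial") and to APUs otherwise; returns workload -> device."""
    pool = gpus if modality == "spatial" else apus
    return {w: pool[i % len(pool)] for i, w in enumerate(workloads)}
```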

ANALYTIC IMAGE FORMAT FOR VISUAL COMPUTING

In one embodiment, an apparatus comprises a storage device and a processor. The storage device stores a plurality of images captured by a camera. The processor: accesses visual data associated with an image captured by the camera; determines a tile size parameter for partitioning the visual data into a plurality of tiles; partitions the visual data into the plurality of tiles based on the tile size parameter, wherein the plurality of tiles corresponds to a plurality of regions within the image; compresses the plurality of tiles into a plurality of compressed tiles, wherein each tile is compressed independently; generates a tile-based representation of the image, wherein the tile-based representation comprises an array of the plurality of compressed tiles; and stores the tile-based representation of the image on the storage device.
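
The tile pipeline above (partition, compress each tile independently, access per region) can be sketched on raw bytes. `zlib` is used only as a stand-in codec; the tile layout and function names are assumptions.

```python
# Minimal sketch of tile-based image representation: fixed-size tiles,
# each compressed independently so a region can be read on its own.
import zlib

def to_tiles(data: bytes, tile_size: int):
    """Partition raw visual data into fixed-size tiles."""
    return [data[i:i + tile_size] for i in range(0, len(data), tile_size)]

def compress_tiles(tiles):
    """Compress each tile independently (the array of compressed tiles
    forms the tile-based representation)."""
    return [zlib.compress(t) for t in tiles]

def read_tile(compressed, index):
    """Decompress only the requested region, not the whole image."""
    return zlib.decompress(compressed[index])
```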

NON-TRANSITORY COMPUTER-READABLE MEDIUM, SERVICE MANAGEMENT DEVICE, AND SERVICE MANAGEMENT METHOD
20230061892 · 2023-03-02

The present disclosure relates to a non-transitory computer-readable recording medium storing an analysis program that causes a computer to execute a process. The process includes determining whether a resource usage of a machine executing a service exceeds a threshold when the machine processes a request for the service, notifying the machine of the request when it is determined that the resource usage does not exceed the threshold, and scaling out the machine when it is determined that the resource usage exceeds the threshold.
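
The threshold check can be sketched in a few lines; the function name and the string return values are illustrative, and the scale-out hook is a stub.

```python
# Sketch of the routing decision: forward the request while usage stays
# within the threshold, otherwise trigger a scale-out instead.
def route_request(usage: float, threshold: float, scale_out):
    """usage and threshold are fractions of capacity; scale_out is a
    callback invoked when the machine is over the threshold."""
    if usage > threshold:
        scale_out()
        return "scaled_out"
    return "forwarded"
```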

PREVENTING SCHEDULING OR EXECUTING A RESOURCE ON AN INCONSISTENT HOST NODE

Examples relate to preventing scheduling or executing a resource on an inconsistent host node in a networked system. Some examples track a taint status of the host node and identify whether the host node is inconsistent based on the taint status of the host node over a predefined period of time. Upon identifying that the host node is inconsistent, a master taint is applied on the inconsistent host node, which prevents scheduling or executing a resource on the identified inconsistent host node.
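
One plausible reading of "inconsistent" is a node whose taint status flips repeatedly within the observation window. The sketch below tracks taint samples and applies a master taint when the flip count exceeds a limit; the class name, window semantics, and flip threshold are all assumptions.

```python
# Hypothetical taint-history tracker: a node whose taint status flips
# too often within the window gets a master taint and stops receiving work.
from collections import deque

class TaintTracker:
    def __init__(self, window: int, max_flips: int):
        self.history = deque(maxlen=window)
        self.max_flips = max_flips
        self.master_taint = False

    def observe(self, tainted: bool):
        """Record one taint-status sample for the host node."""
        self.history.append(tainted)
        if len(self.history) < self.history.maxlen:
            return
        samples = list(self.history)
        flips = sum(a != b for a, b in zip(samples, samples[1:]))
        if flips > self.max_flips:
            self.master_taint = True  # prevents scheduling on this node

    def schedulable(self) -> bool:
        return not self.master_taint
```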

SYSTEM AND METHOD FOR WORKLOAD MANAGEMENT IN A DISTRIBUTED SYSTEM

Methods and systems for managing workloads in a distributed computing environment are disclosed. The distributed computing environment may include dedicated infrastructure for performing the workloads and on-demand infrastructure that may be used to perform the workloads at an incremental cost. It may be preferable to host the workloads with the dedicated infrastructure. However, the dedicated infrastructure may not always have sufficient computing resources to host all of the workloads that are to be performed over time. The disclosed methods and systems may provide for workload management by automating workload instantiation, migration, and resource expansion of dedicated infrastructure.
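
The prefer-dedicated policy can be sketched as a greedy placement that spills to on-demand infrastructure only when dedicated capacity is exhausted. The function name and the single-number capacity model are simplifying assumptions.

```python
# Illustrative placement: prefer dedicated capacity; overflow goes to
# on-demand infrastructure at incremental cost.
def place_workloads(workloads, dedicated_capacity):
    """workloads is a list of (name, demand); returns name -> tier."""
    placement = {}
    remaining = dedicated_capacity
    for name, demand in workloads:
        if demand <= remaining:
            placement[name] = "dedicated"
            remaining -= demand
        else:
            placement[name] = "on-demand"  # incurs incremental cost
    return placement
```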

ANN training through processing power of parked vehicles

A system for ANN training through processing power of parked vehicles. The system can include a master computing device having a controller configured to control training of an ANN. The training can be performed at least partially in separate parts by computing devices of parked vehicles. The controller can be configured to separate computing tasks of training the ANN into separated tasks. Also, the controller can be configured to assign at least some of the separated tasks to selected computing devices of parked vehicles. The controller can also be configured to receive and assemble results of the separated tasks to train the ANN. The controller can also be configured to train the ANN according to the results. The master computing device can be configured to send the assigned tasks to the selected devices of the vehicles as well as receive, from the selected devices, the results of the assigned tasks.
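
The split-assign-assemble loop can be sketched numerically. Round-robin batch assignment and gradient averaging are stand-ins chosen for illustration; the abstract does not specify how tasks are separated or results assembled.

```python
# Sketch of the controller's task handling: separate training batches
# across parked-vehicle devices, then assemble their partial results.
def split_tasks(batches, devices):
    """Round-robin the training batches over the available devices."""
    return {d: batches[i::len(devices)] for i, d in enumerate(devices)}

def assemble(results):
    """Average per-device partial results (e.g. gradient vectors)."""
    n = len(results)
    return [sum(vals) / n for vals in zip(*results)]
```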

Dynamic workload shifting within a connected vehicle

A system for dynamic job shifting includes an interface and a processor. The interface is configured to receive a job request to perform a job. The processor is configured to monitor available resources for performing the job. The available resources include a set of vehicle carried systems accessible to a vehicle event recorder via a communication link. The vehicle event recorder is coupled to a vehicle. The processor is further configured to determine a vehicle carried system of the set of vehicle carried systems for performing the job; provide the job to the vehicle carried system, where the job is configured to create one or more checkpoint data files; and receive an indication of creation of a checkpoint data file of the one or more checkpoint data files.
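
The checkpoint mechanism lets an interrupted job resume on another vehicle-carried system from the last recorded step. The step-based model and function signature below are assumptions for illustration only.

```python
# Sketch of a checkpointed job: run in steps, record a checkpoint per
# completed step, and resume from the last checkpoint after interruption.
def run_job(steps, start, checkpoints, fail_after=None):
    """Returns the step at which the job stopped (== steps on success);
    checkpoints accumulates one entry per completed step."""
    for step in range(start, steps):
        if fail_after is not None and step >= fail_after:
            return step           # interrupted; last checkpoint stands
        checkpoints.append(step)  # checkpoint data file for this step
    return steps
```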