G06F2209/486

System and method for increasing robustness of heterogeneous computing systems

Disclosed is a method for task pruning that can be utilized in existing resource allocation systems to improve the systems' robustness without requiring changes to existing mapping heuristics. The pruning mechanism leverages a probability model, which calculates the probability of a task completing before its deadline in the presence of task dropping, and only schedules tasks that are likely to succeed. Pruning tasks whose chance of success is low improves the chance of success for other tasks. Tasks that are unlikely to succeed are either deferred from the current scheduling event or are preemptively dropped from the system. The pruning method can benefit service providers by allowing them to utilize their resources more efficiently, using them only for tasks that can meet their deadlines. The pruning method further helps end users by making the system more robust, allowing more tasks to complete on time.
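A minimal sketch of such a pruning step, assuming execution times follow a normal distribution and using illustrative drop/defer thresholds (the distribution model, the threshold values, and the field names are assumptions, not details from the abstract):

```python
import math

def completion_probability(mean_exec, std_exec, time_to_deadline):
    """P(task completes before its deadline), under an assumed
    normally distributed execution time."""
    if std_exec == 0:
        return 1.0 if mean_exec <= time_to_deadline else 0.0
    z = (time_to_deadline - mean_exec) / std_exec
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def prune(tasks, now, drop_threshold=0.1, defer_threshold=0.5):
    """Split tasks into schedule / defer / drop buckets by success probability."""
    schedule, deferred, dropped = [], [], []
    for t in tasks:
        p = completion_probability(t["mean"], t["std"], t["deadline"] - now)
        if p < drop_threshold:
            dropped.append(t)      # preemptively dropped from the system
        elif p < defer_threshold:
            deferred.append(t)     # deferred from the current scheduling event
        else:
            schedule.append(t)     # likely to succeed: schedule now
    return schedule, deferred, dropped
```

Dropping the near-hopeless task frees its resources, raising the completion probability of the tasks that remain scheduled.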

Management apparatus and management system

A management system includes a storage unit that stores schedule information that indicates schedules of a plurality of tasks to be performed by the same flight vehicle. The schedule information includes types of the plurality of tasks, date/times of the plurality of tasks, and locations at which the plurality of tasks are to be performed. When the types, the date/times, and the locations that are included in the schedule information stored in the storage unit satisfy an integration condition, an integration unit integrates the schedules of the plurality of tasks. An output unit outputs integrated schedule information indicating the schedules of the integrated tasks.
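The abstract leaves the integration condition unspecified; a plausible sketch merges tasks of the same type at the same location whose date/times fall within a fixed window (the window, dictionary keys, and grouping rule are illustrative assumptions):

```python
from collections import defaultdict

def integrate_schedules(schedules, max_gap_minutes=60):
    """Merge schedules of the same (type, location) whose times lie
    within max_gap_minutes of each other, so one flight vehicle can
    perform them in a single integrated schedule."""
    groups = defaultdict(list)
    for s in sorted(schedules, key=lambda s: s["time"]):
        groups[(s["type"], s["location"])].append(s)

    integrated = []
    for (task_type, location), items in groups.items():
        current = [items[0]]
        for s in items[1:]:
            if s["time"] - current[-1]["time"] <= max_gap_minutes:
                current.append(s)          # condition satisfied: integrate
            else:
                integrated.append({"type": task_type, "location": location,
                                   "times": [i["time"] for i in current]})
                current = [s]
        integrated.append({"type": task_type, "location": location,
                           "times": [i["time"] for i in current]})
    return integrated                      # output: integrated schedule info
```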

Managing I/O operations in a shared file system

A method for managing I/O operations in a shared file system environment. The method includes receiving, for each of a plurality of compute nodes, information associated with I/O accesses to a shared file system and with the applications executing the I/O accesses. The method includes creating application profiles based, at least in part, on the received information. The method then includes determining execution priorities for the applications based, at least in part, on the created application profiles.
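The profile-then-prioritize flow could look like the following sketch, where the aggregated statistics and the priority policy (lighter I/O consumers ranked first) are assumptions for illustration:

```python
def build_profiles(io_records):
    """Aggregate per-application I/O statistics reported by compute nodes."""
    profiles = {}
    for rec in io_records:
        p = profiles.setdefault(rec["app"], {"bytes": 0, "ops": 0})
        p["bytes"] += rec["bytes"]
        p["ops"] += 1
    return profiles

def execution_priorities(profiles):
    """Illustrative policy: applications with lighter I/O footprints
    get higher priority (lower rank number)."""
    ranked = sorted(profiles, key=lambda app: profiles[app]["bytes"])
    return {app: rank for rank, app in enumerate(ranked)}
```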

POWER-PERFORMANCE BASED SYSTEM MANAGEMENT
20210397476 · 2021-12-23

A method comprises receiving a workload for a computer system; sweeping at least one parameter of the computer system while executing the workload; monitoring one or more characteristics of the computer system while sweeping the at least one parameter, the one or more characteristics including total power consumption of the computer system; generating a power profile for the workload that indicates a respective selected value for the at least one parameter based on analysis of the monitored total power consumption of the computer system while sweeping the at least one parameter; and executing the workload based on the respective selected value of the at least one parameter.
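A sketch of the sweep-and-profile loop, where the `run_workload` measurement hook and the energy-per-unit-of-work selection rule are assumptions (the abstract only says the selection is based on analysis of monitored total power):

```python
def generate_power_profile(workload, param_values, run_workload):
    """Sweep one parameter while executing the workload, monitor total
    power, and record the value minimizing energy per unit of work."""
    best_value, best_score = None, float("inf")
    samples = {}
    for value in param_values:
        power_watts, throughput = run_workload(workload, value)
        score = power_watts / throughput   # energy per unit of work
        samples[value] = score
        if score < best_score:
            best_value, best_score = value, score
    # power profile: the selected value, plus the swept measurements
    return {"workload": workload, "selected": best_value, "samples": samples}
```

The workload is then executed with the profile's `selected` value for the swept parameter.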

PREDICTIVE SCHEDULED BACKUP SYSTEM AND METHOD

Embodiments for predictive scheduling of backups in a data protection system by initiating a first backup job in a series of scheduled consecutive backup jobs, wherein a second backup job is allowed to begin only after the first backup job is finished and no longer active; detecting whether the first backup job is still active when the second job is due to start; and, if so, estimating an amount of additional time required to finish the first backup job. The second backup job is then rescheduled to start no earlier than the end of the additional time. The estimated amount of additional time is determined using a throughput-to-target-storage-device parameter. This parameter is periodically checked to determine whether there is a change to the estimated amount of additional time, and if so, the estimated time is recalculated based on the changed parameter.

Task scheduling in a GPU using wakeup event state data

A method of scheduling tasks within a GPU or other highly parallel processing unit is described which is both age-aware and wakeup event driven. Tasks which are received are added to an age-based task queue. Wakeup event bits for task types, or combinations of task types and data groups, are set in response to completion of a task dependency and these wakeup event bits are used to select an oldest task from the queue that satisfies predefined criteria.
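A simplified model of the age-aware, wakeup-event-driven selection (tracking wakeup bits per task type only; the abstract also allows bits per combination of task type and data group, which this sketch omits):

```python
class WakeupScheduler:
    """Age-ordered task queue; a task becomes selectable once the
    wakeup-event bit for its task type has been set by a completed
    dependency."""
    def __init__(self):
        self.queue = []           # oldest first (age-based ordering)
        self.wakeup_bits = set()  # task types with a satisfied dependency

    def add_task(self, task_id, task_type):
        self.queue.append((task_id, task_type))

    def signal(self, task_type):
        """Set the wakeup-event bit on completion of a dependency."""
        self.wakeup_bits.add(task_type)

    def select(self):
        """Return the oldest task whose wakeup bit is set, or None."""
        for i, (task_id, task_type) in enumerate(self.queue):
            if task_type in self.wakeup_bits:
                return self.queue.pop(i)[0]
        return None
```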

Information processing system, information processing method, and information processing apparatus
11200088 · 2021-12-14

An information processing system, an information processing method, and an information processing apparatus. The information processing system includes at least one memory configured to store a plurality of jobs in order, by type of processing to be executed, and a plurality of processors, each assigned to a specific type of processing to be executed. A processor processes a job assigned to another processor and stored in the memory, in substitution for the other processor, based on a determination that no job of its own assigned type of processing is stored in the memory, and cancels the substitute processing of the job assigned to the other processor according to a processing status of at least one of the other processors.
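The substitution behavior resembles work stealing across type-specific queues; a minimal sketch (the per-type queues and the take-from-any-other-type policy are assumptions, and the cancellation path described in the abstract is omitted):

```python
from collections import deque

class SubstitutingProcessor:
    """Processor bound to one job type; it takes another type's job
    only when no job of its own assigned type is stored."""
    def __init__(self, own_type, queues):
        self.own_type = own_type
        self.queues = queues      # shared memory: type -> deque of jobs

    def next_job(self):
        own = self.queues.get(self.own_type)
        if own:
            return own.popleft()
        # no job of the assigned type: substitute for another processor
        for other_type, q in self.queues.items():
            if other_type != self.own_type and q:
                return q.popleft()
        return None
```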

SERVER, APPARATUS, AND METHOD FOR ACCELERATING FILE INPUT-OUTPUT OFFLOAD FOR UNIKERNEL

Disclosed herein are an apparatus and method for accelerating file I/O offload for a unikernel. The method, performed by the apparatus and server for accelerating file I/O offload for the unikernel, includes: executing, by the apparatus, an application in the unikernel and calling, by a thread of the application, a file I/O function; generating, by the unikernel, a file I/O offload request using the file I/O function; transmitting, by the unikernel, the file I/O offload request to Linux on the server; receiving, by Linux, the file I/O offload request from the thread of the unikernel and processing, by Linux, the file I/O offload request; transmitting, by Linux, a file I/O offload result for the file I/O offload request to the unikernel; and delivering the file I/O offload result to the thread of the application.
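The request/response shape of the offload path can be illustrated in-process, with the host's files modeled as a dictionary (the request format, call names, and handler are all assumptions; a real implementation crosses a unikernel-to-Linux transport):

```python
def make_offload_request(call, args):
    """Unikernel side: package a file I/O function call for offload."""
    return {"call": call, "args": args}

def host_handle(request, host_files):
    """Host (Linux) side: execute the offloaded file I/O and build the
    result to send back to the unikernel."""
    if request["call"] == "read":
        path, size = request["args"]
        return {"ok": path in host_files,
                "data": host_files.get(path, "")[:size]}
    if request["call"] == "write":
        path, data = request["args"]
        host_files[path] = host_files.get(path, "") + data
        return {"ok": True, "data": None}
    return {"ok": False, "data": None}

def unikernel_file_read(path, size, host_files):
    """Generate the offload request, 'transmit' it, and deliver the
    result back to the calling application thread."""
    return host_handle(make_offload_request("read", (path, size)), host_files)
```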

TASK GRAPH SCHEDULING FOR WORKLOAD PROCESSING

Techniques for scheduling operations for a task graph on a processing device are provided. The techniques include receiving a task graph that specifies one or more passes, one or more resources, and one or more directed edges between passes and resources; identifying independent passes and dependent passes of the task graph; based on performance criteria of the processing device, scheduling commands to execute the passes; and transmitting scheduled commands to the processing device for execution as scheduled.
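A sketch of the pass-classification step, representing the directed edges as produces/consumes pairs and using a simple issue order (independent passes first) as a stand-in for the device-specific performance criteria the abstract mentions:

```python
def classify_passes(passes, writes, reads):
    """writes: (pass, resource) pairs, a pass producing a resource;
    reads: (resource, pass) pairs, a pass consuming a resource.
    A pass is dependent if it consumes a resource another pass produces."""
    produced = {res for (_, res) in writes}
    dependent = sorted({p for (res, p) in reads if res in produced})
    independent = [p for p in passes if p not in dependent]
    return independent, dependent

def schedule_commands(passes, writes, reads):
    """Issue independent passes first (they may overlap on the device),
    then the dependent passes."""
    independent, dependent = classify_passes(passes, writes, reads)
    return independent + dependent
```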

METHOD AND APPARATUS FOR DETERMINING HARDWARE USAGE, STORAGE MEDIUM, AND ELECTRONIC DEVICE

Disclosed are a method and apparatus for determining hardware usage, a storage medium, and an electronic device. The method includes: determining an initial period duration and an initial task duration corresponding to the current period, and executing at least one task in the current period by a hardware module; receiving a usage invoking request at a current time point; determining a current task duration corresponding to the current time point based on the task duration of tasks executed by the hardware module up to the current time point and the initial task duration; determining a current period duration based on the initial period duration and a difference between the current time point and a starting time point corresponding to the current period; and determining, based on the current task duration and the current period duration, usage of the requested hardware module. Thus, real-time performance of usage determination is improved while accuracy of hardware usage is ensured.
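Read literally, the abstract computes a ratio whose numerator and denominator are both extended up to the request time, so usage can be answered mid-period instead of waiting for the period to end. A worked sketch under that reading (variable names and units are assumptions):

```python
def hardware_usage(initial_period, initial_busy, busy_since_start,
                   period_start, now):
    """Usage at an arbitrary request time:
    current task duration  = initial task duration + busy time so far
    current period duration = initial period duration + elapsed time."""
    current_busy = initial_busy + busy_since_start
    current_period = initial_period + (now - period_start)
    return current_busy / current_period
```

For example, with an initial period of 100 ms, 40 ms of initial task duration, 30 ms of busy time since the period started, and a request 50 ms into the period, usage is (40 + 30) / (100 + 50) ≈ 46.7%.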