Patent classifications
G06F9/4887
Scheduling tasks in a multi-threaded processor
A processor comprising: an execution unit for executing a respective thread in each of a repeating sequence of time slots; and a plurality of context register sets, each comprising a respective set of registers for representing a state of a respective thread. The context register sets comprise a respective worker context register set for each of the number of time slots the execution unit is operable to interleave, and at least one extra context register set. The worker context register sets represent the respective states of worker threads, and the extra context register set represents the state of a supervisor thread. The processor is configured to begin running the supervisor thread in each of the time slots, and to enable the supervisor thread to then individually relinquish each of the time slots in which it is running to a respective one of the worker threads.
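The supervisor/worker slot-relinquish model above can be illustrated with a minimal Python sketch. The patent describes hardware; the class and method names here (`Scheduler`, `relinquish`, `worker_exit`) are purely illustrative assumptions, not terms from the claims:

```python
class Scheduler:
    """Round-robin of fixed time slots; every slot initially runs the supervisor."""
    SUPERVISOR = "supervisor"

    def __init__(self, num_slots):
        # One worker context register set per slot, plus one extra set
        # (modeled implicitly) that always holds the supervisor's state.
        self.slots = [self.SUPERVISOR] * num_slots

    def relinquish(self, slot, worker):
        """Supervisor individually hands a slot it is running in to a worker."""
        if self.slots[slot] != self.SUPERVISOR:
            raise ValueError("only the supervisor may relinquish a slot")
        self.slots[slot] = worker

    def worker_exit(self, slot):
        """When a worker finishes, its slot reverts to the supervisor."""
        self.slots[slot] = self.SUPERVISOR

    def schedule_round(self):
        """One pass through the repeating sequence of time slots."""
        return list(self.slots)
```

The key property, mirrored from the abstract, is that the supervisor occupies every slot by default and only gives slots up one at a time.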
Information processing apparatus, job scheduling method, and non-transitory computer-readable storage medium
An information processing apparatus includes a memory and a processor coupled to the memory and configured to generate one or more job groups by grouping multiple jobs of execution targets in descending order of priority, and perform a control for scheduling execution timings regarding the multiple jobs such that scheduling of respective jobs included in a specific job group including a job having a higher priority is performed with priority over scheduling of respective jobs included in other job groups. The processor performs the control for scheduling the execution timings of the respective jobs included in the specific job group such that an execution completion time of all the jobs included in the specific job group satisfies a predetermined condition.
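The grouping-then-scheduling control above can be sketched as follows. This is an assumed simplification: jobs are `(name, priority, duration)` tuples, groups are formed in descending priority order, and a group's "execution completion time" is just the finish time of its last job under serial execution:

```python
from itertools import groupby

def group_jobs(jobs):
    """Group jobs by descending priority; jobs = [(name, priority, duration)]."""
    ordered = sorted(jobs, key=lambda j: -j[1])
    return [list(g) for _, g in groupby(ordered, key=lambda j: j[1])]

def schedule(jobs):
    """Schedule all jobs of a higher-priority group before any job of a
    lower-priority group; return (name, start, finish) tuples."""
    t, plan = 0, []
    for group in group_jobs(jobs):
        for name, _, duration in group:
            plan.append((name, t, t + duration))
            t += duration
    return plan
```

Because `sorted` is stable, jobs of equal priority keep their submission order within a group.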
Performing runbook operations for an application based on a runbook definition
The disclosure herein describes automating runbook operations associated with an application within an application host on an application platform. A runbook definition associated with the application is accessed by a processor, wherein the runbook definition includes trigger events and runbook operations associated with the trigger events. A runbook operator is executed on the application platform based on the accessed runbook definition and a runbook sidecar container is added to the application host by the runbook operator, wherein the runbook operator is enabled to perform the runbook operations within the application host via the runbook sidecar container. Based on detecting a trigger event, a runbook operation associated with the detected trigger event is performed by the runbook operator, via the runbook sidecar container, whereby the application is maintained based on performance of the runbook operations from within the application host.
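The trigger-to-operation dispatch described above can be reduced to a small sketch. The runbook definition format, event names, and operation names below are hypothetical stand-ins; the sidecar container is modeled as a simple list of performed operations:

```python
# Hypothetical runbook definition: trigger events mapped to runbook operations.
runbook_definition = {
    "pod_crash": ["collect_logs", "restart_pod"],
    "high_latency": ["scale_out"],
}

class RunbookOperator:
    """Dispatches runbook operations when their trigger events are detected."""

    def __init__(self, definition):
        self.definition = definition
        self.performed = []  # stands in for operations run via the sidecar

    def on_event(self, event):
        """On a detected trigger event, perform each associated operation."""
        for operation in self.definition.get(event, []):
            self.performed.append((event, operation))
```

Events with no entry in the definition are ignored, which matches the idea that only the defined trigger events drive maintenance actions.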
Optimizing placements of workloads on multiple platforms as a service based on costs and service levels
A computer-implemented method, a computer program product, and a computer system for optimizing workload placements in a system of multiple platforms as a service. A computer first places respective workloads on respective platforms that yield lowest costs for the respective workloads. The computer determines whether mandatory constraints are satisfied. The computer checks best effort constraints, in response to the mandatory constraints being satisfied. The computer determines a set of workloads for which the best effort constraints are not satisfied and determines a set of candidate platforms that yield the lowest costs and enable the best effort constraints to be satisfied. From the set of workloads, the computer selects a workload that has a lowest upgraded cost and updates the workload by setting an upgraded platform index.
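The two key steps above (cheapest initial placement, then the cheapest upgrade for workloads whose best-effort constraints fail) can be sketched like this. The data shapes are assumptions: `costs[w][p]` is the cost of workload `w` on platform `p`, and `candidates[w]` lists platforms on which the best-effort constraints would hold:

```python
def initial_placement(costs):
    """Place each workload on the platform that yields its lowest cost."""
    return {w: min(pc, key=pc.get) for w, pc in costs.items()}

def cheapest_upgrade(costs, placement, violating, candidates):
    """Among workloads violating best-effort constraints, pick the one whose
    move to a candidate platform raises cost the least; return (workload, platform)."""
    best = None
    for w in violating:
        current = costs[w][placement[w]]
        for p in candidates[w]:
            delta = costs[w][p] - current
            if best is None or delta < best[0]:
                best = (delta, w, p)
    return (best[1], best[2])
```

Mandatory-constraint checking is omitted here; in the method it gates whether the best-effort pass runs at all.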
ALLOCATING OF COMPUTING RESOURCES FOR APPLICATIONS
A method for performing scheduling includes extracting information from at least one log file for an application. The method also includes determining an allocation of cloud resources for the application based on the information from the log file(s).
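A minimal sketch of log-driven allocation follows. The log line format, field names, and the 20% headroom factor are all assumptions for illustration; the abstract does not specify them:

```python
import re

def extract_peaks(log_lines):
    """Pull peak CPU%% and memory-MB figures from hypothetical log lines."""
    cpu = mem = 0.0
    for line in log_lines:
        m = re.search(r"cpu=(\d+(?:\.\d+)?)% mem=(\d+(?:\.\d+)?)MB", line)
        if m:
            cpu = max(cpu, float(m.group(1)))
            mem = max(mem, float(m.group(2)))
    return cpu, mem

def allocate(log_lines, headroom=1.2):
    """Size cloud resources to the observed peaks plus a safety margin."""
    cpu, mem = extract_peaks(log_lines)
    return {"cpu_percent": cpu * headroom, "memory_mb": mem * headroom}
```

The idea is simply that observed historical usage, not a static request, drives the allocation.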
Sensor device, sensor device management system, and sensor device management method
A processor of a sensor device performs measurement processing by one or a plurality of sensors and transmission processing of sensor data generated by the measurement processing. The sensor device includes a processing routine table that stores a processing routine configured to include, corresponding to an identifier for identifying processing performed by a processor, a type of the processing, an execution trigger of the processing, and trigger information that prescribes a trigger for transmitting the sensor data. The processor controls processing in a processing routine of the processing routine table, based on trigger information, so that the sensor data subjected to measurement processing is immediately transmitted, or temporarily stored in a buffer and transmitted after a predetermined time.
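The routine-table-controlled transmission above (send immediately, or buffer and send after a predetermined time) can be sketched as follows. The table layout and sensor identifiers are illustrative assumptions:

```python
# Hypothetical processing routine table: identifier -> processing type and trigger.
routine_table = {
    "temp": {"type": "measure", "trigger": "immediate"},
    "vibration": {"type": "measure", "trigger": "buffered"},
}

class SensorDevice:
    def __init__(self, table):
        self.table = table
        self.buffer = []  # temporarily stored sensor data
        self.sent = []    # transmitted sensor data

    def measure(self, sensor_id, value):
        """Route sensor data per the routine table's trigger information."""
        if self.table[sensor_id]["trigger"] == "immediate":
            self.sent.append((sensor_id, value))    # transmit at once
        else:
            self.buffer.append((sensor_id, value))  # hold until flush

    def flush(self):
        """Called after the predetermined time: transmit buffered data."""
        self.sent.extend(self.buffer)
        self.buffer.clear()
```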
METHOD AND AUTOMATION SYSTEM FOR CONTROLLING AND/OR MONITORING A MACHINE AND/OR INSTALLATION
The invention pertains to a procedure to control and/or monitor a machine or system using an automation system, whereby functions of the automation system are controlled by a computer. The control function is executed by programs which are executed on the computer, whereby the computer is equipped with a real-time-capable operating system, specifically, a real-time-capable Linux operating system. The programs include both real-time programs and non-real-time programs which are executed in a runtime environment that is superordinate to the operating system.
TECHNIQUES TO ENABLE QUALITY OF SERVICE CONTROL FOR AN ACCELERATOR DEVICE
Examples include techniques to enable quality of service (QoS) control for an accelerator device. Circuitry at an accelerator device implements QoS control responsive to receipt of a submission descriptor for a work request to execute a workload for an application hosted by a compute device coupled with the accelerator device. An example QoS control includes accepting the submission descriptor to a work queue at the accelerator device based on the work size of the application's submission-descriptor submissions to the work queue over a unit of time not exceeding a submission rate threshold. The work queue is associated with an operational unit at the accelerator device to execute the workload based on information included in the submission descriptor. The work queue is shared with at least one other application hosted by the compute device.
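The acceptance rule above amounts to a sliding-window rate limit on work size per application. A minimal Python sketch, assuming a descriptor is a dict with a `work_size` field and that timestamps are supplied by the caller:

```python
from collections import deque

class WorkQueue:
    """Accepts a submission only if the application's total work size over
    the last `window` time units stays within `rate_threshold`."""

    def __init__(self, rate_threshold, window=1.0):
        self.rate_threshold = rate_threshold
        self.window = window
        self.history = deque()  # (timestamp, work_size) of accepted submissions
        self.queue = []

    def submit(self, descriptor, now):
        # Drop history entries that fall outside the unit-of-time window.
        while self.history and now - self.history[0][0] >= self.window:
            self.history.popleft()
        used = sum(size for _, size in self.history)
        if used + descriptor["work_size"] > self.rate_threshold:
            return False  # reject: submission rate threshold would be exceeded
        self.history.append((now, descriptor["work_size"]))
        self.queue.append(descriptor)
        return True
```

In the patent this check runs in circuitry at the accelerator, per application, on a queue shared between applications; the sketch models a single application's view.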
SYSTEMS AND METHODS FOR IDENTIFYING UNDERUTILIZED ELECTRONIC-DEVICE FEATURES
Aspects of the disclosure include a non-transitory computer-readable medium storing thereon sequences of computer-executable instructions for automatically identifying underutilized features of an electronic device, the sequences of computer-executable instructions including instructions that instruct at least one processor to determine whether at least one underutilized-feature trigger is satisfied, identify, based on determining that the at least one underutilized-feature trigger is satisfied, at least one underutilized feature of the electronic device, identify the at least one underutilized feature of the electronic device to a user of the electronic device, and store an indication of the identification of the at least one underutilized feature of the electronic device to the user.
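The instruction sequence above (check trigger, identify underutilized features, surface them, store the indication) can be sketched as a simple pipeline. The trigger condition, usage-count threshold, and data shapes are all hypothetical:

```python
def identify_underutilized(usage, threshold=2):
    """Features used fewer than `threshold` times count as underutilized."""
    return [f for f, count in usage.items() if count < threshold]

def run_check(usage, store):
    """If the trigger fires, identify underutilized features, present them
    to the user (modeled as returning them), and store an indication."""
    # Assumed trigger: enough overall device activity to make the data meaningful.
    trigger_satisfied = sum(usage.values()) > 10
    if not trigger_satisfied:
        return []
    features = identify_underutilized(usage)
    for f in features:
        store.append(f)  # indication that the feature was surfaced to the user
    return features
```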
ADAPTIVE CONTROL OF DEADLINE-CONSTRAINED WORKLOAD MIGRATIONS
Adaptive control of deadline-constrained workload migrations can include monitoring migrations of workloads forming a wave migrating from a source computing node to a target computing node. The monitoring can be performed in real time. The migrations can be performed by transferring image replications of each workload over a data communication network. Based on an expected bandwidth availability, a likelihood that a cutover deadline associated with the wave is exceeded prior to completing a migration of each of the wave's workloads can be predicted. Migration of one or more selected workloads can be suspended in response to determining that exceeding the cutover deadline prior to completing migration of each of the wave's workloads is likely.
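The prediction-and-suspend control above can be sketched as follows, under the simplifying assumptions that each workload's migration is characterized by its remaining bytes, that expected bandwidth is a single shared figure, and that the largest workloads are suspended first:

```python
def predict_overrun(remaining_bytes, bandwidth, now, deadline):
    """True if migrating all remaining data is likely to miss the cutover deadline."""
    return now + sum(remaining_bytes.values()) / bandwidth > deadline

def select_suspensions(remaining_bytes, bandwidth, now, deadline):
    """Suspend the largest workloads until the rest of the wave is predicted
    to complete before the cutover deadline."""
    suspended = []
    work = dict(remaining_bytes)
    while work and predict_overrun(work, bandwidth, now, deadline):
        victim = max(work, key=work.get)
        suspended.append(victim)
        del work[victim]
    return suspended
```

In the method, `remaining_bytes` and `bandwidth` would be refreshed in real time from monitoring of the image-replication transfers; the largest-first victim choice is an assumption, not something the abstract specifies.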