Patent classifications
G06F9/5038
CONTROLLING DATA PROCESSING TASKS
Information representative of a graph-based program specification has a plurality of components, each of which corresponds to a task, and directed links between ports of said components. A program corresponding to said graph-based program specification is executed. A first component includes a first data port, a first control port, and a second control port. Said first data port is configured to receive data to be processed by a first task corresponding to said first component, or configured to provide data that was processed by said first task corresponding to said first component. Executing a program corresponding to said graph-based program specification includes: receiving first control information at said first control port; in response to receiving said first control information, determining whether or not to invoke said first task; and after receiving said first control information, providing second control information from said second control port.
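The control-port behavior described above can be sketched as follows; a minimal illustration, not the patented implementation, with all names (Component, receive_control, the "invoke" key) being assumptions for the example:

```python
class Component:
    """A component with one task, a data port (the `data` argument),
    a first control port (receive_control), and a second control port
    (the downstream links)."""

    def __init__(self, task):
        self.task = task
        self.downstream = []  # components linked from the second control port

    def receive_control(self, control_info, data=None):
        result = None
        # In response to the control information, decide whether to invoke the task.
        if control_info.get("invoke", False):
            result = self.task(data)
        # After receiving the control information, provide control downstream.
        for component in self.downstream:
            component.receive_control(control_info)
        return result


double = Component(task=lambda x: x * 2)
print(double.receive_control({"invoke": True}, data=21))  # 42
```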
ITERATIVE AND HIERARCHICAL PROCESSING OF REQUEST PARTITIONS
Methods and systems disclosed herein relate generally to temporally prioritizing queries of queue-task partitions based on distributions of flags assigned to bits corresponding to access rights.
Work conserving, load balancing, and scheduling
A system and method are described for work conserving, load balancing, and scheduling by a network processor. For example, one embodiment of a system includes a plurality of processing cores, including a scheduling circuit, at least one source processing core that generates at least one task, and at least one destination processing core that receives and processes the at least one task and generates a response. The scheduling circuit of the exemplary system receives the at least one task and conducts a load balancing to select the at least one destination processing core. In an embodiment, the scheduling circuit further detects a critical sequence of tasks, schedules those tasks to be processed by a single destination processing core, and, upon completion of the critical sequence, conducts another load balancing to potentially select a different processing core to process more tasks.
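The scheduling policy described above can be sketched as follows; an illustrative model only, with all names assumed for the example rather than taken from the patent:

```python
class Scheduler:
    """Load balancing with pinning for critical task sequences: tasks in a
    critical sequence stay on one core; other tasks go to the least-loaded
    core. When a sequence completes, load balancing runs again."""

    def __init__(self, num_cores):
        self.load = [0] * num_cores
        self.pinned_core = None  # core handling the current critical sequence

    def schedule(self, critical=False):
        if critical:
            if self.pinned_core is None:
                # Load balance once to choose a core for the whole sequence.
                self.pinned_core = min(range(len(self.load)), key=self.load.__getitem__)
            core = self.pinned_core
        else:
            # Work conserving: always pick the least-loaded core.
            core = min(range(len(self.load)), key=self.load.__getitem__)
        self.load[core] += 1
        return core

    def end_critical_sequence(self):
        # A later critical sequence may be balanced onto a different core.
        self.pinned_core = None
```

For example, with two cores, successive critical tasks land on the same core, and after `end_critical_sequence()` the next task is balanced onto the now less-loaded core.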
JOB SCHEDULER TEST PROGRAM, JOB SCHEDULER TEST METHOD, AND INFORMATION PROCESSING APPARATUS
A non-transitory computer-readable storage medium storing therein a job scheduler test program that causes a computer to execute a process including: determining whether or not a state of every thread of a test-target job scheduler is a standby state; and, in a case where the state of every thread is the standby state, changing a system time referenced when a thread executes a process to a time that is put forward.
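The idea of advancing a virtual clock only while every scheduler thread is standing by can be sketched as follows; a simplified model, with all names (FakeClock, TestHarness, maybe_advance) assumed for illustration:

```python
class FakeClock:
    """The system time referenced by the scheduler under test."""

    def __init__(self, start=0):
        self.now = start


class TestHarness:
    def __init__(self, clock, thread_states):
        self.clock = clock
        self.thread_states = thread_states  # thread id -> state string

    def maybe_advance(self, delta):
        # Put the referenced time forward only when every thread of the
        # test-target job scheduler is in the standby state.
        if all(state == "standby" for state in self.thread_states.values()):
            self.clock.now += delta
            return True
        return False
```

This lets a test exercise time-triggered scheduler behavior without waiting in real time, while avoiding time jumps while any thread is still working.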
DETERMINING WHEN TO RELEASE A LOCK FROM A FIRST TASK HOLDING THE LOCK TO GRANT TO A SECOND TASK WAITING FOR THE LOCK
Provided are a computer program product, system, and method for determining when to release a lock from a first task holding the lock to grant to a second task waiting for the lock. A determination is made as to whether a holding of a lock to the resource by a first task satisfies a condition and whether the lock is swappable. The lock is released from the first task and granted to a second task waiting in a queue for the lock in response to determining that the holding of the lock satisfies the condition and that the lock is swappable. The first task is indicated in the queue waiting for the lock in response to granting the lock to the second task.
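The release-and-regrant logic described above can be sketched as follows; a minimal model under assumed names (SwappableLock, check_swap), not the patented method:

```python
from collections import deque


class SwappableLock:
    """Release a lock from its holder and grant it to a waiting task once a
    holding condition is satisfied and the lock is marked swappable."""

    def __init__(self, swappable=True):
        self.holder = None
        self.swappable = swappable
        self.queue = deque()

    def acquire(self, task):
        if self.holder is None:
            self.holder = task
        else:
            self.queue.append(task)

    def check_swap(self, condition_met):
        # Swap only if the holding condition is satisfied, the lock is
        # swappable, and a second task is waiting in the queue.
        if condition_met and self.swappable and self.queue:
            previous = self.holder
            self.holder = self.queue.popleft()
            # The first task is placed in the queue to wait for the lock,
            # in response to granting the lock to the second task.
            self.queue.append(previous)
            return True
        return False
```

Here the condition (e.g. a maximum holding time) is evaluated by the caller; the lock only decides whether the swap is permitted.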
FPGA acceleration for serverless computing
In one embodiment, a method for FPGA-accelerated serverless computing comprises receiving, from a user, a definition of a serverless computing task comprising one or more functions to be executed. A task scheduler performs an initial placement of the serverless computing task to a first host determined to be a first optimal host for executing the serverless computing task. The task scheduler determines a supplemental placement of a first function to a second host determined to be a second optimal host for accelerating execution of the first function, wherein the first function is not able to be accelerated by one or more FPGAs in the first host. The serverless computing task is executed on the first host and the second host according to the initial placement and the supplemental placement.
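The two-stage placement can be sketched as follows; an illustrative scoring model in which host names, scores, and the `accelerates` sets are all assumptions for the example:

```python
def place_task(task_functions, hosts):
    """Initial placement of the whole task on the best host, plus a
    supplemental placement for any function that host's FPGAs cannot
    accelerate.  Each host is a dict: name, score, accelerates (a set)."""
    # Initial placement: the host judged optimal for the task as a whole.
    first = max(hosts, key=lambda h: h["score"])
    placement = {fn: first["name"] for fn in task_functions}
    for fn in task_functions:
        if fn not in first["accelerates"]:
            # Supplemental placement: best host that can accelerate this function.
            candidates = [h for h in hosts if fn in h["accelerates"]]
            if candidates:
                placement[fn] = max(candidates, key=lambda h: h["score"])["name"]
    return placement
```

For example, a task with functions `f1` and `f2` might run on a first host whose FPGAs accelerate only `f1`, with `f2` supplementally placed on a second host.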
USING A LANE-STRUCTURED DYNAMIC ENVIRONMENT FOR RULE-BASED AUTOMATED CONTROL
Specifications are input, comprising: a plurality of lanes in an environment for a controlled system; a plurality of lane maneuvers associated with the plurality of lanes; a plurality of lane subconditions associated with the controlled system; and a rule set comprising a plurality of rules, wherein a rule in the rule set specifies a rule condition and a rule action to take when the rule condition is satisfied, wherein the rule condition comprises a corresponding set of lane subconditions, and wherein the rule action comprises a corresponding lane maneuver. The controlled system is automatically navigated dynamically, at least in part by: monitoring the plurality of lane subconditions; evaluating rule conditions associated with the plurality of rules in the rule set to determine one or more rules whose corresponding rule conditions have been met; and executing one or more lane maneuvers that correspond to the one or more determined rules.
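The rule-evaluation loop can be sketched as follows; a minimal illustration in which the rule and subcondition names are invented for the example:

```python
def evaluate_rules(rules, active_subconditions):
    """A rule fires when all of its lane subconditions are among the
    monitored active subconditions (a subset test); the result is the
    list of lane maneuvers to execute."""
    maneuvers = []
    for rule in rules:
        if rule["condition"] <= active_subconditions:
            maneuvers.append(rule["action"])
    return maneuvers


rules = [
    {"condition": {"slow_leader", "gap_left"}, "action": "change_left"},
    {"condition": {"exit_near"}, "action": "keep_lane"},
]
```

With the monitored set `{"slow_leader", "gap_left", "weather_ok"}`, only the first rule's condition is fully satisfied, so only `change_left` is executed.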
DATA PREPROCESSING FOR A SUPERVISED MACHINE LEARNING PROCESS
A computer-implemented data processing method, including the steps of: providing a first program including a group of operations arranged to satisfy a first set of operation dependencies, the group of operations being adapted for computing data from at least one data source, generating a second program including the group of operations, arranged to satisfy a second set of operation dependencies, and processing the data from the at least one data source with the second program. The group of operations includes a first operation, a second operation, and a third operation. The first set of operation dependencies includes a first dependency between the first operation and the second operation, a second dependency between the first operation and the third operation, and a third dependency between the second operation and the third operation.
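The relationship between the two dependency sets can be sketched with a topological ordering; a simplified illustration using Python's standard `graphlib`, where the operation names and the specific edges are assumptions for the example:

```python
from graphlib import TopologicalSorter

# First program: op2 depends on op1; op3 depends on op1 and op2.
first_deps = {"op2": {"op1"}, "op3": {"op1", "op2"}}
# Second program: same operations, but the op2 -> op3 dependency is dropped,
# so op2 and op3 may run independently once op1 has completed.
second_deps = {"op2": {"op1"}, "op3": {"op1"}}

order_1 = list(TopologicalSorter(first_deps).static_order())
order_2 = list(TopologicalSorter(second_deps).static_order())
```

Both orders are valid schedules of the same group of operations; the looser second dependency set admits more orderings (and hence more parallelism) while still respecting the data flow.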
DYNAMIC ALLOCATION OF RESOURCES IN SURGE DEMAND
Embodiments of the present disclosure provide methods, apparatus, systems, computing devices, computing entities, and/or the like for the generation of a recommendation for one or more resource transformation actions to be performed based at least in part on an optimized resource transformation scenario. The optimized resource transformation scenario can be identified based at least in part on a hybrid resource transformation scenario that can be based at least in part on a resource priority score for a residual resource and a downgrade-only resource transformation scenario. A downgrade set of the plurality of resources can be determined based at least in part on resource transformation data associated with the plurality of resources.
Allocation of Resources to Tasks
A method of managing resources in a graphics processing pipeline includes conditionally suspending a task when the task reaches a phase boundary during execution of a program within a texture/shading unit. Suspending the task comprises freeing resources allocated to the task and resources are subsequently re-allocated to the task, such that the task is ready to continue execution, only after determining that the conditions associated with un-suspending the task are satisfied.
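The suspend/resume cycle at a phase boundary can be sketched as follows; a toy model in which the resource pool, task states, and method names are all assumptions for the example:

```python
class Pool:
    def __init__(self, free):
        self.free = free


class Task:
    """Conditionally suspend at a phase boundary by freeing allocated
    resources; resume only once resources can be re-allocated."""

    def __init__(self, needed):
        self.needed = needed      # resources the task holds per phase
        self.state = "running"

    def reach_phase_boundary(self, pool):
        # Suspending frees the resources allocated to the task.
        pool.free += self.needed
        self.state = "suspended"

    def try_resume(self, pool):
        # Re-allocate only when the un-suspend condition (enough free
        # resources) is satisfied; the task is then ready to continue.
        if self.state == "suspended" and pool.free >= self.needed:
            pool.free -= self.needed
            self.state = "running"
            return True
        return False
```

In a real texture/shading unit the freed resources would be registers or on-chip memory handed to other tasks while this one waits; here a single counter stands in for the pool.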