Patent classifications
G06F9/4806
EXITLESS TIMER ACCESS FOR VIRTUAL MACHINES
A system and method of scheduling timer access includes a first physical processor with a first physical timer executing a first guest virtual machine. A hypervisor determines an interrupt time remaining before an interrupt is scheduled and determines that the interrupt time is greater than a threshold time. Responsive to determining that the interrupt time is greater than the threshold time, the hypervisor designates a second physical processor with a control timer as a control processor and sends, to the second physical processor, an interval time, which is a specific time duration. The hypervisor grants, to the first guest virtual machine, access to the first physical timer. The second physical processor detects that the interval time has expired. Responsive to detecting that the interval time expired, an inter-processor interrupt is sent from the second physical processor to the first physical processor, triggering the first guest virtual machine to exit to the hypervisor.
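The threshold-based decision and the control-processor interrupt path described above can be sketched as follows. All names (the `Hypervisor` class, `THRESHOLD_NS`, the event labels) are illustrative assumptions, not the patent's actual interfaces:

```python
THRESHOLD_NS = 1_000_000  # assumed threshold: 1 ms

class Hypervisor:
    def __init__(self):
        self.events = []  # records of the scheduling actions taken

    def schedule_timer_access(self, interrupt_time_ns, interval_time_ns):
        """Decide whether the guest may own the physical timer directly."""
        if interrupt_time_ns > THRESHOLD_NS:
            # Far-off interrupt: delegate timekeeping to a control processor
            # and let the guest program the physical timer without exiting.
            self.events.append(("designate_control_cpu", interval_time_ns))
            self.events.append(("grant_timer_access", "guest-vm-1"))
            return "exitless"
        # Imminent interrupt: keep timer accesses trapped by the hypervisor.
        return "trapped"

    def on_interval_expired(self):
        # The control processor fires an inter-processor interrupt (IPI),
        # forcing the guest to exit back to the hypervisor.
        self.events.append(("send_ipi", "cpu0"))
        self.events.append(("vm_exit", "guest-vm-1"))
```

The key design point is that the guest only pays the cost of a VM exit when the control processor's interval expires, rather than on every timer access.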
INSTRUCTION PRE-FETCHING
Pre-fetching instructions for tasks of an operating system (OS) is provided by calling a task scheduler that determines a load start time for a set of instructions for a particular task corresponding to a task switch condition. In response to the load start time, the OS calls a loader entity module that generates a pre-fetch request that loads the set of instructions for the particular task from a non-volatile memory circuit into a random access memory circuit. The OS then calls the task scheduler to switch to the particular task.
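The scheduler/loader interaction above can be modeled as a minimal sketch. The class names (`LoaderEntity`, `TaskScheduler`) and the latency-based load start time are assumptions for illustration:

```python
class LoaderEntity:
    """Copies task instructions from non-volatile storage into RAM."""

    def __init__(self, nonvolatile, ram):
        self.nonvolatile = nonvolatile   # task name -> instruction bytes
        self.ram = ram                   # simulated RAM cache

    def prefetch(self, task):
        # Service the pre-fetch request for one task's instructions.
        self.ram[task] = self.nonvolatile[task]

class TaskScheduler:
    def __init__(self, loader):
        self.loader = loader
        self.current = None

    def load_start_time(self, switch_time, load_latency):
        # Start loading early enough that the code is resident at switch time.
        return switch_time - load_latency

    def switch_to(self, task):
        # The switch assumes the instructions were pre-fetched into RAM.
        assert task in self.loader.ram, "instructions must be pre-fetched"
        self.current = task
```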
HARDWARE CONTROLLED INSTRUCTION PRE-FETCHING
A task control circuit maintains, in response to task event information, a task information queue that includes task information for a plurality of tasks. Based upon the task information in the task information queue, a future task switch condition is identified as corresponding to a task switch time for a particular task of the plurality of tasks. A load start time is determined for a set of instructions for the particular task. A pre-fetch request is generated to load the set of instructions for the particular task into the memory circuit. The pre-fetch request is forwarded to a hardware loader circuit. In response to the task switch time, a task event trigger is generated for the particular task. The hardware loader circuit is used to load, in response to the pre-fetch request, the set of instructions from a non-volatile memory into the memory circuit.
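A toy model of the task information queue and the pre-fetch planning step described above follows. The class name, queue entries, and fixed load latency are illustrative assumptions:

```python
from collections import deque

class TaskControlCircuit:
    def __init__(self, load_latency):
        self.queue = deque()          # (task, switch_time) entries
        self.load_latency = load_latency
        self.prefetch_requests = []   # requests for the hardware loader

    def on_task_event(self, task, switch_time):
        # Maintain the task information queue from incoming task events.
        self.queue.append((task, switch_time))

    def plan_prefetches(self):
        # For each queued future switch, compute a load start time and emit
        # a pre-fetch request to be forwarded to the hardware loader circuit.
        for task, switch_time in self.queue:
            start = switch_time - self.load_latency
            self.prefetch_requests.append((task, start))
        return self.prefetch_requests
```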
SYSTEM AND METHODS FOR TRANSACTION-BASED PROCESS MANAGEMENT
Systems and methods for transaction/file-based management of a plurality of processes associated with various jobs are provided. Through the management of discrete applications, a file distribution manager/scheduler orchestrates automated execution of different types of jobs. The processes executed for the various jobs can vary based on job type or other parameters.
Data Object Delivery for Distributed Cluster Computing
Methods and systems for delivering data for cluster computing are described herein. A worker device may receive a dataset and store the dataset in a local storage medium. This may prevent the need for the dataset to be sent over a network each time an application is used to perform a task. Each application may be able to access the dataset in the local storage medium. This may prevent the need to copy the dataset to memory associated with each application. A worker device may store a dataset, for example, if it determines that the frequency of updates to the dataset satisfies a threshold. The worker device may receive updates to the dataset via a messaging system and may store the updated data in the local storage medium.
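The frequency-threshold caching decision can be sketched as below. The `WorkerDevice` class and the updates-per-hour metric are assumptions chosen for illustration:

```python
class WorkerDevice:
    """Caches a dataset locally when its update rate is low enough."""

    def __init__(self, max_updates_per_hour):
        self.max_updates_per_hour = max_updates_per_hour
        self.local_store = {}  # stands in for the local storage medium

    def should_cache(self, updates_per_hour):
        # Cache only datasets whose update frequency satisfies the threshold;
        # rapidly changing data is cheaper to fetch fresh each time.
        return updates_per_hour <= self.max_updates_per_hour

    def receive_dataset(self, name, data, updates_per_hour):
        if self.should_cache(updates_per_hour):
            self.local_store[name] = data

    def apply_update(self, name, data):
        # Updates arrive via a messaging system and replace the local copy.
        if name in self.local_store:
            self.local_store[name] = data
```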
Techniques for remotely controlling an application
According to at least one aspect, a system for remotely controlling an application installed on a device is provided. The system includes at least one processor and at least one computer-readable storage medium storing instructions which program the at least one processor to identify a task for the application installed on the device to perform, transmit a binary short message service (SMS) message to the device including a task code associated with the identified task, receive an information request from the device responsive to the binary SMS message, and transmit task information to the device responsive to receiving the information request.
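One way the binary SMS message carrying a task code might be framed is sketched below. The wire layout (a 2-byte magic value, a 1-byte task code, a 2-byte request identifier) is entirely an assumption for illustration; the patent does not specify a format:

```python
import struct

MAGIC = 0xC0DE  # assumed marker identifying a task-control SMS

def encode_task_sms(task_code, request_id):
    # Pack the assumed header fields big-endian: magic, task code, request id.
    return struct.pack(">HBH", MAGIC, task_code, request_id)

def decode_task_sms(payload):
    # The device parses the binary SMS, then issues an information request
    # back to the server for the task identified by the code.
    magic, task_code, request_id = struct.unpack(">HBH", payload)
    if magic != MAGIC:
        raise ValueError("not a task-control SMS")
    return task_code, request_id
```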
Compute cluster preemption within a general-purpose graphics processing unit
Embodiments described herein provide techniques that enable a graphics processor to continue processing operations during the reset of a compute unit that has experienced a hardware fault. Threads and associated context state for a faulted compute unit can be migrated to another compute unit of the graphics processor, and the faulting compute unit can be reset while processing operations continue.
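The migrate-then-reset flow can be modeled with a toy sketch; the `GraphicsProcessor` class and its structure are illustrative assumptions only:

```python
class GraphicsProcessor:
    """Toy model of migrating work off a faulted compute unit."""

    def __init__(self, num_units):
        # Each compute unit holds a list of (thread, context_state) pairs.
        self.units = {i: [] for i in range(num_units)}
        self.healthy = set(range(num_units))

    def handle_fault(self, faulted_unit):
        # Migrate threads plus context state to a healthy peer, then reset
        # the faulted unit while the other units keep processing.
        self.healthy.discard(faulted_unit)
        target = next(iter(self.healthy))
        self.units[target].extend(self.units[faulted_unit])
        self.units[faulted_unit] = []        # reset clears local state
        self.healthy.add(faulted_unit)       # unit rejoins after reset
        return target
```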
Communication protocol, and a method thereof for accelerating artificial intelligence processing tasks
A system and method for communicating artificial intelligence (AI) tasks between AI resources are provided. The method comprises establishing a connection between a first AI resource and a second AI resource; encapsulating a request to process an AI task in at least one request data frame compliant with a communication protocol, wherein the at least one request data frame is encapsulated at the first AI resource; and transporting the at least one request data frame over a network using a transport protocol to the second AI resource, wherein the transport protocol provisions the transport characteristics of the AI task, and wherein the transport protocol is different than the communication protocol.
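The encapsulation step, splitting a task request into protocol-compliant frames, can be sketched as below. The frame layout (a JSON body chunked into fixed-size payloads with sequence numbers) is an assumption for illustration, not the protocol the patent defines:

```python
import json

FRAME_PAYLOAD = 64  # assumed maximum payload bytes per request data frame

def encapsulate(task):
    """Split an AI task request into ordered request data frames."""
    body = json.dumps(task).encode()
    chunks = [body[i:i + FRAME_PAYLOAD]
              for i in range(0, len(body), FRAME_PAYLOAD)]
    return [{"seq": seq, "last": seq == len(chunks) - 1, "data": chunk}
            for seq, chunk in enumerate(chunks)]

def reassemble(frames):
    """Reconstruct the task request at the receiving AI resource."""
    body = b"".join(f["data"] for f in sorted(frames, key=lambda f: f["seq"]))
    return json.loads(body)
```

In practice the frames would then be handed to a separate transport protocol for delivery, matching the document's distinction between the communication protocol (framing) and the transport protocol (delivery guarantees).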
EDGE FUNCTION BURSTING
One example method includes determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site; invoking a client agent; receiving, in response to invoking the client agent, an execution manifest; determining, by the client agent, where to execute the function, wherein the determining comprises identifying a target execution environment for the function and is based in part on information contained in the execution manifest; and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment, where the execution manifest facilitates execution of the function in the target execution environment.
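The client agent's placement decision can be sketched as a simple capacity check. The manifest structure (a mapping of environment names to available capacity) is an illustrative assumption; the patent does not specify the manifest's contents:

```python
def choose_execution_environment(manifest, local_capacity, required):
    """Pick where to run a function, bursting to a remote target if needed."""
    if local_capacity >= required:
        # Local edge resources are adequate: no bursting needed.
        return "local"
    # Local resources are inadequate: consult the execution manifest for a
    # target execution environment with enough capacity.
    for env, capacity in manifest.items():
        if capacity >= required:
            return env
    raise RuntimeError("no environment can run the function")
```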
Fault-tolerant and highly available configuration of distributed services
Fault-tolerant and highly available configuration of distributed services, including a computer-implemented method for role-based configuration discovery comprising receiving a request comprising an identifier of a role; identifying a first key, in a replica of a distributed configuration store, comprising a first value that matches the role identifier; identifying one or more other key-value pairs associated in the replica with the first key; and returning, to the entity that sent the request, a response comprising the value of at least one key-value pair that is specific to the service's role. Also disclosed are techniques for log forwarding.
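The role-based lookup over a replicated key-value store can be sketched as follows. The key-naming scheme (role entries grouped under a shared prefix) is an assumption made for illustration:

```python
def discover_config(replica, role):
    """Return the config key-value pairs associated with a role."""
    # Find the first key whose value matches the requested role identifier.
    role_key = next((k for k, v in replica.items() if v == role), None)
    if role_key is None:
        return {}
    # Return the other key-value pairs associated with that key, here
    # modeled as entries sharing the role key's prefix.
    prefix = role_key + "/"
    return {k: v for k, v in replica.items() if k.startswith(prefix)}
```

Because the lookup runs against a local replica of the distributed store, discovery stays available even when other replicas are unreachable, which is the fault-tolerance property the title emphasizes.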