Patent classifications
G06F9/4818
METHOD FOR HANDLING EXCEPTION OR INTERRUPT IN HETEROGENEOUS INSTRUCTION SET ARCHITECTURE AND APPARATUS
A method for handling an exception or interrupt in a heterogeneous instruction set architecture is provided. A physical host to which the method is applied can support two instruction set architectures. When a secondary architecture virtual machine triggers an exception or interrupt, a virtual machine monitor may translate code of the exception or interrupt in a secondary instruction set architecture into code of the exception or interrupt in a primary instruction set architecture. The virtual machine monitor may then identify, based on the translated code, a type of the exception or interrupt triggered by the secondary architecture virtual machine, and handle the exception or interrupt accordingly.
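The translation-and-dispatch step described above can be sketched as a lookup from secondary-ISA trap codes to primary-ISA codes followed by dispatch on the translated code. All code values, names, and the architecture pairing are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: the exception-code values and handler names below
# are illustrative assumptions (e.g. a RISC-V guest under an AArch64 host).

# Mapping from secondary-ISA exception codes to primary-ISA codes.
SECONDARY_TO_PRIMARY = {
    0x0C: 0x24,  # e.g. guest page fault  -> primary data abort
    0x08: 0x15,  # e.g. environment call  -> primary supervisor call
}

def handle_guest_trap(secondary_code: int) -> str:
    """Translate a secondary-ISA trap code, then dispatch on the primary code."""
    primary_code = SECONDARY_TO_PRIMARY.get(secondary_code)
    if primary_code is None:
        return "inject-undefined"   # no equivalent: treat as undefined exception
    if primary_code == 0x24:
        return "handle-page-fault"
    if primary_code == 0x15:
        return "handle-syscall"
    return "forward-to-host"
```

A real virtual machine monitor would perform this translation in its trap entry path before invoking its primary-architecture exception handlers.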
SYSTEMS AND METHODS FOR DYNAMIC RESOURCE ORCHESTRATION
Aspects of the present disclosure relate to a dynamic resource management system. In various examples, a dynamic resource management system enables resource-consuming tasks to be assigned to resources at least according to resource profiles and resource groupings. Resources may include primary resources and secondary resources and may be grouped into resource groups such that each resource group has a selection of primary resources and secondary resources. In some instances, resource groups may be updated to have updated selections of primary resources and secondary resources. To assign tasks to resources, the dynamic resource management system may first identify multiple resource-consuming tasks associated with a resource-consuming event and then assign each resource-consuming task to a resource. The task assignment may be performed based on resource information such as availability, performance, and/or experience.
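The assignment flow described above can be sketched as choosing, for each task, the best available resource in its group. The field names, the primary-first preference, and the performance tie-break are illustrative assumptions, not details from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class Resource:
    # Illustrative resource profile fields (assumed, not from the disclosure).
    name: str
    kind: str             # "primary" or "secondary"
    available: bool = True
    performance: int = 0  # higher is better

def assign_tasks(tasks, groups):
    """Assign each (task, group_name) pair to a resource in that group.

    Prefers primary resources over secondary ones, then higher performance,
    and marks a chosen resource unavailable for subsequent tasks.
    """
    assignment = {}
    for task, group_name in tasks:
        candidates = [r for r in groups[group_name] if r.available]
        candidates.sort(key=lambda r: (r.kind != "primary", -r.performance))
        if candidates:
            chosen = candidates[0]
            chosen.available = False
            assignment[task] = chosen.name
    return assignment
```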
DATA TRANSFER PRIORITIZATION FOR SERVICES IN A SERVICE CHAIN
An apparatus comprises at least one processing device configured to monitor, by a first service in a service chain, a first set of processing queues comprising two or more different processing queues associated with two or more different priority levels. The processing device is also configured to process, by the first service, a given portion of data stored in at least one of the two or more different processing queues in the first set of processing queues. The processing device is further configured to determine prioritization information associated with the given portion of the data and to select, based on the prioritization information, a given one of two or more different processing queues in a second set of processing queues associated with a second service in the service chain, and to store the given portion of the data in the given processing queue in the second set of processing queues.
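The per-service queue handoff described above can be sketched as popping from one service's priority queues, re-deriving the item's prioritization, and storing it into the matching queue of the next service in the chain. The two priority levels and function names are illustrative assumptions:

```python
from collections import deque

# Two illustrative priority levels; the patent allows two or more.
PRIORITIES = ("high", "low")

def make_queue_set():
    """One set of processing queues for one service in the chain."""
    return {p: deque() for p in PRIORITIES}

def process_and_forward(first_queues, second_queues, classify):
    """Pop from the first service's queues (highest priority first),
    determine prioritization information for the item via `classify`,
    and store it in the matching queue of the second service."""
    for level in PRIORITIES:
        if first_queues[level]:
            item = first_queues[level].popleft()
            new_level = classify(item)        # prioritization info for this item
            second_queues[new_level].append(item)
            return item, new_level
    return None
```

Note that an item may change priority between services, which is the point of re-selecting a queue at each hop rather than carrying one priority through the whole chain.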
PRIORITIZATION OF THREADS IN A SIMULTANEOUS MULTITHREADING PROCESSOR CORE
A first instruction for processing by a processor core is received. Whether the instruction is a larx is determined. Responsive to determining the instruction is a larx, whether a cacheline associated with the larx is locked is determined. Responsive to determining the cacheline associated with the larx is not locked, the cacheline associated with the larx is locked and a counter associated with a first thread of the processor core is started. The first thread is processing the first instruction.
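The larx handling described above can be sketched as a lock check on the target cacheline, followed by locking it and starting a counter for the issuing thread. This is a software model of the behavior only; real hardware implements it in the load-store unit, and the structure names are illustrative assumptions:

```python
class LarxTracker:
    """Software model of the larx flow described above: lock the cacheline
    if it is free and start a counter for the thread issuing the larx."""

    def __init__(self):
        self.locked_lines = {}  # cacheline address -> owning thread id
        self.counters = {}      # thread id -> counter value since its larx

    def on_larx(self, thread_id: int, cacheline: int) -> bool:
        """Return True if the larx acquired the lock, False if already locked."""
        if cacheline in self.locked_lines:
            return False                   # line locked by another larx
        self.locked_lines[cacheline] = thread_id
        self.counters[thread_id] = 0       # start counting for this thread
        return True

    def tick(self):
        """Advance every running counter by one cycle."""
        for tid in self.counters:
            self.counters[tid] += 1
```

Such a counter can later drive thread prioritization, e.g. by boosting a thread that has held a reservation for too long so it can reach the matching store-conditional.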
SCENE ENTITY PROCESSING USING FLATTENED LIST OF SUB-ITEMS IN COMPUTER GAME
Embodiments relate to storing hierarchically structured sub-items of scene entities in a flattened list of sub-items and performing time-constrained tasks on the sub-items in the flattened list. By storing the sub-items in the flattened list, the approximate time for processing the sub-items can be estimated more accurately, reducing the likelihood of an overly conservative estimate of the time for processing the sub-items. One or more sub-items of updated scene entities are extracted by a plurality of collectors that are executed in parallel to store the one or more sub-items in the flattened list. The sub-items are then accessed by multiple tasks executed in parallel to determine priority information associated with inclusion and rendering in subsequent frames. Sub-items with higher priority according to the priority information are given higher priority for retrieval from secondary memory and saving in primary memory.
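The flattening-and-budgeting idea above can be sketched as a depth-first walk that linearizes the hierarchy, after which the time estimate is simply proportional to the list length and the highest-priority sub-items fit the budget first. The entity shape and budget semantics are illustrative assumptions:

```python
def flatten(entity, out):
    """Depth-first flatten hierarchically structured sub-items into one list.

    Entity shape is an illustrative assumption: (name, priority, children).
    """
    name, priority, children = entity
    out.append((priority, name))
    for child in children:
        flatten(child, out)

def process_within_budget(flat, budget):
    """Process the highest-priority sub-items first, up to `budget` items.

    With a flat list, len(flat) gives an accurate up-front work estimate,
    avoiding the overly conservative estimates a tree walk would require.
    """
    flat.sort(reverse=True)          # higher priority first
    return [name for _, name in flat[:budget]]
```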
Managing execution of data processing jobs in a virtual computing environment
A device may receive a job request associated with a data processing job, including job timing data specifying a time at which the data processing job is to be executed by a virtual computing environment. The device may receive user data associated with the job request and validate the data processing job based on the user data. In addition, the device may identify a priority associated with the data processing job, based on the user data and the job timing data. The device may provide, to a job queue, job data that corresponds to the data processing job, and monitor the virtual computing environment to determine when virtual resources are available. The device may also determine, based on the monitoring, that a virtual resource is available and, based on the determination and the priority, provide the virtual resource with data that causes execution of the data processing job.
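The queue-and-dispatch flow described above can be sketched as a priority queue of validated jobs drained whenever monitoring reports a free virtual resource. The field names and the lower-number-is-higher-priority convention are illustrative assumptions:

```python
import heapq

class JobQueue:
    """Sketch of the described flow: validated jobs enter a priority queue,
    and one is dispatched when a virtual resource becomes available."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker so equal-priority jobs stay FIFO

    def submit(self, job, priority, run_at):
        """Queue job data; lower `priority` dispatches first, then earlier
        scheduled time (`run_at`), then submission order."""
        heapq.heappush(self._heap, (priority, run_at, self._seq, job))
        self._seq += 1

    def on_resource_available(self):
        """Called when monitoring detects a free virtual resource; returns
        the job to hand to that resource for execution, or None."""
        if self._heap:
            _, _, _, job = heapq.heappop(self._heap)
            return job
        return None
```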
OBSERVABILITY BASED WORKLOAD PLACEMENT
Techniques are described for using observability to allocate and deploy workloads for execution by computing resources in a cloud network. The workloads may be allocated and deployed to the computing resources based on metrics. The workloads may be deployed to the computing resources, based on the computing resources providing a number of types of observability that matches the number of metrics. The workloads may be deployed to the computing resources, further based on each of the computing resources matching a corresponding one of the metrics. Deployment of the workloads may be further based on availability of the computing resources. The workloads may be redeployed to other computing resources that provide different types of observability associated with the metrics, in comparison to the initial computing resources. The workloads may be allocated and deployed based on intent based descriptions indicating characteristics utilized to determine types of metrics for providing observability.
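The matching rule described above can be sketched as placing a workload only on an available resource whose observability types cover every metric the workload needs. The data shapes are illustrative assumptions, not the patent's representation:

```python
def place_workload(metrics, resources):
    """Return the name of a resource that can observe all of `metrics`.

    `resources` maps resource name -> (set of observability types, available
    flag). A resource matches only if it is available and its observability
    types are a superset of the required metrics; otherwise return None,
    in which case the workload may later be redeployed elsewhere.
    """
    needed = set(metrics)
    for name, (observed, available) in resources.items():
        if available and needed <= observed:
            return name
    return None
```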
Interrupt handling method, computer system, and non-transitory storage medium that resumes waiting threads in response to interrupt signals from I/O devices
The present invention provides an interrupt handling system for handling interrupts in a computer system. The interrupt handling system captures and processes the interrupts in a user space of the computer system. The present invention also provides an interrupt registration method that facilitates interrupt handling in the user space during porting of user applications from one platform to another.
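The register-wait-resume flow implied by the title can be sketched as application threads registering for an interrupt number and blocking until a capture path signals that the I/O device interrupt arrived. The class and method names are illustrative assumptions:

```python
import threading

class UserSpaceInterrupts:
    """Sketch of user-space interrupt handling: threads register for an
    interrupt number and wait; the capture path resumes them on delivery."""

    def __init__(self):
        self._events = {}
        self._lock = threading.Lock()

    def register(self, irq: int) -> threading.Event:
        """Register interest in interrupt `irq` (idempotent)."""
        with self._lock:
            return self._events.setdefault(irq, threading.Event())

    def wait_for(self, irq: int, timeout=None) -> bool:
        """Block the calling thread until `irq` is delivered (or timeout)."""
        return self.register(irq).wait(timeout)

    def deliver(self, irq: int):
        """Called from the capture path when the device interrupt arrives;
        resumes every thread waiting on this interrupt number."""
        with self._lock:
            ev = self._events.get(irq)
        if ev:
            ev.set()
```

Keeping this dispatch in user space means an application ported between platforms only re-targets the capture path, not its per-interrupt waiting logic.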
ELECTRONIC DEVICE, CONTROL METHOD OF ELECTRONIC DEVICE, AND RECORDING MEDIUM
An electronic device includes: a memory configured to store a program; and one or more processors configured to execute the program stored in the memory. The one or more processors are connected to a first circuit and a second circuit, the first circuit being configured to execute a first operation cyclically and output an interrupt signal corresponding to the first operation, the second circuit being configured to execute a second operation in response to an operation instruction. In response to receiving the interrupt signal, the one or more processors output the operation instruction to the second circuit such that the second operation is not executed in a period in which the first operation affects the second operation.
Realtime hypervisor with priority interrupt support
A problem with conventional art is that, in an environment wherein a plurality of interrupts having different priorities for processing occur in an overlapping manner from external devices, responding to high-priority interrupts while ensuring execution intervals of periodic tasks has been difficult. A partition execution control device according to the present invention comprises: a first management table which stores, for each partition, initial time slices, remaining time slices, execution priorities, execution states, and an interrupt disable level for suppressing the interrupts from the external devices; and a second management table which stores the interrupt priorities of the external devices and partitions to which the interrupts are to be output. This partition execution control device controls the execution of the partitions using the execution priorities and the time slices stored in the management tables, and controls the execution of the interrupts using the interrupt disable levels and the interrupt priorities.