Patent classifications
G06F9/48
Logical Slot to Hardware Slot Mapping for Graphics Processors
Disclosed techniques relate to work distribution in graphics processors. In some embodiments, an apparatus includes circuitry that implements a plurality of logical slots and a set of graphics processor sub-units that each implement multiple distributed hardware slots. The circuitry may determine different distribution rules for first and second sets of graphics work and map logical slots to distributed hardware slots based on the distribution rules. In various embodiments, disclosed techniques may advantageously distribute work efficiently across distributed shader processors for graphics kicks of various sizes.
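The mapping described above can be sketched as follows. This is an illustrative model only, not the patented implementation: the `SubUnit` structure, the two-slots-per-sub-unit layout, and the size threshold separating the two distribution rules are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class SubUnit:
    """A graphics processor sub-unit exposing a fixed number of distributed hardware slots."""
    uid: int
    hw_slots: list = field(default_factory=lambda: [None, None])  # assumed: two slots each

def map_logical_slot(logical_slot, kick_size, sub_units, small_kick_limit=64):
    """Map a logical slot to hardware slots under a size-based distribution rule.

    Small kicks occupy one slot on a single sub-unit; large kicks fan out to
    one slot on every sub-unit (one possible pair of distribution rules).
    """
    if kick_size < small_kick_limit:
        targets = [sub_units[logical_slot % len(sub_units)]]
    else:
        targets = sub_units
    mapping = []
    for su in targets:
        for i, owner in enumerate(su.hw_slots):
            if owner is None:          # claim the first free distributed slot
                su.hw_slots[i] = logical_slot
                mapping.append((su.uid, i))
                break
    return mapping
```

A small kick thus lands on one sub-unit while a large kick spreads across all of them, which is one way the same logical slot abstraction can serve kicks of various sizes.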
Affinity-based Graphics Scheduling
Techniques are disclosed relating to affinity-based scheduling of graphics work. In disclosed embodiments, first and second groups of graphics processor sub-units may share respective first and second caches. Distribution circuitry may receive a software-specified set of graphics work and a software-indicated mapping of portions of the set of graphics work to groups of graphics processor sub-units. The distribution circuitry may assign subsets of the set of graphics work based on the mapping. This may improve cache efficiency, in some embodiments, by allowing graphics work that accesses the same memory areas to be assigned to the same group of sub-units that share a cache.
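A minimal sketch of the affinity-based assignment, assuming the software-indicated mapping is a dictionary from work portion to group index (the function name and fallback policy are assumptions, not from the abstract):

```python
def assign_with_affinity(work_portions, affinity_map, num_groups):
    """Assign work portions to sub-unit groups per a software-indicated map.

    affinity_map names, per portion, the group whose shared cache likely
    already holds that portion's working set; unmapped portions fall back
    to round-robin (an assumed policy).
    """
    assignments = {g: [] for g in range(num_groups)}
    fallback = 0
    for portion in work_portions:
        group = affinity_map.get(portion)
        if group is None:
            group = fallback % num_groups
            fallback += 1
        assignments[group].append(portion)
    return assignments
```

Portions that touch the same memory areas map to the same group, so their accesses hit the group's shared cache rather than refetching from memory.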
LEDGER-BASED VERIFIABLE CODE EXECUTION
A system includes a ledger on which a task giver may register a task. The task may include executable code. A task solver may accept the task and execute the code to produce a solver output that is recorded on the ledger. Verifiers may provide competing verifier outputs which may also be recorded on the ledger. The solver and verifiers may compare their outputs to determine if there is agreement. Agreement may signify consistent and accurate execution of the code. Disagreement may indicate the presence of errors. In some cases, the solver and verifiers may compete in a contention-based protocol where a solver may assert control of tokens where the solver identifies an error in verifier execution. Additionally or alternatively, a verifier may assert control of tokens where the verifier identifies an error in solver execution.
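The solver/verifier agreement check might look like the sketch below, where the ledger is modeled as an append-only list and outputs are compared by canonical digest (all names and the JSON-digest scheme are assumptions for illustration):

```python
import hashlib
import json

def digest(output):
    """Canonical digest of an execution output, suitable for a ledger entry."""
    return hashlib.sha256(json.dumps(output, sort_keys=True).encode()).hexdigest()

def record(ledger, task_id, role, output):
    """Record a solver or verifier output digest on the (append-only) ledger."""
    ledger.append({"task": task_id, "role": role, "digest": digest(output)})

def outputs_agree(ledger, task_id):
    """Agreement across all recorded digests signals consistent execution;
    any divergent digest indicates the presence of an error."""
    digests = {e["digest"] for e in ledger if e["task"] == task_id}
    return len(digests) == 1
```

In the contention-based variant, a disagreement detected by `outputs_agree` would be the trigger for a solver or verifier to assert control of tokens.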
CONFIGURABLE LOGIC PLATFORM WITH RECONFIGURABLE PROCESSING CIRCUITRY
An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications. The architecture provides capabilities for destination-task-defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, and for prioritizing application task instances for execution on cores of the manycore processors based at least in part on which of the task instances have available the input data, such as ITC data, that they need for executing.
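The last capability, selecting task instances whose input data is already available, can be sketched as a simple ready-queue filter (the record fields and priority scheme are assumptions, not from the abstract):

```python
def pick_next_instances(task_instances, free_cores):
    """Select task instances for execution on free cores, favoring (1) those
    whose required input (e.g. ITC) data is available and (2) higher
    application-assigned priority within that ready group."""
    ready = [t for t in task_instances if t["inputs_available"]]
    ready.sort(key=lambda t: t["priority"], reverse=True)
    return ready[:free_cores]
```

Instances still waiting on ITC data never occupy a core, which is the point of making data availability a scheduling input.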
RESOURCE PROVISIONING SYSTEMS AND METHODS
A method for a first set of processors and a second set of processors comprises processing a set of queries using the first set of processors and, as a result of a change in utilization of the first set of processors, processing the set of queries using the second set of processors. The change in processors is independent of a change in storage resources, which are shared by the first set of processors and the second set of processors.
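A toy model of this decoupling, assuming a utilization threshold triggers the switch (the class, field names, and threshold are all illustrative assumptions):

```python
class QueryRouter:
    """Route queries to one of two processor sets over shared storage.

    Compute reassignment is independent of the storage layer: both sets
    read the same shared_storage reference, so switching changes only
    where queries execute, not where data lives."""

    def __init__(self, first_set, second_set, shared_storage, threshold=0.8):
        self.sets = {"first": first_set, "second": second_set}
        self.active = "first"
        self.shared_storage = shared_storage
        self.threshold = threshold

    def utilization(self):
        s = self.sets[self.active]
        return s["busy"] / s["total"]

    def process(self, queries):
        # Switch processor sets when utilization crosses the threshold.
        if self.active == "first" and self.utilization() > self.threshold:
            self.active = "second"
        return [(self.active, q, self.shared_storage[q]) for q in queries]
```

Because `shared_storage` is untouched by the switch, compute capacity scales without any data movement.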
CONFIGURABLE SCHEDULER WITH PRE-FETCH AND INVALIDATE THREADS IN A GRAPH STREAM PROCESSING SYSTEM
Systems, apparatuses, and methods are disclosed for scheduling threads comprising code blocks in a graph streaming processor (GSP) system. One system includes a scheduler for scheduling a plurality of prefetch threads, main threads, and invalidate threads. The prefetch threads prefetch, from main memory, the data required for execution of the main threads of the next stage. The main threads include sets of instructions operating on the graph streaming processors of the GSP system. The invalidate threads invalidate the data locations consumed by the main threads of the previous stage. A portion of the scheduler is implemented in hardware.
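The three thread types interleave across stages roughly as sketched below, using a dict as a stand-in for the cache and a trace list to make the ordering visible (the data structures and function name are assumptions, not the GSP design):

```python
def run_stages(stages, memory):
    """Run staged threads in GSP-style order: for stage N, prefetch the
    inputs of stage N+1, execute stage N's main thread, then invalidate
    the locations consumed by stage N-1."""
    cache = {}
    trace = []
    for n, stage in enumerate(stages):
        if n + 1 < len(stages):                       # prefetch threads: next stage
            for loc in stages[n + 1]["inputs"]:
                cache[loc] = memory[loc]
            trace.append(("prefetch", n + 1))
        stage["main"](cache)                          # main threads: current stage
        trace.append(("main", n))
        if n > 0:                                     # invalidate threads: previous stage
            for loc in stages[n - 1]["inputs"]:
                cache.pop(loc, None)
            trace.append(("invalidate", n - 1))
    return trace
```

The effect is a pipeline: each main thread finds its data already cached, and stale locations are reclaimed one stage behind execution.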
ENVOY FOR MULTI-TENANT COMPUTE INFRASTRUCTURE
A data management and storage (DMS) cluster of peer DMS nodes manages data of a tenant of a multi-tenant compute infrastructure. The compute infrastructure includes an envoy connecting the DMS cluster to virtual machines of the tenant executing on the compute infrastructure. The envoy provides the DMS cluster with access to the virtual tenant network and to the virtual machines of the tenant connected via that network, for DMS services such as data fetch jobs that generate snapshots of the virtual machines. The envoy sends the snapshots from the virtual machines to a peer DMS node via the connection for storage within the DMS cluster. The envoy provides the DMS cluster with secure access to authorized tenants of the compute infrastructure while maintaining data isolation of tenants within the compute infrastructure.
METHOD FOR DATA PROCESSING, AND COMMUNICATION DEVICE
A method for data processing and a communication device are provided. The method includes the following operations. First configuration information is acquired. The first configuration information is used for configuring N split modes and a jth part corresponding to an ith split mode among the N split modes, where N is an integer greater than or equal to 1, i is greater than or equal to 1 and less than or equal to N, j is greater than or equal to 1 and less than or equal to M, and M is an integer greater than 1. The N split modes include a split mode for splitting a data processing model into at least two sub-processing models at a preset split position.
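Treating the model as a layer list, a configured split mode can be applied as below; the `config` shape and function name are illustrative assumptions, not the claimed signaling format:

```python
def split_model(layers, config, mode_index):
    """Split a data-processing model (a layer list) per one of N configured
    split modes; each mode lists preset split positions, yielding M parts
    (sub-processing models)."""
    positions = config["split_modes"][mode_index]   # e.g. [2] -> two sub-models
    parts, start = [], 0
    for pos in positions:
        parts.append(layers[start:pos])
        start = pos
    parts.append(layers[start:])
    return parts
```

Each mode's position list thus determines both M (the number of parts) and where the jth part begins and ends.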
METHOD, APPARATUS, AND STORAGE MEDIUM FOR SCHEDULING TASKS
The present disclosure provides a method, an apparatus, and a non-transitory computer readable medium for scheduling tasks. The method includes acquiring task information of a current task to be executed, the task information describing the current task to be executed; determining an execution time for the current task to be executed according to the task information; and comparing the execution time with a preset scheduling time corresponding to the current task to be executed, and adjusting an actual scheduling time corresponding to a next task to be executed according to the comparison result, so as to determine whether to schedule the next task to be executed.
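One possible form of the comparison-and-adjust step, assuming the next task is simply delayed by any overrun of the current task (the overrun policy is an assumption; the abstract does not specify how the adjustment is computed):

```python
def adjust_next_scheduling(exec_time, preset_time, next_scheduled_at):
    """Compare a task's execution time with its preset scheduling time and
    shift the next task's actual scheduling time by any overrun; an on-time
    or early finish leaves the next task's schedule unchanged."""
    overrun = exec_time - preset_time
    return next_scheduled_at + max(0.0, overrun)
```

Comparing the returned time against the current time would then decide whether the next task is scheduled immediately or deferred.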
FOCUS CONTROLLING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM
A focus controlling method determines, in response to scrolling of a display interface of an electronic device, control information of a first control having focus in a first display interface before the scrolling; and displays, according to the control information of the first control and an interface scrolling direction, a second control to receive focus in a second display interface after the scrolling.