Patent classifications
G06F2209/5017
PARALLEL PROCESSING OF BLOCKCHAIN PROCEDURES
A client computer may split a transaction into sub-transactions, send each sub-transaction to a different group of peers in a blockchain network, wherein each group has at least one peer from each essential organization in the blockchain network, receive processed sub-transactions from the peers in the blockchain network, validate each sub-transaction, and validate the transaction based on the validation of all sub-transactions, wherein all sub-transactions must be valid for the transaction to be valid.
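The split-then-validate scheme above could be sketched roughly as follows; all names (`split_transaction`, the dict fields, the stand-in peer results) are illustrative and not from any real blockchain SDK:

```python
def split_transaction(transaction, n_parts):
    """Split a transaction's item list into at most n_parts sub-transactions."""
    items = transaction["items"]
    chunk = -(-len(items) // n_parts)  # ceiling division
    return [{"id": f"{transaction['id']}-{i}",
             "items": items[i * chunk:(i + 1) * chunk]}
            for i in range(n_parts) if items[i * chunk:(i + 1) * chunk]]

def validate_transaction(sub_results):
    """The parent transaction is valid only if every sub-transaction is valid."""
    return all(r["valid"] for r in sub_results)

tx = {"id": "tx1", "items": list(range(10))}
subs = split_transaction(tx, 3)
# Stand-in for endorsement results returned by each peer group:
results = [{"id": s["id"], "valid": True} for s in subs]
validate_transaction(results)
```

In the abstract's terms, each element of `subs` would go to a different peer group, and `validate_transaction` performs the final all-or-nothing check.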
Tile Assignment to Processing Cores Within a Graphics Processing Unit
A graphics processing unit configured to process graphics data using a rendering space which is sub-divided into a plurality of tiles, the graphics processing unit comprising: a plurality of processing cores configured to render graphics data; cost indication logic configured to obtain a cost indication for each of a plurality of sets of one or more tiles of the rendering space, wherein the cost indication for a set of one or more tiles is suggestive of a cost of processing the set of one or more tiles; similarity indication logic configured to obtain similarity indications between sets of one or more tiles of the rendering space, wherein the similarity indication between two sets of one or more tiles is indicative of a level of similarity between the two sets of tiles according to at least one processing metric; and scheduling logic configured to assign the sets of one or more tiles to the processing cores for rendering in dependence on the cost indications and the similarity indications.
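A greedy scheduler in the spirit of the cost/similarity idea above might look like the following sketch; the cost and similarity values are assumed inputs rather than anything derived from real GPU state, and the tie-breaking rule is invented for illustration:

```python
def assign_tiles(costs, similarity, n_cores):
    """costs: {tile: cost}; similarity: {(a, b): score}; -> {core: [tiles]}."""
    load = [0.0] * n_cores
    assignment = {c: [] for c in range(n_cores)}
    # Place the most expensive tile sets first.
    for tile in sorted(costs, key=costs.get, reverse=True):
        def score(core):
            # Affinity to tiles already on this core (similarity is symmetric here).
            affinity = sum(similarity.get((tile, t), similarity.get((t, tile), 0))
                           for t in assignment[core])
            return load[core] - affinity  # prefer low load and high similarity
        best = min(range(n_cores), key=score)
        assignment[best].append(tile)
        load[best] += costs[tile]
    return assignment
```

The interplay the claim describes, where both cost balancing and similarity grouping feed the scheduling decision, shows up here as the single `score` function combining the two signals.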
POWER MANAGEMENT SYSTEM AND POWER MANAGEMENT METHOD
A power management system including a management apparatus configured to assign divided computation processing constituting at least a part of predetermined computation processing to a distributed computing device placed in a facility, wherein the management apparatus includes a receiver configured to receive a message including an information element indicating a type of corresponding computation processing that the distributed computing device is capable of handling, and a controller configured to perform assignment processing to assign the divided computation processing to the distributed computing device based on the type of the corresponding computation processing.
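The type-based assignment the abstract describes could be sketched minimally as below; the device capabilities and task types are invented for illustration:

```python
def assign(tasks, devices):
    """tasks: [(name, type)]; devices: {device: set of supported types}.
    Assigns each divided computation to the first device reporting its type."""
    plan = {}
    for name, ttype in tasks:
        for dev, types in devices.items():
            if ttype in types:
                plan[name] = dev
                break
    return plan
```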
Asynchronous and parallel application processing
Input records are obtained based on a request to process an application. References to the input records are stored as entries in a queue. A total thread number for threads and a total server number for servers are determined. The threads are initiated on the servers and process asynchronously and in parallel across the servers. Each thread obtains a reference from a unique entry of the queue, marks that entry as being addressed, processes the input record corresponding to the entry using values provided with the request to process the application, stores results associated with processing the input record in a data store, and iterates back to obtain a next unique entry from the queue until every entry of the queue is marked as having been addressed. When every entry of the queue is marked, a reference to the data store is returned as application results for processing the application.
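The queue-driven worker pattern above can be sketched with Python threads standing in for distributed servers; the doubling step is a placeholder for whatever per-record processing the application performs:

```python
import queue
import threading

def process_application(input_records, total_threads=4):
    work = queue.Queue()
    for i, _ in enumerate(input_records):
        work.put(i)                          # entries hold references (indices)
    results = {}                             # stand-in for the data store
    lock = threading.Lock()

    def worker():
        while True:
            try:
                idx = work.get_nowait()      # obtain (and thereby mark) a unique entry
            except queue.Empty:
                return                       # every entry has been addressed
            value = input_records[idx] * 2   # illustrative per-record processing
            with lock:
                results[idx] = value

    threads = [threading.Thread(target=worker) for _ in range(total_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results                           # the "reference to the data store"
```

`Queue.get_nowait` hands each index to exactly one thread, which plays the role of marking the entry as addressed.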
Parallel Processing in Cloud
Methods and systems for distributing and concurrently executing various portions of a linearly programmed computing task in multiple cloud instances in cloud computing platforms are described herein. Upon receiving a request to execute the linearly programmed computing task, the requested task is added to a task queue. Various portions of the task may be determined based on the data structure of the data to be processed during the execution of the task. Then the portions may be distributed to multiple cloud instances for concurrent execution. Alternatively, the task may be distributed to a cloud instance, which may determine the various portions based on the data structure of the data to be processed by the task, execute one or more portions, and then add requests for the other portions to the task queue such that the other portions can be distributed to other cloud instances for execution.
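The second mode described above, where an instance derives portions from the data, runs one locally, and re-enqueues the rest, could be sketched like this; each loop iteration plays the role of a cloud instance, and the squaring step is a placeholder workload:

```python
from collections import deque

def run_cloud(data, portion_size=2):
    task_queue = deque([(0, len(data))])     # the whole task, as an index range
    results = []
    while task_queue:                        # each iteration = one "instance"
        start, end = task_queue.popleft()
        if end - start > portion_size:       # portions derived from the data shape
            mid = (start + end) // 2
            task_queue.append((mid, end))    # re-enqueue the other portion
            end = mid                        # execute the first portion locally
        results.extend(x * x for x in data[start:end])
    return sorted(results)
```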
Search time estimate in a data intake and query system
Systems and methods are described for determining a query execution time in a data intake and query system. The system parses a query to identify different portions of the query that are executed by different components of the data intake and query system. The system determines a query execution time for the different portions of the query based on the corresponding components. Based on the query execution time of the different portions for the query, the system determines a query execution time for the query.
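A toy estimator in the spirit of that abstract might sum per-portion costs keyed by component; the component names and unit costs below are assumed for illustration, not taken from any real data intake and query system:

```python
# Hypothetical per-workload-unit execution costs, in seconds.
COMPONENT_COST = {"search_head": 0.5, "indexer": 2.0, "forwarder": 0.1}

def estimate_query_time(portions):
    """portions: [(component, workload_units)] -> total estimated seconds."""
    return sum(COMPONENT_COST[comp] * units for comp, units in portions)
```

For example, a query parsed into two search-head units and three indexer units would be estimated at `2 * 0.5 + 3 * 2.0 = 7.0` seconds.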
Scheduling vehicle task offloading and triggering a backoff period
System, methods, and other embodiments described herein relate to improving scheduling of computing tasks in a mobile environment for a vehicle. In one embodiment, a method includes receiving an offloading request associated with a computing task from the vehicle, wherein the offloading request includes context information and a task descriptor related to the computing task. The method also includes scheduling the computing task to execute on a server if the context information and the task descriptor satisfy criteria for using computing resources associated with the server for the vehicle. The method also includes partitioning the computing task into subtasks if the context information satisfies the criteria. A machine learning module may decide partitions of the computing task according to the context information. The method also includes sending a scheduling signal including a scheduling message to the vehicle and the scheduling message includes scheduling information and task partition information associated with offloading the subtasks.
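Hypothetical decision logic for that offloading flow might look like the following; the criteria thresholds, context fields, and the bandwidth-based partition count (a stand-in for the machine learning module) are all invented for illustration:

```python
def schedule_offload(request, max_latency_ms=100, min_bandwidth_mbps=10):
    """Decide whether to offload and, if so, into how many subtasks."""
    ctx, task = request["context"], request["task"]
    meets_criteria = (ctx["latency_ms"] <= max_latency_ms
                      and ctx["bandwidth_mbps"] >= min_bandwidth_mbps)
    if not meets_criteria:
        return {"offload": False}             # run the task on the vehicle
    # Stand-in for the ML module deciding partitions from context information.
    n_parts = max(1, task["size"] // ctx["bandwidth_mbps"])
    return {"offload": True, "subtasks": n_parts}
```

The returned dict corresponds loosely to the scheduling message in the claim: whether the server takes the task, and the task partition information.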
METHOD FOR AUTOMATIC SCHEDULING OF TASKS, ELECTRONIC DEVICE EMPLOYING METHOD, AND COMPUTER READABLE STORAGE MEDIUM
A method for the automatic scheduling of tasks obtains data processing tasks and data sources. A job queue is formed based on the data processing tasks. The job tasks are extracted in order from the job queue, and computing resources are distributed based on the extracted job tasks. A result of each data processing task is obtained by a pre-trained model based on the data source. An electronic device and a computer readable storage medium applying the method are also provided.
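A minimal FIFO sketch of that flow, with a stand-in function in place of the pre-trained model:

```python
from collections import deque

def run_jobs(tasks, data_sources, model=lambda src: sum(src)):
    """Form a job queue from the tasks and process it in order."""
    job_queue = deque(tasks)              # job queue formed from the tasks
    results = {}
    while job_queue:
        task = job_queue.popleft()        # extract in order
        results[task] = model(data_sources[task])
    return results
```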
HETEROGENEOUS COMPUTING-BASED TASK PROCESSING METHOD AND SOFTWARE AND HARDWARE FRAMEWORK SYSTEM
A heterogeneous computing-based task processing method and a software and hardware framework system. The task processing method includes: breaking down an artificial intelligence analysis task into one stage or multiple stages of sub-tasks, and completing, by one or more analysis function unit services corresponding to the one stage or multiple stages of sub-tasks, the artificial intelligence analysis task by means of a hierarchical data flow, wherein different stages of sub-tasks have different types, one type of sub-task corresponds to one analysis function unit service, and each analysis function unit service uniformly schedules a plurality of heterogeneous units to execute a corresponding sub-task.
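The staged data flow above could be sketched as each stage's "analysis function unit service" round-robining its sub-tasks over a pool of heterogeneous units; the stage function and unit names below are illustrative:

```python
from itertools import cycle

def run_pipeline(items, stages):
    """stages: [(stage_fn, [unit_name, ...])], applied in order to each item."""
    data = items
    for stage_fn, units in stages:
        pool = cycle(units)               # uniform scheduling across the units
        data = [stage_fn(item, next(pool)) for item in data]
    return data

# One stage (e.g. detection), scheduled across a GPU and an NPU:
detect = lambda x, unit: (x, unit)
stages = [(detect, ["gpu0", "npu0"])]
run_pipeline([1, 2, 3], stages)
```

A multi-stage pipeline is just a longer `stages` list, with each stage's output feeding the next, mirroring the hierarchical data flow in the claim.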
Systems and Methods to Leverage Unused Compute Resource for Machine Learning Tasks
Systems and methods relating to leveraging inactive computing resources are discussed. An example system may include one or more computing nodes having an active state and an inactive state, one or more processors, and a memory. The memory may contain instructions therein that, when executed, cause the one or more processors to identify a task to be performed by the one or more computing nodes based upon a received request. The instructions may further cause the one or more processors to create one or more sub-tasks based upon the task and schedule the one or more sub-tasks for execution on the one or more computing nodes during the inactive state. The instructions may further cause the one or more processors to collate the one or more sub-tasks into a completed task, and generate a completed task notification based upon the completed task.
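A toy model of that idea, scheduling sub-tasks only onto nodes currently in their inactive state and collating the results; the node states and field names are invented for illustration:

```python
def schedule_on_idle(subtasks, nodes):
    """nodes: {name: "active" | "inactive"}; map each sub-task to an idle node."""
    idle = [n for n, state in nodes.items() if state == "inactive"]
    plan = {}
    for i, sub in enumerate(subtasks):
        if idle:
            plan[sub] = idle[i % len(idle)]   # round-robin over idle nodes
    return plan

def collate(results):
    """Collate completed sub-task results and emit a completion notice."""
    return {"task": results, "notification": "completed"}
```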