Patent classification: G06F2209/5017
FEDERATED LEARNING
A federated learning method and apparatus, a device and a medium are provided, relating to the field of artificial intelligence and in particular to federated learning and machine learning. The federated learning method includes: receiving data related to a federated learning task of a target participant, wherein the target participant includes at least a first computing device for executing the federated learning task; determining the computing resources of the first computing device that can be used to execute the federated learning task; and generating a first deployment scheme for executing the federated learning task in response to determining that the data and the computing resources meet a predetermined condition, wherein the first deployment scheme instructs generating at least a first work node and a second work node on the first computing device.
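The resource check and two-worker deployment described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; the function name, the resource thresholds, and the one-worker-per-core policy are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DeploymentScheme:
    """Work nodes to create on a single computing device."""
    device_id: str
    worker_names: list

def plan_deployment(device_id, data_size_mb, free_cores, free_mem_mb,
                    min_cores=2, min_mem_mb=1024):
    """Generate a deployment scheme with at least two work nodes when
    the task data and the device's free resources meet a predetermined
    condition (thresholds here are illustrative)."""
    if data_size_mb <= 0:
        return None  # no task data received
    if free_cores < min_cores or free_mem_mb < min_mem_mb:
        return None  # predetermined condition not met
    # one work node per free core, but always at least two
    n_workers = max(2, free_cores)
    workers = [f"{device_id}-worker-{i}" for i in range(n_workers)]
    return DeploymentScheme(device_id, workers)
```

A device with four free cores and enough memory would thus receive a scheme with four work nodes, while a single-core device would be rejected.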
INFORMATION PROCESSING SYSTEM AND INFORMATION PROCESSING METHOD
An information processing system comprising one or more information processing apparatuses is provided. Each information processing apparatus includes: a division function that divides processing information into a plurality of pieces under a division condition that designates parallel processing among the information processing apparatuses, the processing information indicating a data processing procedure from a plurality of start points to one or more end points; a determination function that uniquely determines, as one of the information processing apparatuses, an assignee of each piece of the processing information divided by the division function; and an execution function that executes a process in the information processing apparatus determined by the determination function.
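The key property of the determination function is that every apparatus arrives at the same unique assignee for each piece without coordination. One common way to get that property is a stable hash, sketched below; the function and piece names are hypothetical, and the patent does not specify a hashing scheme.

```python
import hashlib

def assign_pieces(pieces, apparatuses):
    """Uniquely map each divided piece of processing information to one
    apparatus. A stable content hash means every apparatus computes the
    identical assignment independently."""
    assignment = {}
    for piece in pieces:
        digest = hashlib.sha256(piece.encode()).digest()
        idx = int.from_bytes(digest[:4], "big") % len(apparatuses)
        assignment[piece] = apparatuses[idx]
    return assignment
```

Because the mapping depends only on the piece's content and the apparatus list, re-running the determination anywhere yields the same assignee for every piece.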
APPARATUS AND METHOD FOR TREE STRUCTURE DATA REDUCTION
Apparatus and method for tree structure data reduction. For example, one embodiment of an apparatus comprises: a plurality of compute units; bounding volume hierarchy (BVH) processing logic to update a BVH responsive to changes associated with leaf nodes of the BVH, the BVH processing logic comprising: treelet generation logic to arrange nodes of the BVH into a plurality of treelets, the treelets including a plurality of bottom treelets and a tip treelet, each treelet having a number of nodes selected based on workgroup processing resources of the compute units; a dispatcher to dispatch workgroups to compute units to process the treelets, wherein a separate workgroup comprising a separate plurality of threads is dispatched to process each treelet.
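The partitioning step, sizing each treelet to what one workgroup can process, can be illustrated with a simple chunking sketch. This is an illustration only: real treelet formation follows the BVH topology, whereas here the nodes are assumed to be in a flat leaf-to-root order, and the function name is hypothetical.

```python
def build_treelets(node_ids, workgroup_size):
    """Partition BVH node ids (assumed ordered leaf-first) into bottom
    treelets of at most `workgroup_size` nodes, plus one tip treelet
    holding the remaining nodes nearest the root."""
    chunks = [node_ids[i:i + workgroup_size]
              for i in range(0, len(node_ids), workgroup_size)]
    if len(chunks) == 1:
        return [], chunks[0]  # everything fits in the tip treelet
    return chunks[:-1], chunks[-1]

def dispatch(bottom_treelets, tip_treelet, process):
    """Dispatch one workgroup per treelet: bottoms first, tip last,
    since the tip depends on the refit bottom treelets."""
    for treelet in bottom_treelets:
        process(treelet)
    process(tip_treelet)
```

Processing bottoms before the tip mirrors a bottom-up refit: interior bounds near the root can only be updated once the subtrees below them are done.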
System, method, and computer program product for processing large data sets by balancing entropy between distributed data segments
Systems, methods, and computer program products are provided for load balancing for processing large data sets. The method includes identifying a number of segments and a transaction data set comprising transaction data for a plurality of transactions, the transaction data for each transaction comprising a transaction value; determining an entropy of the transaction data set based on the transaction value of each transaction; segmenting the transaction data set into the number of segments based on the entropy of the transaction data set and balancing the respective entropies of the segments; and distributing processing tasks associated with each segment to at least one processor of a plurality of processors to process each transaction in each respective segment.
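The entropy-balanced segmentation step can be sketched with Shannon entropy contributions and a greedy assignment. This is one plausible reading, not the patented algorithm: the function names are hypothetical, and the greedy "assign to the least-loaded segment" rule is a standard balancing heuristic substituted here for whatever the patent actually claims.

```python
import math

def entropy_contributions(values):
    """Per-transaction term -p*log2(p) of the data set's Shannon
    entropy, with p taken as each value's share of the total."""
    total = sum(values)
    return [-(v / total) * math.log2(v / total) for v in values]

def segment_by_entropy(values, n_segments):
    """Assign each transaction (largest entropy contribution first) to
    the segment with the lowest accumulated entropy, so the segments'
    entropies come out roughly balanced."""
    contribs = entropy_contributions(values)
    order = sorted(range(len(values)), key=lambda i: -contribs[i])
    segments = [[] for _ in range(n_segments)]
    seg_entropy = [0.0] * n_segments
    for i in order:
        j = seg_entropy.index(min(seg_entropy))
        segments[j].append(i)
        seg_entropy[j] += contribs[i]
    return segments, seg_entropy
```

With this greedy rule the gap between the heaviest and lightest segment is bounded by the largest single contribution, which is what makes the downstream per-processor workloads comparable.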
RESERVOIR SIMULATION UTILIZING HYBRID COMPUTING
Hybrid computing that utilizes a computer processor coupled to one or more graphical processing units (GPUs) is configured to perform computations that generate outputs related to reservoir simulations associated with formations that may include natural gas and oil reservoirs.
CROSS-PLATFORM CONTEXT-SPECIFIC AUTOMATION SCHEDULING
A frontend of a platform of a multiplatform system can be monitored for user input. Upon receiving a user input that includes particular content, a data object describing the context in which the user input was provided may be created. One or more automations may be selected from an automation database based on a similarity to the determined context. The selected automations can be automatically displayed for the user, thereby encouraging the user to leverage automations across multiple platforms without requiring the user to switch between different platforms and without requiring the user to learn or understand platform-specific automation engines.
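The context-similarity lookup can be sketched by representing each context as a set of tags and ranking stored automations by Jaccard similarity. The similarity measure, the tag representation, and all names below are assumptions for illustration; the patent does not specify how similarity is computed.

```python
def jaccard(a, b):
    """Jaccard similarity of two tag sets: |A∩B| / |A∪B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_automations(context_tags, automation_db, top_k=3):
    """Rank stored automations by similarity between the current input
    context and each automation's recorded context, and return the
    top_k names for display to the user."""
    scored = sorted(automation_db.items(),
                    key=lambda kv: jaccard(context_tags, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]
```

A user typing in a finance-related email context would thus see finance automations surfaced first, regardless of which platform originally defined them.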
Systems and methods to leverage unused compute resource for machine learning tasks
Systems and methods relating to leveraging inactive computing resources are discussed. An example system may include one or more computing nodes having an active state and an inactive state, one or more processors, and a memory. The memory may contain instructions that, when executed, cause the one or more processors to identify a task to be performed by the one or more computing nodes based upon a received request. The instructions may further cause the one or more processors to create one or more sub-tasks based upon the task and schedule the one or more sub-tasks for execution on the one or more computing nodes during the inactive state. The instructions may further cause the one or more processors to collate the results of the one or more sub-tasks into a completed task, and generate a completed task notification based upon the completed task.
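The split/schedule/collate flow can be sketched as below. All names, the fixed chunk size, and the simple "first inactive node" scheduling rule are hypothetical stand-ins for the patent's scheduler.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    state: str  # "active" or "inactive"

def run_on_idle_nodes(task_items, nodes, work_fn, chunk_size=2):
    """Split a task into sub-tasks, run each sub-task on a node that is
    currently in its inactive state, and collate the sub-task results
    into one completed result."""
    subtasks = [task_items[i:i + chunk_size]
                for i in range(0, len(task_items), chunk_size)]
    collated = []
    for sub in subtasks:
        idle = [n for n in nodes if n.state == "inactive"]
        if not idle:
            raise RuntimeError("no inactive node available")
        collated.extend(work_fn(idle[0], sub))
    return collated
```

A real scheduler would of course wait for nodes to become inactive and re-check state between sub-tasks rather than fail outright; the sketch only shows the split-then-collate shape.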
METHOD FOR SPLITTING NEURAL NETWORK MODEL BY USING MULTI-CORE PROCESSOR, AND RELATED PRODUCT
Embodiments of the present disclosure provide a method for splitting a neural network model to be processed by a multi-core processor, and related products. When a splittable operator is present in the neural network model, the operator is split, and an optimal splitting combination is selected to obtain an optimal splitting result for the entire neural network model; the sub-operators corresponding to the optimal splitting result are then executed in parallel on multiple cores. This reduces the resource consumption of the computer device.
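Selecting an "optimal splitting combination" amounts to a cost-model search over candidate splits. A toy version for a single operator is sketched below; the linear cost model (parallel work per part plus a fixed per-part split overhead) and every name are illustrative assumptions, not the disclosure's actual cost model.

```python
def best_split(total_work, n_cores, overhead_per_part=5.0):
    """Evaluate each candidate split count for one operator and keep
    the one with the lowest estimated cost: work divided across the
    parts, plus a fixed overhead for each extra sub-operator."""
    best_parts, best_cost = 1, float("inf")
    for parts in range(1, n_cores + 1):
        cost = total_work / parts + overhead_per_part * parts
        if cost < best_cost:
            best_parts, best_cost = parts, cost
    return best_parts
```

The trade-off the search captures: a large operator is worth splitting across all cores, while a tiny operator is cheaper to run whole because the split overhead would dominate.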
Method for large-scale distributed machine learning using formal knowledge and training data
A method for large-scale distributed machine learning using input data comprising formal knowledge and/or training data. The method consists of independently calculating discrete algebraic models of the input data on one or many computing devices, and of sharing indecomposable components of the algebraic models among the computing devices without constraints on when, or how many times, the sharing needs to happen. The method uses asynchronous communication among machines or computing threads, each working on the same or related learning tasks. Each computing device improves its algebraic model every time it receives new input data or a sharing from other computing devices, thereby providing a solution to the scaling-up problem of machine learning systems.
Distributed processing of sensed information
A method for distributed neural network processing. The method may include detecting, by a local neural network that belongs to a local device and based on sensed information, an occurrence of a triggering event for executing or completing a classification or detection process; sending, to a remote device that comprises a remote neural network, a request to execute or complete the classification or detection process, wherein the remote neural network has more computational resources than the local neural network; determining, by the remote device, whether to accept the request; and executing or completing, by the remote device, the classification or detection process when determining to accept the request, wherein the executing or completing involves utilizing the remote neural network.
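The local-detect, remote-offload protocol can be sketched with a confidence threshold as the triggering event. The threshold, the load-based accept rule, and all names are hypothetical choices for illustration; the patent leaves both the trigger and the acceptance criterion open.

```python
class RemoteDevice:
    """Stand-in for a device hosting a larger remote neural network."""
    def __init__(self, load, capacity=10):
        self.load, self.capacity = load, capacity

    def accept(self):
        # the remote device decides whether to take the request
        return self.load < self.capacity

    def classify(self, sample):
        return "cat"  # placeholder for the remote network's output

def classify(sample, local_model, remote, threshold=0.8):
    """Run the small local network first; a low-confidence result is
    the triggering event that requests execution on the remote device,
    which may accept or decline."""
    label, conf = local_model(sample)
    if conf >= threshold:
        return label, "local"
    if remote.accept():
        return remote.classify(sample), "remote"
    return label, "local"  # fall back when the remote declines
```

The fallback branch matters: because the remote device may decline, the local device must be able to finish with its own (less confident) answer.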