G06F9/5038

Machine-learning application proxy for IoT devices including large-scale data collection using dynamic servlets with access control

An apparatus and method for providing ML processing for one or more ML applications operating on one or more Internet of Things (IoT) devices include receiving an ML request from an IoT device. The ML request can be generated by an ML application operating on the IoT device and can include input data collected by that ML application. An ML model to perform ML processing of the input data included in the ML request is identified and provided to an ML core, along with the input data, for ML processing. The ML core produces ML processing output data based on its ML processing of the input data using the ML model. The ML processing output data can be transmitted to the IoT device.
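The flow described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the class, field names, and the anomaly model are all invented for the example. The proxy receives an ML request, identifies a registered model for it, runs the model on the device's input data, and returns the output.

```python
# Minimal sketch of an ML proxy for IoT devices (all names illustrative):
# receive request -> identify model -> run ML processing -> return output.

class MLProxy:
    def __init__(self):
        self.models = {}  # request type -> model callable

    def register_model(self, request_type, model):
        self.models[request_type] = model

    def handle_request(self, request):
        # Identify the ML model for this request's type.
        model = self.models[request["type"]]
        # Run ML processing on the input data collected by the device.
        output = model(request["input"])
        # "Transmit" the ML processing output back to the IoT device.
        return {"device_id": request["device_id"], "output": output}

proxy = MLProxy()
proxy.register_model("anomaly", lambda xs: [x > 10 for x in xs])
reply = proxy.handle_request(
    {"device_id": "sensor-1", "type": "anomaly", "input": [3, 14, 9]})
print(reply["output"])  # [False, True, False]
```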

Configurable system for resolving requests received from multiple client devices in a network system

A system, a method, and a computer program for generating a dynamically configurable resolution route for transmitting a request object to one or more nodes in a network, comprising receiving a trigger signal from a first node, determining one or more destination nodes based on a resolution process, schema, or scenario, determining a pathway to the one or more destination nodes, generating a resolution route for transmitting the request object in the network, iteratively transmitting the request object to the one or more destination nodes based on the resolution route, receiving a request object resolution signal from a final destination node, and transmitting the request object resolution signal to the first node.
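A toy version of this resolution flow is sketched below; the scenario table, node names, and message shapes are assumptions made for illustration. A trigger from a first node selects the destination nodes from a scenario, the request object is forwarded iteratively along the route, and the final node's resolution signal is returned to the originator.

```python
# Sketch of a dynamically configured resolution route (names invented).
# A scenario maps to an ordered list of destination nodes.
SCENARIOS = {"billing": ["validator", "ledger", "approver"]}

def resolve(first_node, scenario, request_object):
    route = SCENARIOS[scenario]            # destination nodes + pathway
    for node in route:                     # iterative transmission
        request_object = {"handled_by": node, "payload": request_object}
    # Final destination node emits the resolution signal.
    resolution_signal = {"resolved_at": route[-1], "status": "ok"}
    # Signal is transmitted back to the first (triggering) node.
    return {"to": first_node, "signal": resolution_signal}

result = resolve("client-7", "billing", {"amount": 42})
print(result["signal"]["resolved_at"])  # approver
```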

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
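The translate-and-route step can be illustrated as follows; the address map, device ranges, and values are invented stand-ins, not the sled's actual layout. The controller looks up the physical address for the request's logical address, then routes the request to the memory device whose range contains that physical address.

```python
# Illustrative sled-level address translation (all values invented).
LOGICAL_TO_PHYSICAL = {0x10: 0x8000, 0x20: 0x9000}
DEVICE_RANGES = [("dimm0", 0x8000, 0x8FFF), ("dimm1", 0x9000, 0x9FFF)]

def route_access(logical_addr):
    # Map lookup: logical address -> associated physical address.
    physical = LOGICAL_TO_PHYSICAL[logical_addr]
    # Route to the memory device owning that physical address.
    for device, lo, hi in DEVICE_RANGES:
        if lo <= physical <= hi:
            return device, physical
    raise ValueError("unmapped physical address")

print(route_access(0x20))  # ('dimm1', 0x9000)
```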

SYSTEM AND METHOD FOR BATCH AND SCHEDULER MIGRATION IN AN APPLICATION ENVIRONMENT MIGRATION

A method of batch and scheduler migration assesses a batch job, scans its scheduling mechanism and components, ascertains a quantum of change for migrating the batch job to a target batch service, and forecasts an assessment statistic that provides at least one functional readiness indicator and a timeline to complete the migration of the batch job. The method generates a transformed batch job structure by breaking up the batch job according to the target batch service while retaining the scheduling mechanism. Further, it updates containerized batch service components of the target batch service as per the forecasted assessment statistic and the transformed batch job structure, and migrates the batch job to the target batch service by re-platforming the updated containerized batch service components.
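The assessment step might look like the following sketch; the field names and the effort heuristic are assumptions for illustration, not the patent's formula. The job's components are compared against what the target service supports natively, the difference is the quantum of change, and readiness plus a timeline are forecast from it.

```python
# Rough sketch of the migration assessment (heuristic is invented).
def assess_batch_job(job, target_service):
    # Quantum of change: components that must be re-platformed.
    changed = [c for c in job["components"]
               if c not in target_service["native"]]
    # Functional readiness: most components already supported natively.
    ready = len(changed) <= len(job["components"]) // 2
    timeline_days = 2 * len(changed)  # illustrative effort estimate
    return {"changed": changed, "ready": ready,
            "timeline_days": timeline_days}

job = {"name": "nightly-etl", "components": ["scheduler", "extract", "load"]}
target = {"native": ["scheduler", "extract"]}
print(assess_batch_job(job, target))
# {'changed': ['load'], 'ready': True, 'timeline_days': 2}
```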

BLOCKCHAIN-BASED INTERACTION METHOD AND SYSTEM FOR EDGE COMPUTING SERVICE
20230040149 · 2023-02-09

A blockchain-based interaction method and system for an edge computing service: using, as a bearing entity of an MECaaS, a device of a user that has an operating system environment; registering a computing power device of the user as an edge node by using the MECaaS; uploading or updating registration information of the edge node to a blockchain layer; issuing, by a requesting device as a data producer, a computing task to the MECaaS; invoking, by the MECaaS, a smart contract deployed on the blockchain layer; standardizing a data format of the computing task; matching a target edge node for the requesting device; and establishing M2M communication between the requesting device and the target edge node, so that the requesting device can transmit raw data to the target edge node, and the target edge node can feed back a computing result to the requesting device.
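The register-then-match portion can be sketched as below. The ledger list, registry fields, and "smallest sufficient node" policy are invented for illustration; a real deployment would use an actual blockchain layer and contract. Edge nodes register their capacity, and a contract-like matcher picks a target node for a task so the devices can then exchange data directly (M2M).

```python
# Simplified sketch of edge-node registration and matching (invented).
ledger = []  # stand-in for the blockchain registration layer

def register_edge_node(node_id, capacity):
    # Upload registration information of the edge node to the "ledger".
    ledger.append({"node": node_id, "capacity": capacity})

def match_target_node(task_demand):
    # Contract-like matching: nodes with enough computing power.
    candidates = [e for e in ledger if e["capacity"] >= task_demand]
    # One possible policy: pick the smallest sufficient node.
    return min(candidates, key=lambda e: e["capacity"])["node"]

register_edge_node("phone-a", 4)
register_edge_node("laptop-b", 16)
print(match_target_node(6))  # laptop-b
```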

SYSTEM FOR MONITORING AND OPTIMIZING COMPUTING RESOURCE USAGE OF CLOUD BASED COMPUTING APPLICATION
20230043579 · 2023-02-09

A system for monitoring and optimizing computing resource usage of a computing application may include predicting a first performance metric for the job load capacity of the computing application at optimal job concurrency and optimal resource utilization. The system may include generating an alerting threshold based on the first performance metric. The system may further include, in response to a difference between the alerting threshold and a job load of the computing application within an interval exceeding a threshold, predicting a second performance metric for the job load capacity of the computing application at optimal job concurrency and optimal resource utilization. The system may further include, in response to a difference between the first performance metric and the second performance metric exceeding a difference threshold, updating the alerting threshold with the job load capacity at the optimal resource utilization rate corresponding to the second performance metric.
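The threshold-update loop reduces to a simple rule, sketched below; the predictor and the numeric margins are stand-ins, not the patent's model. When observed load drifts too far from the alerting threshold, capacity is re-predicted, and the threshold is replaced only if the new prediction differs enough from the first.

```python
# Sketch of the adaptive alerting threshold (predictor is a toy model).
def update_threshold(threshold, first_metric, observed_load, predict,
                     drift_limit=10, diff_limit=5):
    # Observed job load drifted too far from the alerting threshold?
    if abs(threshold - observed_load) > drift_limit:
        second_metric = predict(observed_load)   # re-predict capacity
        # Only update if the two predictions differ enough.
        if abs(first_metric - second_metric) > diff_limit:
            return second_metric                 # updated threshold
    return threshold

predict = lambda load: load + 8                  # toy capacity model
print(update_threshold(100, 100, 130, predict))  # 138
```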

Frozen indices
11556388 · 2023-01-17

Methods and systems for searching a frozen index are provided. Exemplary methods may comprise: receiving an initial search and a subsequent search; loading the initial search and the subsequent search into a throttled thread pool; getting the initial search from the throttled thread pool; storing a first shard from mass storage in a memory in response to the initial search; performing the initial search on the first shard; providing first top search result scores from the initial search; and removing the first shard from the memory when the initial search is completed.
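The load-search-evict cycle can be sketched as follows; the shard contents, the matching rule, and the single-slot "throttling" are invented simplifications. Searches queue through a throttled pool that admits one at a time; each search loads its shard from mass storage into memory, scores it, and evicts the shard when done.

```python
# Sketch of searching a frozen index with a throttled pool (invented data).
from queue import Queue

MASS_STORAGE = {"shard-0": ["error timeout", "ok", "error disk"]}

def run_throttled_searches(searches):
    pool = Queue()
    for s in searches:
        pool.put(s)                           # load searches into the pool
    results = []
    while not pool.empty():
        term, shard_id = pool.get()           # throttled: one at a time
        shard = list(MASS_STORAGE[shard_id])  # load shard into memory
        hits = [doc for doc in shard if term in doc]
        results.append(hits)                  # top results for this search
        del shard                             # remove shard from memory
    return results

print(run_throttled_searches([("error", "shard-0")]))
# [['error timeout', 'error disk']]
```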

Traversing a large connected component on a distributed file-based data structure

A distributed system including multiple processing nodes. The distributed system can perform certain acts. The acts can include receiving a set of input nodes and a set of criteria. The acts can include obtaining an adjacency list representing a large connected component. The large connected component can include nodes, edges, and edge metadata. A quantity of the nodes of the large connected component can exceed 1 billion. The adjacency list can be distributed across the multiple processing nodes. The nodes of the large connected component can include the input nodes. The acts also can include performing one or more iterations of traversing the large connected component until a stopping condition is satisfied. Each iteration can include: processing a set of input nodes at the multiple processing nodes using the set of criteria to generate first data at the multiple processing nodes; determining a set of output nodes such that each output node of the set of output nodes is one hop from a respective input node of the set of input nodes; consolidating the first data from the multiple processing nodes to a first processing node of the multiple processing nodes; processing the first data at the first processing node; and assigning the set of input nodes for a subsequent iteration of the one or more iterations based on the set of output nodes when the stopping condition is not satisfied. The acts further can include outputting second data based on the first data received and processed at the first processing node during the one or more iterations. Other embodiments are disclosed.
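A small-scale sketch of the iteration loop is given below; the toy graph, criterion, and hop-count stopping condition stand in for a billion-node component spread across many processing nodes. Each iteration processes the current input nodes against the criteria, takes the one-hop neighbors as the next input set, and consolidates per-iteration data until the stopping condition holds.

```python
# Toy version of the one-hop traversal loop (graph and criterion invented).
ADJACENCY = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

def traverse(input_nodes, criterion, max_hops):
    consolidated = []
    for _ in range(max_hops):            # stopping condition: hop budget
        # "First data": input nodes that satisfy the criteria.
        matched = sorted(n for n in input_nodes if criterion(n))
        consolidated.append(matched)     # consolidate to one place
        # Output nodes: exactly one hop from some input node.
        output_nodes = {nb for n in input_nodes for nb in ADJACENCY[n]}
        if not output_nodes:
            break
        input_nodes = output_nodes       # inputs for the next iteration
    return consolidated

print(traverse({"a"}, lambda n: n != "c", max_hops=3))
# [['a'], ['b'], ['d']]
```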

Computation graph mapping in heterogeneous computer system
11556756 · 2023-01-17

The present disclosure relates to a method for scheduling a computation graph on heterogeneous computing resources. The method comprises generating an augmented computation graph that includes a first set of replica nodes corresponding to a first node in the computation graph and a second set of replica nodes corresponding to a second node in the computation graph, wherein the replica nodes of the first set are connected by edges to the replica nodes of the second set according to the dependency between the first node and the second node in the computation graph, adapting the augmented computation graph to include performance values for the edges, the replica nodes of the first set, and the replica nodes of the second set, and determining a path across the adapted computation graph via one replica node of the first set and one replica node of the second set based on the performance values.
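The idea can be illustrated with two operators and two devices; the devices, the cost tables, and the brute-force minimum are invented stand-ins for the patent's augmented graph and path search. Each graph node gets one replica per candidate device, replica-to-replica edges carry transfer costs, replicas carry execution costs, and the minimum-cost path selects one replica (i.e., one device) per node.

```python
# Sketch of device selection via an augmented graph (all costs invented).
REPLICA_COST = {("op1", "cpu"): 5, ("op1", "gpu"): 2,
                ("op2", "cpu"): 3, ("op2", "gpu"): 1}
EDGE_COST = {("cpu", "cpu"): 0, ("cpu", "gpu"): 4,
             ("gpu", "cpu"): 4, ("gpu", "gpu"): 0}

def best_mapping(devices=("cpu", "gpu")):
    best = None
    for d1 in devices:            # replica chosen for op1
        for d2 in devices:        # replica chosen for op2
            # Path cost = node performance values + edge performance value.
            cost = (REPLICA_COST[("op1", d1)] + EDGE_COST[(d1, d2)]
                    + REPLICA_COST[("op2", d2)])
            if best is None or cost < best[0]:
                best = (cost, d1, d2)
    return best

print(best_mapping())  # (3, 'gpu', 'gpu')
```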

Loading of neural networks onto physical resources

In some examples, a system generates a neural network comprising logical identifiers of compute resources. For executing the neural network, the system maps the logical identifiers to physical addresses of physical resources, and loads instructions of the neural network onto the physical resources, wherein the loading comprises converting the logical identifiers in the neural network to the physical addresses.
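The conversion performed at load time can be sketched as follows; the instruction format and the address map are assumptions made for the example. Instructions reference compute resources by logical identifier, and loading rewrites each identifier to the physical address the mapping assigns.

```python
# Sketch of loading a network by converting logical IDs to physical
# addresses (mapping and instruction format are invented).
LOGICAL_TO_PHYSICAL = {"core-A": 0x100, "core-B": 0x104}

def load_network(instructions):
    placed = []
    for op, logical_id in instructions:
        physical = LOGICAL_TO_PHYSICAL[logical_id]  # id -> address
        placed.append((op, physical))               # loaded instruction
    return placed

net = [("matmul", "core-A"), ("relu", "core-B")]
print(load_network(net))  # [('matmul', 256), ('relu', 260)]
```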