Patent classifications
G06F9/4843
Asynchronous execution graphs for autonomous vehicles
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for executing the operations represented by an asynchronous execution graph. One of the methods includes receiving data characterizing an asynchronous execution graph comprising one or more subgraphs, wherein each subgraph comprises a plurality of nodes connected by edges, the plurality of nodes comprising a source node, one or more processor nodes, and one or more sink nodes; receiving source data from an external system that corresponds to the source node of a first subgraph in the graph; in response, executing the operations represented by the processor nodes in the first subgraph; and executing the operations represented by each sink node in the first subgraph.
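The subgraph structure described above can be sketched in Python. This is a minimal, synchronous stand-in for the asynchronous graph; all node names and the execution order are illustrative, not from the patent.

```python
from collections import defaultdict

class AsyncExecutionGraph:
    """Minimal sketch of one subgraph: a source node feeds processor
    nodes, whose outputs flow along directed edges to sink nodes."""

    def __init__(self):
        self.ops = {}                    # node name -> callable
        self.edges = defaultdict(list)   # node name -> downstream node names
        self.kind = {}                   # node name -> "source"|"processor"|"sink"

    def add_node(self, name, kind, op):
        self.ops[name], self.kind[name] = op, kind

    def add_edge(self, src, dst):
        self.edges[src].append(dst)

    def on_source_data(self, source, data):
        # Receiving source data triggers the processor nodes, then the sinks.
        results = {source: data}
        pending = list(self.edges[source])
        while pending:
            node = pending.pop(0)
            # Gather outputs of every upstream node already executed.
            upstream = [u for u in self.ops if node in self.edges[u]]
            args = [results[u] for u in upstream if u in results]
            results[node] = self.ops[node](*args)
            pending.extend(self.edges[node])
        return results

g = AsyncExecutionGraph()
g.add_node("camera", "source", None)
g.add_node("detect", "processor", lambda frame: f"objects({frame})")
g.add_node("log", "sink", lambda det: f"logged:{det}")
g.add_edge("camera", "detect")
g.add_edge("detect", "log")
out = g.on_source_data("camera", "frame0")
# out["log"] == "logged:objects(frame0)"
```

A production version would run each node's operation asynchronously; the sequential loop here only shows the source → processor → sink data flow.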
EDGE FUNCTION BURSTING
One example method includes determining that local resources at an edge site are inadequate to support performance of a function needed by software running on the edge site, invoking a client agent, receiving an execution manifest in response to invoking the client agent, determining, by the client agent, where to execute the function, wherein the determining comprises identifying a target execution environment for the function and is based in part on information contained in the execution manifest, and transmitting, by the client agent, the execution manifest to a server agent of the target execution environment, wherein the execution manifest facilitates execution of the function in the target execution environment.
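The client agent's placement decision can be sketched as a manifest-driven selection. The manifest fields, environment names, and resource model below are assumptions for illustration; the patent does not specify the manifest's contents.

```python
def choose_target(manifest, local_capacity):
    """Hypothetical client-agent logic: run locally if the edge site can
    satisfy the function's requirements; otherwise pick the first
    candidate environment in the manifest whose capacity covers them."""
    need = manifest["requirements"]
    if all(local_capacity.get(k, 0) >= v for k, v in need.items()):
        return "local"
    for env in manifest["candidates"]:
        if all(env["capacity"].get(k, 0) >= v for k, v in need.items()):
            return env["name"]
    raise RuntimeError("no execution environment satisfies the manifest")

manifest = {
    "function": "video_transcode",
    "requirements": {"cpu": 8, "gpu": 1},
    "candidates": [
        {"name": "edge-site-b", "capacity": {"cpu": 4, "gpu": 0}},
        {"name": "core-dc-1", "capacity": {"cpu": 64, "gpu": 8}},
    ],
}
target = choose_target(manifest, local_capacity={"cpu": 2, "gpu": 0})
# target == "core-dc-1": the local site and edge-site-b lack a GPU
```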
BATCH SCHEDULING FUNCTION CALLS OF A TRANSACTIONAL APPLICATION PROGRAMMING INTERFACE (API) PROTOCOL
Embodiments described herein are generally directed to improving performance of a transactional API protocol by batch scheduling data dependent functions. In an example, a prescribed sequence of function calls associated with a transactional application programming interface (API) is received that is to be carried out by an executer (e.g., a compute service or a second processing resource remote from a first processing resource with which an application is associated) to perform an atomic unit of work on behalf of the application. Transport latency over an interconnect between the application and the executer is reduced by: (i) creating a batch representing the prescribed sequence of function calls in a form of a list of function descriptors in which variable arguments of the prescribed sequence of function calls are replaced with corresponding global memory references; and (ii) transmitting the batch via the interconnect as a single message.
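Steps (i) and (ii) can be sketched as follows: variable arguments are placed in a shared store and each descriptor carries only references, so the whole sequence crosses the interconnect as one message. The descriptor format and JSON encoding are illustrative assumptions.

```python
import json

def build_batch(calls, arg_store):
    """Sketch: replace each variable argument with a reference into a
    shared ('global') memory region, yielding a compact list of function
    descriptors that can be sent over the interconnect as one message."""
    descriptors = []
    for fn_name, args in calls:
        refs = []
        for a in args:
            arg_store.append(a)              # place argument in shared memory
            refs.append(len(arg_store) - 1)  # descriptor holds only the reference
        descriptors.append({"fn": fn_name, "arg_refs": refs})
    return json.dumps(descriptors)           # single message, not one per call

store = []
msg = build_batch(
    [("open", ["table_a"]), ("put", ["k1", "v1"]), ("commit", [])], store
)
# store holds the actual argument values; msg carries only indices
```

Sending one message for the whole prescribed sequence is what reduces transport latency relative to a round trip per function call.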
Methods and apparatus to credit background applications
Examples disclosed herein include means for comparing bandwidth usage of an application executing in a background of a device to a threshold to determine a state of the application as one of active or inactive, means for logging event records associated with the application, and means for crediting a duration of background execution of the application. In disclosed examples, the means for crediting is to determine whether the bandwidth usage pattern is spiked or continuous based on a first event record representative of background execution of the application being started, update a second event record to be representative of the background execution of the application being stopped when the bandwidth usage pattern is spiked and a timestamp of the second event record exceeds a temporal activity window, and determine the duration of the background execution of the application based on the first event record and the second event record.
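The crediting rule for spiked traffic can be sketched numerically: when the pattern is spiked and the stop record falls outside the temporal activity window, the stop record is updated before the duration is computed. All field names and the clipping rule's exact form are illustrative assumptions.

```python
def credit_background_duration(events, bandwidth_pattern, activity_window):
    """Sketch of the crediting logic: a 'spiked' pattern whose stop
    timestamp exceeds the temporal activity window is clipped to that
    window; a 'continuous' pattern is credited in full."""
    start, stop = events["start"], events["stop"]
    if bandwidth_pattern == "spiked" and stop - start > activity_window:
        stop = start + activity_window   # update the second (stop) event record
        events["stop"] = stop
    return stop - start

events = {"start": 100.0, "stop": 400.0}
dur = credit_background_duration(events, "spiked", activity_window=60.0)
# dur == 60.0; a 'continuous' pattern would have been credited 300.0
```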
Task optimization method and task optimization device in mobile robot
A task optimization method and a task optimization device in a mobile robot are provided. The task optimization method includes: obtaining at least one task type in a mobile robot and usage information generated when users use a task corresponding to each task type; separately performing machine learning on the usage information of all the users corresponding to each task type to obtain at least one piece of usage habit information corresponding to each task type and a usage probability thereof; and, based on the at least one piece of usage habit information corresponding to each task type, the usage probability thereof, and real-time usage information, optimizing in real time the task corresponding to the task type used by the user.
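The habit-learning step can be sketched with a simple frequency model. The abstract does not specify the learner, so this counting approach, the task types, and the habit labels are all illustrative stand-ins.

```python
from collections import Counter

def learn_usage_habits(usage_records):
    """Sketch standing in for the machine-learning step: estimate, per
    task type, the probability of each observed usage habit from all
    users' usage records."""
    habits = {}
    for task_type, habit in usage_records:
        habits.setdefault(task_type, Counter())[habit] += 1
    return {
        t: {h: n / sum(c.values()) for h, n in c.items()}
        for t, c in habits.items()
    }

records = [("cleaning", "morning"), ("cleaning", "morning"),
           ("cleaning", "evening"), ("patrol", "night")]
probs = learn_usage_habits(records)
# probs["cleaning"]["morning"] ~= 0.667: two of three cleaning runs at morning
```

The robot could then prioritize the highest-probability habit per task type when optimizing a task in real time against fresh usage information.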
EXTENDING PARALLEL SOFTWARE THREADS
A method for executing a software program, comprising: identifying in a program a plurality of host threads, each for performing some of a plurality of parallel sub-tasks of a task; and for each of the host threads: generating device threads, each associated with the host thread, each for one of the parallel sub-tasks associated therewith; generating a parent thread associated with the host thread for communicating with the device threads; configuring a host processing circuitry to execute the parent thread; and configuring at least one other processing circuitry to execute in parallel the device threads while the host processing circuitry executes the parent thread; and for at least one of the host threads: receiving by the parent thread a value from the at least one other processing circuitry, the value generated when executing at least one of the device threads associated with the at least one host thread.
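The host/parent/device structure can be sketched with ordinary threads. Plain Python threads here stand in for the "other processing circuitry" (e.g., an accelerator); the squaring sub-task and all names are illustrative.

```python
import threading, queue

def run_host_thread(subtasks):
    """Sketch: a parent thread mediates between the host and a set of
    device threads, one per parallel sub-task, and receives one value
    generated by each device thread."""
    results = queue.Queue()

    def device(task):                  # one device thread per sub-task
        results.put(task * task)

    def parent():                      # parent thread communicates with devices
        workers = [threading.Thread(target=device, args=(t,)) for t in subtasks]
        for w in workers:
            w.start()
        for w in workers:
            w.join()

    p = threading.Thread(target=parent)
    p.start()
    p.join()
    # Drain the values the device threads produced (order is nondeterministic).
    return sorted(results.get() for _ in subtasks)

values = run_host_thread([1, 2, 3])    # values == [1, 4, 9]
```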
Secure booting method, apparatus, device for embedded program, and storage medium
The present disclosure provides a secure booting method, apparatus, and device for an embedded program, and a storage medium. The method includes: when a boot program is running, acquiring data of an application program, including signature information, public key information, parameter information, encrypted data, and a digital check code; performing a signature check according to the signature information; performing an integrity check according to the digital check code if the signature check passes; and performing data decryption according to the public key information and the parameter information if the integrity check passes. The present disclosure may improve information security.
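The check chain can be sketched as below. This is a deliberately simplified stand-in: HMAC replaces the asymmetric signature scheme, SHA-256 serves as the digital check code, and a toy XOR replaces the real decryption governed by the public key and parameter information.

```python
import hashlib, hmac

def secure_boot(image, key):
    """Sketch of the boot-time chain: signature check, then integrity
    check, then decryption. Each stage runs only if the prior passed."""
    # 1. Signature check over the encrypted payload (HMAC stand-in).
    expected = hmac.new(key, image["encrypted_data"], hashlib.sha256).digest()
    if not hmac.compare_digest(expected, image["signature"]):
        raise RuntimeError("signature check failed")
    # 2. Integrity check against the digital check code.
    if hashlib.sha256(image["encrypted_data"]).hexdigest() != image["check_code"]:
        raise RuntimeError("integrity check failed")
    # 3. "Decryption" using parameter information (toy XOR stand-in).
    k = image["params"]["xor_key"]
    return bytes(b ^ k for b in image["encrypted_data"])

key = b"device-secret"
payload = bytes(b ^ 0x5A for b in b"app firmware")
image = {
    "encrypted_data": payload,
    "signature": hmac.new(key, payload, hashlib.sha256).digest(),
    "check_code": hashlib.sha256(payload).hexdigest(),
    "params": {"xor_key": 0x5A},
}
plain = secure_boot(image, key)        # plain == b"app firmware"
```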
Fault-tolerant and highly available configuration of distributed services
Fault-tolerant and highly available configuration of distributed services, including a computer-implemented method for role-based configuration discovery comprising: receiving a request comprising an identifier of a role; identifying a first key, in a replica of a distributed configuration store, comprising a first value that matches the role identifier; identifying one or more other key-value pairs associated in the replica with the first key; and returning, to an entity that sent the request, a response comprising the value of at least one key-value pair that is specific to the role. Also disclosed are techniques for log forwarding.
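The discovery steps can be sketched against an in-memory replica. The key layout, role names, and configuration fields below are illustrative assumptions about the store's schema.

```python
def discover_role_config(replica, role_id):
    """Sketch of role-based configuration discovery: find the first key
    whose value matches the requested role identifier, then return the
    other key-value pairs associated with that key in the replica."""
    role_key = next(k for k, v in replica.items() if v.get("role") == role_id)
    return {k: v for k, v in replica[role_key].items() if k != "role"}

replica = {
    "/services/svc-01": {"role": "log-forwarder",
                         "endpoint": "10.0.0.5:514",
                         "buffer_mb": "64"},
    "/services/svc-02": {"role": "indexer",
                         "endpoint": "10.0.0.6:9200"},
}
cfg = discover_role_config(replica, "log-forwarder")
# cfg == {"endpoint": "10.0.0.5:514", "buffer_mb": "64"}
```

Because the lookup runs against a local replica of the distributed store, discovery keeps working even when other replicas are unreachable, which is what makes the configuration path fault-tolerant.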
Self-monitoring
The present approach relates to event monitoring and management of an instance using a generated service map, allowing monitoring of CIs (configuration items, e.g., applications) and connections that are currently active in a user's specific instance. A self-monitoring solution is generated for a user (e.g., via an application) that depicts status, configuration, and errors related to the user's instance. In certain implementations, the present techniques involve applying internal knowledge of the workings of a user's instance and applications to perform the self-monitoring and determine when an alert should be generated. Further, the present techniques may involve making a determination to provide a user with a self-help solution in addition to, or based on, the self-monitoring of the user's instance.
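The alert determination can be sketched as a walk over the service map's active CIs. The CI fields, statuses, and threshold are illustrative assumptions, not details from the abstract.

```python
def evaluate_instance(cis, thresholds):
    """Sketch of the self-monitoring determination: flag any active CI
    that is down or whose error count exceeds a configured threshold."""
    alerts = []
    for ci in cis:
        if ci["status"] == "down" or ci["errors"] > thresholds["max_errors"]:
            alerts.append({"ci": ci["name"], "status": ci["status"]})
    return alerts

service_map = [
    {"name": "web-app", "status": "up", "errors": 2},
    {"name": "db-connector", "status": "down", "errors": 0},
    {"name": "cache", "status": "up", "errors": 15},
]
alerts = evaluate_instance(service_map, {"max_errors": 10})
# flags "db-connector" (down) and "cache" (error count exceeded)
```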
Gate formation for a quantum processor
In a general aspect, a gate is formed for a quantum processor. In some implementations, an arbitrary program is received. The arbitrary program includes a first sequence of quantum logic gates, which includes a parametric XY gate. A native gate set is identified, which includes a set of quantum logic gates associated with a quantum processing unit. A second sequence of quantum logic gates corresponding to the parametric XY gate is identified, which includes a parametric quantum logic gate. Each of the quantum logic gates in the second sequence is selected from the native gate set. A native program is generated. The native program includes a third sequence of quantum logic gates. The third sequence of quantum logic gates corresponds to the first sequence of quantum logic gates and includes the second sequence of quantum logic gates. The native program is provided for execution by the quantum processing unit.
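The compilation step can be sketched as a rewrite over gate sequences: native gates pass through unchanged, and a parametric gate outside the native set is replaced by an equivalent native sequence. The particular XY decomposition below is a placeholder, not a verified gate identity, and the native set is illustrative.

```python
def compile_to_native(program, native_set, decompositions):
    """Sketch of generating a native program: keep gates already in the
    native gate set; replace any other gate with its parametric
    decomposition drawn from the native set."""
    native_program = []
    for gate, param in program:
        if gate in native_set:
            native_program.append((gate, param))
        else:
            native_program.extend(decompositions[gate](param))
    return native_program

native_set = {"RZ", "RX", "CZ"}
decompositions = {
    # Hypothetical parametric rewrite of XY(theta) over native gates.
    "XY": lambda theta: [("RZ", theta / 2), ("CZ", None), ("RZ", -theta / 2)],
}
prog = [("RX", 1.57), ("XY", 0.8), ("RZ", 0.1)]
native = compile_to_native(prog, native_set, decompositions)
# the XY gate is expanded in place; every emitted gate is native
```

The resulting third sequence preserves the first sequence's order while embedding the second (decomposed) sequence, ready for execution on the quantum processing unit.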