Patent classifications
G06F9/485
LEDGER-BASED VERIFIABLE CODE EXECUTION
A system includes a ledger on which a task giver may register a task. The task may include executable code. A task solver may accept the task and execute the code to produce a solver output that is recorded on the ledger. Verifiers may provide competing verifier outputs, which may also be recorded on the ledger. The solver and verifiers may compare their outputs to determine whether there is agreement. Agreement may signify consistent and accurate execution of the code; disagreement may indicate the presence of errors. In some cases, the solver and verifiers may compete in a contention-based protocol in which a solver may assert control of tokens when the solver identifies an error in verifier execution. Additionally or alternatively, a verifier may assert control of tokens when the verifier identifies an error in solver execution.
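The agreement check at the heart of this abstract can be sketched in a few lines. This is a minimal illustration, not the patented protocol: the `Ledger` class, the use of SHA-256 digests, and the all-digests-equal rule are assumptions made for the example.

```python
# Hypothetical sketch of the ledger comparison step: a solver output and
# competing verifier outputs are recorded, and agreement or disagreement
# is judged by comparing digests of the recorded outputs.
import hashlib
from dataclasses import dataclass, field

@dataclass
class Ledger:
    entries: list = field(default_factory=list)

    def record(self, role, output):
        # record a tamper-evident digest of the output rather than raw data
        digest = hashlib.sha256(repr(output).encode()).hexdigest()
        self.entries.append((role, digest))
        return digest

def check_agreement(ledger):
    """True when every recorded output digest is identical."""
    digests = {d for _, d in ledger.entries}
    return len(digests) == 1

ledger = Ledger()
ledger.record("solver", 42)
ledger.record("verifier-1", 42)
ledger.record("verifier-2", 42)
assert check_agreement(ledger)       # agreement: consistent execution

ledger.record("verifier-3", 41)      # one diverging output
assert not check_agreement(ledger)   # disagreement: an error somewhere
```

In the contention-based variant described above, a disagreement like the final one would trigger the token-control dispute rather than merely flagging an error.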
SYSTEMS AND METHODS FOR AI META-CONSTELLATION
Systems and methods for device constellation are provided according to certain embodiments. For example, a method for device constellation includes the steps of: receiving a request, the request including a plurality of request parameters; decomposing the request into one or more tasks; selecting one or more edge devices based at least in part on the plurality of request parameters; assigning the one or more tasks to the one or more selected edge devices to cause the one or more selected edge devices to perform the one or more tasks; and receiving one or more task results from the one or more selected edge devices.
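The claimed decompose/select/assign/collect flow can be sketched as follows. All names, the capability-matching selection rule, and the round-robin assignment are illustrative assumptions, not details from the patent.

```python
# Minimal sketch of the device-constellation flow: decompose a request into
# tasks, select edge devices matching the request parameters, assign the
# tasks, and collect per-device results.

def decompose(request):
    return request["tasks"]                 # the request carries its task list

def select_devices(devices, params):
    return [d for d in devices if params["capability"] in d["capabilities"]]

def constellate(request, devices):
    tasks = decompose(request)
    chosen = select_devices(devices, request["params"])
    results = []
    for i, task in enumerate(tasks):
        device = chosen[i % len(chosen)]    # round-robin over selected devices
        results.append((device["id"], task, f"done:{task}"))
    return results

devices = [
    {"id": "edge-a", "capabilities": {"vision"}},
    {"id": "edge-b", "capabilities": {"vision", "audio"}},
    {"id": "edge-c", "capabilities": {"audio"}},
]
request = {"params": {"capability": "vision"}, "tasks": ["detect", "classify"]}
out = constellate(request, devices)
assert [r[0] for r in out] == ["edge-a", "edge-b"]
```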
APPLICATION USER JOURNEY MANAGEMENT
An application activation method includes enabling an activation of one or more applications, including an activation of a first application, on a computing device. A first plurality of interactions of a user with the one or more applications on the computing device are detected. A first offer to renew the activation of the first application is generated based on the first plurality of interactions of the user. The first offer is provided to the user via the computing device. An acceptance of the first offer is received from the user, and the activation of the first application is renewed responsive to receiving the acceptance of the first offer.
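The detect-interactions/generate-offer/renew loop above can be sketched briefly. The usage threshold and discount policy here are invented for illustration; the patent does not specify how the offer is derived from the interactions.

```python
# Illustrative sketch: generate a renewal offer for an application when the
# user's detected interactions with it exceed a usage threshold.

def generate_offer(interactions, app, threshold=5):
    uses = sum(1 for i in interactions if i["app"] == app)
    if uses >= threshold:
        return {"app": app, "discount": 0.2}   # frequent user: discounted renewal
    return {"app": app, "discount": 0.0}

def renew(activations, offer, accepted):
    if accepted:                               # renew only on acceptance
        activations[offer["app"]] = "renewed"
    return activations

interactions = [{"app": "editor"}] * 6 + [{"app": "player"}] * 2
offer = generate_offer(interactions, "editor")
assert offer["discount"] == 0.2
activations = renew({"editor": "expiring"}, offer, accepted=True)
assert activations["editor"] == "renewed"
```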
TIME MANAGEMENT FOR ENHANCED QUANTUM CIRCUIT OPERATION EMPLOYING A HYBRID CLASSICAL/QUANTUM SYSTEM
Systems, computer-implemented methods and/or computer program products are provided for facilitating time management of a quantum program at one or more nodes of a system, such as a hybrid classical/quantum system. A system, such as a classical portion of the hybrid system, can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a time management component that can communicate with a node to trigger the node to execute one or more quantum program instructions relative to a counter of the node that is advanced by the communicating. The time management component can advance the counter at the node based upon a combination of the time of another node and a determined actual propagation time for the communicating.
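The counter-advance rule in the last sentence can be sketched as a simple model. The `Node` class and the specific combination (sender time plus measured propagation delay, never moving backwards) are assumptions for illustration.

```python
# Sketch of the claimed counter advance: a node's counter is moved forward
# based on a combination of another node's time and the determined actual
# propagation time of the triggering communication.

class Node:
    def __init__(self):
        self.counter = 0
        self.log = []

    def trigger(self, sender_time, propagation_time):
        # advance to the sender's time plus the actual propagation delay,
        # without ever regressing the counter
        self.counter = max(self.counter, sender_time + propagation_time)
        self.log.append(self.counter)

node = Node()
node.trigger(sender_time=100, propagation_time=3)
assert node.counter == 103
node.trigger(sender_time=90, propagation_time=2)   # stale message: no regression
assert node.counter == 103
```

This mirrors the familiar Lamport-style rule of advancing a local clock past a received timestamp, here extended with the propagation term the abstract emphasizes.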
Mobile service applications
Techniques for improved mobile application architectures and service communication protocols are discussed herein. Some embodiments may include a mobile device configured to provide a mobile application including multiple service applications. The service applications may execute asynchronously and in separate containers, providing service-oriented architecture (SOA)-like services with respect to other portions of the mobile application, or even external applications. The separation of a monolithic mobile application into separate service applications provides advantages in terms of application performance, development, and maintenance. For example, a subset of the service applications may be started up and executed on demand to improve device resource utilization efficiency.
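The on-demand startup mentioned in the final sentence can be sketched with a lazy service registry. The `ServiceHost` class and its methods are invented names; real containerization and asynchronous execution are elided.

```python
# Sketch of on-demand service startup: service applications are registered
# up front but only instantiated when first called, so only the needed
# subset consumes device resources.

class ServiceHost:
    def __init__(self):
        self.registered = {}    # name -> factory
        self.running = {}       # name -> started service

    def register(self, name, factory):
        self.registered[name] = factory

    def call(self, name, *args):
        if name not in self.running:            # lazy start on first use
            self.running[name] = self.registered[name]()
        return self.running[name](*args)

host = ServiceHost()
host.register("auth", lambda: (lambda user: f"token-for-{user}"))
host.register("billing", lambda: (lambda: "invoice"))

assert host.call("auth", "alice") == "token-for-alice"
assert set(host.running) == {"auth"}   # only the requested service started
```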
Non-cached loads and stores in a system having a multi-threaded, self-scheduling processor
Representative apparatus, method, and system embodiments are disclosed for a self-scheduling processor which also provides additional functionality. Representative embodiments include a self-scheduling processor, comprising: a processor core adapted to execute instructions; and a core control circuit adapted to automatically schedule an instruction for execution by the processor core in response to a received work descriptor data packet. In a representative embodiment, the processor core is further adapted to execute a non-cached load instruction to designate a general purpose register rather than a data cache for storage of data received from a memory circuit. The core control circuit is also adapted to schedule a fiber create instruction for execution by the processor core, and to generate one or more work descriptor data packets to another circuit for execution of corresponding execution threads. Event processing, data path management, system calls, memory requests, and other new instructions are also disclosed.
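The non-cached load semantics described above can be modeled at a toy level: data arrives from memory into a general-purpose register without allocating a data-cache line. The dictionaries standing in for memory, the register file, and the cache are purely illustrative.

```python
# Toy model of a non-cached load: a normal load fills the data cache,
# while a non-cached load writes the value straight into a general-purpose
# register without allocating a cache line.

memory = {0x10: 7, 0x20: 9}

def load(addr, regs, cache, rd, cached=True):
    value = memory[addr]
    if cached:
        cache[addr] = value    # normal load allocates a cache line
    regs[rd] = value           # destination general-purpose register
    return value

regs, cache = {}, {}
load(0x10, regs, cache, rd="r1", cached=True)
load(0x20, regs, cache, rd="r2", cached=False)   # non-cached load
assert regs == {"r1": 7, "r2": 9}
assert 0x20 not in cache                         # no cache pollution
```

Such loads are useful for streaming or one-touch data, where caching would only evict lines that will be reused.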
State transitions for a set of services
Examples herein relate to developing an orchestration plan. Examples disclose the development of a representation of a set of services wherein each service relates to other services via different types of relationships. The examples apply a set of dependency rules for each type of relationship at each service within the set of services such that the application of the set of dependency rules creates inter-service dependencies between state transitions of the set of services. Based on the creation of the inter-service dependencies, the orchestration plan is developed which includes a sequenced order of the state transitions for the set of services.
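The rule-driven construction of inter-service dependencies and the resulting sequenced order can be sketched with a topological sort. The two relationship kinds and their rules below are assumptions for the example, not the patent's rule set.

```python
# Sketch: apply a per-relationship dependency rule to build dependencies
# between (service, transition) pairs, then topologically sort them into
# the orchestration plan's sequenced order.
from graphlib import TopologicalSorter

def build_dependencies(relationships):
    deps = {}
    for kind, a, b in relationships:
        if kind == "contains":
            # rule: a containing service deploys before its child
            deps.setdefault((b, "deploy"), set()).add((a, "deploy"))
        elif kind == "uses":
            # rule: a dependency deploys before its consumer
            deps.setdefault((a, "deploy"), set()).add((b, "deploy"))
    return deps

relationships = [("contains", "platform", "db"), ("uses", "app", "db")]
plan = list(TopologicalSorter(build_dependencies(relationships)).static_order())
assert plan.index(("platform", "deploy")) < plan.index(("db", "deploy"))
assert plan.index(("db", "deploy")) < plan.index(("app", "deploy"))
```

The sequenced order is exactly the `plan` list: every state transition appears after all transitions it depends on.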
Automation system and method
A computer-implemented method, computer program product and computing system for receiving a complex task; processing the complex task to define a plurality of discrete tasks each having a discrete goal; executing the plurality of discrete tasks on a plurality of machine-accessible public computing platforms; determining if any of the plurality of discrete tasks failed to achieve its discrete goal; and if a specific discrete task failed to achieve its discrete goal, defining a substitute discrete task having a substitute discrete goal.
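The decompose/execute/substitute loop above can be sketched directly. The platform predicate and the substitution policy are stand-ins; the patent leaves both open.

```python
# Minimal sketch of the claimed loop: run each discrete task on a platform,
# and when a discrete goal is not achieved, define and run a substitute
# discrete task in its place.

def execute(task, platform):
    return platform(task)                    # True if the discrete goal was met

def run_complex(discrete_tasks, platform, substitute_for):
    completed = []
    for task in discrete_tasks:
        if execute(task, platform):
            completed.append(task)
        else:
            sub = substitute_for(task)       # substitute task with substitute goal
            if execute(sub, platform):
                completed.append(sub)
    return completed

platform = lambda task: not task.startswith("blocked")
subs = lambda task: task.replace("blocked", "fallback")

done = run_complex(["fetch", "blocked-parse", "store"], platform, subs)
assert done == ["fetch", "fallback-parse", "store"]
```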
METHOD AND APPARATUS FOR DYNAMICALLY ADJUSTING PIPELINE DEPTH TO IMPROVE EXECUTION LATENCY
Apparatus and method for managing pipeline depth of a data processing device. For example, one embodiment of an apparatus comprises: an interface to receive a plurality of work requests from a plurality of clients; and a plurality of engines to perform the plurality of work requests; wherein the work requests are to be dispatched to the plurality of engines from a plurality of work queues, the work queues to store a work descriptor per work request, each work descriptor to include information needed to perform a corresponding work request, wherein the plurality of work queues include a first work queue to store work descriptors associated with first latency characteristics and a second work queue to store work descriptors associated with second latency characteristics; and engine configuration circuitry to configure a first engine to have a first pipeline depth based on the first latency characteristics and to configure a second engine to have a second pipeline depth based on the second latency characteristics.
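The configuration step can be sketched as mapping each queue's latency class to a pipeline depth. The two-class table and the specific depths are illustrative assumptions.

```python
# Sketch: two work queues carry descriptors with different latency
# characteristics, and each engine's pipeline depth is configured from
# the latency class of the queue it serves.

def configure_engine(latency_class):
    # low-latency work favors a shallow pipeline (less queuing delay);
    # throughput-oriented work favors a deep one (more in flight)
    return {"low": 2, "high": 8}[latency_class]

work_queues = {
    "q0": {"latency": "low",  "descriptors": ["req-a", "req-b"]},
    "q1": {"latency": "high", "descriptors": ["req-c"]},
}
engines = {name: configure_engine(q["latency"]) for name, q in work_queues.items()}
assert engines == {"q0": 2, "q1": 8}
```

The design intuition is the classic latency/throughput trade-off: a deeper pipeline keeps more work in flight but adds per-request traversal time, so depth is chosen per latency class rather than globally.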
APP MIGRATION SYSTEM AND INFORMATION STORAGE MEDIUM
An app migration system including at least one processor which places an app in one of an inside and an outside of a space joined by at least one user in a user group in which information is shareable; sets, for the app, a permission corresponding to a placement location of the app; migrates the app along one of a route between a public space and a private space and a route between the inside and the outside of the space; and sets, for the migrated app, a permission corresponding to a migration destination of the app.
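The place/derive-permission/migrate cycle can be sketched with a lookup table. The permission table and function names are invented for illustration; the patent does not enumerate the actual permissions.

```python
# Sketch: an app's permission is derived from its placement (a public or
# private space, inside or outside a shared space), and migrating the app
# re-derives the permission for the migration destination.

PERMISSIONS = {
    ("shared", "inside"):  "group-shareable",
    ("shared", "outside"): "owner-only",
    ("public", None):      "everyone",
    ("private", None):     "owner-only",
}

def place(app, space, side=None):
    app["location"] = (space, side)
    app["permission"] = PERMISSIONS[(space, side)]
    return app

def migrate(app, space, side=None):
    return place(app, space, side)   # permission follows the destination

app = place({"name": "notes"}, "shared", "inside")
assert app["permission"] == "group-shareable"
migrate(app, "private")
assert app["permission"] == "owner-only"
```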