Patent classifications
G06F2209/548
COORDINATING ASYNCHRONOUS COMMUNICATION AMONG MICROSERVICES
Techniques are described relating to coordinating asynchronous communication among a plurality of client microservices in a managed services domain of a cloud computing environment. An associated computer-implemented method includes receiving at a single request topic queue of a message broker application programming interface (API) at least one message associated with a topic from at least one publisher microservice among the plurality of client microservices. The method further includes identifying an authorization identification parameter included in each of the at least one message. The method further includes publishing each of the at least one message to a respective bucket within a single response topic queue of the message broker API, the respective bucket corresponding to one of at least one subscriber microservice among the plurality of client microservices associated with the authorization identification parameter included in the message.
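The routing described above can be sketched minimally in Python: a single request queue, an authorization identification parameter carried in each message, and one response bucket per subscriber. Class and field names here are illustrative, not from the patent.

```python
from collections import defaultdict, deque

class MessageBroker:
    """Sketch of a broker that routes messages from a single request
    topic queue into per-subscriber buckets of a response topic queue,
    keyed by an authorization identification parameter."""

    def __init__(self):
        self.request_queue = deque()                # single request topic queue
        self.response_buckets = defaultdict(deque)  # auth id -> subscriber bucket

    def publish(self, auth_id, payload):
        # A publisher microservice posts a message carrying an auth id.
        self.request_queue.append({"auth_id": auth_id, "payload": payload})

    def route(self):
        # Identify the auth id in each message and publish the message
        # to the bucket of the subscriber associated with that id.
        while self.request_queue:
            msg = self.request_queue.popleft()
            self.response_buckets[msg["auth_id"]].append(msg["payload"])

    def consume(self, auth_id):
        # A subscriber microservice drains only its own bucket.
        bucket = self.response_buckets[auth_id]
        return [bucket.popleft() for _ in range(len(bucket))]

broker = MessageBroker()
broker.publish("svc-a", "order created")
broker.publish("svc-b", "invoice ready")
broker.publish("svc-a", "order shipped")
broker.route()
print(broker.consume("svc-a"))  # ['order created', 'order shipped']
```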
Method and Control Device for Returning of Command Response Information, and Electronic Device
A method and a control device for returning of command response information, and an electronic device are provided. The method includes: receiving response information for a command request, the response information carrying a status identification and a level identification of the command request; storing the response information in a corresponding level of a data queue in accordance with the level identification, where the data queue includes multiple levels, and each level of the data queue is used to store one or more pieces of response information; scanning all levels of the data queue, and determining a level in which all parts of the response information are collected as a candidate level; determining a first piece of response information in accordance with a status identification of the response information stored in the candidate level; and outputting the first piece of response information.
MULTIPLE MODULE BOOTUP OPERATION
A system and method for coordinating a multiple module bootup operation are described. In various implementations, an integrated circuit includes at least one or more processors and on-chip memory. The on-chip memory has a higher security level than off-chip memory. One of the one or more processors is designated as a security processor. During the processing of the multiple boot steps of a bootup operation, the security processor initializes a message queue in on-chip memory. The security processor also loads multiple modules from off-chip memory into the on-chip memory. The security processor executes the multiple loaded modules in an order based on using the message queue to implement inter-module communication among the plurality of boot modules. The security processor transfers requested data between modules using messages from the modules and data storage of the message queue. The boot steps complete without reloading any module from off-chip memory.
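The queue-driven execution order can be sketched as follows, with each boot module loaded once and driven purely by messages; the module names, message format, and shared data store are all illustrative assumptions.

```python
from collections import deque

def run_boot_modules(modules):
    """Sketch: all boot modules are loaded once, then executed in an
    order driven by a shared message queue that also carries requested
    data between modules."""
    queue = deque([("start", None)])   # message queue held in on-chip memory
    store = {}                         # data storage of the message queue
    order = []
    while queue:
        topic, data = queue.popleft()
        handler = modules.get(topic)
        if handler is None:
            continue
        order.append(topic)
        # Each module may emit follow-up messages for other modules.
        for next_topic, payload in handler(data, store):
            queue.append((next_topic, payload))
    return order

# Hypothetical boot steps: each returns follow-up messages.
modules = {
    "start":     lambda d, s: [("load_keys", "rootkey")],
    "load_keys": lambda d, s: (s.update(key=d) or [("verify", None)]),
    "verify":    lambda d, s: [] if s.get("key") else [("start", None)],
}
print(run_boot_modules(modules))  # ['start', 'load_keys', 'verify']
```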
Asynchronous message passing for large graph clustering
Systems and methods for sending asynchronous messages include receiving, using at least one processor, at a node in a distributed graph, a message with a first value and determining, at the node, that the first value replaces a current value for the node. In response to determining that the first value replaces the current value, the method also includes setting a status of the node to active and sending messages including the first value to neighboring nodes. The method may also include receiving the messages sent to the neighboring nodes at a priority queue, which propagates messages in an intelligently asynchronous manner. When the priority queue propagates the messages to the neighboring nodes, the status of the node is set to inactive. The first value may be a cluster identifier or a shortest path identifier.
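A compact way to see the mechanism is min-label propagation over a priority queue: a node only activates when an arriving value replaces its current value, and smaller labels propagate first. This is a sketch of the idea, not the patented implementation; the graph and seed labels are made up.

```python
import heapq

def cluster_labels(graph, seeds):
    """Sketch of asynchronous min-label propagation: a node that receives
    a value smaller than its current value becomes active and messages
    its neighbors via a priority queue."""
    values = {n: float("inf") for n in graph}
    pq = [(label, node) for node, label in seeds.items()]
    heapq.heapify(pq)
    while pq:
        label, node = heapq.heappop(pq)
        if label >= values[node]:
            continue                   # message does not replace current value
        values[node] = label           # node turns active, adopts new value
        for nbr in graph[node]:        # send messages to neighboring nodes
            heapq.heappush(pq, (label, nbr))
        # node turns inactive once its messages are enqueued
    return values

g = {1: [2], 2: [1, 3], 3: [2], 4: []}
print(cluster_labels(g, {1: 1, 4: 4}))  # {1: 1, 2: 1, 3: 1, 4: 4}
```

Here nodes 1–3 form one connected component and converge on cluster identifier 1, while isolated node 4 keeps its own seed.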
High density hosting for messaging service
Aspects of the subject matter described herein relate to migrating messages for a messaging service. In aspects, a determination is made that messages need to be migrated based on a threshold being crossed. In response, an agent is instructed to migrate data associated with the messages to another location. The agent uses various factors to determine one or more queues to migrate. While a queue is being migrated, during a first portion of the migration, messages may be added to and removed from the queue as senders send new messages and receivers consume messages. During a second portion of the migration, the queue is frozen to disallow the queue to be used for receiving new messages and delivering queued messages. The migration may be orchestrated to attempt to achieve certain goals.
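The two-phase scheme can be sketched as a live bulk copy followed by a short frozen catch-up pass. The class below is a single-threaded illustration under that assumption; a real migration would interleave the phases with concurrent senders and receivers.

```python
from collections import deque

class MigratableQueue:
    """Sketch of two-phase queue migration: phase one copies entries while
    the queue stays live; phase two freezes it, disallowing sends and
    receives, and moves anything that arrived during phase one."""

    def __init__(self):
        self.entries = deque()
        self.frozen = False

    def send(self, msg):
        if self.frozen:
            raise RuntimeError("queue frozen for migration")
        self.entries.append(msg)

    def receive(self):
        if self.frozen:
            raise RuntimeError("queue frozen for migration")
        return self.entries.popleft()

    def migrate_to(self, target):
        # Phase 1: bulk-copy while senders/receivers keep using the queue.
        snapshot = list(self.entries)
        target.entries.extend(snapshot)
        # Phase 2: freeze, then move anything that arrived during phase 1.
        self.frozen = True
        for msg in list(self.entries)[len(snapshot):]:
            target.entries.append(msg)
        self.entries.clear()

src, dst = MigratableQueue(), MigratableQueue()
src.send("a")
src.send("b")
src.migrate_to(dst)
print(list(dst.entries))  # ['a', 'b']
```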
Message Management Method and Apparatus, and Serverless System
A serverless system includes a message management apparatus. The message management apparatus may receive a first message, where the first message is used to indicate to schedule a first stateful function to operate a first state instance; store the first message in a first message queue corresponding to the first state instance, where the first message queue is further used to store a plurality of messages, and each of the plurality of messages is used to indicate one stateful function to operate the first state instance; and transfer a second message to a second stateful function corresponding to the second message, and run the second stateful function to operate the first state instance when the first state instance is in an idle state, where the second message is a message located at a foremost end of the first message queue.
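The core invariant above — one FIFO queue per state instance, with the foremost message dispatched only while the instance is idle — can be sketched as below. The manager class and the example functions are illustrative.

```python
from collections import defaultdict, deque

class MessageManager:
    """Sketch: one FIFO message queue per state instance; the message at
    the foremost end is transferred to its stateful function only when
    the instance it targets is idle."""

    def __init__(self):
        self.queues = defaultdict(deque)   # state instance -> message queue
        self.busy = set()

    def receive(self, instance, fn, payload):
        # Store the message in the queue of the state instance it targets.
        self.queues[instance].append((fn, payload))

    def dispatch(self, instance, state):
        # Transfer the foremost message and run its stateful function,
        # but only while the instance is idle.
        if instance in self.busy or not self.queues[instance]:
            return state
        self.busy.add(instance)
        fn, payload = self.queues[instance].popleft()
        state = fn(state, payload)
        self.busy.discard(instance)        # instance is idle again
        return state

mgr = MessageManager()
mgr.receive("counter", lambda s, p: s + p, 2)
mgr.receive("counter", lambda s, p: s * p, 10)
state = mgr.dispatch("counter", 0)      # first stateful function: 0 + 2
state = mgr.dispatch("counter", state)  # second stateful function: 2 * 10
print(state)  # 20
```

Because the queue serializes the stateful functions, the two operations on the `counter` instance never interleave.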
APPLICATION PROGRAMMING INTERFACE (API) SERVER FOR CORRELATION ENGINE AND POLICY MANAGER (CPE), METHOD AND COMPUTER PROGRAM PRODUCT
An application programming interface (API) server for a correlation engine and policy manager (CPE) system includes a processor, and a memory coupled to the processor. The CPE system includes a plurality of components of various component types, and each component among the plurality of components is configured to perform at least one corresponding processing on event data input to the CPE system. The memory is configured to store executable instructions that, when executed by the processor, cause the processor to perform at least one of registering, removing or updating a configuration of at least one component among the plurality of components of the CPE system, or changing a number of components of a same component type among the various component types, to scale up or down the CPE system.
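The registration and scaling operations described above can be sketched as a small component registry. All names here (`CpeApiServer`, `scale_to`, the component type) are hypothetical stand-ins for the API's operations.

```python
class CpeApiServer:
    """Sketch of the CPE scaling API: a registry of components keyed by
    component type; scale_to changes the number of components of a
    given type to scale the CPE system up or down."""

    def __init__(self):
        self.components = {}   # component type -> list of component configs

    def register(self, ctype, config):
        self.components.setdefault(ctype, []).append(config)

    def remove(self, ctype):
        self.components.pop(ctype, None)

    def scale_to(self, ctype, count):
        # Add clones of an existing config, or drop extras, until the
        # number of components of this type matches the target count.
        pool = self.components.setdefault(ctype, [])
        template = pool[0] if pool else {}
        while len(pool) < count:
            pool.append(dict(template))
        del pool[count:]

api = CpeApiServer()
api.register("filter", {"rule": "drop-noise"})
api.scale_to("filter", 3)
print(len(api.components["filter"]))  # 3
```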
NETWORK ON CHIP WITH TASK QUEUES
A network on a chip architecture uses hardware queues to distribute multiple-instruction tasks to processors dedicated to performing that task. By repeatedly using the same processors to perform the same task, the frequency at which the processors access memory to retrieve instructions is reduced. If a hardware queue runs dry and a processor remains idle, the processor will determine which queues have tasks and rededicate itself to performing a new task that has higher demand, without requiring the intervention of centralized load balancing software or specialized programming.
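The rededication decision can be sketched as a purely local policy: when its own queue runs dry, a processor picks the task whose queue has the most pending work, with no central load balancer. The queue names below are illustrative.

```python
def pick_task_queue(queues, current):
    """Sketch: an idle processor whose queue ran dry inspects the other
    hardware queues and rededicates itself to the task with the highest
    demand, without centralized load balancing."""
    if queues[current]:
        return current                 # queue not dry: stay on current task
    # choose the queue with the most pending tasks
    return max(queues, key=lambda name: len(queues[name]))

queues = {"fft": [], "matmul": ["t1", "t2", "t3"], "crc": ["t4"]}
print(pick_task_queue(queues, "fft"))  # 'matmul'
```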
Tracking a relative arrival order of events being stored in multiple queues using a counter
An order controller calculates an absolute value of a difference between a first counter value stored with a first next entry set to an active status in a first queue from among at least two queues and a second counter value stored with a second next entry set to the active status in a second queue. The order controller compares the absolute value with a counter midpoint value. The order controller, responsive to the absolute value being less than the counter midpoint value, selects the smaller of the first counter value and the second counter value as a next event to process. The order controller, responsive to the absolute value being greater than or equal to the counter midpoint value, selects the larger of the first counter value and the second counter value as the next event to process.
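The midpoint comparison is what makes the ordering robust to counter wraparound: if the two head-entry values are less than half the counter range apart, the smaller one is older; otherwise the counter has wrapped and the larger one is older. A sketch, assuming an 8-bit counter:

```python
def next_event(first, second, counter_bits=8):
    """Sketch of wraparound-aware selection: compare the absolute
    difference of the two head-entry counter values against the
    counter midpoint to decide which entry arrived first."""
    midpoint = 1 << (counter_bits - 1)    # e.g. 128 for an 8-bit counter
    if abs(first - second) < midpoint:
        return min(first, second)         # no wraparound: smaller is older
    return max(first, second)             # wraparound: larger is older

print(next_event(5, 9))     # 5   (|5-9| < 128, smaller value arrived first)
print(next_event(250, 3))   # 250 (|250-3| >= 128, counter wrapped at 256)
```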
Link optimization for callout request messages
According to one aspect of the present disclosure, a method and technique for link optimization for callout request messages is disclosed. The method includes: monitoring a plurality of different time-based parameters for each of a plurality of links between a communication pipe of a host system and one or more service systems, the links used to send and receive callout request messages between one or more applications running on the host system and the service systems that process the callout request messages, the time-based parameters associated with different stages of callout request message processing by the communication pipe and the service systems; assessing a performance level of each of the plurality of links based on the time-based parameters; and dynamically distributing the callout request messages to select links of the plurality of links based on the performance assessment.
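The monitor-assess-distribute loop can be sketched as scoring each link from its per-stage time measurements and routing the next callout request to the best performer. The stage names and the sum-of-latencies score are illustrative assumptions.

```python
def pick_link(link_timings):
    """Sketch: assess each link's performance level as the total of its
    time-based stage measurements, then distribute the next callout
    request message to the best-performing link."""
    def score(stages):
        return sum(stages.values())   # lower total latency = better link
    return min(link_timings, key=lambda link: score(link_timings[link]))

# Hypothetical per-stage timings (ms) for two links.
timings = {
    "link-1": {"queue_ms": 4, "send_ms": 10, "service_ms": 40},
    "link-2": {"queue_ms": 1, "send_ms": 8,  "service_ms": 25},
}
print(pick_link(timings))  # 'link-2'
```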