Patent classifications
G06F2209/543
Multicast message filtering in virtual environments
Various systems, processes, and products may be used to filter multicast messages in virtual environments. In one implementation, a multicast filtering address is received by a network adapter from at least one of a number of virtual machines of a computer system. Responsive to receiving the multicast filtering address, a determination is made whether a multicast filtering store of the network adapter is full. Responsive to determining that the multicast filtering store of the network adapter is full, the multicast filtering address is stored in a local filtering store of the at least one virtual machine.
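The fallback logic described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names, and the small store capacity, are invented for the example.

```python
# A network adapter holds a bounded multicast filtering store; when it is
# full, the requesting virtual machine keeps the multicast filtering
# address in its own local filtering store instead.

class VirtualMachine:
    def __init__(self, name):
        self.name = name
        self.local_filter_store = set()  # fallback store on the VM side

class NetworkAdapter:
    def __init__(self, capacity):
        self.capacity = capacity         # assumed small hardware table size
        self.filter_store = set()

    def register_multicast(self, vm, address):
        """Store the address in the adapter store if room remains,
        otherwise fall back to the VM's local filtering store."""
        if len(self.filter_store) < self.capacity:
            self.filter_store.add(address)
            return "adapter"
        vm.local_filter_store.add(address)
        return "vm-local"

adapter = NetworkAdapter(capacity=2)
vm = VirtualMachine("vm0")
print(adapter.register_multicast(vm, "01:00:5e:00:00:01"))  # adapter
print(adapter.register_multicast(vm, "01:00:5e:00:00:02"))  # adapter
print(adapter.register_multicast(vm, "01:00:5e:00:00:03"))  # vm-local
```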
Fifo queue replication
A fifo queue service of a provider network allows clients to replicate a fifo queue to a secondary backup queue in another region. A local instance of the queue service receives and stores send/receive/delete transactions in order. The service instance applies the transactions to a primary fifo queue and replicates only the send requests and delete requests to a secondary fifo queue of a remote instance of the fifo queue service (e.g., at another region). The remote instance determines, based on ordering metadata of a replicated request, that the replicated request can be stored in accordance with the ordering metadata (e.g., that any request the replicated request depends on has also been received/replicated). In response, the remote secondary instance stores and applies the replicated request to the secondary fifo queue.
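The replication flow above can be modeled in a few lines. This is an illustrative sketch under invented names; the ordering metadata here is a simple per-queue sequence number, which is one plausible realization and not necessarily the patented one.

```python
from collections import deque

class PrimaryFifo:
    """Applies all transactions locally, but replicates only
    sends and deletes, tagged with ordering metadata (a sequence number)."""
    def __init__(self, secondary):
        self.queue = deque()
        self.seq = 0
        self.secondary = secondary

    def send(self, msg):
        self.seq += 1
        self.queue.append((self.seq, msg))
        self.secondary.replicate(("send", self.seq, msg))

    def receive(self):
        # receives are applied locally only; they are not replicated
        return self.queue[0] if self.queue else None

    def delete(self, seq):
        self.queue = deque(e for e in self.queue if e[0] != seq)
        self.secondary.replicate(("delete", seq, None))

class SecondaryFifo:
    """Applies a replicated request only when its ordering metadata shows
    that every request it depends on has already been received."""
    def __init__(self):
        self.queue = deque()
        self.pending = []
        self.applied_seq = 0

    def replicate(self, request):
        self.pending.append(request)
        progressed = True
        while progressed:
            progressed = False
            for req in list(self.pending):
                op, seq, _msg = req
                if op == "send" and seq == self.applied_seq + 1:
                    self.queue.append((seq, req[2]))
                    self.applied_seq = seq
                elif op == "delete" and seq <= self.applied_seq:
                    self.queue = deque(e for e in self.queue if e[0] != seq)
                else:
                    continue
                self.pending.remove(req)
                progressed = True

secondary = SecondaryFifo()
primary = PrimaryFifo(secondary)
primary.send("a")
primary.send("b")
primary.delete(1)
print(list(secondary.queue))  # [(2, 'b')]
```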
Broadcast sending control method and apparatus, storage medium, and electronic device
A broadcast sending control method includes: acquiring a receiver queue corresponding to a broadcast message; acquiring an application type and a launching state of a first receiver, wherein the first receiver is any receiver in the receiver queue; removing, in a case where the application type is a predetermined application type and the launching state is a predetermined launching state, the first receiver from the receiver queue; and sending, according to the receiver queue from which the first receiver has been removed, the broadcast message.
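The filtering step in this method reduces to one pass over the receiver queue. In the sketch below the application types and launching states are invented placeholders for whatever predetermined values an implementation would use.

```python
# Receivers whose application type and launching state both match the
# predetermined values are removed before the broadcast is sent.

PREDETERMINED_TYPE = "third_party"    # assumed predetermined application type
PREDETERMINED_STATE = "not_launched"  # assumed predetermined launching state

def filter_receiver_queue(receivers):
    """Drop receivers matching both the predetermined application type
    and the predetermined launching state."""
    return [r for r in receivers
            if not (r["app_type"] == PREDETERMINED_TYPE
                    and r["state"] == PREDETERMINED_STATE)]

queue = [
    {"name": "alarm",   "app_type": "system",      "state": "not_launched"},
    {"name": "game",    "app_type": "third_party", "state": "not_launched"},
    {"name": "browser", "app_type": "third_party", "state": "running"},
]
print([r["name"] for r in filter_receiver_queue(queue)])  # ['alarm', 'browser']
```

Only "game" matches both predetermined values, so it alone is removed; the broadcast would then be sent to the remaining receivers.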
SERVER-DRIVEN NOTIFICATIONS TO MOBILE APPLICATIONS
An example method of implementing server-driven notifications to mobile applications is provided. The method includes registering a mobile computing device with a notification server. The notification server is associated with a set of workflow servers that each correspond to one or more respective mobile applications. The method further includes receiving a first message associated with a first workflow server of the set of workflow servers. The first message includes a first payload identifying a first mobile application running on the mobile computing device and a first application-specific event associated with the first mobile application. The first mobile application corresponds to the first workflow server. The method further includes translating the first payload into a first local notification for the first mobile application. The method further includes, upon displaying the first local notification on the mobile computing device, detecting the first application-specific event in view of a user interface event associated with the first local notification. The method further includes transmitting a notification to the first workflow server, the notification indicating that the user interface event corresponding to the first application-specific event was completed.
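The server-side flow can be sketched as below. All class, payload, and workflow names are hypothetical stand-ins; the point is the translate-then-acknowledge shape of the method.

```python
# The notification server registers devices, translates a workflow message
# payload into a local notification for the identified app, and reports
# back to the workflow server when the app-specific event completes.

class NotificationServer:
    def __init__(self):
        self.devices = set()
        self.acks = []   # completions reported back to workflow servers

    def register_device(self, device_id):
        self.devices.add(device_id)

    def translate_payload(self, payload):
        # payload identifies the target app and an app-specific event
        return {"app": payload["app"],
                "text": f"Action required: {payload['event']}",
                "event": payload["event"]}

    def on_ui_event(self, workflow_server, notification, ui_event):
        # detect completion of the app-specific event via the UI event
        if ui_event == notification["event"]:
            self.acks.append((workflow_server, notification["event"]))
            return True
        return False

server = NotificationServer()
server.register_device("device-1")
note = server.translate_payload({"app": "expenses", "event": "approve_report"})
print(note["text"])                                          # Action required: approve_report
print(server.on_ui_event("workflow-A", note, "approve_report"))  # True
```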
APPARATUS AND METHOD FOR PERFORMANCE STATE MATCHING BETWEEN SOURCE AND TARGET PROCESSORS BASED ON INTERPROCESSOR INTERRUPTS
Apparatus, method, and machine-readable medium to provide performance state matching between source and target processors based on inter-processor interrupts. An exemplary apparatus includes a target processor to execute a receiving task at a first performance level and a source processor to execute a sending task at a second performance level higher than the first performance level. The sending task is to store, into a memory location, interrupt routing data indicating a pairing between the sending task and the receiving task, and is to dispatch work to be processed by the receiving task. The apparatus further includes a performance management unit to detect the pairing between the sending task and the receiving task based on the interrupt routing data and responsively adjust the performance level of the target processor from the first performance level to the second performance level based, at least in part, on the pairing.
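A simplified software model of this hardware mechanism is shown below. All names are invented, and a Python dictionary stands in for the memory location holding the interrupt routing data; the real mechanism lives in a performance management unit, not software.

```python
# The sending task records interrupt routing data pairing it with a
# receiving task; on an inter-processor interrupt, the performance
# management unit detects the pairing and raises the target processor's
# performance level to match the source processor's.

class Processor:
    def __init__(self, name, perf_level):
        self.name = name
        self.perf_level = perf_level

class PerformanceManagementUnit:
    def __init__(self):
        # stand-in for the memory location holding interrupt routing data
        self.routing_table = {}

    def store_routing(self, sender, receiver, source_cpu, target_cpu):
        self.routing_table[sender] = (receiver, source_cpu, target_cpu)

    def on_interprocessor_interrupt(self, sender):
        entry = self.routing_table.get(sender)
        if entry:
            _receiver, source_cpu, target_cpu = entry
            if target_cpu.perf_level < source_cpu.perf_level:
                # match the target's performance state to the source's
                target_cpu.perf_level = source_cpu.perf_level

src = Processor("cpu0", perf_level=3)
tgt = Processor("cpu1", perf_level=1)
pmu = PerformanceManagementUnit()
pmu.store_routing("sender_task", "receiver_task", src, tgt)
pmu.on_interprocessor_interrupt("sender_task")
print(tgt.perf_level)  # 3
```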
Asynchronous handling of service requests
Techniques for asynchronous handling of service requests are disclosed. A service receives a request from a requesting entity. The request includes a function identifier and function input. Responsive to receiving the message, the service selects a first event handler to process the request. The service translates, via the first event handler, the function identifier to a native function call. The service initiates execution of the native function call using the function input, and receives output corresponding to the execution of the native function call. Responsive to receiving the output, the service selects a second event handler to process the output. The service generates, at least in part by the second event handler, a response based on the output. The service transmits the response to the requesting entity.
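The two-handler pipeline can be sketched as follows. Handler and function names are illustrative; the dictionary of native functions stands in for whatever translation mechanism the service actually uses.

```python
# A first event handler translates the request's function identifier into
# a native function call; a second event handler wraps the call's output
# into the response returned to the requesting entity.

NATIVE_FUNCTIONS = {
    "sum": lambda args: sum(args),
    "upper": lambda args: args[0].upper(),
}

def request_handler(request):
    # first event handler: translate the function identifier and
    # initiate execution of the native call with the function input
    native = NATIVE_FUNCTIONS[request["function_id"]]
    return native(request["input"])

def response_handler(output):
    # second event handler: build the response from the native call output
    return {"status": "ok", "result": output}

def handle_service_request(request):
    output = request_handler(request)
    return response_handler(output)

print(handle_service_request({"function_id": "sum", "input": [1, 2, 3]}))
# {'status': 'ok', 'result': 6}
```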
Transforming plug-in application recipe variables
Techniques for transforming plug-in application recipe (PIAR) variables are disclosed. A PIAR definition identifies a trigger and an action. Trigger variable values, exposed by a first plug-in application, are necessary to evaluate the trigger. Evaluating the trigger involves determining whether a condition is satisfied, based on values of trigger variables. A second plug-in application exposes an interface for carrying out an action. Evaluating the action involves carrying out the action based on input variable values. A user selects, via a graphical user interface of a PIAR management application, a variable for a trigger or action operation and a transformation operation to be applied to the variable. The PIAR management application generates a PIAR definition object defining the trigger, the action, and the transformation operation, and stores the PIAR definition object for evaluation on an ongoing basis.
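A minimal evaluation loop for such a PIAR definition might look like this. The trigger variable, condition, transformation, and action below are all invented examples; a real definition object would reference variables exposed by actual plug-in applications.

```python
# A PIAR definition binds a trigger condition on variables exposed by one
# plug-in, a transformation applied to a user-selected variable, and an
# action carried out by another plug-in.

piar_definition = {
    "trigger": {"variable": "subject", "condition": lambda v: "invoice" in v},
    "transform": {"variable": "subject", "operation": str.upper},
    "action": lambda values: f"Posted: {values['subject']}",
}

def evaluate_piar(definition, trigger_values):
    trig = definition["trigger"]
    if not trig["condition"](trigger_values[trig["variable"]]):
        return None  # trigger condition not satisfied; action not carried out
    values = dict(trigger_values)
    tr = definition["transform"]
    # apply the selected transformation before the action sees the variable
    values[tr["variable"]] = tr["operation"](values[tr["variable"]])
    return definition["action"](values)

print(evaluate_piar(piar_definition, {"subject": "invoice 42"}))
# Posted: INVOICE 42
```

Evaluating "on an ongoing basis" would simply mean re-running `evaluate_piar` each time the first plug-in exposes fresh trigger variable values.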
Server-driven notifications to mobile applications
An example method of implementing server-driven notifications to mobile applications may include: receiving, by a mobile computing device, a message from a notification server, wherein the message comprises a payload identifying a mobile application running on the mobile computing device; translating the payload into a local notification including an identifier of the mobile application; causing the local notification to be displayed on the mobile computing device; and responsive to receiving a user interface event associated with the local notification, processing the user interface event by a handler of the mobile application.
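The device-side flow reads naturally as translate-display-dispatch. In this sketch the app identifier, handler registry, and event names are invented; they illustrate the shape of the method, not any particular mobile platform API.

```python
# The mobile device translates the server payload into a local notification
# for the identified application, and routes the subsequent user interface
# event to that application's handler.

handlers_called = []

APP_HANDLERS = {
    "chat_app": lambda event: handlers_called.append(("chat_app", event)),
}

def on_server_message(message):
    payload = message["payload"]
    # translate the payload into a local notification for the named app
    return {"app_id": payload["app_id"], "title": payload["title"]}

def on_ui_event(notification, ui_event):
    # dispatch the UI event to the handler of the identified application
    APP_HANDLERS[notification["app_id"]](ui_event)

note = on_server_message(
    {"payload": {"app_id": "chat_app", "title": "New message"}})
on_ui_event(note, "tap")
print(handlers_called)  # [('chat_app', 'tap')]
```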
Machine Learning Performance and Workload Management
Systems and methods are described herein for reducing resource consumption of a database system and a machine learning (ML) system. Data is received from an ML application of a database system. The data includes a first inference call for a predicted response to the received data. The first inference call is a request to an ML model to generate one or more predictions for which a response is unknown. The ML model, using the received data, generates an output comprising the predicted response to the data. The output is cached in an inference cache so that future inference calls bypass the ML model. The generated output is provided by the ML model to the ML application. A second inference call is then received which includes the same data as the first inference call. The cached output is retrieved from the inference cache; the retrieval bypasses the ML model.