G06F9/546

Method for managing multiple operating systems in a terminal

The disclosure provides a method for managing multiple operating systems in a terminal. The terminal includes multiple operating systems and a management system. The management system is configured to manage the multiple operating systems. The management system includes a cross-system application database. The method includes: when a first operating system in the multiple operating systems runs in a foreground, and a second operating system in the multiple operating systems runs in a background, if the second operating system receives a first message of a first application in the second operating system, sending, by the second operating system, a notification message to the management system; storing, by the management system, the notification message into the cross-system application database; and listening, by the first operating system, on the cross-system application database, and outputting a prompt of the first message upon obtaining the notification message through the listening.
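The notification flow above can be illustrated as a small producer/consumer exchange. This is a hypothetical sketch, not the claimed implementation: the `ManagementSystem`, `BackgroundOS`, and `ForegroundOS` names are assumptions, and the cross-system application database is modeled as a simple queue.

```python
import queue


class ManagementSystem:
    """Holds the cross-system application database shared by all OSes."""

    def __init__(self):
        # Hypothetical: the database is modeled as an in-memory queue.
        self.cross_system_db = queue.Queue()

    def store_notification(self, notification):
        self.cross_system_db.put(notification)


class BackgroundOS:
    """The second OS: forwards app messages to the management system."""

    def __init__(self, mgmt):
        self.mgmt = mgmt

    def on_app_message(self, app, message):
        # Send a notification message describing the received app message.
        self.mgmt.store_notification({"app": app, "message": message})


class ForegroundOS:
    """The first OS: listens on the database and outputs a prompt."""

    def __init__(self, mgmt):
        self.mgmt = mgmt

    def poll(self):
        # Listen on the cross-system database; prompt when a note arrives.
        try:
            note = self.mgmt.cross_system_db.get_nowait()
        except queue.Empty:
            return None
        return f"New message for {note['app']}: {note['message']}"
```

In practice the foreground OS would listen continuously rather than poll once, but the single-shot `poll` keeps the data flow visible.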

Electronic message processing systems and methods
11582190 · 2023-02-14

A message-hold decision maker system used with an electronic mail processing system that processes electronic messages for a protected computer network improves the electronic mail processing system's performance by increasing its throughput. The improvements are achieved by providing an electronic mail processing gateway with additional logic that makes fast and intelligent decisions on whether to hold, block, allow, or sandbox electronic messages in view of potential threats such as viruses or URL-based threats. The message-hold decision maker uses current and stored information from a plurality of specialized classification engines to quickly make the decisions. In some examples, the message-hold decision maker will instruct an email gateway to hold an electronic mail message while the classification engines perform further analysis.
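The hold/block/allow/sandbox decision can be sketched as a single function over classifier verdicts. This is a hypothetical illustration: the `decide` function, the score scale, and the 0.9/0.5 thresholds are assumptions, not values from the patent.

```python
def decide(verdicts, pending):
    """Combine classification-engine verdicts into a gateway action.

    verdicts: mapping of engine name -> threat score in [0, 1]
    pending:  True if any engine is still analyzing the message
    """
    worst = max(verdicts.values(), default=0.0)
    if worst >= 0.9:
        return "block"      # confident threat: drop the message
    if worst >= 0.5:
        return "sandbox"    # suspicious URL/attachment: detonate safely
    if pending:
        return "hold"       # await further analysis before delivery
    return "allow"          # nothing suspicious and nothing pending
```

The "hold" branch corresponds to the example in the abstract where the gateway holds a message while engines finish their analysis.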

Machine-learning application proxy for IoT devices including large-scale data collection using dynamic servlets with access control

An apparatus and method for providing ML processing for one or more ML applications operating on one or more Internet of Things (IoT) devices includes receiving an ML request from an IoT device. The ML request can be generated by an ML application operating on the IoT device and can include input data collected by the ML application. An ML model to perform ML processing of the input data included in the ML request is identified and provided to an ML core for ML processing, along with the input data included in the ML request. The ML core produces ML processing output data based on ML processing, by the ML core, of the input data included in the ML request using the ML model. The ML processing output data can be transmitted to the IoT device.
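The request/model/core pipeline above can be sketched as a small proxy class. This is a hypothetical sketch: the `MLProxy` class, its registry of callables standing in for ML models, and the request dictionary shape are all assumptions.

```python
class MLProxy:
    """Proxy between IoT devices and an ML core (hypothetical sketch)."""

    def __init__(self):
        # Registry mapping an application ID to a model; plain callables
        # stand in for real ML models here.
        self.models = {}

    def register_model(self, app_id, model):
        self.models[app_id] = model

    def handle_request(self, request):
        """Handle an ML request of the form {'app_id': ..., 'input': ...}."""
        model = self.models[request["app_id"]]  # identify the ML model
        output = model(request["input"])        # ML core runs the model
        return {"app_id": request["app_id"], "output": output}
```

A device-side application would serialize the request over the network; the sketch skips transport and shows only the identify-then-run step.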

Scalable proxy clusters

The invention enables high-availability, high-scale, high-security and disaster recovery for API computing, including in terms of capture of data traffic passing through proxies, routing communications between clients and servers, and load balancing and/or forwarding functions. The invention inter alia provides (i) a scalable cluster of proxies configured to route communications between clients and servers, without any single point of failure, (ii) proxy nodes configured for implementing the scalable cluster, (iii) efficient methods of configuring the proxy cluster, (iv) natural resiliency of clusters and/or proxy nodes within a cluster, (v) methods for scaling of clusters, (vi) configurability of clusters to span multiple servers, multiple racks and multiple datacenters, thereby ensuring high availability and disaster recovery, and (vii) switching between proxies or between servers without loss of session.
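Routing without a single point of failure can be sketched with round-robin selection that skips failed nodes. This is a hypothetical sketch of items (i) and (iv) only; the `ProxyCluster` class and its failover strategy are assumptions, not the patented design.

```python
import itertools


class ProxyCluster:
    """Route requests through any healthy proxy node (hypothetical sketch)."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rr = itertools.cycle(range(len(self.nodes)))  # round-robin order
        self.down = set()

    def mark_down(self, node):
        self.down.add(node)

    def route(self, request, backend):
        # Try each node at most once per call, skipping failed ones,
        # so no single proxy is a point of failure.
        for _ in range(len(self.nodes)):
            node = self.nodes[next(self._rr)]
            if node not in self.down:
                return f"{node}->{backend}:{request}"
        raise RuntimeError("no healthy proxy nodes")
```

Session-preserving switchover (item vii) would additionally require shared session state between nodes, which the sketch omits.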

SYSTEM AND METHOD FOR MOLECULAR PROPERTY PREDICTION USING EDGE CONDITIONED IDENTITY MAPPING CONVOLUTION NEURAL NETWORK

This disclosure relates generally to a system and method for molecular property prediction. Typically, the message-pooling mechanism employed in molecular property prediction using conventional message passing neural networks (MPNN) causes over-smoothing of the node embeddings of the molecular graph. The disclosed system utilizes an edge conditioned identity mapping convolution neural network for the message passing phase. In the message passing phase, the system computes an incoming aggregated message vector for each node of the plurality of nodes of the molecular graph based on encoded messages received from neighboring nodes, such that the encoded message vector is generated by fusing the node information and the connecting-edge information of the set of neighboring nodes of the node. The incoming aggregated message vector is utilized for computing the updated hidden state vector of each node. A discriminative graph-level vector representation is computed by pooling the updated hidden state vectors from all the nodes of the molecular graph.
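The message passing and pooling phases can be sketched in plain Python, with lists standing in for feature tensors. This is a hypothetical sketch: the additive fusion of node and edge features and the residual ("identity mapping") update are simplifications of the learned operations in the disclosure.

```python
def message_passing_step(h, edges, edge_feat):
    """One message-passing step on a molecular graph (hypothetical sketch).

    h:         dict node -> hidden state vector (list of floats)
    edges:     list of directed (u, v) pairs
    edge_feat: dict (u, v) -> connecting-edge feature vector
    """
    dim = len(next(iter(h.values())))
    agg = {v: [0.0] * dim for v in h}
    for u, v in edges:
        # Fuse neighbor node information with the connecting-edge information.
        msg = [hu + e for hu, e in zip(h[u], edge_feat[(u, v)])]
        agg[v] = [a + m for a, m in zip(agg[v], msg)]
    # Identity-mapping update: the residual connection keeps node
    # embeddings from over-smoothing across many steps.
    return {v: [hv + a for hv, a in zip(h[v], agg[v])] for v in h}


def graph_readout(h):
    """Pool updated hidden states into a graph-level vector (sum pooling)."""
    dim = len(next(iter(h.values())))
    return [sum(vec[i] for vec in h.values()) for i in range(dim)]
```

A real implementation would use learned edge-conditioned weight matrices rather than elementwise addition; the sketch only shows the data flow.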

CROSS-CHAIN COLLABORATIVE GOVERNANCE SYSTEM, METHOD AND DEVICE AND STORAGE MEDIUM
20230039643 · 2023-02-09

A cross-chain collaborative governance system is configured to perform collaborative service and control governance on cross-chain interoperation between application subchains in a cross-chain alliance. The cross-chain collaborative governance system includes: a cross-chain access application layer configured to make a first application subchain and a second application subchain access the cross-chain collaborative governance system; a credible cross-chain collaborative layer configured to provide collaborative service for cross-chain interoperation between the first application subchain and the second application subchain; and a credible cross-chain governance layer configured to perform control governance on the cross-chain interoperation between the first application subchain and the second application subchain.
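The three layers can be sketched as one class with a method per layer. This is a loose hypothetical sketch: the `CrossChainGovernance` class, the policy table standing in for control governance, and the log standing in for collaborative service are all assumptions.

```python
class CrossChainGovernance:
    """Three-layer cross-chain governance (hypothetical sketch)."""

    def __init__(self):
        self.subchains = set()  # access layer: admitted application subchains
        self.policies = {}      # governance layer: (src, dst) -> allowed?
        self.log = []           # collaborative layer: relayed interoperations

    def access(self, chain):
        # Cross-chain access application layer: admit a subchain.
        self.subchains.add(chain)

    def set_policy(self, src, dst, allowed):
        # Credible cross-chain governance layer: control interoperation.
        self.policies[(src, dst)] = allowed

    def interoperate(self, src, dst, payload):
        # Credible cross-chain collaborative layer: relay if permitted.
        if src not in self.subchains or dst not in self.subchains:
            raise ValueError("subchain not admitted to the alliance")
        if not self.policies.get((src, dst), False):
            return "rejected"
        self.log.append((src, dst, payload))
        return "relayed"
```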

Frozen indices
11556388 · 2023-01-17

Methods and systems for searching a frozen index are provided. Exemplary methods include: receiving an initial search and a subsequent search; loading the initial search and the subsequent search into a throttled thread pool; getting the initial search from the throttled thread pool; storing a first shard from a mass storage in a memory in response to the initial search; performing the initial search on the first shard; providing first top search result scores from the initial search; and removing the first shard from the memory when the initial search is completed.
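The load-search-evict cycle for one search can be sketched as a single function. This is a hypothetical sketch: `search_frozen_index`, the dict-of-dicts shard store, and the term-counting scorer are assumptions, not the patented mechanics.

```python
import heapq


def search_frozen_index(shard_store, shard_id, query, top_n=3):
    """Load a frozen shard into memory, search it, then evict it.

    shard_store: mass storage, modeled as {shard_id: {doc_id: text}}
    query:       list of query terms (hypothetical scoring: term counts)
    """
    memory = dict(shard_store[shard_id])  # store shard from mass storage
    scores = [(sum(1 for t in query if t in doc), doc_id)
              for doc_id, doc in memory.items()]
    top = heapq.nlargest(top_n, scores)   # top search result scores
    memory.clear()                        # remove shard when search completes
    return top
```

The throttled thread pool in the claim bounds how many such load-search-evict cycles run at once, which the single-threaded sketch omits.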

Detecting and recovering lost adjunct processor messages

A method, computer program product, and computer system are provided. An operating system (OS) receives a status at completion of a cryptographic adjunct process (AP) instruction directed to an AP message queue on a cryptographic AP. The status includes a return code, a reason code, a queue full indicator, a queue empty indicator, and the count of enqueued request messages on the AP message queue. The OS determines a number of lost request messages on the AP message queue, based on a count of enqueued request messages on the AP message queue received in the status. The OS re-enqueues the number of lost request messages to the AP message queue. The OS recovers the number of lost request messages on the AP message queue.

Point value change notification

Methods, devices, and systems for point value change notification are described herein. One system (100) includes a message broker (108) to receive data from a data acquisition (DAQ) system, a first building management system (BMS) instance (104) connected to the message broker (108) to process a first portion of the DAQ data, a second BMS instance (104) connected to the message broker (108) to process a second portion of the DAQ data, and a web application (118) connected to the message broker (108) to generate a notification of a change in point value of a portion of the first portion or the second portion of the DAQ data, where the first BMS instance (104) and the second BMS instance (104) are provisioned with a plurality of computing resources deployed in a computing environment (102, 502) and are ultimately executed on hardware.
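The broker-centered topology can be sketched with a topic-based publish/subscribe broker, BMS instances that republish only changed point values, and a web-app callback that generates the notification. This is a hypothetical sketch; the class names, topic strings, and change-detection-by-last-value are assumptions.

```python
class MessageBroker:
    """Minimal topic-based pub/sub broker (hypothetical sketch)."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, point, value):
        for cb in self.subscribers.get(topic, []):
            cb(point, value)


class BMSInstance:
    """Processes one portion of the DAQ data; republishes value changes."""

    def __init__(self, broker, in_topic, out_topic):
        self.broker = broker
        self.out_topic = out_topic
        self.last = {}
        broker.subscribe(in_topic, self.process)

    def process(self, point, value):
        if self.last.get(point) != value:  # point value changed
            self.last[point] = value
            self.broker.publish(self.out_topic, point, value)


def web_app(notifications):
    """Return a callback that generates a notification per change."""
    def on_change(point, value):
        notifications.append(f"point {point} changed to {value}")
    return on_change
```

A second `BMSInstance` subscribed to a different input topic would cover the second portion of the DAQ data in the same way.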

Hardware accelerated dynamic work creation on a graphics processing unit

At least one processor core is configured to execute a parent task that is described by a data structure stored in a memory. A coprocessor is configured to dispatch a child task to the at least one processor core in response to the coprocessor receiving a request from the parent task concurrently with the parent task executing on the at least one processor core. In some cases, the parent task registers the child task in a task pool and the child task is a future task that is configured to monitor a completion object and enqueue another task associated with the future task in response to detecting the completion object. The future task is configured to self-enqueue by adding a continuation future task to a continuation queue for subsequent execution in response to the future task failing to detect the completion object.
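The future task's monitor-then-enqueue-or-self-enqueue behavior can be sketched with callables on a queue. This is a hypothetical sketch: `make_future_task`, the dict standing in for the completion object, and `coprocessor_dispatch` are illustrative names, not the hardware mechanism.

```python
from collections import deque


def make_future_task(completion, payload_task):
    """Build a future task monitoring a completion object (hypothetical).

    completion:   mutable dict standing in for the completion object
    payload_task: the associated task to enqueue once completion is seen
    """
    def future_task(queue):
        if completion.get("done"):
            queue.append(payload_task)  # completion detected: enqueue payload
        else:
            queue.append(future_task)   # not yet: self-enqueue a continuation
    return future_task


def coprocessor_dispatch(queue):
    """Pop and run one queued task, as the coprocessor dispatching work."""
    task = queue.popleft()
    task(queue)
```

Each dispatch of an unfinished future task re-adds it to the continuation queue, matching the self-enqueue behavior described above.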