Patent classifications
H04L67/2866
Classification of messages using learned rules
The subject technology receives, in an application on an electronic device, a message, the message being associated with a user and including information in a header portion of the message. The subject technology determines, on the electronic device, a current state of messaging activity of the user based at least in part on a log of previous events associated with the user, where the log of previous events includes information that has been hashed using a cryptographic hash function. The subject technology determines, on the electronic device using a set of rules provided by a machine learning model, that the user is likely to view the message based on the current state of the messaging activity of the user. The subject technology sets, on the electronic device, an indication that the message is important based on the determining.
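The flow described above can be sketched in plain Python. This is a minimal illustration only, not the patented implementation: the event names, the activity threshold, and the `likely_to_view` rule are all hypothetical stand-ins for rules that would actually be produced by a trained machine learning model, and SHA-256 stands in for whatever cryptographic hash function the device uses for the event log.

```python
import hashlib

def hash_event(event: str) -> str:
    # Hash each logged event with a cryptographic hash (SHA-256 here)
    # so the raw event text never appears in the log.
    return hashlib.sha256(event.encode("utf-8")).hexdigest()

def current_state(event_log: list) -> dict:
    # Derive a simple messaging-activity state from the hashed log;
    # here just the count of recent events (a placeholder feature).
    return {"recent_activity": len(event_log)}

def likely_to_view(state: dict, header: dict) -> bool:
    # Stand-in for a rule set exported by a trained model:
    # recently active users reading mail from known senders get flagged.
    known_senders = {"alice", "bob"}  # hypothetical example values
    return state["recent_activity"] >= 3 and header.get("from") in known_senders

# On-device evaluation for one incoming message
log = [hash_event(e) for e in ["open:inbox", "read:msg1", "reply:msg1"]]
state = current_state(log)
important = likely_to_view(state, {"from": "alice"})  # the "important" indication
```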
METHODS AND APPARATUS TO SCHEDULE SERVICE REQUESTS IN A NETWORK COMPUTING SYSTEM USING HARDWARE QUEUE MANAGERS
Example edge gateway circuitry to schedule service requests in a network computing system includes: gateway-level hardware queue manager circuitry to: parse the service requests based on service parameters in the service requests; and schedule the service requests in a queue based on the service parameters, the service requests received from client devices; and hardware queue manager communication interface circuitry to send ones of the service requests from the queue to rack-level hardware queue manager circuitry in a physical rack, the ones of the service requests corresponding to functions as a service provided by resources in the physical rack.
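The scheduling behavior of the gateway-level queue manager can be modeled in software. This is a sketch under assumptions, not the hardware design: the `priority` field is a hypothetical service parameter, and a binary heap stands in for the hardware queue; `dispatch_to_rack` models handing the head-of-queue request to the rack-level manager.

```python
import heapq

class GatewayQueueManager:
    """Software model of gateway-level scheduling: parse service
    parameters from each request, then order the queue by them."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker so equal priorities stay FIFO

    def enqueue(self, request: dict) -> None:
        # Parse the service parameters; lower 'priority' runs sooner.
        priority = request.get("priority", 10)
        heapq.heappush(self._queue, (priority, self._seq, request))
        self._seq += 1

    def dispatch_to_rack(self):
        # Send the highest-priority request toward the rack-level
        # hardware queue manager; None when the queue is empty.
        if self._queue:
            return heapq.heappop(self._queue)[2]
        return None

mgr = GatewayQueueManager()
mgr.enqueue({"fn": "encode-video", "priority": 5})
mgr.enqueue({"fn": "health-check", "priority": 1})
first = mgr.dispatch_to_rack()
```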
Microservices cloud-native architecture for ubiquitous simulation as a service
A system and method for deploying software is disclosed. The system includes an architecture for deploying simulation software as a service. The architecture includes a client layer. The client layer includes an edge device, a resource manager, an update framework, a firewall, and a key management system. The architecture further includes a control layer communicatively coupled to the client layer, wherein a portion of the control layer is configured within a server. The control layer includes an application programming interface and one or more containers, wherein at least one of the one or more containers is a simulation processing container. The control layer further includes an orchestration node, a continuous integration tool, one or more processors, and a content delivery network module. The architecture further includes a data layer communicatively coupled to the one or more containers.
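The layered architecture above can be summarized with a small data model. This is purely illustrative: the class and field names are hypothetical labels for the components the abstract enumerates, and the check enforces the stated requirement that at least one container is a simulation processing container.

```python
from dataclasses import dataclass

@dataclass
class ClientLayer:
    edge_device: str
    resource_manager: str
    update_framework: str
    firewall: str
    key_management_system: str

@dataclass
class ControlLayer:
    api: str
    containers: list          # container names; >= 1 must process simulations
    orchestration_node: str
    ci_tool: str
    cdn_module: str

    def has_simulation_container(self) -> bool:
        # The abstract requires at least one simulation processing container.
        return any("simulation" in name for name in self.containers)

@dataclass
class Architecture:
    client: ClientLayer
    control: ControlLayer     # communicatively coupled to the client layer
    data_layer: list          # coupled to the control layer's containers

arch = Architecture(
    client=ClientLayer("tablet", "rm", "update-fw", "fw", "kms"),
    control=ControlLayer("rest-api", ["simulation-processor", "ui"],
                         "orchestrator", "ci", "cdn"),
    data_layer=["results-store"],
)
```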
Disaster resilient federated kubernetes operator
Disclosed herein are system, method, and computer program product embodiments for disaster resilience of applications managed by Kubernetes operators. An embodiment operates by creating an orchestration cluster and a worker cluster, where the worker cluster is coupled to the orchestration cluster by a proxy server. Custom resources are deployed to the orchestration cluster and custom resource controllers are deployed to the worker cluster. The proxy server federates these custom resources between the orchestration cluster and the worker cluster. During disasters, the worker cluster is recreated and reconciled to prevent loss of the federated cluster.
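The federation-and-recovery idea can be sketched with plain dictionaries standing in for the two clusters. This is a conceptual sketch, not real Kubernetes client code: the resource name and spec are hypothetical, and `federate` models the proxy server copying custom resources from the orchestration cluster to the worker cluster, which is how a lost worker cluster can be recreated and reconciled.

```python
def federate(orchestration: dict, worker: dict) -> None:
    # Proxy-style federation: mirror the custom resources declared
    # on the orchestration cluster down to the worker cluster, where
    # the custom resource controllers act on them.
    for name, spec in orchestration["custom_resources"].items():
        worker["custom_resources"][name] = spec

def recover_worker(orchestration: dict) -> dict:
    # After a disaster, recreate an empty worker cluster and reconcile
    # it from the orchestration cluster's declared (surviving) state.
    worker = {"custom_resources": {}}
    federate(orchestration, worker)
    return worker

# Hypothetical custom resource surviving on the orchestration cluster
orch = {"custom_resources": {"db-backup": {"schedule": "hourly"}}}
worker = recover_worker(orch)  # worker cluster rebuilt, nothing lost
```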
METHOD FOR STREAMING DYNAMIC 5G AR/MR EXPERIENCE TO 5G DEVICES WITH UPDATABLE SCENES
A method is provided. The method includes selecting media content including a full scene description, selecting a 5th generation (5G) media streaming downlink (5GMSd) application server (AS) to stream the media content based on the full scene description, deriving a simplified scene description based on the full scene description, and creating an augmented reality (AR)/mixed reality (MR) session based on the simplified scene description.
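The "derive a simplified scene description" step can be illustrated as a filter over a full scene graph. This is a hypothetical sketch: the node layout, the `lod` (level-of-detail) field, and the rule of dropping texture data are assumptions for illustration, not the 5GMSd specification's actual simplification procedure.

```python
def derive_simplified_scene(full_scene: dict, max_lod: int = 1) -> dict:
    # Keep only nodes at or below the requested level of detail and
    # strip heavyweight attributes (e.g. textures) the AR/MR session
    # does not need at session-creation time.
    return {
        "nodes": [
            {"name": n["name"], "mesh": n["mesh"]}
            for n in full_scene["nodes"]
            if n.get("lod", 0) <= max_lod
        ]
    }

full = {"nodes": [
    {"name": "anchor", "mesh": "m0", "lod": 0, "textures": ["4k"]},
    {"name": "detail", "mesh": "m1", "lod": 2, "textures": ["8k"]},
]}
simple = derive_simplified_scene(full)  # basis for the AR/MR session
```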
Proxy selection by monitoring quality and available capacity
Empirical data of exit nodes are continuously monitored and each exit node's overall performance and available capacity are calculated. The empirical data can include the number of concurrent requests currently being executed by each exit node and the disconnection chronology of each exit node. Further, each exit node is tested by benchmark requests and ping messages and each exit node's quality rate is calculated. Additionally, systems and methods are provided to select, from a particular pool, the exit node with the highest quality and available-capacity value to route the user request.
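The selection step can be sketched as a scoring function over the pool. This is an assumed scoring scheme for illustration only: the abstract does not specify how quality rate and available capacity are combined, so multiplying a benchmark-derived `quality_rate` by the fraction of free concurrent-request slots is a hypothetical choice.

```python
MAX_CONCURRENT = 100  # assumed per-node concurrency limit

def score(node: dict) -> float:
    # Combine the monitored quality rate (from benchmark requests and
    # pings) with available capacity (free concurrent-request slots).
    available = max(0, MAX_CONCURRENT - node["concurrent_requests"])
    return node["quality_rate"] * (available / MAX_CONCURRENT)

def select_exit_node(pool: list) -> dict:
    # Pick the exit node with the highest combined value from the pool.
    return max(pool, key=score)

pool = [
    {"id": "a", "quality_rate": 0.9, "concurrent_requests": 80},  # nearly full
    {"id": "b", "quality_rate": 0.7, "concurrent_requests": 10},  # mostly free
]
best = select_exit_node(pool)
```

Note the trade-off the weighting captures: node "a" has the better raw quality rate, but its scarce remaining capacity makes "b" the better routing choice.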
APPLICATION PROGRAMMING INTERFACE TO DESELECT STORAGE
Apparatuses, systems, and techniques to perform one or more APIs. In at least one embodiment, a processor is to perform an API to deselect storage selected to be used to transfer information between a plurality of fifth generation new radio (5G-NR) computing resources.