Patent classifications
H04L67/61
Method for operating a distributed application
A method for operating a distributed application includes: transmitting, by an application frontend, an initialization request to a registration server via a communication network; selecting, by the registration server, an instance of an application backend and transmitting a fully qualified domain name of the selected instance to the application frontend; transmitting, by the application frontend, a lookup request to a domain name server; transmitting, by the domain name server, an IP address associated with the fully qualified domain name to the application frontend; transmitting, by the application frontend, application data to the transmitted IP address via a connection provided by the communication network; selecting, by a core server of the communication network, a quality service for the distributed application; applying, by the communication network, a service quality determined by the selected quality service to the connection; and operating, by the distributed application, with the applied service quality.
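The abstract above describes a multi-step handshake: initialization request, backend selection, DNS resolution, then a connection with an applied service quality. A minimal sketch of the frontend-side flow follows, with the registration server and domain name server simulated by in-memory tables; all names, addresses, and the selection policy are illustrative assumptions, not the claimed implementation.

```python
# Sketch of the claimed flow; server selection and DNS resolution are
# simulated with in-memory tables (names/addresses are hypothetical).
BACKEND_INSTANCES = ["backend-1.example.net", "backend-2.example.net"]
DNS_RECORDS = {"backend-1.example.net": "10.0.0.11",
               "backend-2.example.net": "10.0.0.12"}

def registration_server_select() -> str:
    """Registration server selects a backend instance, returns its FQDN."""
    return BACKEND_INSTANCES[0]  # e.g. least-loaded instance

def dns_lookup(fqdn: str) -> str:
    """Domain name server resolves the FQDN to an IP address."""
    return DNS_RECORDS[fqdn]

def operate_distributed_application(app_data: bytes) -> dict:
    fqdn = registration_server_select()   # initialization request/response
    ip = dns_lookup(fqdn)                 # lookup request/response
    quality = "low-latency"               # core server selects quality service
    # The network would apply the selected service quality to the
    # connection carrying app_data to the resolved IP address.
    return {"fqdn": fqdn, "ip": ip, "quality": quality,
            "bytes": len(app_data)}
```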
Multi-tenant routing gateway for internet-of-things devices
Novel techniques are described for gateway routing and/or processing of multi-tenant Internet-of-Things (IoT) device data streams. For example, a single IoT routing gateway can be used to route device data streams from IoT devices of multiple customers according to rule-based routing tiers. The routing tiers define routing protocols, including which communication technologies to use for transmission of the device data streams over a cloud network to remote servers. In some cases, the routing tiers further define processing protocols to facilitate rule-based edge processing (and/or remote processing) of some or all device data streams. Some routing tiers can define a primary solution and one or more secondary solutions for routing and/or processing, according to customer-defined rules. In some cases, the routing tiers further enable rule-based control of interconnectivity among IoT devices.
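The primary/secondary routing-tier idea can be sketched as a small failover lookup. The tier names, fields, and transport types below are illustrative assumptions only; the abstract does not prescribe any particular schema.

```python
# Minimal sketch of rule-based routing tiers for a multi-tenant IoT
# gateway; tier names, fields, and transports are hypothetical.
ROUTING_TIERS = {
    "gold":   {"primary": "fiber", "secondary": ["lte", "satellite"],
               "edge_process": True},
    "silver": {"primary": "lte",   "secondary": ["satellite"],
               "edge_process": False},
}

def route_stream(customer_tier: str, link_up: dict) -> str:
    """Pick a transport for a device data stream: try the tier's
    primary solution first, then each secondary solution in order."""
    tier = ROUTING_TIERS[customer_tier]
    for transport in [tier["primary"], *tier["secondary"]]:
        if link_up.get(transport, False):
            return transport
    raise RuntimeError("no transport available for this stream")
```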
Datapath load distribution for a RIC
To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high-speed interfaces between these machines. Some or all of these interfaces operate in a non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high-speed IO between the E2 nodes and the xApps.
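The non-blocking, lockless interfaces described above are commonly built on single-producer/single-consumer ring buffers. The sketch below illustrates the idea only; a real inter-machine datapath would use atomic indices over shared memory rather than plain Python attributes.

```python
# Illustrative single-producer/single-consumer ring buffer: neither side
# ever blocks; a full or empty buffer is reported, not waited on.
class SpscRing:
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.head = 0  # advanced only by the consumer
        self.tail = 0  # advanced only by the producer

    def try_push(self, item) -> bool:
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False          # full: producer retries, never stalls
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def try_pop(self):
        if self.head == self.tail:
            return None           # empty: consumer moves on, never stalls
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

One slot is deliberately left unused so that `head == tail` unambiguously means "empty", which avoids any shared counter the two sides would otherwise have to synchronize on.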
Software updates based on transport-related actions
An example operation includes one or more of receiving, by a transport, over an open wireless network an encrypted software update and receiving, by the transport, over a closed wireless network a one-time key to decrypt the encrypted software update, wherein the one-time key is received while the transport is in motion and about to perform an action related to the software update.
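The two-channel scheme above (ciphertext over the open network, one-time key over the closed network, decryption gated on the transport's state) can be sketched as follows. XOR stands in for the real cipher, and all function names and the gating conditions are hypothetical.

```python
# Hypothetical sketch: the encrypted image arrives over the open network,
# the one-time key over the closed network, and decryption happens only
# while the transport is in motion and about to perform the related
# action. XOR is a stand-in for a real cipher.
def decrypt(blob: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

def apply_update(encrypted: bytes, one_time_key: bytes,
                 in_motion: bool, action_pending: bool):
    if in_motion and action_pending:
        return decrypt(encrypted, one_time_key)
    return None  # key not yet usable; hold the encrypted update
```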
System and method for selecting and providing zone-specific media
A system and method for providing zone-specific media to a user. As a non-limiting example, various aspects of this disclosure provide a system and method that flexibly selects and provides media content (e.g., audio content), where such content is selected based, at least in part, on a user location (e.g., location within a premises).
SYSTEMS AND METHODS FOR DEPLOYING SECURE EDGE PLATFORMS
Systems and methods for communication in a disconnected, intermittent, and limited (DIL) environment are disclosed and include receiving first data generated in the DIL environment at a cloud-in-a-box (CIB) appliance, processing the first data at the CIB appliance, determining that additional processing of the first data is required based on processing the first data at the CIB appliance, assigning a first priority level to the first data in response to determining that additional processing is required, wherein the first priority level is based on at least one of a user input, predetermined criteria, or a prioritization machine learning model output, establishing a connection with a local area cloud component within the DIL environment, and transmitting a request for additional processing of the first data based on the first priority level.
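The priority-assignment step above draws on three possible sources: user input, predetermined criteria, or a machine learning model output. A minimal sketch of that decision and the subsequent offload request follows; the precedence order, threshold, and message format are assumptions for illustration only.

```python
# Sketch of the CIB appliance's priority assignment and offload request.
# Precedence (user input first), the 0.8 threshold, and the request
# format are illustrative assumptions.
def assign_priority(user_priority, meets_criteria: bool,
                    model_score: float) -> int:
    if user_priority is not None:
        return user_priority              # user input takes precedence
    if meets_criteria:
        return 1                          # predetermined criteria
    return 1 if model_score > 0.8 else 2  # prioritization model output

def request_additional_processing(data: bytes, needs_more: bool,
                                  priority: int, cloud_send) -> bool:
    """Transmit a processing request to the local area cloud component
    only when on-appliance processing was insufficient."""
    if not needs_more:
        return False
    cloud_send({"priority": priority, "size": len(data)})
    return True
```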
RESILIENT RENDERING FOR AUGMENTED-REALITY DEVICES
A method by a rendering device includes receiving a request to render multiple surfaces corresponding to multiple virtual objects to be concurrently displayed on an augmented-reality (AR) headset. The AR headset is connected to the rendering device via a wireless link. In response to a determination that a network quality of the wireless link is below a threshold condition, the method further includes selecting a first subset of the multiple surfaces that are higher priority than a second subset of the multiple surfaces. The method includes transmitting the first subset of surfaces to the AR headset for display and transmitting the second subset of surfaces to the AR headset for display after transmitting the first subset. The method further includes rendering the surfaces in accordance with a set of rendering parameters so as to satisfy one or more network constraints.
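The subset-selection step above can be sketched as a simple priority-ordered split that is only triggered when link quality drops below the threshold. The field names, the quality scale, and the half-split policy are assumptions for illustration.

```python
# Sketch: split surfaces into a high-priority first batch and a deferred
# second batch when wireless link quality is below a threshold.
# (Lower "priority" value = higher priority; scale/policy are assumed.)
def schedule_surfaces(surfaces: list, network_quality: float,
                      threshold: float = 0.5):
    if network_quality >= threshold:
        return surfaces, []               # good link: send everything now
    ordered = sorted(surfaces, key=lambda s: s["priority"])
    k = max(1, len(ordered) // 2)
    first, second = ordered[:k], ordered[k:]
    return first, second                  # second batch follows the first
```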
RATE LIMIT AND BURST LIMIT ENHANCEMENTS FOR REQUEST PROCESSING
A method that includes establishing an open connection for responding to requests from clients supported by an application server. The method may further include establishing a set of queues configured for storing requests received from the clients via the open connection. The method may further include selecting requests from the queues based on a rate limit threshold and burst limit threshold of the application server. The rate limit threshold may refer to a number of requests that the application server can process within a first time duration, while the burst limit threshold may refer to a number of requests that the application server can process within a second time duration that is shorter than the first time duration. The method may further include transmitting the requests to a set of data processing servers connected to the application server and receiving an indication that the requests have been processed.
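The two thresholds above combine naturally in a sliding-window check: the rate limit caps requests over the longer window, while the burst limit caps them over the shorter one. A minimal sketch, with both window lengths and limits as caller-supplied assumptions:

```python
# Sketch of combined rate and burst limiting over sliding windows.
# The rate limit caps requests over the longer window; the burst limit
# caps them over the shorter window.
from collections import deque

class RateBurstLimiter:
    def __init__(self, rate_limit: int, rate_window: float,
                 burst_limit: int, burst_window: float):
        self.rate_limit, self.rate_window = rate_limit, rate_window
        self.burst_limit, self.burst_window = burst_limit, burst_window
        self.stamps = deque()  # timestamps of admitted requests

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the long (rate) window.
        while self.stamps and now - self.stamps[0] >= self.rate_window:
            self.stamps.popleft()
        recent = sum(1 for t in self.stamps if now - t < self.burst_window)
        if len(self.stamps) >= self.rate_limit or recent >= self.burst_limit:
            return False       # leave the request queued for later
        self.stamps.append(now)
        return True            # forward to a data processing server
```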
INTELLIGENT TICKETING AND DATA OFFLOAD PLANNING FOR CONNECTED VEHICLES
Intelligent ticketing and data offload planning is provided. A data center receives a ticket request from a vehicle requesting to perform a data upload of vehicle data over a communications network. An optimizer is utilized to generate a ticket, the ticket specifying a time and a location for the vehicle to perform the data upload. The ticket is later received back from the vehicle and validated to ensure that the vehicle should still perform the data upload. Responsive to the optimizer confirming that the data upload should proceed, the vehicle is instructed to perform the data upload over the communications network. The data upload is stored to a storage of the data center.
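The ticketing handshake above can be sketched as issue-then-validate. The slot-selection policy (take the first candidate), field names, and the grace-period validity rule are illustrative assumptions, not details from the abstract.

```python
# Sketch of the ticketing handshake: the optimizer issues a ticket with
# a time/location slot, and the data center later validates it before
# allowing the upload. Policy and field names are hypothetical.
def issue_ticket(vehicle_id: str, slots: list) -> dict:
    time, location = slots[0]     # optimizer picks the best candidate slot
    return {"vehicle": vehicle_id, "time": time, "location": location}

def validate_ticket(ticket: dict, now: float, grace: float = 300.0) -> bool:
    """The upload should still proceed only near the scheduled time."""
    return abs(now - ticket["time"]) <= grace
```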