Patent classifications
H04L47/801
REQUEST TO ESTABLISH PROTOCOL DATA UNIT SESSION WITH TIME SENSITIVE NETWORK PARAMETERS
A method may include receiving, by a wireless device from a time sensitive network (TSN) translator device, one or more TSN parameters. The method may also include sending, by the wireless device to an access and mobility management function (AMF), a non-access stratum (NAS) message indicating a request to establish a protocol data unit (PDU) session comprising the one or more TSN parameters. The method may further include receiving, by the wireless device from the AMF, a message indicating acceptance of the PDU session.
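The claimed message flow might be sketched as follows. All structures, field names, and the stub AMF below are illustrative assumptions, not the 3GPP encodings or the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class TsnParameters:
    """Hypothetical TSN parameters received from the translator device."""
    traffic_class: int
    max_latency_us: int
    burst_size_bytes: int

@dataclass
class NasPduSessionRequest:
    """Hypothetical NAS message carrying the TSN parameters."""
    session_id: int
    tsn_params: TsnParameters

class StubAmf:
    """Toy AMF that accepts any well-formed request."""
    def handle(self, request):
        if isinstance(request, NasPduSessionRequest):
            return "PDU_SESSION_ESTABLISHMENT_ACCEPT"
        return "PDU_SESSION_ESTABLISHMENT_REJECT"

def establish_pdu_session(tsn_params, amf):
    """Send a PDU session establishment request containing the TSN
    parameters; return True if the AMF signals acceptance."""
    request = NasPduSessionRequest(session_id=1, tsn_params=tsn_params)
    return amf.handle(request) == "PDU_SESSION_ESTABLISHMENT_ACCEPT"

params = TsnParameters(traffic_class=5, max_latency_us=500, burst_size_bytes=1500)
accepted = establish_pdu_session(params, StubAmf())
```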
LOAD BALANCING COMMUNICATION SESSIONS IN A NETWORKED COMPUTING ENVIRONMENT
Techniques for load balancing communication sessions in a networked computing environment are described herein. The techniques may include establishing a first communication session between a client device and a first computing resource of a networked computing environment. Additionally, the techniques may include storing, in a data store, data indicating that the first communication session is associated with the first computing resource. The techniques may further include receiving, at a second computing resource of the networked computing environment, traffic associated with a second communication session that was sent by the client device, and based at least in part on accessing the data stored in the data store, establishing a traffic redirect such that the traffic and additional traffic associated with the second communication session are sent from the second computing resource to the first computing resource.
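The session-affinity scheme described above might be sketched like this: a shared store records which resource owns each session, and a resource that receives traffic for a session it does not own installs a redirect to the owner. All class and method names are hypothetical:

```python
class SessionStore:
    """Shared data store mapping session IDs to their owning resource."""
    def __init__(self):
        self._owners = {}

    def record(self, session_id, resource):
        self._owners[session_id] = resource

    def owner_of(self, session_id):
        return self._owners.get(session_id)

class ComputeResource:
    def __init__(self, name, store):
        self.name = name
        self.store = store
        self.redirects = {}   # session_id -> owning resource
        self.handled = []     # traffic processed locally

    def establish_session(self, session_id):
        self.store.record(session_id, self)

    def receive(self, session_id, payload):
        owner = self.store.owner_of(session_id)
        if owner is None or owner is self:
            self.handled.append((session_id, payload))
        else:
            # Install a redirect so this and later traffic for the
            # session is forwarded to the resource that owns it.
            self.redirects[session_id] = owner
            owner.receive(session_id, payload)

store = SessionStore()
first = ComputeResource("resource-1", store)
second = ComputeResource("resource-2", store)

first.establish_session("sess-A")      # session established on resource-1
second.receive("sess-A", "packet-1")   # related traffic lands on resource-2
```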
Internet provider subscriber communications system
A method for communicating in real time with users of a provider of Internet access service, without requiring any installation or set-up by the user. The method uses the unique identification information automatically provided by the user during communications to identify the user and derive a fixed identifier, which is then communicated to a redirecting device. Messages may then be selectively transmitted to the user. The system is normally transparent to the user, passing content unmodified along the path; content may, however, be modified or replaced along the path to the user. To establish reliable delivery of bulletin messages from providers to their users, the system forces the delivery of specially composed World Wide Web browser pages to the user, although it is not limited to that type of data.
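The forced-bulletin mechanism might be sketched as follows: the user's automatically supplied connection identifier (e.g. an address from the access session) is mapped to a fixed subscriber ID, and the redirecting device substitutes a bulletin page only when a message is pending, staying transparent otherwise. Names and structures are illustrative assumptions:

```python
class RedirectingDevice:
    def __init__(self):
        self.subscriber_ids = {}   # connection identifier -> fixed ID
        self.pending = {}          # fixed ID -> bulletin page URL

    def register(self, conn_id, fixed_id):
        """Associate an automatically provided identifier with a fixed ID."""
        self.subscriber_ids[conn_id] = fixed_id

    def queue_bulletin(self, fixed_id, url):
        self.pending[fixed_id] = url

    def handle_request(self, conn_id, requested_url):
        """Pass traffic through unchanged unless a bulletin is pending,
        in which case deliver the bulletin page instead (once)."""
        fixed_id = self.subscriber_ids.get(conn_id)
        bulletin = self.pending.pop(fixed_id, None)
        return bulletin if bulletin else requested_url

device = RedirectingDevice()
device.register("203.0.113.7", "subscriber-42")
device.queue_bulletin("subscriber-42", "https://isp.example/bulletin")

first = device.handle_request("203.0.113.7", "https://news.example/")
second = device.handle_request("203.0.113.7", "https://news.example/")
```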
Opportunistic delivery of cacheable content in a communications network
Systems and methods are described for using opportunistically delayed delivery of content to address sub-optimal bandwidth resource usage in network infrastructures that allow subscribers to share forward link resources. According to some embodiments, content is identified as delayable and assigned to a delaycast queue and/or service flow. For example, a server system of a satellite communications system identifies content that can be delayed to exploit future excess link capacity through multicasting and to exploit subscriber-side storage resources. Some implementations attempt to exploit any excess link resources at any time, while others exploit unused bandwidth only during certain times or when a certain threshold of resources is available. Various embodiments also provide content scoring and/or other prioritization techniques for optimizing exploitation of the delaycast queue.
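A delaycast queue of the kind described might be sketched as a scored priority queue drained only while excess link capacity stays above a configured threshold. The scoring, threshold, and per-item bandwidth figures are assumptions for illustration:

```python
import heapq

class DelaycastQueue:
    """Queue of delayable content, multicast opportunistically when
    excess link capacity is available (sketch, not the patented design)."""

    def __init__(self, capacity_threshold_mbps=10.0):
        self.threshold = capacity_threshold_mbps
        self._heap = []   # (negative score, item): higher scores pop first

    def enqueue(self, item, score):
        heapq.heappush(self._heap, (-score, item))

    def drain(self, excess_capacity_mbps):
        """Multicast queued items, highest score first, while excess
        capacity remains above the configured threshold."""
        sent = []
        while self._heap and excess_capacity_mbps >= self.threshold:
            _, item = heapq.heappop(self._heap)
            sent.append(item)
            excess_capacity_mbps -= 5.0   # assume ~5 Mbps consumed per item
        return sent

q = DelaycastQueue(capacity_threshold_mbps=10.0)
q.enqueue("popular-show.mp4", score=0.9)   # high-value cacheable content
q.enqueue("niche-clip.mp4", score=0.2)
sent = q.drain(excess_capacity_mbps=20.0)
```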
Delivery of Multimedia Components According to User Activity
Systems, methods, apparatuses, and computer-readable media may be configured for establishing at least one session for delivery of multimedia. In an aspect, a first transmission of data fragments of a first component and a second transmission of data fragments of a second component may be transmitted and synchronized for presentation. If an inactivity event is detected, the session may be maintained while reducing bandwidth consumption.
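The inactivity behavior might be sketched minimally: on an inactivity event the session stays open but drops to a low-bandwidth mode, and activity restores the full rate. The bitrate values and attribute names are illustrative assumptions:

```python
class MediaSession:
    def __init__(self, full_bitrate_kbps=4000, idle_bitrate_kbps=200):
        self.full = full_bitrate_kbps
        self.idle = idle_bitrate_kbps
        self.active = True
        self.open = True

    def on_inactivity(self):
        self.active = False   # session maintained, bandwidth reduced

    def on_activity(self):
        self.active = True    # restore full-rate delivery

    @property
    def bitrate_kbps(self):
        return self.full if self.active else self.idle

session = MediaSession()
session.on_inactivity()
reduced = session.bitrate_kbps    # session still open at reduced rate
session.on_activity()
restored = session.bitrate_kbps
```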
Optimizing agent for identifying traffic associated with a resource for an optimized service flow
An optimizing agent of an access point device can identify traffic associated with a resource for an optimized service flow so as to provide a user an enhanced experience. The optimizing agent can identify the traffic for the optimized service flow based on one or more optimization settings. The optimization settings can include a policy that indicates a priority level, a bandwidth, a QoS, or any other prioritization setting. A user can manage a list of resources associated with the one or more optimization settings via a user interface hosted by either a network resource or a network device, such that traffic associated with the listed resources receives optimization.
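The agent's classification step might be sketched as a user-managed map from resources to optimization settings, consulted per packet. Field names and the packet representation are assumptions:

```python
class OptimizingAgent:
    """Sketch of an access-point agent that tags traffic for an
    optimized service flow based on user-managed settings."""

    def __init__(self):
        self.settings = {}   # resource (e.g. hostname) -> policy dict

    def add_resource(self, resource, priority=0, bandwidth_mbps=None):
        """User-facing management of the optimized-resource list."""
        self.settings[resource] = {
            "priority": priority,
            "bandwidth_mbps": bandwidth_mbps,
        }

    def classify(self, packet):
        """Return the optimization policy for a packet's destination,
        or None if the destination is not on the user's list."""
        return self.settings.get(packet.get("host"))

agent = OptimizingAgent()
agent.add_resource("video.example.com", priority=7, bandwidth_mbps=25)

policy = agent.classify({"host": "video.example.com", "port": 443})
ignored = agent.classify({"host": "other.example.com", "port": 80})
```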
Configurable HTTP request throttling library
Disclosed herein are system, method, and computer program product embodiments for deploying a configurable throttling library in a cloud platform that throttles requests according to fully customizable parameters across each origin and resource. An administrator can harness the full customization provided by the throttling library to specify increment, decrement, delay, threshold, expiration, and rejection policies. These policies allow administrators to specify parameters guiding throttling on a per-user and a per-resource basis, thus providing significantly enhanced configuration capabilities to the administrator to tailor the throttling to the unique requirements of their applications and the usage thereof.
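A per-user, per-resource throttle with the named policy knobs might be sketched as a fixed-window counter. The policy names mirror the abstract; the windowing mechanics are an assumption, not the product's actual implementation:

```python
import time

class Throttle:
    """Sketch of a configurable request throttle keyed per (user, resource),
    with increment, threshold, and expiration policies."""

    def __init__(self, threshold=5, increment=1, window_seconds=60):
        self.threshold = threshold
        self.increment = increment
        self.window = window_seconds
        self._counters = {}   # (user, resource) -> (window_start, count)

    def allow(self, user, resource, now=None):
        now = time.time() if now is None else now
        key = (user, resource)
        start, count = self._counters.get(key, (now, 0))
        if now - start >= self.window:   # expiration policy: reset window
            start, count = now, 0
        count += self.increment          # increment policy
        self._counters[key] = (start, count)
        return count <= self.threshold   # rejection policy

throttle = Throttle(threshold=2, window_seconds=60)
results = [throttle.allow("alice", "/api/report", now=100.0) for _ in range(3)]
```

A separate (user, resource) key gets its own counter, which is what makes the limits per-user and per-resource rather than global.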
MOBILITY NETWORK SLICE SELECTION
Core network slices that belong to a given operator community are efficiently tracked at the network control/user plane functions level, with rich data analytics in real-time based on their geographic instantiations. In one aspect, an enhanced vendor agnostic orchestration mechanism is utilized to connect a unified management layer with an integrated slice-components data analytics engine (SDAE), a slice performance engine (SPE), and a network slice selection function (NSSF) in a closed-loop feedback system with the serving network functions of one or more core network slices. The tight-knit orchestration mechanism provides economies of scale to mobile carriers in optimal deployment and utilization of their critical core network resources while serving their customers with superior quality.
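The closed loop might be sketched with three stand-in components: the analytics engine reports slice load, the performance engine scores slices, and the NSSF selects the best slice. The component names follow the abstract; the scoring rule is a placeholder assumption:

```python
class SliceAnalytics:
    """SDAE stand-in: reports per-slice utilization (0..1)."""
    def __init__(self, load):
        self.load = load   # slice name -> utilization

    def report(self):
        return dict(self.load)

class PerformanceEngine:
    """SPE stand-in: scores slices from analytics metrics."""
    def score(self, metrics):
        # Placeholder rule: lower utilization -> higher score.
        return {name: 1.0 - util for name, util in metrics.items()}

class Nssf:
    """NSSF stand-in: selects the highest-scoring slice."""
    def select(self, scores):
        return max(scores, key=scores.get)

sdae = SliceAnalytics({"slice-embb": 0.8, "slice-urllc": 0.3})
spe = PerformanceEngine()
nssf = Nssf()
chosen = nssf.select(spe.score(sdae.report()))
```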
Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
Some embodiments provide policy-driven methods for deploying edge forwarding elements in a public or private software-defined datacenter (SDDC) for tenants or applications. For instance, the method of some embodiments allows administrators to create different traffic groups for different applications and/or tenants, deploys edge forwarding elements for the different traffic groups, and configures forwarding elements in the SDDC to direct data message flows of the applications and/or tenants through the edge forwarding elements deployed for them. The policy-driven method of some embodiments also dynamically deploys edge forwarding elements in the SDDC for applications and/or tenants after detecting the need for the edge forwarding elements based on monitored traffic flow conditions.
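The policy-driven flow might be sketched as: define a traffic group, deploy a dedicated edge gateway for it, and steer matching flows through that gateway. The gateway naming and route table below are illustrative assumptions:

```python
class Datacenter:
    """Sketch of traffic-group-driven edge gateway deployment."""

    def __init__(self):
        self.gateways = {}   # traffic group -> gateway name
        self.routes = {}     # address prefix -> gateway name

    def create_traffic_group(self, group, prefixes):
        """Deploy a dedicated edge gateway for the group and configure
        forwarding so the group's prefixes are steered through it."""
        gateway = f"edge-gw-{group}"
        self.gateways[group] = gateway
        for prefix in prefixes:
            self.routes[prefix] = gateway
        return gateway

    def gateway_for(self, prefix):
        """Flows outside any traffic group use the shared default gateway."""
        return self.routes.get(prefix, "default-gw")

sddc = Datacenter()
sddc.create_traffic_group("tenant-a", ["10.1.0.0/16"])

dedicated = sddc.gateway_for("10.1.0.0/16")   # steered to tenant-a's gateway
shared = sddc.gateway_for("10.9.0.0/16")      # no group: default gateway
```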