Patent classifications
H04L47/801
APPARATUS, METHOD AND COMPUTER PROGRAM
An apparatus (113) comprising means for performing: receiving one or more forwarding tables from a centralized network configuration entity (101) of a time sensitive network, wherein the forwarding tables comprise entry information; and determining, based at least in part on the one or more forwarding tables, rules for mapping at least one uplink data stream of the time sensitive network to at least one of: a protocol data unit session (135, 137) and a quality of service flow (129, 131, 133).
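The mapping step described above can be sketched in code. This is a hedged illustration only: the forwarding-table field names (`stream_id`, `egress_port`, `traffic_class`) and the selection logic are assumptions for demonstration, not the patent's actual data model.

```python
# Hypothetical sketch: derive stream-to-PDU-session / QoS-flow mapping
# rules from forwarding-table entries received from a TSN centralized
# network configuration entity. All field names are illustrative
# assumptions, not the patent's actual data model.

def derive_mapping_rules(forwarding_tables):
    """Map each uplink TSN stream to a PDU session and a QoS flow."""
    rules = {}
    for table in forwarding_tables:
        for entry in table["entries"]:
            stream_id = entry["stream_id"]
            rules[stream_id] = {
                # Toy selection: alternate PDU sessions by egress port,
                # and reuse the entry's traffic class as the QoS flow.
                "pdu_session": entry["egress_port"] % 2,
                "qos_flow": entry["traffic_class"],
            }
    return rules

tables = [{"entries": [
    {"stream_id": "s1", "egress_port": 3, "traffic_class": 1},
    {"stream_id": "s2", "egress_port": 4, "traffic_class": 2},
]}]
rules = derive_mapping_rules(tables)
print(rules["s1"])  # {'pdu_session': 1, 'qos_flow': 1}
```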
TRAFFIC ESTIMATIONS FOR BACKBONE NETWORKS
Traffic flow across a backbone network can be determined even though flow data may not be available from all network devices. Flow data can be observed using certain types of backbone devices, such as aggregation and transit devices. An algorithm can be applied to determine which data to utilize for flow analysis, where this algorithm can be based at least in part upon rules to prevent duplicate accounting of traffic observed by multiple devices in the backbone network. Such an algorithm can use information such as source address, destination address, and region information to determine which flow data to utilize. In some embodiments, address mapping may be used to attribute this traffic to various services or entities. The data can then be analyzed to provide information about the flow of traffic across the backbone network, which can be useful for purposes such as network optimization and usage allocation.
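The deduplication idea can be illustrated with a small sketch: when several backbone devices report the same flow, keep only one record per flow key. The flow key of (source, destination, region) and the "prefer aggregation over transit" tie-break are assumptions chosen for illustration, not the patent's exact rules.

```python
# Illustrative sketch of duplicate-accounting prevention: when several
# backbone devices observe the same flow, keep exactly one record per
# (src, dst, region) flow key. The tie-break preferring aggregation
# devices over transit devices is an assumption for this example.

def select_flow_records(records):
    """Keep one record per (src, dst, region) flow key."""
    preference = {"aggregation": 0, "transit": 1}
    selected = {}
    for rec in records:
        key = (rec["src"], rec["dst"], rec["region"])
        best = selected.get(key)
        if best is None or preference[rec["device_type"]] < preference[best["device_type"]]:
            selected[key] = rec
    return list(selected.values())

records = [
    {"src": "10.0.0.1", "dst": "10.1.0.1", "region": "us-east",
     "device_type": "transit", "bytes": 500},
    {"src": "10.0.0.1", "dst": "10.1.0.1", "region": "us-east",
     "device_type": "aggregation", "bytes": 500},
]
kept = select_flow_records(records)
print(len(kept))  # 1 -- the duplicate observation is not double-counted
```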
CLUSTER CAPACITY MANAGEMENT FOR HYPER CONVERGED INFRASTRUCTURE UPDATES
Disclosed are various implementations of cluster capacity management for infrastructure updates. In some examples, hosts of a cluster can be scheduled for an update. A datacenter-level workload can invoke an enter-cluster-maintenance-mode component of a datacenter-level resource scheduler by identifying a specified cluster of a datacenter. The datacenter-level workload can receive an add-host command to add a host for the host-level update. Once the host-level update is performed on hosts of the specified cluster, the datacenter-level workload invokes an exit-cluster-maintenance-mode component and implements a cluster scaling decision.
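The workflow above can be sketched as follows. This is a hypothetical model, not the disclosed implementation: the class and method names, the spare-host handling, and the scaling decision (dropping the temporary host after the update) are all assumptions for illustration.

```python
# Hypothetical sketch of the update workflow: enter cluster maintenance
# mode, add a spare host so capacity is preserved during host-level
# updates, update the hosts, exit maintenance mode, and apply a scaling
# decision. All names here are assumed, not from the disclosure.

class ResourceScheduler:
    def __init__(self):
        self.maintenance = set()

    def enter_cluster_maintenance_mode(self, cluster):
        self.maintenance.add(cluster)

    def exit_cluster_maintenance_mode(self, cluster):
        self.maintenance.discard(cluster)

def update_cluster(scheduler, cluster, hosts, spare_host=None):
    scheduler.enter_cluster_maintenance_mode(cluster)
    if spare_host:                                # "add host" command
        hosts = hosts + [spare_host]
    updated = [h + ":updated" for h in hosts]     # host-level update
    scheduler.exit_cluster_maintenance_mode(cluster)
    if spare_host:                                # scaling decision:
        updated.remove(spare_host + ":updated")   # release the spare
    return updated

sched = ResourceScheduler()
result = update_cluster(sched, "cluster-a", ["h1", "h2"], spare_host="h3")
print(result)  # ['h1:updated', 'h2:updated']
```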
Systems and methods for inserting supplemental content into a quadrature amplitude modulation signal using existing bandwidth
Systems and methods are described herein for inserting supplemental content into a QAM signal. A unicast QAM signal between a server and a client device for delivery of content is established. The QAM signal has a particular frequency corresponding to the channel on which it will be transmitted to the client device. A request or other signal may be received from the client device asking that supplemental content be inserted into the QAM signal. A portion of the bandwidth of the QAM signal is allocated for the supplemental content. The supplemental content is transcoded into a supplemental QAM signal in the particular frequency. The supplemental content may be packetized content such as internet protocol-based content and may be retrieved from a server or database. The supplemental QAM signal is then inserted into the unicast QAM signal using the allocated portion of the bandwidth.
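The bandwidth-allocation step can be illustrated with simple arithmetic. This sketch is not the patent's signal processing; the channel capacity figure (roughly that of a 256-QAM cable channel) and the bitrate numbers are assumptions used only to show the capacity check.

```python
# Illustrative sketch (not the patent's signal processing): allocate a
# portion of a unicast QAM channel's bandwidth to a supplemental stream
# and verify the primary stream still fits. All numbers are assumptions.

def allocate_supplemental(channel_mbps, primary_mbps, supplemental_mbps):
    """Return the bandwidth split, or None if both streams cannot fit."""
    if primary_mbps + supplemental_mbps > channel_mbps:
        return None
    return {"primary": primary_mbps,
            "supplemental": supplemental_mbps,
            "spare": channel_mbps - primary_mbps - supplemental_mbps}

# A ~38.8 Mbps channel carrying a 25 Mbps unicast stream can absorb a
# 6 Mbps supplemental stream within its spare bandwidth.
split = allocate_supplemental(38.8, 25.0, 6.0)
print(split)
```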
Time sensitive network bridge configuration
A session management function (SMF) receives, from an access and mobility management function (AMF), a request for a time sensitive network (TSN) bridge. The SMF sends, to a user plane function (UPF) that supports TSN functionality, a message comprising configuration parameters of the TSN bridge. The configuration parameters comprise an identifier of the TSN bridge. The configuration parameters comprise an identifier of a port associated with TSN packet transmission.
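The signalling step above can be sketched minimally. The message structure and class names below are assumptions for illustration; the abstract only specifies that the configuration parameters include a TSN bridge identifier and a port identifier.

```python
# Minimal sketch of the SMF-to-UPF configuration step, with an assumed
# message structure: the SMF builds a TSN bridge configuration message
# and the UPF records the bridge and port identifiers it carries.

def build_bridge_config(bridge_id, port_id):
    """SMF side: assemble the TSN bridge configuration parameters."""
    return {"bridge_id": bridge_id, "port_id": port_id}

class UserPlaneFunction:
    """UPF side: stores received TSN bridge configurations."""
    def __init__(self):
        self.bridges = {}

    def receive_config(self, msg):
        self.bridges[msg["bridge_id"]] = msg["port_id"]

upf = UserPlaneFunction()
upf.receive_config(build_bridge_config("bridge-7", "port-2"))
print(upf.bridges)  # {'bridge-7': 'port-2'}
```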
MOBILITY NETWORK SLICE SELECTION
Core network slices that belong to a given operator community are efficiently tracked at the network control/user plane functions level, with rich data analytics in real-time based on their geographic instantiations. In one aspect, an enhanced vendor agnostic orchestration mechanism is utilized to connect a unified management layer with an integrated slice-components data analytics engine (SDAE), a slice performance engine (SPE), and a network slice selection function (NSSF) in a closed-loop feedback system with the serving network functions of one or more core network slices. The tight-knit orchestration mechanism provides economies of scale to mobile carriers in optimal deployment and utilization of their critical core network resources while serving their customers with superior quality.
Apparatus, system, and method for multi-bitrate content streaming
An apparatus for multi-bitrate content streaming includes a receiving module configured to capture media content, a streamlet module configured to segment the media content and generate a plurality of streamlets, and an encoding module configured to generate a set of streamlets. The system includes the apparatus, wherein the set of streamlets comprises a plurality of streamlets having identical time indices and durations, and each streamlet of the set of streamlets having a unique bitrate, and wherein the encoding module comprises a master module configured to assign an encoding job to one of a plurality of host computing modules in response to an encoding job completion bid. A method includes receiving media content, segmenting the media content and generating a plurality of streamlets, and generating a set of streamlets.
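The streamlet-set structure can be sketched directly: segment the content into fixed-duration streamlets, then emit one set per segment in which every streamlet shares the same time index and duration but carries a unique bitrate. The field names and segment length are assumptions for illustration.

```python
# Hedged sketch of streamlet generation: fixed-duration segments, and
# per segment a set of streamlets with identical time index and duration
# but a unique bitrate each. Field names and numbers are assumptions.

def make_streamlet_sets(duration_s, segment_s, bitrates_kbps):
    sets = []
    for index, start in enumerate(range(0, duration_s, segment_s)):
        sets.append([{"time_index": index,
                      "start": start,
                      # The final segment may be shorter than segment_s.
                      "duration": min(segment_s, duration_s - start),
                      "bitrate_kbps": rate}
                     for rate in bitrates_kbps])
    return sets

sets = make_streamlet_sets(duration_s=10, segment_s=4,
                           bitrates_kbps=[300, 800, 1500])
print(len(sets), len(sets[0]))  # 3 segments, 3 bitrates per set
print(sets[2][0]["duration"])   # 2 -- the last segment is shorter
```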
MANAGING BUFFERS FOR RATE PACING
A method, decoder and server for managing buffers for rate pacing. The decoder includes a memory, a transceiver configured to transmit and receive a signal, and processing circuitry operably connected to the memory and the transceiver. The processing circuitry receives, from the server, a removal rate message indicating a drain rate of a pacing buffer of the decoder. The processing circuitry also provides packets from the pacing buffer to a decoding buffer of the decoder according to the drain rate.
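The drain behavior can be sketched with a tick-based model: each tick, the decoder moves up to `drain_rate` packets from the pacing buffer into the decoding buffer. The tick abstraction and a packets-per-tick rate unit are assumptions for illustration; the abstract does not specify how the drain rate is expressed.

```python
# Illustrative sketch of rate pacing: the decoder drains packets from a
# pacing buffer into a decoding buffer at the server-signalled drain
# rate. The tick-based model and rate unit are assumptions.

from collections import deque

def drain(pacing_buffer, decoding_buffer, drain_rate, ticks):
    """Move up to `drain_rate` packets per tick for `ticks` ticks."""
    for _ in range(ticks):
        for _ in range(min(drain_rate, len(pacing_buffer))):
            decoding_buffer.append(pacing_buffer.popleft())

pacing = deque(f"pkt{i}" for i in range(10))
decoding = []
drain(pacing, decoding, drain_rate=3, ticks=2)  # rate from server message
print(len(decoding), len(pacing))  # 6 4
```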
PREDICTIVE NETWORK CAPACITY SCALING BASED ON CUSTOMER INTEREST
In one example, the present disclosure describes a device, computer-readable medium, and method for scaling network capacity predictively, based on customer interest. For instance, in one example, a method includes predicting an interest of a first customer in data content that will be available for consumption over a data network at a time in the future, wherein the predicting is based on customer data including at least a search pattern associated with the first customer, flagging the data content when the predicting indicates at least a threshold degree of likelihood that the first customer will be interested in the data content, and scaling an allocation of resources of the data network to the first customer, based on the flagging.
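The predict-flag-scale pipeline can be sketched end to end. The keyword-overlap scoring, the 0.5 threshold, and the 2x allocation boost are all assumptions chosen for illustration; the disclosure only requires that prediction be based on customer data including a search pattern.

```python
# Hedged sketch of the predict -> flag -> scale pipeline: score a
# customer's interest from their search pattern, flag content above a
# threshold, and scale that customer's allocation. Scoring, threshold,
# and boost factor are all illustrative assumptions.

def predict_interest(search_terms, content_keywords):
    hits = sum(1 for t in search_terms if t in content_keywords)
    return hits / len(content_keywords) if content_keywords else 0.0

def plan_allocation(base_mbps, search_terms, content_keywords,
                    threshold=0.5, boost=2.0):
    score = predict_interest(search_terms, content_keywords)
    flagged = score >= threshold
    return {"flagged": flagged,
            "allocation_mbps": base_mbps * boost if flagged else base_mbps}

plan = plan_allocation(10.0, ["finals", "basketball", "stream"],
                       {"basketball", "finals"})
print(plan)  # {'flagged': True, 'allocation_mbps': 20.0}
```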
System and method for characterizing network traffic
A system monitors first traffic and identifies associations between applications that generated or received the traffic and parameters such as domain names, a remote host, and a local host referenced in the traffic. Subsequent traffic is monitored and determined to be generated by or addressed to an application according to such parameters in the subsequent traffic, such as remote host, local host, domain name, or port number. The subsequent traffic is associated with an application without requiring deep packet inspection (DPI). In particular, an application may be associated with a session based on evaluation of a single packet of the session.