H04L12/841

Jitter Buffer Apparatus and Method

Disclosed are a method and apparatus operative to process packets of media received from a network, including a receiver unit, a jitter buffer data structure, a playback head defining a point in the jitter buffer data structure from which the ordered queue of packets is to be played back, and at least one prototype head. Each prototype head has a predetermined latency assigned to it and defines a point in the jitter buffer data structure from which the ordered queue of packets would be played back at that latency. A processor is operable to determine a measure of conversational quality associated with the ordered queue of packets played back by each prototype head. Also described are a head selector, operable to compare the measures of conversational quality associated with the ordered queue of packets played back by each prototype head and to select the prototype head with the highest measure of conversational quality, and a playback unit coupled to the playback head.
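The selection step described above can be illustrated with a minimal sketch. The class name, the toy quality formula, and all constants below are illustrative assumptions, not taken from the disclosure; a real system would use a standardized conversational-quality model rather than this linear stand-in.

```python
from dataclasses import dataclass

@dataclass
class PrototypeHead:
    latency_ms: int    # predetermined latency assigned to this prototype head
    loss_rate: float   # fraction of packets arriving too late at this latency

def conversational_quality(head: PrototypeHead) -> float:
    # Toy metric: penalize both added latency and late-packet loss.
    # Purely illustrative; not the metric from the disclosure.
    return 100.0 - 0.1 * head.latency_ms - 200.0 * head.loss_rate

def select_head(heads: list[PrototypeHead]) -> PrototypeHead:
    # The head selector compares the measures of conversational quality and
    # picks the prototype head with the highest one.
    return max(heads, key=conversational_quality)

heads = [PrototypeHead(20, 0.08), PrototypeHead(60, 0.01), PrototypeHead(120, 0.0)]
best = select_head(heads)
```

Under this toy metric, the 60 ms head wins: it trades a small amount of extra delay for a large reduction in late-packet loss, which is exactly the trade-off the prototype heads are meant to explore.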

ADAPTIVE GAIN REDUCTION FOR BACKGROUND CONNECTIONS

The technologies disclosed herein provide improvements to the Low Extra Delay Background Transport (LEDBAT) protocol. Some aspects of the present disclosure introduce an adaptive congestion window gain for background connections. In some configurations, a gain value influencing the transmission rate of a background connection is dynamically adjusted based on data indicating a round trip time (RTT). The RTT is the sum of the time in which data is communicated to a remote device and the time in which acknowledgement data is returned from the remote device. In some configurations, the gain is decreased when the RTT is below a threshold and increased when the RTT is above the threshold. Among other features, the present disclosure also provides techniques involving a modified slow-start, multiplicative decrease, and periodic slowdowns. The features disclosed herein mitigate existing issues such as latency drift, inter-LEDBAT fairness, and unnecessary slowdowns.
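The threshold-based gain adjustment can be sketched as follows. The threshold, step size, and clamping bounds are illustrative assumptions; the disclosure does not specify these values.

```python
def adjust_gain(gain: float, rtt_ms: float, threshold_ms: float = 100.0,
                step: float = 0.1, min_gain: float = 0.1,
                max_gain: float = 1.0) -> float:
    """Adapt the congestion-window gain of a background connection:
    decrease it when the measured RTT is below the threshold, increase it
    when the RTT is above, clamped to [min_gain, max_gain].
    All parameter values here are illustrative assumptions."""
    if rtt_ms < threshold_ms:
        gain = max(min_gain, gain - step)   # network is uncongested: back off
    elif rtt_ms > threshold_ms:
        gain = min(max_gain, gain + step)   # queues are building: ramp up
    return gain
```

A caller would feed each new RTT sample through `adjust_gain` and scale its congestion-window growth by the returned value.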

DEVICE AND METHOD FOR TRANSMITTING PACKET IN WIRELESS COMMUNICATION SYSTEM

The present disclosure relates to a pre-5th-Generation (5G) or 5G communication system provided to support higher data rates beyond those of 4th-Generation (4G) communication systems such as Long Term Evolution (LTE). An operation method of a transmission device according to an embodiment includes determining the amount of packets transferred from an application layer to a buffer and determining the amount of packets transferred from the buffer to a transmission layer. The method also includes blocking or not blocking a packet transfer from the application layer to the buffer, based on the amount of packets transferred from the application layer to the buffer and the amount of packets transferred from the buffer to the transmission layer.

Speech transcoding in packet networks

Speech transcoding in packet networks may be useful when both the incoming and outgoing speech streams of the transcoding entity are packet based. This can apply to any transcoding entity having packet interfaces. A method can include omitting jitter buffering before decoding in a transcoder and omitting bad frame handling in the decoding stage of a transcoder. The method can also include freezing the decoder and the encoder when a packet is not received. The method can also include sending packet loss information from the decoder to the encoder as side information when the packet is not received. The method can further include setting the outgoing packet stream to permit detection of missing packets by a downstream decoder upon receiving a valid packet after the packet is not received.

METHOD AND APPARATUS FOR CONTROLLING NETWORK TRAFFIC
20170272365 · 2017-09-21 ·

A method for managing network traffic of a radio access network, the method comprising the steps of: identifying, by a processor of a baseband unit (BBU), at least one characteristic of data traffic received from at least one user equipment; and determining, by the processor, whether to process the data traffic locally at an edge node or at a remote service network, in response to the at least one characteristic of the data traffic received from the user equipment.
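The abstract does not say which characteristics drive the decision, so the sketch below assumes two hypothetical ones (a latency bound and a compute weight) purely for illustration.

```python
def choose_processing_site(traffic: dict) -> str:
    """Decide whether traffic is processed locally at the edge node or at
    the remote service network, based on characteristics of the traffic.
    The characteristic names ("max_latency_ms", "compute_units") are
    hypothetical, chosen only to make the decision concrete."""
    latency_sensitive = traffic.get("max_latency_ms", float("inf")) < 20
    heavy_compute = traffic.get("compute_units", 0) > 100
    if latency_sensitive:
        return "edge"      # latency-critical flows stay close to the user
    if heavy_compute:
        return "remote"    # heavy, delay-tolerant work goes to the core
    return "edge"
```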

Dynamic flowlet prioritization

In one embodiment, a next set of packets in a first flow may be identified. A counter may be incremented, where the counter indicates a first number of initial sets of packets in the first flow that have been identified. The identified next set of packets may be prioritized such that the first number of initial sets of packets in the first flow are prioritized and the sequential order of all packets in the first flow is maintained. The identifying, incrementing, and prioritizing may be repeated until no further sets of packets in the first flow remain to be identified or the first number of initial sets of packets is equal to a first predefined number.
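The loop above can be sketched as follows; the function and tag names are illustrative, and a flowlet is modeled simply as one element of an in-order sequence.

```python
def prioritize_initial_flowlets(flowlets: list, max_prioritized: int) -> list:
    """Tag the first `max_prioritized` flowlets (sets of packets) of a flow
    as high priority, then stop prioritizing. Flowlets are processed in
    arrival order, so the sequential order of all packets is preserved."""
    counter = 0                     # number of initial flowlets seen so far
    tagged = []
    for flowlet in flowlets:
        if counter < max_prioritized:
            counter += 1            # count another initial flowlet
            tagged.append(("high", flowlet))
        else:
            tagged.append(("normal", flowlet))
    return tagged
```

Because the tagging never reorders elements, a downstream scheduler can service the "high" flowlets sooner while still delivering every packet of the flow in sequence.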

Interference cognizant network scheduling

Systems and methods for interference cognizant network scheduling are provided. In certain embodiments, a method of scheduling communications in a network comprises identifying a bin of a global timeline for scheduling an unscheduled virtual link, wherein a bin is a segment of the timeline; identifying a pre-scheduled virtual link in the bin; and determining whether the pre-scheduled and unscheduled virtual links share a port. In certain embodiments, if the unscheduled and pre-scheduled virtual links do not share a port, transmission of the unscheduled virtual link is scheduled to overlap with the scheduled transmission of the pre-scheduled virtual link. If the unscheduled and pre-scheduled virtual links do share a port, a start time delay for the unscheduled virtual link is determined based on the port, and transmission of the unscheduled virtual link is scheduled in the bin based on the start time delay, overlapping part of the scheduled transmission of the pre-scheduled virtual link.
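A minimal sketch of that rule, assuming each scheduled link is represented by its set of ports and a start time, and that the per-port start-time delay is supplied as a single number. These representations are illustrative assumptions.

```python
def schedule_in_bin(bin_schedule: list, new_link: dict) -> int:
    """Choose a start time within a bin for an unscheduled virtual link.

    bin_schedule: list of {"ports": set, "start": int} for pre-scheduled links.
    new_link:     {"ports": set, "delay_per_conflict": int}; the delay value
                  stands in for the port-based start-time delay.
    Returns the start time chosen for new_link within the bin."""
    start = 0
    for sched in bin_schedule:
        if sched["ports"] & new_link["ports"]:
            # Shared port: apply the start-time delay so the transmissions
            # only partially overlap instead of colliding at the port.
            start = max(start, sched["start"] + new_link["delay_per_conflict"])
        # Disjoint ports: full overlap is allowed, start stays unchanged.
    return start
```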

AUTOMATED FLOW DEVOLVEMENT IN AN AGGREGATE FLOW ENVIRONMENT
20170264557 · 2017-09-14 ·

Mechanisms for devolving microflows from aggregate flows are disclosed. An ingress node receives a packet that matches an aggregate flow entry in a flow table. A determination that a devolve action is associated with the aggregate flow entry is made. Based on the determination that the devolve action is associated with the aggregate flow entry, a microflow flow entry is generated in the flow table to define a microflow. The microflow flow entry includes header information extracted from the packet. Microflow generation information that identifies the microflow is sent to a controller node. It is determined that the microflow has timed out based on an idle timeout period of time. In response to determining that the microflow has timed out, microflow termination information that includes path measurement metric information associated with the microflow is sent to the controller node.
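The devolve step at the ingress node can be sketched as below. The flow-table layout, field names, and prefix-string matching are illustrative assumptions standing in for real match fields; the timeout/metric reporting path is omitted.

```python
def handle_packet(flow_table: list, packet: dict, controller_log: list):
    """Match a packet against the flow table; if the matching aggregate
    entry carries a devolve action, generate a microflow entry from the
    packet's header and notify the controller node.
    Table layout and field names are illustrative assumptions."""
    for entry in flow_table:
        if packet["dst"].startswith(entry["match_prefix"]):
            if entry.get("action") == "devolve":
                micro = {"match_prefix": packet["dst"],   # header from packet
                         "action": "forward",
                         "idle_timeout_s": 10}            # illustrative timeout
                # Insert ahead of the aggregate entry so the more specific
                # microflow entry matches first.
                flow_table.insert(0, micro)
                controller_log.append(("microflow_created", packet["dst"]))
            return entry
    return None
```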

Content delivery system and content delivery method
09763133 · 2017-09-12 ·

A plurality of cache servers, connected to a packet forwarding apparatus that forwards packets transmitted and received between a storage apparatus holding content under management in store and a user terminal, temporarily hold at least part of the content under management in store. A controller decides an on-screen resolution at the terminal, based on information contained in a content request message from the terminal, and selects a cache server that holds content of the on-screen resolution in store. The controller instructs the selected cache server to deliver the content. The instructed cache server calculates a bit rate based on a signal received from the terminal, and reads out content that has the on-screen resolution and a bit rate not higher than the calculated bit rate. The content is stored in packets and transmitted, so that it is delivered without reducing the user's QoE.

Flow control scheme for parallel flows

A method includes a proxy device receiving, from a source device, a request to establish a flow to a destination device; generating, based on the request, a meta-packet indicating that the flow to the destination device is to be proxied; determining whether a pre-established flow exists connecting the proxy device to another proxy device that leads toward the destination device; sending the meta-packet on the pre-established flow when it is determined that the pre-established flow exists; and receiving, by the other proxy device, the meta-packet and establishing the flow to the destination device based on the meta-packet, where the proxy devices assign one or more of a source address, a source port, a destination address, or a destination port, associated with the source device and the destination device, to the pre-established flow.
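The meta-packet exchange can be sketched as follows. The message format, the `toward`/`queue` fields, and the endpoint tuples are all illustrative assumptions, not details from the disclosure.

```python
def proxy_request(pre_established: list, src: tuple, dst: tuple):
    """On a flow request from src to dst (each an (address, port) tuple),
    build a meta-packet indicating the flow is to be proxied and send it on
    a pre-established flow toward the destination, if one exists."""
    meta = {"type": "meta", "src": src, "dst": dst, "proxied": True}
    for flow in pre_established:
        if flow["toward"] == dst[0]:       # a pre-established flow exists
            flow["queue"].append(meta)     # send the meta-packet on it
            return meta
    return None                            # no pre-established flow found

def remote_proxy_receive(flow: dict) -> tuple:
    """The other proxy consumes the meta-packet, establishes the flow to the
    destination, and assigns the original endpoints' address/port tuples to
    the pre-established flow."""
    meta = flow["queue"].pop(0)
    flow["assigned"] = (meta["src"], meta["dst"])
    return flow["assigned"]
```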