H04L47/266

Dynamic network receiver-driven data scheduling over a datacenter network for managing endpoint resources and congestion mitigation

A network endpoint receiver controls packet flow from a transmitter. Packets are received via a network in packet traffic according to a push mode, where the transmitter controls pacing of transmitting the packets. Characteristics related to the packet traffic are monitored at the receiver. The monitored characteristics are compared to reception performance parameters, and based on the comparison, a decision is made to switch from the push mode to a pull mode for controlling the packet flow. The receiver transmits a pull mode request packet to the transmitter, where the pull mode request packet indicates a pacing of subsequent packets transmitted by the transmitter to the receiver in accordance with the pull mode. Pacing of further transmitted packets may be controlled by subsequent pull mode request packets sent over time to the transmitter by the receiver. Similarly, the receiver may control additional transmitters to transmit at equal or different rates.
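The push-to-pull switch described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class names, the use of sequence-number gaps as the monitored characteristic, and the loss-rate threshold are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    """Pull mode request packet: indicates the pacing of subsequent packets."""
    pacing_pps: int  # packets per second the receiver is willing to accept

class Receiver:
    """Starts in push mode; switches to pull mode when monitored traffic
    characteristics (here, a loss rate inferred from sequence gaps) exceed
    a reception performance parameter (the loss threshold)."""

    def __init__(self, loss_threshold=0.05, target_pps=1000):
        self.mode = "push"
        self.loss_threshold = loss_threshold
        self.target_pps = target_pps
        self.received = 0
        self.lost = 0
        self.expected_seq = 0

    def on_packet(self, seq):
        # Monitor characteristics related to the packet traffic.
        self.received += 1
        self.lost += max(0, seq - self.expected_seq)
        self.expected_seq = seq + 1
        loss_rate = self.lost / (self.lost + self.received)
        # Compare to reception performance parameters; decide to switch.
        if self.mode == "push" and loss_rate > self.loss_threshold:
            self.mode = "pull"
            return PullRequest(pacing_pps=self.target_pps)
        return None
```

A real receiver would keep sending further pull mode request packets over time to adjust the transmitter's pacing; the sketch only shows the initial mode switch.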

COOPERATIVE DISTRIBUTED SCHEDULING FOR DEVICE-TO-DEVICE (D2D) COMMUNICATION
20220061064 · 2022-02-24 ·

In a communication system having a plurality of user equipment (UE) devices that are operating in a contention based mode for device-to-device (D2D) communication, each UE device transmits a preferred transmission indicator when a condition for preferred transmission is met at the UE device. If a UE device receives a preferred transmission indicator, the UE device delays transmission of a D2D scheduling assignment (SA) to contend for communication resources for D2D communication. The length of the delay can be based on a number of preferred transmission indicators that are received. The preferred transmission indicator is based on a buffer size in one example.
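The two decision rules in this abstract, when to emit the indicator and how long to back off, reduce to simple functions. The buffer-size threshold and the one-slot-per-indicator delay below are illustrative assumptions, not values from the patent.

```python
def should_send_indicator(buffer_bytes, threshold=8192):
    """A UE transmits a preferred transmission indicator when a condition
    is met; the abstract's example condition is buffer size (threshold
    value here is an assumption)."""
    return buffer_bytes > threshold

def sa_delay(n_indicators, slot_ms=1):
    """Delay of the D2D scheduling assignment (SA), based on the number
    of preferred transmission indicators received from other UEs."""
    return n_indicators * slot_ms
```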

System and method for ordering of data transferred over multiple channels

A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
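The destination's re-ordering step can be sketched with a heap keyed on sequence numbers: packets arriving out of order over the multiple channels are held back until the next expected sequence number appears. The class and method names are hypothetical.

```python
import heapq

class Reorderer:
    """Re-orders packets received over multiple channels into proper
    sequence using their sequence numbers."""

    def __init__(self):
        self.next_seq = 0
        self.heap = []  # min-heap of (seq, payload) awaiting delivery

    def receive(self, seq, payload):
        """Accept one packet; return the (possibly empty) run of packets
        now deliverable in order."""
        heapq.heappush(self.heap, (seq, payload))
        deliverable = []
        while self.heap and self.heap[0][0] == self.next_seq:
            _, p = heapq.heappop(self.heap)
            deliverable.append(p)
            self.next_seq += 1
        return deliverable
```

In the patented system the destination would additionally send reply packets back over the response channels to flow-control the source; that path is omitted here.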

Systems and methods for implementing bearer call-back services

The present disclosure is directed at systems, methods and media for providing bearer call-back services for bearers that have been rejected or pre-empted by a network apparatus in a core network. In some embodiments, if a network apparatus enters a state in which it becomes necessary to reject or pre-empt a bearer associated with a user equipment (UE) (e.g., due to load conditions in a radio access network, the core network, or an application server), the network apparatus can send to the UE a call-back message when the network apparatus exits the state that precipitated the bearer rejection or pre-emption. By sending a call-back message, the network apparatus can save the UE from multiple unsuccessful attempts to establish a bearer, or from waiting an unnecessarily long time before establishing a bearer.
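The call-back logic amounts to remembering which UEs were turned away while the apparatus was in the congested state, then notifying them when it exits that state. A minimal sketch, with invented names and string return values standing in for signaling messages:

```python
class NetworkApparatus:
    """Rejects bearer requests while overloaded and calls back the
    affected UEs once the overload state is exited."""

    def __init__(self):
        self.overloaded = False
        self.rejected_ues = []
        self.callbacks_sent = []

    def request_bearer(self, ue_id):
        if self.overloaded:
            self.rejected_ues.append(ue_id)  # remember whom to call back
            return "rejected"
        return "established"

    def set_overloaded(self, flag):
        was = self.overloaded
        self.overloaded = flag
        if was and not flag:
            # Exiting the state that precipitated the rejections:
            # send call-back messages so UEs can retry immediately.
            self.callbacks_sent.extend(self.rejected_ues)
            self.rejected_ues.clear()
```

This saves each UE from repeated failed attempts or an unnecessarily long back-off, exactly the benefit the abstract claims.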

Technologies for monitoring networked computing devices using deadman triggers

Technologies for monitoring networked computing devices using deadman triggers include a network interface controller (NIC) configured to collect a pin state of at least one deadman trigger pin associated with a deadman trigger, compare the collected pin state against an expected pin state, and determine, as a result of the comparison, whether a triggering event associated with the deadman trigger has been detected. The NIC is further configured to generate, in response to a determination that the triggering event has been detected, a status packet usable to identify the detected triggering event associated with the deadman trigger. Additionally, the NIC is configured to issue a stop transmission command to each of a plurality of egress packet transmission queues of the NIC and to insert the generated status packet into at least one of the plurality of egress packet transmission queues for transmission to a target computing device. Other embodiments are described herein.
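The poll-compare-halt-notify sequence can be sketched as below. The class name, the single-pin comparison, and the dict standing in for a status packet are illustrative assumptions.

```python
class DeadmanNic:
    """Sketch of a NIC that compares a deadman trigger pin state against
    an expected state and, on mismatch, halts egress queues and enqueues
    a status packet for a target computing device."""

    def __init__(self, expected_pin_state, n_queues=4):
        self.expected = expected_pin_state
        self.queues = [[] for _ in range(n_queues)]  # egress tx queues
        self.stopped = False

    def poll(self, pin_state):
        if pin_state != self.expected:
            # Triggering event detected: stop all egress transmission...
            self.stopped = True
            # ...and insert a status packet identifying the event at the
            # head of one queue so it is transmitted despite the stop.
            status = {"event": "deadman_trigger", "pin_state": pin_state}
            self.queues[0].insert(0, status)
            return status
        return None
```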

COOPERATIVE DISTRIBUTED SCHEDULING FOR DEVICE-TO-DEVICE (D2D) COMMUNICATION
20170251487 · 2017-08-31 ·

In a communication system having a plurality of user equipment (UE) devices that are operating in a contention based mode for device-to-device (D2D) communication, each UE device transmits a preferred transmission indicator when a condition for preferred transmission is met at the UE device. If a UE device receives a preferred transmission indicator, the UE device delays transmission of a D2D scheduling assignment (SA) to contend for communication resources for D2D communication. The length of the delay can be based on a number of preferred transmission indicators that are received. The preferred transmission indicator is based on a buffer size in one example.

Link status buffer flow control management

Generally, this disclosure describes techniques for buffer management based on link status. A host platform may include a Baseboard Management Controller (BMC) and a network controller that includes a buffer used by the BMC. When the network controller is in a lower power link state, the BMC may attempt to send data to the link partner, which causes the network controller to transition out of the low power state. However, this transition may take longer than the buffer can absorb the incoming flow from the BMC. Accordingly, to avoid the need for larger buffer space, a buffer manager is used to provide flow control management of the buffer based on link status.
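The core idea, backpressure the BMC while the link wakes up instead of enlarging the buffer, can be sketched as follows. The XON/XOFF-style boolean return and the capacity value are assumptions for illustration.

```python
class BufferManager:
    """Flow-controls BMC traffic into a fixed-size NIC buffer based on
    link status, rather than relying on a larger buffer."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffer = []
        self.link_up = False

    def bmc_send(self, frame):
        """True = frame accepted; False = XOFF-style backpressure telling
        the BMC to hold off until the link finishes its transition."""
        if self.link_up:
            return True  # link active: forwarded directly
        if len(self.buffer) < self.capacity:
            self.buffer.append(frame)  # buffered during the wake-up
            return True
        return False  # buffer full: flow-control the BMC

    def link_ready(self):
        """Link exits the low power state: drain what was buffered."""
        self.link_up = True
        drained, self.buffer = self.buffer, []
        return drained
```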

Sending traffic policies

Sending traffic policies includes identifying, with a first router in a network layer of an interconnected network, a location of an issue source, and sending, to a second router located along a route toward the issue source, a traffic policy that addresses an issue caused by the issue source.
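Pushing the policy one hop along the route toward the issue source can be sketched as a small function. The path list, the policy string, and the per-router policy tables are all hypothetical stand-ins for the routing and policy machinery.

```python
def send_policy(path_to_source, policy, policy_tables):
    """path_to_source: routers from the identifying (first) router to the
    issue source, e.g. ["r1", "r2", "r3", "src"]. The first router sends
    the policy to the second router along that route and returns its id."""
    if len(path_to_source) < 2:
        return None  # no downstream router to send to
    second_router = path_to_source[1]
    policy_tables.setdefault(second_router, []).append(policy)
    return second_router
```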

Content delivery system and content delivery method
09763133 · 2017-09-12 ·

A plurality of cache servers, connected to a packet forwarding apparatus that forwards packets transmitted between a user terminal and a storage apparatus holding content under management, each temporarily hold at least part of the content under management. A controller decides an on-screen resolution at the terminal, based on information contained in a content request message from the terminal, and selects a cache server that holds content of that on-screen resolution. The controller instructs the selected cache server to deliver the content. The instructed cache server calculates a bit rate based on a signal received from the terminal, and reads out content that has the decided on-screen resolution and a bit rate not higher than the calculated bit rate. The content is packetized and transmitted, so that it is delivered without reducing the user's QoE.
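The two selection steps, controller picks a cache server by resolution, cache server picks a content variant by bit rate, can be sketched as below. The data shapes (dicts of resolutions, lists of kbps variants) are assumptions made for the example.

```python
def select_server(servers, resolution):
    """Controller step: pick a cache server that holds the content at
    the on-screen resolution decided from the content request message."""
    for server in servers:
        if resolution in server["resolutions"]:
            return server
    return None

def pick_variant(variants_kbps, measured_kbps):
    """Cache server step: highest available bit rate not exceeding the
    bit rate calculated from the terminal's signal (fall back to the
    lowest variant if none fits)."""
    fitting = [v for v in variants_kbps if v <= measured_kbps]
    return max(fitting) if fitting else min(variants_kbps)
```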

METHOD AND APPARATUS FOR PACKET DELAY MANAGEMENT IN eCPRI NETWORK
20220046465 · 2022-02-10 ·

A method for data packet delay management in a communications network connecting a sender node to a receiver node over an eCPRI interface. The method, performed at the receiver node receiving data packets from the sender node, comprises monitoring (102) a buffer level at a buffer receiving data from the sender node. When the buffer level reaches a threshold (104-Yes), the method further comprises transmitting (108) to the sender node over the eCPRI interface at least one message comprising information indicative of a length of time of Medium Access Control (MAC) flow control, and transmitting (114) at least one MAC flow control frame to the sender node after a delay (110, 112).
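The threshold check and the two-stage notification (eCPRI message first, MAC flow control frame after a delay) can be sketched as a function returning the sequence of actions the receiver node would take. The action tuples and the pause-duration parameter are illustrative assumptions, not the patented message formats.

```python
def on_buffer_sample(level, threshold, pause_duration):
    """Monitor the buffer level; when it reaches the threshold, first
    advertise the length of time of MAC flow control over the eCPRI
    interface, then (after a delay) send the MAC flow control frame."""
    if level < threshold:
        return []  # below threshold: nothing to transmit
    return [
        ("ecpri_msg", {"flow_control_duration": pause_duration}),
        ("delay", None),
        ("mac_flow_control_frame", pause_duration),
    ]
```

Advertising the duration before the pause frame itself lets the sender node anticipate the interruption rather than discovering it only when the flow control frame arrives.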