Patent classifications
H04L47/30
System and method for providing bandwidth congestion control in a private fabric in a high performance computing environment
Systems and methods for providing bandwidth congestion control in a private fabric in a high performance computing environment. An exemplary method can provide, at one or more microprocessors, a first subnet comprising a plurality of switches and a plurality of host channel adapters, wherein each host channel adapter comprises at least one host channel adapter port, and wherein the plurality of host channel adapters are interconnected via the plurality of switches, together with a plurality of end nodes. The method can provide, at a host channel adapter, an end node ingress bandwidth quota associated with an end node attached to the host channel adapter. The method can receive, at the end node of the host channel adapter, ingress bandwidth exceeding the ingress bandwidth quota of the end node.
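The quota mechanism above can be sketched as simple per-end-node ingress accounting. This is an illustrative assumption, not the patent's implementation: the class name, the bytes-per-second quota, and the one-second accounting window are all hypothetical.

```python
# Hypothetical sketch of per-end-node ingress bandwidth accounting, assuming
# a fixed quota in bytes/sec and a per-second byte counter. Exceeding the
# quota is the signal on which congestion control would act.

class EndNode:
    def __init__(self, ingress_quota_bps: float):
        self.ingress_quota_bps = ingress_quota_bps  # quota in bytes per second
        self.bytes_this_second = 0.0

    def on_packet(self, size_bytes: int) -> bool:
        """Account an arriving packet; return True once the ingress
        bandwidth received this second exceeds the end node's quota."""
        self.bytes_this_second += size_bytes
        return self.bytes_this_second > self.ingress_quota_bps

    def tick(self) -> None:
        """Called once per second to start a fresh accounting window."""
        self.bytes_this_second = 0.0
```

For example, with a 1000 B/s quota, a second 600-byte packet within the same window would report the quota as exceeded.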
Selectively bypassing a routing queue in a routing device in a fifth generation (5G) or other next generation network
The technologies described herein are generally directed toward shedding processing loads associated with route updates. According to an embodiment, a system can comprise a processor and a memory that enable operations including receiving a communication from a second routing device via a network. The operations can further comprise, in response to a queueing delay being determined to be less than a threshold, queueing, in the queue, the communication for a third routing device selected according to a first selection process as being on a route to a destination routing device for the communication. The operations can also comprise, in response to the queueing delay of the queue being determined to be equal to or above the threshold, transmitting the communication to a fourth routing device selected according to a second selection process different from the first selection process.
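The delay-threshold bypass can be illustrated in a few lines. Everything here is an assumption for illustration: the function signature, the string hop names, and the tuple return value are not from the patent.

```python
# Illustrative sketch of the bypass decision: below the delay threshold the
# communication is queued for the hop chosen by the first selection process;
# at or above it, the communication skips the queue and goes to an alternate
# hop chosen by a second selection process.
from collections import deque

def route(packet, queue: deque, queue_delay: float, threshold: float,
          primary_hop: str, alternate_hop: str):
    """Return (action, hop), where action is 'queued' or 'bypassed'."""
    if queue_delay < threshold:
        queue.append((primary_hop, packet))   # first selection process
        return ("queued", primary_hop)
    return ("bypassed", alternate_hop)        # second selection process
```

The key design point is that the bypass path avoids the queue entirely, so a congested routing queue cannot delay every communication passing through the device.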
Method of Managing Data Transmission for Ensuring Per-Flow Fair Bandwidth Sharing
A computer-implemented method includes receiving a data packet; identifying, from a list of virtual queues, the virtual queue to which the data packet pertains; and determining whether the identified virtual queue's size exceeds a threshold maximum size. When it does not, the identified virtual queue is increased based on the size of the data packet and the data packet is forwarded. The method further includes setting a virtual queue from the list of virtual queues as a target queue; determining a service capacity based on an update time interval; and increasing a credit allowance based on the service capacity. The target queue is then reduced by an amount based on the credit allowance, and the credit allowance is reduced by the same amount.
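The admit-or-drop check and the credit-based draining described above can be sketched as follows. This is a minimal reading of the abstract, with assumed names; the round-robin choice of target queue and the integer byte units are illustrative.

```python
# Minimal sketch of per-flow virtual-queue fair sharing: each flow has a
# virtual queue bounded by a threshold; a service step earns credit from the
# service rate and the update interval, then drains the next target queue.

class VirtualQueues:
    def __init__(self, max_size: int, service_rate: float):
        self.max_size = max_size          # threshold maximum size per queue
        self.service_rate = service_rate  # bytes drained per unit time
        self.queues: dict[str, int] = {}
        self.credit = 0.0
        self._order: list[str] = []
        self._next = 0

    def on_packet(self, flow: str, size: int) -> bool:
        """Admit (True) or drop (False) based on the flow's virtual queue."""
        q = self.queues.setdefault(flow, 0)
        if flow not in self._order:
            self._order.append(flow)
        if q > self.max_size:
            return False
        self.queues[flow] = q + size
        return True

    def service(self, interval: float) -> None:
        """Pick the next target queue round-robin and drain it by the
        credit allowance earned over the update interval."""
        if not self._order:
            return
        self.credit += self.service_rate * interval
        flow = self._order[self._next % len(self._order)]
        self._next += 1
        drained = min(self.queues[flow], int(self.credit))
        self.queues[flow] -= drained
        self.credit -= drained
```

Because only a flow whose own virtual queue is over the threshold sees drops, a heavy flow cannot push a light flow's packets out, which is what yields the per-flow fairness in the title.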
Control- and/or Monitoring-System for Industrial Ethernet Applications and a Respective Method of Control and Monitoring an Industrial Ethernet Device
A control and/or monitoring system is disclosed. In an embodiment, the system includes a host device on which an application for generating an Ethernet frame for deriving information about an industrial Ethernet network can be run, and an industrial Ethernet tunnel device adapted to communicate with the host device. The system is configured to inject, through the industrial Ethernet tunnel device, the Ethernet frame for deriving information about an industrial Ethernet network and/or about one or more of the industrial Ethernet devices, and to receive an answer in Ethernet frame format at the host device. An industrial Ethernet tunnel device and a method for control and/or monitoring of one or more devices in an industrial Ethernet network are also disclosed.
SPLIT DATA THRESHOLD ADJUSTMENTS IN A WIRELESS WIDE AREA NETWORK (WWAN)
Certain aspects of the present disclosure provide techniques for configuring a device-configured split data threshold for utilizing split data radio bearers (DRBs). A method that may be performed at a user equipment (UE) includes setting a device-configured split data threshold to a value that is less than or equal to a network-configured split data threshold value in response to the network-configured split data threshold value being greater than a first buffer threshold value associated with a data buffer of the UE, monitoring an amount of data in the data buffer, and triggering a first scheduling request (SR) associated with a first communication link and a second SR associated with a second communication link in response to the amount of data in the data buffer being greater than the device-configured split data threshold value.
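The two UE-side decisions in this abstract can be sketched directly. The function names, integer byte units, and the choice to clamp the device threshold to the buffer threshold are assumptions made for illustration; the abstract only requires a value less than or equal to the network-configured threshold.

```python
# Hedged sketch of the split-DRB threshold logic: clamp the device-configured
# split data threshold when the network-configured value exceeds the UE's
# buffer threshold, then raise scheduling requests (SRs) on both links once
# buffered data passes the device-configured threshold.

def device_split_threshold(network_threshold: int, buffer_threshold: int) -> int:
    """Pick a device-configured threshold <= the network-configured value."""
    if network_threshold > buffer_threshold:
        return buffer_threshold  # any value <= network_threshold qualifies
    return network_threshold

def scheduling_requests(buffered_bytes: int, device_threshold: int) -> list:
    """Trigger an SR on each communication link when the data buffer holds
    more than the device-configured split data threshold."""
    if buffered_bytes > device_threshold:
        return ["SR-link1", "SR-link2"]
    return []
```

The point of the clamp is that the UE never commits to splitting more data across both links than its own buffer can actually stage.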
Deterministic real time multi protocol heterogeneous packet based transport
Deterministic real-time multi-protocol heterogeneous packet-based transport is achieved by traffic shaping. When a plurality of packets is received from a root complex, with the contents of each packet organized in accordance with a first protocol, a sequence number is added to each packet and a packet type is identified. Every packet in the first plurality is encapsulated into at least one packet organized in accordance with a second protocol, forming a second plurality of packets organized in accordance with the second protocol. All packets from the second plurality undergo traffic scheduling or traffic shaping prior to being sent via a plurality of connections, avoiding burstiness and achieving bounded transport latency in the plurality of connections, thereby providing deterministic real-time behavior in distributed systems.
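The sequence-number tagging, encapsulation, and pacing steps can be sketched as below. The outer packet fields and the fixed-rate pacing rule are illustrative assumptions; the patent does not specify this schedule.

```python
# Illustrative sketch: tag each first-protocol packet with a sequence number
# and its identified type, encapsulate it in a second-protocol packet, and
# pace the output at a fixed packet rate so sends are evenly spaced rather
# than bursty, giving a bounded, deterministic schedule.
from dataclasses import dataclass

@dataclass
class OuterPacket:
    seq: int          # sequence number added on receipt
    ptype: str        # identified packet type
    payload: bytes    # first-protocol contents, carried opaquely

def encapsulate(packets: list) -> list:
    """Wrap (type, payload) pairs into sequence-numbered outer packets."""
    return [OuterPacket(seq=i, ptype=t, payload=p)
            for i, (t, p) in enumerate(packets)]

def shape(packets: list, rate_pps: float) -> list:
    """Assign each outer packet a send time so the output rate never
    exceeds rate_pps; returns (send_time_seconds, packet) pairs."""
    gap = 1.0 / rate_pps
    return [(i * gap, pkt) for i, pkt in enumerate(packets)]
```

Spacing sends at a fixed interval is the simplest shaper that gives the bounded latency the abstract claims: the i-th packet's departure time is known in advance, independent of arrival burstiness.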
Method and apparatus for providing a low latency transmission system using adjustable buffers
One aspect of the present invention discloses a network system capable of transmitting and processing audio video ("A/V") data with enhanced quality of service ("QoS"). The network system includes a transmitter, a transmission channel, an adjustable decoder buffer, and a decoder. The transmitter contains an encoder able to encode A/V data in accordance with an encoding bit rate recommendation from SQoS and packet loss notifications. The transmission channel, in one example, transmits A/V data from the transmitter to the receiver. The adjustable decoder buffer, in one aspect, is able to change its storage capacity or buffering size in response to an adaptive latency estimate. Upon fetching at least a portion of the A/V data from the adjustable decoder buffer, SQoS updates the adaptive latency estimate based on the quality of the decoded A/V data.
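A buffer whose capacity tracks a latency estimate can be sketched as follows. The class name, the bitrate-times-latency sizing rule, and the refuse-on-overflow behavior are assumptions for illustration, not the patent's design.

```python
# Minimal sketch of an adjustable decoder buffer: its byte capacity is
# derived from an adaptive latency estimate and the stream bitrate, and a
# resize call changes the capacity as the latency estimate is updated.
from collections import deque

class AdjustableDecoderBuffer:
    def __init__(self, latency_estimate_ms: float, bitrate_kbps: float):
        self.bitrate_kbps = bitrate_kbps
        self.capacity_bytes = 0
        self.buf = deque()
        self.resize(latency_estimate_ms)

    def resize(self, latency_estimate_ms: float) -> None:
        """Size the buffer to hold roughly latency_estimate_ms of A/V data:
        bytes = kbit/s * ms / 8 (since kbps*ms gives bits)."""
        self.capacity_bytes = int(self.bitrate_kbps * latency_estimate_ms / 8)

    def push(self, chunk: bytes) -> bool:
        """Buffer a chunk; refuse it if capacity would be exceeded."""
        if sum(len(c) for c in self.buf) + len(chunk) > self.capacity_bytes:
            return False
        self.buf.append(chunk)
        return True
```

Growing the buffer when the latency estimate rises trades delay for smoothness; shrinking it when the estimate falls is what keeps end-to-end latency low.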
One aspect of the present invention discloses a network system capable of transmitting and processing audio video (“A/V”) data with enhanced quality of service (“QoS”). The network system includes a transmitter, a transmission channel, an adjustable decoder buffer, and a decoder. The transmitter contains an encoder able to encode A/V data in accordance with encoding bit rate recommendation from SQoS and packets loss notifications. The transmission channel, in one example, transmits A/V data from the transmitter or the receiver. The adjustable decoder buffer, in one aspect, is able to change its storage capacity or buffering size in response to the adaptive latency estimate. Upon fetching at least a portion of the A/V data from the adjustable decoder buffer, SQoS updates the adaptive latency estimate based on the quality of the decoded A/V data.