Patent classifications
H04L49/506
Signaling to support scheduling in an integrated access and backhaul system
In accordance with an example embodiment of the present invention, an apparatus comprises: at least one processor; and at least one memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform at least the following: allocate a physical uplink channel between at least one integrated access and backhaul node user equipment function and a parent distributed unit; and send at least one message via the physical uplink channel, wherein the at least one message includes at least: a destination queue depth scheduled on a downlink by at least one integrated access and backhaul node distributed unit.
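As a rough illustration only, the following Python sketch shows one way such a queue-depth report could be packed into and parsed from an uplink message; the 2-byte destination identifier, the 4-byte big-endian depth field, and the function names are assumptions, since the abstract does not specify an encoding.

    import struct

    def build_queue_depth_message(destination_id: int, queue_depth_bytes: int) -> bytes:
        # Pack an assumed 2-byte destination id and 4-byte queue depth, big-endian.
        return struct.pack(">HI", destination_id, queue_depth_bytes)

    def parse_queue_depth_message(payload: bytes) -> tuple[int, int]:
        destination_id, queue_depth_bytes = struct.unpack(">HI", payload)
        return destination_id, queue_depth_bytes

    if __name__ == "__main__":
        msg = build_queue_depth_message(destination_id=7, queue_depth_bytes=12000)
        print(parse_queue_depth_message(msg))  # (7, 12000)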
PACKET-FLOW MESSAGE-DISTRIBUTION SYSTEM
Switchless interconnect fabric message distribution includes end-to-end partitioning of message pathways for multiple priority levels with interrupt capability. A switchless interconnect fabric message distribution system includes a data distribution module and at least two host-bus adapters connected to the data distribution module. The data distribution module includes partition first-in first-out buffers. Each of the host-bus adapters includes an input manager connected to input priority first-in first-out buffers and an output manager connected to output priority first-in first-out buffers.
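The following Python sketch illustrates the per-priority FIFO idea in the abstract: an adapter keeps one FIFO per priority level and always drains the highest-priority non-empty FIFO, so a high-priority arrival effectively interrupts lower-priority traffic. The class and method names and the number of priority levels are illustrative assumptions, not taken from the patent.

    from collections import deque

    NUM_PRIORITIES = 4  # assumed number of priority levels

    class HostBusAdapter:
        def __init__(self) -> None:
            self.priority_fifos = [deque() for _ in range(NUM_PRIORITIES)]

        def enqueue(self, priority: int, message: bytes) -> None:
            self.priority_fifos[priority].append(message)

        def drain_one(self) -> bytes | None:
            # Highest priority = lowest index, by convention in this sketch.
            for fifo in self.priority_fifos:
                if fifo:
                    return fifo.popleft()
            return None

    if __name__ == "__main__":
        hba = HostBusAdapter()
        hba.enqueue(3, b"bulk")
        hba.enqueue(0, b"urgent")
        print(hba.drain_one())  # b'urgent' is sent before the bulk message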
High-performance data repartitioning for cloud-scale clusters
Techniques herein partition data using data repartitioning that is store-and-forward, content-based, and phasic. In embodiments, computer(s) map network elements (NEs) to grid points (GPs) in a multidimensional hyperrectangle. Each NE contains data items (DIs). For each particular dimension (PD) of the hyperrectangle, the computers perform, for each particular NE (PNE), various activities including: determining a linear subset (LS) of NEs that are mapped to GPs in the hyperrectangle at the same position as the GP of the PNE along all dimensions of the hyperrectangle except the PD, and data repartitioning that includes, for each DI of the PNE, the following activities. The PNE determines a bit sequence based on the DI. The PNE selects, based on the PD, a bit subset of the bit sequence. The PNE selects, based on the bit subset, a receiving NE of the LS. The PNE sends the DI to the receiving NE.
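Since the abstract walks through a concrete per-dimension routing rule, a small Python sketch may help: it repartitions items over a 2 x 4 hyperrectangle of NEs, one phase per dimension, choosing the receiving NE from a bit subset of a SHA-256 hash of each item. The hash choice, the bit widths per dimension, and the shape are assumptions for demonstration only.

    import hashlib
    from itertools import product

    SHAPE = (2, 4)   # hyperrectangle extents per dimension (assumed)
    BITS = (1, 2)    # bits of the hash consumed per dimension (2**1 and 2**2 targets)

    def item_bits(item: str) -> int:
        # Content-based bit sequence for a data item.
        return int.from_bytes(hashlib.sha256(item.encode()).digest()[:4], "big")

    def target_coord(item: str, coord: tuple[int, ...], dim: int) -> tuple[int, ...]:
        bits = item_bits(item)
        offset = sum(BITS[:dim])                     # bit subset assigned to this dimension
        index = (bits >> offset) & ((1 << BITS[dim]) - 1)
        new_coord = list(coord)
        new_coord[dim] = index                       # move only along the current dimension
        return tuple(new_coord)

    if __name__ == "__main__":
        data = {gp: [f"item-{i}-{gp}" for i in range(3)]
                for gp in product(*map(range, SHAPE))}
        for dim in range(len(SHAPE)):                # one store-and-forward phase per dimension
            next_data = {gp: [] for gp in data}
            for gp, items in data.items():
                for item in items:
                    next_data[target_coord(item, gp, dim)].append(item)
            data = next_data
        print({gp: len(items) for gp, items in data.items()})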
Fabric Vectors for Deep Learning Acceleration
Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow-based computations on wavelets of data. Each processing element has a respective compute element and a respective routing element. Instructions executed by the compute element include operand specifiers, some specifying a data structure register storing a data structure descriptor describing an operand as a fabric vector or a memory vector. The data structure descriptor further describes various attributes of the fabric vector: length, microthreading eligibility, number of data elements to receive, transmit, and/or process in parallel, virtual channel and task identification information, whether to terminate upon receiving a control wavelet, and whether to mark an outgoing wavelet as a control wavelet.
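A hypothetical sketch of such a data structure descriptor, written as a Python dataclass, is shown below; the field names, types, and the register index are illustrative, since the actual descriptor encoding is hardware-specific.

    from dataclasses import dataclass

    @dataclass
    class FabricVectorDescriptor:
        length: int                      # number of data elements in the fabric vector
        microthreading_eligible: bool    # may the instruction yield while waiting on the fabric?
        parallel_elements: int           # elements received/transmitted/processed in parallel
        virtual_channel: int             # virtual channel carrying the wavelets
        task_id: int                     # task identification information
        terminate_on_control: bool       # terminate upon receiving a control wavelet?
        mark_outgoing_control: bool      # mark the outgoing wavelet as a control wavelet?

    # Example: a descriptor that an operand specifier might reference through a
    # data structure register (the register index 3 is an assumption).
    DSR = {3: FabricVectorDescriptor(length=64, microthreading_eligible=True,
                                     parallel_elements=2, virtual_channel=5,
                                     task_id=12, terminate_on_control=False,
                                     mark_outgoing_control=False)}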
PRIORITY-BASED FLOW CONTROL
Some embodiments provide a method for a hardware forwarding element. The method adds a received packet to a buffer. The method determines whether adding the packet to the buffer causes the buffer to pass one of multiple flow control thresholds, each of which corresponds to a different packet priority. When adding the packet to the buffer causes the buffer to pass a particular flow control threshold corresponding to a particular priority, the method generates a flow control message for the particular priority.
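A minimal Python sketch of the threshold check follows; the threshold values, the single shared-buffer model, and the flow-control message format are assumptions used only to show how crossing a per-priority threshold triggers a message for that priority.

    THRESHOLDS = {0: 9000, 1: 6000, 2: 3000}   # bytes; example values, priority 0 pauses last

    class Buffer:
        def __init__(self) -> None:
            self.occupancy = 0

        def add_packet(self, size: int) -> list[dict]:
            # Return a flow-control message for every priority whose threshold
            # this packet causes the buffer to pass.
            before, self.occupancy = self.occupancy, self.occupancy + size
            return [{"type": "flow_control", "priority": prio}
                    for prio, limit in THRESHOLDS.items()
                    if before < limit <= self.occupancy]

    if __name__ == "__main__":
        buf = Buffer()
        print(buf.add_packet(2500))   # []
        print(buf.add_packet(1000))   # [{'type': 'flow_control', 'priority': 2}]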
Processing method, system, physical device and storage medium based on distributed stream computing
A processing method based on distributed stream computing is performed by a computing device. After receiving flow data sent by an upstream computing node, the computing device stores the flow data into a flow data pool. The computing device monitors the ratio of the flow data in the flow data pool to the total capacity of the flow data pool. When the collected data ratio is greater than or equal to a first threshold, the computing device adds a mark that causes the upstream computing node to decrease its flow data sending rate; when the collected data ratio is less than or equal to a second threshold, the second threshold being less than the first threshold, the computing device deletes the mark so that the upstream computing node can increase its flow data sending rate.
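The high/low watermark behaviour described above can be sketched in a few lines of Python; the threshold values and class names are illustrative assumptions.

    HIGH_THRESHOLD = 0.8   # first threshold: add the mark, upstream slows down
    LOW_THRESHOLD = 0.3    # second threshold: delete the mark, upstream may speed up

    class FlowDataPool:
        def __init__(self, capacity: int) -> None:
            self.capacity = capacity
            self.items: list[bytes] = []
            self.mark = False            # visible to the upstream computing node

        def receive(self, item: bytes) -> None:
            self.items.append(item)
            self._update_mark()

        def consume(self) -> bytes | None:
            item = self.items.pop(0) if self.items else None
            self._update_mark()
            return item

        def _update_mark(self) -> None:
            ratio = len(self.items) / self.capacity
            if ratio >= HIGH_THRESHOLD:
                self.mark = True         # upstream should decrease its sending rate
            elif ratio <= LOW_THRESHOLD:
                self.mark = False        # upstream may increase its sending rate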
Congestion control method and apparatus, communications network, and computer storage medium
This application discloses a congestion control method and related apparatus. In the congestion control method, a network device first obtains statistical information of a target egress queue within a first time period, where the target egress queue is any egress queue in the network device. The network device determines an explicit congestion notification (ECN) threshold for the target egress queue within a second time period based on the statistical information of the target egress queue within the first time period, where the second time period is chronologically subsequent to the first time period. When a queue depth of the target egress queue exceeds the ECN threshold within the second time period, the network device sets an ECN mark for a data packet in the target egress queue.
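A simplified Python sketch of the period-by-period adaptation is given below; the adaptation rule (scaling the observed average depth) is an assumption, as the abstract does not fix a specific formula for deriving the ECN threshold.

    class EgressQueue:
        def __init__(self, initial_threshold: int) -> None:
            self.ecn_threshold = initial_threshold
            self.depth_samples: list[int] = []

        def record_depth(self, depth: int) -> None:
            # Statistical information collected during the current time period.
            self.depth_samples.append(depth)

        def end_of_period(self) -> None:
            # Derive the next period's ECN threshold from the last period's statistics.
            if self.depth_samples:
                avg_depth = sum(self.depth_samples) / len(self.depth_samples)
                self.ecn_threshold = max(1, int(1.5 * avg_depth))  # assumed rule
            self.depth_samples.clear()

        def should_mark(self, current_depth: int) -> bool:
            # ECN-mark packets while the queue depth exceeds the current threshold.
            return current_depth > self.ecn_threshold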
METHODS AND APPARATUSES FOR TRANSPARENT EMBEDDING OF PHOTONIC SWITCHING INTO ELECTRONIC CHASSIS FOR SCALING DATA CENTER CLOUD SYSTEM
There are provided methods and apparatuses for transferring photonic cells or frames between a photonic switch and an electronic switch, enabling a scalable data center cloud system with photonic functions transparently embedded into an electronic chassis. In various embodiments, photonic interface functions may be transparently embedded into existing switch chips (or switch cards) without changes in the line cards. The embedded photonic interface functions may provide the switch cards with the ability to interface with both existing line cards and photonic switches. In order to embed photonic interface functions without changes on the existing line cards, embodiments use two-tier buffering with a pause signalling or pause messaging scheme for managing the two-tier buffer memories.
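Purely as an illustration of two-tier buffering with pause signalling, the following Python sketch models a small tier-1 buffer near the photonic interface that asserts a pause toward a larger tier-2 buffer when it reaches a watermark; the buffer sizes, watermark, and class names are assumptions.

    class TwoTierBuffer:
        def __init__(self, tier1_limit: int = 8, pause_watermark: int = 6) -> None:
            self.tier1: list[bytes] = []        # small buffer near the photonic interface
            self.tier2: list[bytes] = []        # larger buffer on the switch-card side
            self.tier1_limit = tier1_limit
            self.pause_watermark = pause_watermark
            self.paused = False                 # state of the pause message toward tier 2

        def enqueue(self, frame: bytes) -> None:
            self.tier2.append(frame)
            self._refill_tier1()

        def transmit_to_photonic_switch(self) -> bytes | None:
            frame = self.tier1.pop(0) if self.tier1 else None
            self._refill_tier1()
            return frame

        def _refill_tier1(self) -> None:
            # Pause messaging between the tiers: assert pause at the watermark,
            # clear it once tier 1 drains below the watermark.
            self.paused = len(self.tier1) >= self.pause_watermark
            while self.tier2 and not self.paused and len(self.tier1) < self.tier1_limit:
                self.tier1.append(self.tier2.pop(0))
                self.paused = len(self.tier1) >= self.pause_watermark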
TRAFFIC MANAGEMENT IN A NETWORK SWITCHING SYSTEM WITH REMOTE PHYSICAL PORTS
In a switching system that comprises a central switching device and at least one port extender device, the central switching device includes at least one port configured to interface with the port extender device, and the port extender device includes a plurality of front ports for interfacing with one or more networks. The central switching device includes a processor that processes packets received from the at least one port extender device, and a plurality of egress queues for storing processed packets that are to be forwarded to the at least one port extender device for transmission via respective ones of the front ports. The central switching device also includes a flow control processor configured to, responsively to flow control messages received from the at least one port extender device, control transmission of packets to the at least one port extender device to prevent overflow of egress queues of the port extender device.
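The central-device side of this flow control can be sketched in Python as follows; the per-front-port queue structure, the pause flag, and the message fields are assumptions used to illustrate gating transmission on flow-control messages from the port extender.

    from collections import defaultdict, deque

    class CentralSwitch:
        def __init__(self) -> None:
            self.egress_queues: dict[int, deque] = defaultdict(deque)  # keyed by front port
            self.port_paused: dict[int, bool] = defaultdict(bool)

        def enqueue(self, front_port: int, packet: bytes) -> None:
            # Processed packet awaiting forwarding to the port extender.
            self.egress_queues[front_port].append(packet)

        def on_flow_control_message(self, front_port: int, pause: bool) -> None:
            # Sent by the port extender when its own egress queue risks overflow.
            self.port_paused[front_port] = pause

        def transmit(self) -> list[tuple[int, bytes]]:
            # Forward one packet per front port that is not currently paused.
            sent = []
            for port, queue in self.egress_queues.items():
                if queue and not self.port_paused[port]:
                    sent.append((port, queue.popleft()))
            return sent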