Patent classifications
H04L47/6225
Expandable queue
A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
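The two-buffer, two-mode behavior described above can be sketched in a few lines. This is a minimal illustration, not the patented circuit: the "defined condition" is assumed here to be the primary buffer filling up, and the return to normal mode is assumed to occur once both buffers drain; the abstract leaves both conditions open.

```python
from collections import deque

class ExpandableQueue:
    """Sketch of a queue with a fixed-size primary buffer and an
    overflow buffer that is used only while in overflow mode."""

    def __init__(self, primary_capacity, overflow_capacity):
        self.primary = deque()
        self.overflow = deque()
        self.primary_capacity = primary_capacity
        self.overflow_capacity = overflow_capacity
        self.overflow_mode = False

    def enqueue(self, item):
        # Assumed "defined condition": enter overflow mode when the
        # primary buffer is full.
        if not self.overflow_mode and len(self.primary) >= self.primary_capacity:
            self.overflow_mode = True
        if not self.overflow_mode:
            self.primary.append(item)
        elif len(self.primary) + len(self.overflow) < \
                self.primary_capacity + self.overflow_capacity:
            # In overflow mode the logical queue is the concatenation
            # of the two buffers, so new items go to the overflow tail.
            self.overflow.append(item)
        else:
            raise OverflowError("queue full")

    def dequeue(self):
        # Dequeue from the head of the concatenation: primary first.
        if self.primary:
            item = self.primary.popleft()
        elif self.overflow:
            item = self.overflow.popleft()
        else:
            raise IndexError("queue empty")
        # Assumed return condition: leave overflow mode once empty.
        if not self.primary and not self.overflow:
            self.overflow_mode = False
        return item
```

Note that once overflow mode is entered, new items are appended to the overflow buffer even if the primary buffer has drained, so FIFO order across the concatenation is preserved.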
Adaptive video streaming
A method, system and apparatus for image capture, analysis and transmission are provided. A link aggregation method involves identifying, to a source connected to the same subnetwork, controller network ports; producing packets, each associated with a corresponding controller network port selected by the source CPU so that selection is substantially uniform; and transmitting the packets to their corresponding network ports. An image analysis method involves producing, by a camera, an indication of whether a region of an image differs by a threshold extent from a corresponding region of a reference image; transmitting the indication and image data to a controller via a communications network; and storing, at the controller, the image data and the indication in association therewith. The controller may perform operations according to positive indications. A transmission method involves receiving user input in respect of a video stream and transmitting, in accordance with the user input, selected data packets of selected image frames thereof.
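The camera-side region check can be sketched as follows. The comparison metric is an assumption for illustration (mean absolute pixel difference); the abstract only requires that a region "differs by a threshold extent" from the reference.

```python
def region_differs(region, reference_region, threshold):
    """Return True if the region differs from the reference region by
    more than the threshold, using mean absolute pixel difference
    (an assumed metric). Regions are flat sequences of pixel values."""
    diffs = [abs(a - b) for a, b in zip(region, reference_region)]
    return sum(diffs) / len(diffs) > threshold
```

The controller would store each transmitted region together with this boolean indication and act only on positive ones.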
Base station device, and terminal device for retransmitting group of unit data
A base station device includes: a storage that stores, in an associated manner, a group indicating unit data subject to retransmission out of a predetermined number of unit data included in transmission data to be transmitted to a terminal device, and identification information to identify the group; a receiver that receives, from the terminal device, identification information corresponding to transmission data transmitted to the terminal device; a communication controller that refers to the storage based on the received identification information and determines retransmission of unit data included in a group corresponding to the received identification information out of the transmission data; and a transmitter that transmits, to the terminal device, unit data included in the group determined by the communication controller to be retransmitted.
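The storage and lookup described above amount to an associative map from identification information to a group of unit-data positions. The sketch below uses hypothetical names (`RetransmissionStore`, `register`, `units_to_retransmit`) to illustrate the flow; it is not the claimed apparatus.

```python
class RetransmissionStore:
    """Sketch of the base-station storage: identification information
    is stored in association with the group of unit data (here,
    indices into the transmission data) subject to retransmission."""

    def __init__(self):
        self._groups = {}  # identification info -> list of unit-data indices

    def register(self, ident, unit_indices):
        # Store the group and its identification information together.
        self._groups[ident] = list(unit_indices)

    def units_to_retransmit(self, ident, transmission_data):
        # Refer to the storage based on the received identification
        # information and select the unit data of the matching group.
        return [transmission_data[i] for i in self._groups.get(ident, [])]
```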
Multidrop network system
A multidrop network system includes N network devices. The N network devices include M transmission-permissible devices including a master device and at least one slave device, wherein M is not greater than N. Each transmission-permissible device has at least one identification code as its identification in the multidrop network system, and the M transmission-permissible devices have at least N identification codes. The M transmission-permissible devices obtain transmission opportunities in turn according to their respective identification codes in each round of data transmission. A K-th device among the M transmission-permissible devices has multiple identification codes, and thus obtains multiple transmission opportunities in one round of data transmission. Each of the M transmission-permissible devices performs a count operation and generates a current count value; when the current count value is the same as the identification code of a device of the M transmission-permissible devices, that device earns one transmission opportunity.
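The count-based arbitration can be illustrated as follows: a shared counter sweeps all identification codes once per round, and each device transmits whenever the count matches one of its codes, so a device holding multiple codes earns multiple opportunities per round. The function and parameter names are illustrative, not from the patent.

```python
def transmission_schedule(device_codes, total_codes):
    """Sketch of one round of multidrop arbitration.

    device_codes: dict mapping device name -> set of identification codes.
    total_codes: number of identification codes swept per round (>= N).
    Returns the ordered list of devices that transmit in one round."""
    order = []
    for count in range(total_codes):
        for device, codes in device_codes.items():
            if count in codes:
                # The device whose code matches the current count value
                # earns this transmission opportunity.
                order.append(device)
                break
    return order
```

With codes {0} for the master, {1, 3} for slave A and {2} for slave B, slave A appears twice per round, reflecting its two identification codes.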
Methods and systems for queue and pipeline latency metrology in network devices and smart NICs
Inbound packets can be received by a network device that determines a receive pipeline latency metric based on a plurality of receive pipeline residency times of the inbound packets, and determines a receive queue latency metric based on a plurality of receive queue residency times of the inbound packets. The receive queue latency metric and the receive pipeline latency metric can be reported to a data collector. The network device may also receive a plurality of outbound packets on a transmit queue, determine a transmit queue latency metric based on the transmit queue residency times of the outbound packets, and determine a transmit pipeline latency metric based on the transmit pipeline residency times of the outbound packets. The outbound packets may be transmitted toward their destination. The transmit queue latency metric and the transmit pipeline latency metric can be reported to the data collector.
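Each of the four metrics (receive queue, receive pipeline, transmit queue, transmit pipeline) is a statistic over per-packet residency times. The abstract does not fix the statistic; the sketch below assumes a simple mean and maximum per metric as one plausible choice.

```python
def latency_metrics(residency_times_ns):
    """Sketch of a latency metric over per-packet residency times
    (nanoseconds). Mean and max are assumed statistics; histograms or
    percentiles would be equally valid readings of the abstract."""
    n = len(residency_times_ns)
    return {
        "count": n,
        "mean_ns": sum(residency_times_ns) / n,
        "max_ns": max(residency_times_ns),
    }

def build_report(rx_queue, rx_pipeline, tx_queue, tx_pipeline):
    """Assemble the four metrics into one record for the data collector."""
    return {
        "rx_queue": latency_metrics(rx_queue),
        "rx_pipeline": latency_metrics(rx_pipeline),
        "tx_queue": latency_metrics(tx_queue),
        "tx_pipeline": latency_metrics(tx_pipeline),
    }
```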
Method for measuring a transmission delay with control of degrees of contention applied to a data frame
The invention relates to a method for transmitting a target data frame (fA) on a path comprising at least one router (R) that has input ports (P1, P2, P3), at least one output port (PS) and an arbitration unit (UA) configured so as to select a data frame from a plurality of data frames each coming from a different input port and competing for transmission by one and the same output port. The method comprises specifying, for each of the access ports of the router, data frames (fB, fC) competing with the target data frame for transmission by a target output port of the router. An end-to-end transmission time of the target data frame on the path is then measured while the arbitration unit selects the competing data frame (fB) before the target data frame (fA) for transmission by the target output port (PS).
RADIO UNIT CASCADING IN RADIO ACCESS NETWORKS
The described technology is generally directed towards radio unit cascading in radio access networks. Radio units (RUs) can be configured with processors adapted to support daisy chaining of multiple RUs, so that the multiple RUs can connect to one hardware interface at a distributed unit (DU). An RU processor for a given RU can be configured to receive downlink data, including downlink data for the given RU as well as downlink data for other downstream RUs. The RU processor can extract the downlink data for the given RU and forward the downlink data for other downstream RUs via a southbound interface. The RU processor can also be configured to receive uplink data from the other RUs, multiplex the received uplink data from the other RUs with uplink data from the given RU, and send the resulting multiplexed data towards the DU via a northbound interface.
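The downlink half of the RU processor's behavior is a demultiplex-and-forward step: keep the frames addressed to this RU, pass the rest on southbound. The sketch below uses an assumed (ru_id, payload) frame representation purely for illustration.

```python
def process_downlink(frames, my_ru_id):
    """Sketch of a daisy-chained RU's downlink handling.

    frames: iterable of (ru_id, payload) tuples arriving from the DU
    (or from the upstream RU). Returns the payloads extracted for this
    RU and the frames to forward via the southbound interface."""
    local = [payload for ru_id, payload in frames if ru_id == my_ru_id]
    southbound = [(ru_id, payload) for ru_id, payload in frames
                  if ru_id != my_ru_id]
    return local, southbound
```

On the uplink path the same processor would do the inverse: merge its own uplink data with frames received southbound and send the multiplexed stream northbound toward the DU.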
Fair arbitration between multiple sources targeting a destination
A hardware module comprises at least a first ingress buffer and a second ingress buffer, where the second ingress buffer holds data packets from a plurality of source components. To ensure fairness between one or more sources providing data to the first ingress buffer and the plurality of sources providing data to the second ingress buffer, processing circuitry examines source identifiers in packets held in the second ingress buffer and selects between the buffers so as to arbitrate between the sources. In some embodiments, the examination of the source identifiers provides statistics for a weighted round robin between the ingress buffers. In other embodiments, the source identifier of whichever packet is currently at the head of the second ingress buffer is used to perform a simple round robin between the sources.
AI ENGINE-SUPPORTING DOWNLINK RADIO RESOURCE SCHEDULING METHOD AND APPARATUS
An Artificial Intelligence (AI) engine-supporting downlink radio resource scheduling method and apparatus are provided. The AI engine-supporting downlink radio resource scheduling method includes: constructing an AI engine, establishing a Socket connection between an AI engine and an Open Air Interface (OAI) system, and configuring the AI engine into an OAI running environment to utilize the AI engine to replace a Round-Robin scheduling algorithm and a fair Round-Robin scheduling algorithm adopted by a Long Term Evolution (LTE) at a Media Access Control (MAC) layer in the OAI system for resource scheduling to take over a downlink radio resource scheduling process; sending scheduling information to the AI engine through Socket during the downlink radio resource scheduling process of the OAI system; and utilizing the AI engine to carry out resource allocation according to the scheduling information, and returning a resource allocation result to the OAI system.