H04L49/9094

Fixed HS-DSCH or E-DCH allocation for VoIP (or HS-DSCH without HS-SCCH/E-DCH without E-DPCCH)

In order to reduce the HS-SCCH overhead, a fixed time allocation approach could be used. In that case, the scheduling time of each VoIP user is semi-static, so there is no need to transmit e.g. the HS-SCCH toward the UE for initial transmissions, provided the UE knows when to receive data on the HS-DSCH and which transport format is used. There are at least two ways of implementing this: 1) HS-SCCH/E-DPCCH signalling indicates the parameters of a first transmission, subsequent transmissions reuse the same parameters, and the HS-SCCH/E-DPCCH is sent again whenever the parameters need to change; or 2) fixed allocation, where RRC signalling allocates users and conveys the default transport parameters.
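Option 2 can be sketched as follows. This is a minimal illustration of the idea, assuming a 2 ms TTI, a 20 ms VoIP packet period, and invented function names; none of the identifiers or numbers come from a 3GPP specification.

```python
# Sketch of fixed allocation: each VoIP user is given a semi-static TTI
# offset and a default transport format via RRC, so no HS-SCCH is needed
# for initial transmissions. All names and values are illustrative.

VOIP_PERIOD_TTIS = 10  # e.g. one 20 ms VoIP packet every ten 2 ms TTIs (assumed)

def rrc_allocate(users):
    """Assign each user a fixed TTI offset and a default transport format."""
    return {user: {"offset": i % VOIP_PERIOD_TTIS, "transport_format": "TF0"}
            for i, user in enumerate(users)}

def scheduled_users(allocations, tti):
    """A UE knows it should receive on the HS-DSCH when the TTI matches its offset."""
    return [u for u, a in allocations.items()
            if tti % VOIP_PERIOD_TTIS == a["offset"]]

alloc = rrc_allocate(["ue1", "ue2", "ue3"])
# ue1 (offset 0) is scheduled in TTIs 0, 10, 20, ... without any HS-SCCH.
```

The point of the sketch is that scheduling becomes a pure function of the TTI counter and the RRC-signalled offset, so no per-transmission control channel is required until the parameters change.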

DATA TRANSMISSION AND NETWORK INTERFACE CONTROLLER
20210014307 · 2021-01-14

Implementations of this disclosure provide data transmission operations and network interface controllers. An example method performed by a first RDMA network interface controller includes obtaining m data packets from a host memory of a first host; sending the m data packets to a second RDMA network interface controller of a second host; backing up the m data packets to a network interface controller memory integrated into the first RDMA network interface controller; determining that the second RDMA network interface controller does not receive n data packets of the m data packets; and in response, obtaining the n data packets from the m data packets that have been backed up to the network interface controller memory integrated into the first RDMA network interface controller, and retransmitting the n data packets to the second RDMA network interface controller.
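The retransmission flow in this abstract can be sketched as below. This is a hedged illustration under assumed class and method names: the sending RNIC backs up each of the m packets in its on-controller memory and, on learning that n packets were not received, re-reads only those from the backup rather than from host memory.

```python
# Illustrative model of the abstract's method; not the patented implementation.

class SenderRNIC:
    def __init__(self):
        self.nic_memory = {}            # backup store integrated into the NIC

    def send(self, host_memory, peer):
        for seq, pkt in host_memory.items():
            self.nic_memory[seq] = pkt  # back up the packet before sending
            peer.receive(seq, pkt)

    def retransmit(self, missing_seqs, peer):
        for seq in missing_seqs:        # fetch from NIC memory, not host memory
            peer.receive(seq, self.nic_memory[seq])

class ReceiverRNIC:
    def __init__(self, drop_first=()):
        self.received = {}
        self._drop = set(drop_first)

    def receive(self, seq, pkt):
        if seq in self._drop:
            self._drop.discard(seq)     # simulate a one-time loss of this packet
            return
        self.received[seq] = pkt

    def missing(self, expected_seqs):
        return [s for s in expected_seqs if s not in self.received]

host_memory = {i: f"pkt{i}" for i in range(5)}    # m = 5 packets
sender, receiver = SenderRNIC(), ReceiverRNIC(drop_first={1, 3})
sender.send(host_memory, receiver)
lost = receiver.missing(host_memory)              # n = 2 packets lost
sender.retransmit(lost, receiver)
```

The design point is that only the n lost packets cross the host interface again (and here not even that, since they come from the integrated NIC memory), not all m.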

PACKET PROCESSING DEVICE AND NETWORK SYSTEM
20200382436 · 2020-12-03

A packet processing device is implemented in a network that transmits priority packets and non-priority packets having a lower priority than the priority packets. The packet processing device includes: a packet storage, a gate, a controller, a detector, a generator, and a transmitter. The packet storage stores non-priority packets. The gate is provided on an output side of the packet storage. The controller controls the gate. The detector detects a transmission pattern of the priority packets. The generator generates, based on the transmission pattern of the priority packets, a gate control signal for controlling a gate of a packet processing device implemented in another node. The transmitter transmits the gate control signal to a destination of the priority packets.
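The detector/generator/gate interplay can be sketched as a timing model. The sketch below assumes strictly periodic priority packets and an invented guard interval; it only illustrates how a transmission pattern turns into a gate-control signal that closes a downstream non-priority gate around each priority slot.

```python
# Illustrative model: detect the priority-packet pattern, generate a
# gate-control signal for a downstream node, and evaluate gate state.
# All parameters are assumptions for illustration.

def detect_pattern(priority_arrival_times):
    """Estimate period and phase from observed priority-packet arrivals."""
    gaps = [b - a for a, b in zip(priority_arrival_times, priority_arrival_times[1:])]
    period = sum(gaps) / len(gaps)
    phase = priority_arrival_times[0] % period
    return period, phase

def generate_gate_signal(period, phase, guard):
    """Gate-control signal: close the non-priority gate around each priority slot."""
    return {"period": period,
            "close_from": (phase - guard) % period,
            "close_to": (phase + guard) % period}

def gate_open(signal, t):
    """True when a downstream node may release a stored non-priority packet."""
    pos = t % signal["period"]
    return not (signal["close_from"] <= pos <= signal["close_to"])
```

Sending this signal toward the destination lets each downstream gate hold non-priority packets exactly while priority traffic is due, without the downstream node observing the pattern itself.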

METHOD FOR PROCESSING NETWORK PACKETS AND ELECTRONIC DEVICE THEREFOR
20200314037 · 2020-10-01

An electronic device includes wireless communication circuitry, a processor including a plurality of cores, and a memory. The processor receives a packet of a first session associated with a first core among the plurality of cores, identifies whether the core associated with the first session is changed to a second core different from the first core, sets pending information based on an amount of packets which are pending in a first packet queue of the first core when it is identified that the core is changed to the second core, stores data corresponding to the received packet of the first session in a pending buffer of the memory, and inserts the data corresponding to the received packet of the first session, stored in the pending buffer, into a packet queue of the second core.
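The re-steering flow described above can be sketched as follows. The structure and names are illustrative assumptions: while a session migrates from a first core to a second, newly received packets are parked in a pending buffer and later drained into the second core's queue in arrival order.

```python
# Hedged sketch of per-session core migration with a pending buffer.
from collections import deque

class SessionSteering:
    def __init__(self):
        self.core_queues = {1: deque(), 2: deque()}
        self.session_core = {"s1": 1}
        self.pending = deque()          # pending buffer in memory (assumed)
        self.migrating = False
        self.pending_info = 0

    def begin_migration(self, session, new_core):
        # Record how many packets of the session are still queued on the old
        # core ("pending information"), then start buffering new arrivals.
        old = self.session_core[session]
        self.pending_info = sum(1 for p in self.core_queues[old] if p[0] == session)
        self.session_core[session] = new_core
        self.migrating = True

    def receive(self, session, pkt):
        if self.migrating:
            self.pending.append((session, pkt))   # park in the pending buffer
        else:
            self.core_queues[self.session_core[session]].append((session, pkt))

    def finish_migration(self):
        # Drain the pending buffer into the new core's packet queue, in order.
        while self.pending:
            session, pkt = self.pending.popleft()
            self.core_queues[self.session_core[session]].append((session, pkt))
        self.migrating = False
```

Buffering during the transition is what preserves per-session ordering even though the processing core changes mid-stream.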

Data transmission and network interface controller
10785306 · 2020-09-22

Implementations of this disclosure provide data transmission operations and network interface controllers. An example method performed by a first RDMA network interface controller includes obtaining m data packets from a host memory of a first host; sending the m data packets to a second RDMA network interface controller of a second host; backing up the m data packets to a network interface controller memory integrated into the first RDMA network interface controller; determining that the second RDMA network interface controller does not receive n data packets of the m data packets; and in response, obtaining the n data packets from the m data packets that have been backed up to the network interface controller memory integrated into the first RDMA network interface controller, and retransmitting the n data packets to the second RDMA network interface controller.

Openflow match and action pipeline structure

An embodiment of the invention includes a packet processing pipeline. The packet processing pipeline includes match and action stages. Each match and action stage incurs a match delay when match processing occurs, and each match and action stage incurs an action delay when action processing occurs. A transport delay occurs between successive match and action stages when data is transferred from a first match and action stage to a second match and action stage.
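The delay model implied by this abstract can be written down directly: total pipeline latency is each stage's match delay (when it matches) plus its action delay (when it acts), plus a transport delay between successive stages. The numeric values below are illustrative assumptions, not figures from the patent.

```python
# Latency model for a match-and-action pipeline (illustrative).

def pipeline_latency(stages, transport_delay):
    """stages: list of (does_match, match_delay, does_act, action_delay) tuples."""
    total = 0
    for i, (does_match, match_d, does_act, action_d) in enumerate(stages):
        if does_match:
            total += match_d            # match delay only if matching occurs
        if does_act:
            total += action_d           # action delay only if an action runs
        if i < len(stages) - 1:
            total += transport_delay    # transport between successive stages
    return total

# Three stages; the middle stage neither matches nor acts on this packet,
# so it contributes only transport delay.
stages = [(True, 2, True, 3), (False, 2, False, 3), (True, 2, True, 3)]
```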

Techniques for warming up a node in a distributed data store

In various embodiments, a node manager configures a new node as a replacement for an unavailable node that was previously included in a distributed data store. First, the node manager identifies a source node that stores client data that was also stored in the unavailable node. Subsequently, the node manager configures the new node to operate as a slave of the source node and streams the client data from the source node to the new node. Finally, the node manager configures the new node to operate as one of multiple master nodes in the distributed data store. Advantageously, by configuring the node to implement a hybrid of a master-slave replication scheme and a master-master replication scheme, the node manager enables the distributed data store to process client requests without interruption while automatically restoring the previous level of redundancy provided by the distributed data store.
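The three-step warm-up sequence can be sketched as below. Class and method names are assumptions for illustration; the sketch only captures the ordering: attach as slave, stream the lost client data, then promote to master.

```python
# Illustrative model of the node warm-up flow described in the abstract.

class Node:
    def __init__(self, name, data=None):
        self.name, self.data, self.role = name, dict(data or {}), "master"

class NodeManager:
    def find_source(self, cluster, lost_keys):
        # A source node is any surviving node that also stores the lost data.
        return next(n for n in cluster if lost_keys <= set(n.data))

    def warm_up(self, cluster, lost_keys):
        source = self.find_source(cluster, lost_keys)
        new = Node("replacement")
        new.role = "slave"              # 1) operate as a slave of the source
        for k in lost_keys:             # 2) stream client data from the source
            new.data[k] = source.data[k]
        new.role = "master"             # 3) promote to one of the master nodes
        cluster.append(new)
        return new
```

Because the source keeps serving client requests as a master throughout steps 1 and 2, the store stays available while redundancy is restored.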

Traffic manager resource sharing

A traffic manager is shared amongst two or more egress blocks of a network device, thereby allowing traffic management resources to be shared between the egress blocks. Among other aspects, this may reduce power demands and allow a larger amount of buffer memory to be available to a given egress block that may be experiencing high traffic loads. Optionally, the shared traffic manager may be leveraged to reduce the resources required to handle data units on ingress. Rather than buffer the entire data unit in the ingress buffers, an arbiter may be configured to buffer only the control portion of the data unit. The payload of the data unit, by contrast, is forwarded directly to the shared traffic manager, where it is placed in the egress buffers. Because the payload is not buffered in the ingress buffers, the ingress buffer memory may be greatly reduced.
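The ingress-side optimization can be sketched as follows. The structures are illustrative assumptions: the arbiter keeps only a data unit's control portion in the (small) ingress buffer and hands the payload straight to the shared traffic manager's egress buffers.

```python
# Illustrative model of control/payload separation at ingress.

class SharedTrafficManager:
    def __init__(self):
        self.egress_buffers = {}           # shared across egress blocks

    def store_payload(self, unit_id, payload):
        self.egress_buffers[unit_id] = payload

class Arbiter:
    def __init__(self, tm):
        self.tm = tm
        self.ingress_buffer = {}           # holds control portions only

    def admit(self, unit_id, control, payload):
        self.ingress_buffer[unit_id] = control   # buffer the control portion
        self.tm.store_payload(unit_id, payload)  # payload goes straight to the TM

    def ingress_bytes(self):
        return sum(len(c) for c in self.ingress_buffer.values())
```

With a typical frame, the control portion is a few tens of bytes against a payload of up to ~1500, which is why skipping payload buffering at ingress shrinks the ingress memory so dramatically.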

Advanced message queuing protocol (AMQP) message broker and messaging client interactions via dynamic programming commands using message properties
10728181 · 2020-07-28

A method and an information handling system (IHS) transform an initial message having an identified protocol format to an encapsulated message having an advanced message queuing protocol (AMQP) format. A dynamic message brokering (DMB) module interacts with an AMQP client application to generate a binding key and a routing key corresponding to message attributes of the initial message. The DMB module dynamically applies one or more of the binding key and the routing key to respective programming command modules, including a provider module, to generate an AMQP client message which is forwarded to an AMQP server. The AMQP server creates a queue for messages having attributes that are identifiable within the received client message, and uses the binding key to bind the queue to a specified exchange. The AMQP server routes the received client message to the queue, using the routing key, enabling subscribers to retrieve the messages.
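The binding-key and routing-key mechanics this abstract relies on follow standard AMQP topic-exchange semantics: keys are dot-separated words, "*" in a binding key matches exactly one word, and "#" matches zero or more. A minimal matcher, not tied to any broker API, looks like this:

```python
# AMQP-style topic matching of a routing key against a binding key.

def topic_match(binding_key, routing_key):
    def match(b, r):
        if not b:                      # binding exhausted: routing must be too
            return not r
        if b[0] == "#":                # "#" consumes zero or more words
            return any(match(b[1:], r[i:]) for i in range(len(r) + 1))
        if not r:
            return False
        # "*" matches any single word; otherwise words must be equal.
        return (b[0] == "*" or b[0] == r[0]) and match(b[1:], r[1:])
    return match(binding_key.split("."), routing_key.split("."))
```

In the flow described above, the AMQP server would apply a function like this when deciding whether a message's routing key directs it into a queue bound with a given binding key.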