Patent classifications
H04L12/819
Resource Efficient Forwarding Of Guaranteed And Non-Guaranteed Data Packets
A node of a data network receives data packets (200). For at least one of the received data packets (200), the node determines whether the data packet (200) is a guaranteed data packet, which is subject to a guarantee that the packet is neither dropped nor delayed by more than a certain delay limit, or a non-guaranteed data packet, which is not subject to the guarantee. Based on a worst-case calculation of the delay experienced by a data packet forwarded by the node, the node configures a resource contingent with a maximum amount of resources which is more than the minimum amount of resources required to meet the guarantee. Further, the node assigns resources to the resource contingent and identifies resources in excess of the minimum amount as excess resources. In response to determining that the data packet (200) is a non-guaranteed data packet and determining that sufficient excess resources are present, the node forwards the data packet (200) based on the excess resources.
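The forwarding decision above can be sketched as follows; the class and method names (`ResourceContingent`, `forward`) and the unit-cost model are illustrative assumptions, not the patent's exact mechanism.

```python
class ResourceContingent:
    def __init__(self, minimum, maximum):
        self.minimum = minimum      # resources reserved to meet the guarantee
        self.available = maximum    # current resources in the contingent

    def excess(self):
        # Resources beyond what the worst-case delay calculation requires.
        return max(0, self.available - self.minimum)

    def forward(self, packet_cost, guaranteed):
        if guaranteed:
            # Guaranteed packets may draw on the full contingent.
            ok = self.available >= packet_cost
        else:
            # Non-guaranteed packets are forwarded only from excess resources.
            ok = self.excess() >= packet_cost
        if ok:
            self.available -= packet_cost
        return ok
```

The design choice here is that non-guaranteed traffic can never draw the contingent below the guaranteed minimum, so the delay bound for guaranteed packets is preserved regardless of the non-guaranteed load.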
STATELESS AND RELIABLE LOAD BALANCING USING SEGMENT ROUTING AND TCP TIMESTAMPS
Stateless and reliable load balancing using segment routing and an available side-channel may be provided. First, a non-SYN packet associated with a connection may be received. The non-SYN packet may have first data contained in an available side-channel. Next, an associated bucket may be retrieved based on a hash of second data in the non-SYN packet. The associated bucket may identify a plurality of servers. Then, one of the plurality of servers may be selected based on the first data contained in the available side-channel.
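A minimal sketch of this two-stage selection, assuming the "second data" is a hash of invariant header fields and the "first data" is a server index echoed in the side-channel (e.g. the TCP timestamp option); the bucket table and hashing scheme are illustrative:

```python
import hashlib

BUCKETS = {  # bucket id -> list of candidate servers
    0: ["10.0.0.1", "10.0.0.2"],
    1: ["10.0.0.3", "10.0.0.4"],
}

def bucket_for(five_tuple, n_buckets=2):
    # Second data: a hash over invariant packet fields picks the bucket.
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return digest[0] % n_buckets

def select_server(five_tuple, side_channel_value):
    # First data: the side-channel value disambiguates which server in the
    # bucket owns this established connection, with no per-flow state.
    servers = BUCKETS[bucket_for(five_tuple)]
    return servers[side_channel_value % len(servers)]
```

Because the side-channel carries the owning server's identity back in every non-SYN packet, the load balancer needs no connection table to keep mid-flow packets on the correct server.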
Method for transmitting data in dual connectivity and a device therefor
The present invention relates to a wireless communication system. More specifically, the present invention relates to a method and a device for transmitting data in dual connectivity, the method comprising: configuring UL data to be transmitted only to the first eNB if an amount of data available for transmission in a PDCP entity is less than a threshold; receiving PDCP data from an upper layer; transmitting a BSR to the second eNB to request an UL grant; receiving the UL grant from the second eNB; and transmitting the PDCP data to the second eNB using the UL grant if an amount of the PDCP data has been indicated to the second eNB by the BSR.
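The routing rule above can be condensed into a small decision function; the function name and the simplification to a single boolean "indicated by BSR" flag are assumptions for illustration.

```python
def choose_enb(data_available, threshold, indicated_to_second_by_bsr):
    # Below the threshold, UL data is configured to go only to the first
    # eNB -- unless this PDCP data was already indicated to the second eNB
    # in a BSR, in which case the second eNB's UL grant is used.
    if data_available < threshold and not indicated_to_second_by_bsr:
        return "eNB1"
    return "eNB2"
```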
Transmission burst control in a network device
Incoming data units within a network apparatus are temporarily buffered before being released to downstream components of the apparatus, such as a traffic manager or packet processor. A congestion detection component monitors for congestion with respect to particular subsets of the data units, such as data units that arrived over a same port or port group. When a metric, such as overutilization of the one or more buffers, indicates a state of congestion with respect to one of these subsets, various actions may be taken with respect to the subset to reduce the risk of complications from the congestion. In an embodiment, contrary to the expectation that a state of congestion would demand accelerating the release of data units from the buffer(s), a burst control mechanism is enabled. In an embodiment, the actions may further include temporarily pausing a lossless data stream and/or disabling features such as cut-through switching.
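A hedged sketch of the congestion-triggered burst control described above; the threshold value, the per-round release limit, and the class layout are illustrative assumptions:

```python
class IngressBuffer:
    def __init__(self, congestion_threshold):
        self.threshold = congestion_threshold
        self.queue = []
        self.burst_control = False

    def enqueue(self, unit):
        self.queue.append(unit)
        # Overutilization of the buffer signals congestion for this subset
        # (e.g. data units that arrived over the same port group).
        if len(self.queue) >= self.threshold:
            # Counter-intuitively, congestion enables pacing rather than
            # accelerating the release of buffered units.
            self.burst_control = True

    def release(self, burst_limit):
        # With burst control enabled, release at most burst_limit units per
        # scheduling round; otherwise drain freely.
        n = burst_limit if self.burst_control else len(self.queue)
        out, self.queue = self.queue[:n], self.queue[n:]
        return out
```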
PROCESSING PACKET
A method and device for processing a packet are provided in this disclosure. According to an example of the method, an HTTPS packet is received from a user host, and a non-online user session entry matching the HTTPS packet is searched for according to a source IP address and a destination IP address of the HTTPS packet. If the non-online user session entry is found, a token is obtained from a first token bucket if it is determined that the user session corresponding to the non-online user session entry has no token, where the number of tokens in the first token bucket is set based on the processing capability of the CPU of the access gateway device. When the token is successfully obtained, the HTTPS packet is sent to the CPU for processing. When the token fails to be obtained, the HTTPS packet is discarded.
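The CPU-protection step is a standard token-bucket admission check, sketched below; the capacity value and function names are illustrative, not taken from the disclosure.

```python
class TokenBucket:
    def __init__(self, capacity):
        self.capacity = capacity    # sized to the gateway CPU's capability
        self.tokens = capacity

    def refill(self, n):
        self.tokens = min(self.capacity, self.tokens + n)

    def try_take(self):
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False

def handle_https_packet(packet, bucket):
    # A packet that obtains a token is punted to the CPU; otherwise it is
    # discarded, bounding the CPU load from session setup traffic.
    return "to_cpu" if bucket.try_take() else "dropped"
```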
System and method for managing virtual radio access network slicing
A method is provided in one example embodiment and may include configuring a slice identity for each of a plurality of virtual radio access network (vRAN) slices, wherein each vRAN slice comprises functionality to perform, at least in part, one or more radio protocol operations on subscriber traffic; configuring an allotment of radio resources that can be utilized by each vRAN slice of the plurality of vRAN slices; receiving, by a slice manager, a subscriber profile identity (SPID) for a subscriber; and mapping the SPID for the subscriber to a particular vRAN slice of the plurality of vRAN slices. The method can further include communicating the mapping for the subscriber to the particular vRAN slice to which the SPID is mapped. The method can further include communicating the allotment of radio resources that can be utilized by the particular vRAN slice to the particular vRAN slice.
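The slice-manager bookkeeping above reduces to a pair of mappings; the slice identifiers, the use of PRBs as the allotment unit, and the function name are assumptions for the example.

```python
slices = {
    "slice-embb": {"allotment_prbs": 60, "subscribers": set()},
    "slice-iot":  {"allotment_prbs": 20, "subscribers": set()},
}

spid_to_slice = {}  # maintained by the slice manager

def map_subscriber(spid, slice_id):
    # Map the subscriber's SPID to a vRAN slice, communicate the mapping to
    # the slice, and return the radio-resource allotment that slice may use.
    spid_to_slice[spid] = slice_id
    slices[slice_id]["subscribers"].add(spid)
    return slices[slice_id]["allotment_prbs"]
```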
MULTICORE BUS ARCHITECTURE WITH NON-BLOCKING HIGH PERFORMANCE TRANSACTION CREDIT SYSTEM
This invention is a bus communication protocol. A master device stores bus credits. The master device may transmit a bus transaction only if it holds a sufficient number and type of bus credits. Upon transmission, the master device decrements the number of stored bus credits. The bus credits correspond to resources on a slave device for receiving bus transactions. The slave device must accept the bus transaction if it is accompanied by the proper credits. The slave device services the transaction and then transmits a credit return. The master device adds the corresponding number and types of credits to the stored amount. The slave device is then ready to accept another bus transaction, and the master device is re-enabled to initiate one. In many types of interactions a bus agent may act as both master and slave, depending upon the state of the process.
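The credit exchange can be sketched as below; collapsing the credits to a single type and modeling the slave's resources as a queue are simplifying assumptions.

```python
class Slave:
    def __init__(self, depth):
        self.credits_out = depth  # resources advertised to the master at reset
        self.inbox = []

    def receive(self, txn):
        # A transaction arriving with a credit must be accepted.
        self.inbox.append(txn)

    def service(self):
        # Service the oldest transaction, then return a credit: the
        # resource it occupied is free again.
        self.inbox.pop(0)
        return 1

class Master:
    def __init__(self, slave):
        self.slave = slave
        self.credits = slave.credits_out  # credits granted at reset

    def transmit(self, txn):
        if self.credits < 1:
            return False          # must hold a credit before transmitting
        self.credits -= 1         # spend the credit with the transaction
        self.slave.receive(txn)
        return True

    def credit_return(self, n):
        self.credits += n         # slave has freed the corresponding resource
```

The non-blocking property falls out of the accounting: the master never sends a transaction the slave cannot accept, so the bus is never stalled by a refused transfer.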
TCAM-based load balancing on a switch
In an example, a network switch is configured to operate natively as a load balancer. The switch receives incoming traffic on a first interface communicatively coupled to a first network, and assigns the traffic to one of a plurality of traffic buckets. This may include looking up the destination IP of an incoming packet in a fast memory such as a ternary content-addressable memory (TCAM) to determine whether the packet is directed to a virtual IP (VIP) address that is to be load balanced. If so, part of the source IP address may be used as a search tag in the TCAM to assign the incoming packet to a traffic bucket or to the IP address of a service node.
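A hedged sketch of the two lookups, modeling the TCAM matches as dictionary lookups; a real TCAM matches ternary bit patterns in hardware, which this software analogue does not capture, and the VIP, bucket count, and node addresses are invented for illustration.

```python
VIPS = {"203.0.113.10"}                 # VIPs subject to load balancing
N_BUCKETS = 4
BUCKET_TO_NODE = {0: "10.1.0.1", 1: "10.1.0.2",
                  2: "10.1.0.3", 3: "10.1.0.4"}

def lb_lookup(src_ip, dst_ip):
    # First lookup: is the destination a load-balanced VIP?
    if dst_ip not in VIPS:
        return None                     # not load balanced; forward normally
    # Second lookup: low-order bits of the source address act as the
    # search tag selecting a traffic bucket, hence a service node.
    tag = int(src_ip.split(".")[-1]) % N_BUCKETS
    return BUCKET_TO_NODE[tag]
```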
Hash table entries insertion method and apparatus using virtual buckets
The present disclosure describes a process and apparatus for improving insertions of entries into a hash table. A large number of smaller virtual buckets may be combined together and associated with the buckets used for hash table entry lookups and/or entry insertion. On insertion of an entry, hash table entries associated with a hashed-to virtual bucket may be moved between the buckets associated with that virtual bucket, to better distribute entries across the available buckets, reducing the number of entries in the largest buckets and the standard deviation of bucket sizes across the entire hash table.
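One way to sketch this, assuming each virtual bucket may place its entries in a small group of candidate physical buckets and a "move to the least-loaded candidate" policy; the sizes and the candidate function are illustrative, not the disclosure's scheme:

```python
N_VIRTUAL, N_PHYSICAL, GROUP = 64, 8, 2

def candidates(v):
    # The group of physical buckets a virtual bucket may use.
    return [(v + i) % N_PHYSICAL for i in range(GROUP)]

physical = [[] for _ in range(N_PHYSICAL)]   # entry lists per physical bucket
placement = {}                               # virtual bucket -> chosen physical

def insert(key):
    v = hash(key) % N_VIRTUAL
    # Pick the least-loaded candidate; if the choice changes, move the
    # virtual bucket's existing entries along with it so lookups need only
    # consult placement[v].
    best = min(candidates(v), key=lambda p: len(physical[p]))
    old = placement.get(v)
    if old is not None and old != best:
        moved = [k for k in physical[old] if hash(k) % N_VIRTUAL == v]
        physical[old] = [k for k in physical[old] if hash(k) % N_VIRTUAL != v]
        physical[best].extend(moved)
    placement[v] = best
    physical[best].append(key)
```

Because many virtual buckets share each physical bucket, rebalancing one virtual bucket moves only a small slice of entries, which is what keeps the largest buckets small.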
Operation of user equipment in C-DRx mode with token bucket based access
Systems, methods, and apparatus for transmitting additional information over a radio access network are described. A method of wireless communication includes aligning a discontinuous reception (DRx) schedule for a user equipment (UE) with a plurality of token arrival times, determining, at a time based on a first token arrival time, whether a radio frequency (RF) band is available for communication, and transmitting control information on the RF band when the RF band is available for communication. The token arrival time may correspond to a waking time for the UE defined by the DRx schedule.
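The alignment and transmit decision above can be sketched as follows; the periodic token model, the token count, and both function names are assumptions for the example.

```python
def aligned_schedule(first_token_time, token_period, n):
    # Token arrivals are taken as periodic; the UE's DRx waking times are
    # aligned to coincide with them.
    return [first_token_time + k * token_period for k in range(n)]

def transmit_decision(tokens, band_available):
    # At a waking time (a token arrival), check whether the RF band is
    # available and spend a token to send control information if so.
    if tokens > 0 and band_available:
        return tokens - 1, "sent"
    return tokens, "deferred"
```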