Patent classifications
H04L12/861
SWITCH AND DATA ACCESSING METHOD THEREOF
A switch for transmitting data packets between at least one source node and at least one target node is provided. The switch includes a storage unit, a control unit, at least one receiving port and at least one transmitting port. The storage unit includes a plurality of storage blocks and is configured to cache the data packets. The control unit is configured to manage the storage blocks. The switch receives and caches the data packets transmitted from the at least one source node via the receiving port and transmits the cached data packets to the at least one target node via the transmitting port. A data accessing method adapted for the switch is also provided.
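The block-based storage unit described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation: the class name, ceiling-division block allocation, and drop-on-exhaustion policy are all assumptions.

```python
from collections import deque

class SwitchBuffer:
    """Sketch of a storage unit divided into fixed-size blocks.

    A received packet is cached into free blocks; transmitting it
    returns its blocks to the free pool. All names are illustrative.
    """

    def __init__(self, num_blocks, block_size):
        self.block_size = block_size
        self.free_blocks = deque(range(num_blocks))
        self.cached = {}  # packet_id -> list of block indices

    def cache_packet(self, packet_id, length):
        # Number of blocks needed to hold the packet (ceiling division).
        needed = -(-length // self.block_size)
        if needed > len(self.free_blocks):
            return False  # not enough free blocks: drop the packet
        self.cached[packet_id] = [self.free_blocks.popleft()
                                  for _ in range(needed)]
        return True

    def transmit_packet(self, packet_id):
        # Release the packet's blocks back to the free pool.
        for block in self.cached.pop(packet_id):
            self.free_blocks.append(block)
```

A control unit managing blocks this way can interleave packets of different sizes in one shared memory without per-port partitioning.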
Digital Signal Processing Over Data Streams
The techniques and systems described herein are directed to providing deep integration of digital signal processing (DSP) operations with a general-purpose query processor. The techniques and systems provide a unified query language for processing tempo-relational and signal data, provide mechanisms for defining DSP operators, and support incremental computation in both offline and online analysis. The techniques and systems include receiving streaming data, aggregating and performing uniformity processing to generate a uniform signal, and storing the uniform signal in a batched columnar representation. Data can be copied from the batched columnar representation to a circular buffer, where DSP operations are applied to the data. Incremental processing can avoid redundant processing. Improvements to the functioning of a computer are provided by reducing an amount of data that must be passed back and forth between separate query databases and DSP processors, and by reducing a latency of processing and/or memory usage.
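The columnar-batch-to-circular-buffer path can be sketched as below. The moving average stands in for an arbitrary DSP operator, and the class and method names are illustrative assumptions, not the system's API.

```python
import numpy as np

class CircularSignalBuffer:
    """Sketch: copy uniform-signal samples from a batched columnar
    representation into a circular buffer and apply a DSP operation
    (here a moving average) over the most recent window."""

    def __init__(self, capacity):
        self.buf = np.zeros(capacity)
        self.capacity = capacity
        self.write_pos = 0  # total samples ever written

    def append_batch(self, column):
        # Copy one columnar batch into the circular buffer, wrapping.
        for sample in column:
            self.buf[self.write_pos % self.capacity] = sample
            self.write_pos += 1

    def moving_average(self, window):
        # DSP operation over the `window` most recent samples.
        start = self.write_pos - window
        idx = [i % self.capacity for i in range(start, self.write_pos)]
        return self.buf[idx].mean()
```

Because only newly appended samples change the window, an incremental implementation could update the running sum per batch instead of recomputing it, which is the redundancy the abstract's incremental processing avoids.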
System and method for creating a scalable monolithic packet processing engine
A novel and efficient method is described that creates a monolithic high capacity Packet Engine (PE) by connecting N lower capacity Packet Engines (PEs) via a novel Chip-to-Chip (C2C) interface. The C2C interface is used to perform functions, such as memory bit slicing and to communicate shared information, and enqueue/dequeue operations between individual PEs.
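The memory bit-slicing function named above can be illustrated as follows: each memory word is split into N equal-width slices, one per lower-capacity Packet Engine, so the N engines together present a single wide memory. The slicing granularity and helper names are assumptions for illustration.

```python
def bit_slice_write(word, num_slices, width):
    """Split a `width`-bit entry into `num_slices` equal slices,
    one per PE memory (illustrative sketch of memory bit slicing)."""
    slice_width = width // num_slices
    mask = (1 << slice_width) - 1
    return [(word >> (i * slice_width)) & mask for i in range(num_slices)]

def bit_slice_read(slices, slice_width):
    # Reassemble the original word from the per-PE slices.
    word = 0
    for i, s in enumerate(slices):
        word |= s << (i * slice_width)
    return word
```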
Method and device for filtering media packets
A method including: receiving, at a video conferencing device, a packet of a video conferencing media stream, the video conferencing device including a processor; determining, by the video conferencing device, whether a length of the packet is sufficiently long to contain media; sending a request to a Look-up Table memory using the media stream ID as an input value while in parallel determining, with the processor, whether the packet is a valid media packet; in response to receiving a destination address in a media processing network from the Look-up Table memory and determining that the packet is a valid media packet, modifying, by the video conferencing device, a header of the packet with the destination address received from the Look-up Table memory; and transmitting, by the video conferencing device, the packet to the modified destination address.
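The filtering flow above can be sketched sequentially as below; in the described device the Look-up Table query and the validity check run in parallel. The minimum length constant, packet fields, and function name are illustrative assumptions.

```python
MIN_MEDIA_LENGTH = 12  # assumed minimum length for a media payload

def filter_media_packet(packet, lookup_table):
    """packet: dict with 'stream_id', 'payload', 'valid', 'dest'.
    Returns the packet with a rewritten destination, or None if
    the packet is filtered out (illustrative sketch)."""
    if len(packet["payload"]) < MIN_MEDIA_LENGTH:
        return None  # too short to contain media
    dest = lookup_table.get(packet["stream_id"])  # LUT query by stream ID
    is_valid = packet.get("valid", False)         # validity check (parallel in HW)
    if dest is None or not is_valid:
        return None
    packet["dest"] = dest  # modify header with the LUT destination
    return packet
```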
Scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources
A scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources, including: in a NUMA architecture, when a network interface card (NIC) of a virtual machine is started, obtaining the distribution of the NIC buffer across the NUMA nodes; obtaining each NUMA node's affinity for the NIC buffer on the basis of the affinity relationships between the NUMA nodes; determining a target NUMA node by combining the distribution of the NIC buffer across the NUMA nodes with the NUMA node affinities for the NIC buffer; and scheduling the virtual processor to a CPU on the target NUMA node. This solves the problem, in the NUMA architecture, of a suboptimal affinity between the VCPU of the virtual machine and the NIC buffer, thereby increasing the speed at which the VCPU processes network packets.
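The target-node selection step can be sketched as a simple weighted score: each candidate node is scored by the share of NIC buffer it can reach, weighted by its affinity to the node holding that share. The scoring formula and data shapes are assumptions for illustration, not the patented algorithm.

```python
def choose_target_node(buffer_distribution, affinity):
    """buffer_distribution: {node: fraction of NIC buffer on that node}
    affinity: {node: {other_node: affinity weight}}

    Returns the NUMA node with the best reachable-buffer score;
    the VCPU would then be scheduled to a CPU on that node."""
    scores = {}
    for node in affinity:
        scores[node] = sum(
            affinity[node].get(buf_node, 0.0) * share
            for buf_node, share in buffer_distribution.items()
        )
    return max(scores, key=scores.get)
```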
Switching methods and systems for a network interface card
Methods and systems for network communication are provided. One method includes, receiving a network packet at a first network interface card (NIC) operationally coupled to a computing device; identifying a second NIC as a destination for the network packet; placing the network packet by the first NIC at a host memory location, without utilizing resources of a processor of the computing device; notifying the second NIC that the network packet has been placed at the host memory location; retrieving the network packet by the second NIC from the host memory location; transmitting the network packet by the second NIC to another destination; notifying the first NIC by the second NIC that the packet has been transmitted by the second NIC; and freeing the host memory location by the first NIC.
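The place/notify/retrieve/transmit/free sequence above can be sketched as below. In hardware the placement is a DMA write that bypasses the host CPU; here ordinary assignments stand in for it, and all names are illustrative.

```python
class HostMemoryPool:
    """Sketch of the NIC-to-NIC handoff through host memory."""

    def __init__(self):
        self.slots = {}    # location -> packet bytes
        self.next_loc = 0

    def place(self, packet):
        # NIC1 writes the packet to a host memory location (DMA in
        # hardware, without using the host processor).
        loc = self.next_loc
        self.next_loc += 1
        self.slots[loc] = packet
        return loc

    def retrieve(self, loc):
        # NIC2 reads the packet for transmission.
        return self.slots[loc]

    def free(self, loc):
        # NIC1 frees the slot after NIC2's transmit notification.
        del self.slots[loc]

def forward(pool, packet, transmit):
    loc = pool.place(packet)      # NIC1 places packet, notifies NIC2
    transmit(pool.retrieve(loc))  # NIC2 retrieves and transmits it
    pool.free(loc)                # NIC2 notifies NIC1; NIC1 frees the slot
```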
CONGESTION AVOIDANCE IN A NETWORK DEVICE
A network device receives a packet from a network, and determines at least one port, among a plurality of ports of the network device, via which the packet is to be transmitted. The network device also determines an amount of free buffer space in a buffer memory of the network device, and dynamically determines, based at least in part on the amount of free buffer space, respective thresholds for triggering ones of multiple traffic management operations to be performed based on the packet. Using the respective thresholds, the network device determines whether or not to trigger ones of the multiple traffic management operations with respect to the packet. The network device then performs, with respect to the packet, the one or more traffic management operations determined to be triggered based on the corresponding thresholds.
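A minimal sketch of such free-space-dependent thresholds follows, assuming a simple linear scaling policy (the abstract does not specify how the thresholds depend on free buffer space, so the formula and operation names are illustrative).

```python
def dynamic_thresholds(free_buffer, base_thresholds):
    """Sketch: scale each operation's trigger threshold by the current
    free buffer space, so thresholds tighten as the buffer fills."""
    return {op: base * free_buffer for op, base in base_thresholds.items()}

def triggered_operations(queue_length, free_buffer, base_thresholds):
    # An operation is triggered when the queue for the packet's port
    # exceeds that operation's dynamic threshold.
    thresholds = dynamic_thresholds(free_buffer, base_thresholds)
    return [op for op, t in thresholds.items() if queue_length > t]
```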
Traffic Management in a Network Switching System with Remote Physical Ports
A switching system includes a port extender device coupled to a central switching device. Packets processed by the central switching device are forwarded to the port extender device and enqueued in ones of a plurality of egress queues in the port extender device for transmission of the packets via the front ports of the port extender device. Respective egress queues in the port extender device have a queue depth that is less than a queue depth of corresponding respective egress queues in the central switching device. A flow control message indicative of congestion in a particular egress queue of the port extender device is generated and transmitted to the central switching device to control transmission of packets from the central switching device to the particular egress queue of the port extender device.
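The flow-control behavior of a shallow port-extender egress queue can be sketched as below. The XON/XOFF message pair and the single congestion threshold are assumptions borrowed from conventional flow control, not details stated in the abstract.

```python
class ExtenderQueue:
    """Sketch of a shallow egress queue in a port extender that sends a
    flow-control message to the central switching device when its depth
    crosses a congestion threshold."""

    def __init__(self, max_depth, congestion_threshold, send_flow_control):
        self.queue = []
        self.max_depth = max_depth
        self.threshold = congestion_threshold
        self.send_flow_control = send_flow_control  # path to central switch

    def enqueue(self, packet):
        if len(self.queue) >= self.max_depth:
            return False  # queue full: tail drop
        self.queue.append(packet)
        if len(self.queue) >= self.threshold:
            # Ask the central switch to pause traffic toward this queue.
            self.send_flow_control("XOFF")
        return True

    def dequeue(self):
        packet = self.queue.pop(0)
        if len(self.queue) < self.threshold:
            self.send_flow_control("XON")  # resume transmission
        return packet
```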
IMPULSIVE NOISE DETECTION CIRCUIT AND ASSOCIATED METHOD
An impulsive noise detection method is applied to an orthogonal frequency-division multiplexing (OFDM) system to detect whether an input signal includes impulsive noise. The impulsive noise detection method includes receiving the input signal, converting the input signal to a digital input signal, filtering out a data band from the digital input signal to generate a signal under detection, calculating the signal under detection to generate a calculation result, and determining whether the input signal includes the impulsive noise according to the calculation result and a threshold.
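The filter/calculate/compare pipeline above can be sketched as follows. The first-difference high-pass filter and the peak-magnitude statistic are stand-in assumptions; the abstract does not specify the filter or the calculation.

```python
import numpy as np

def detect_impulsive_noise(samples, threshold):
    """Sketch of the described pipeline: suppress the (slowly varying)
    data band, compute a detection statistic on the residual, and
    compare it against a threshold."""
    filtered = np.diff(samples)           # crude high-pass filter
    statistic = np.max(np.abs(filtered))  # calculation result
    return statistic > threshold
```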
Managing a Jitter Buffer Size
A method is presented for managing a jitter buffer depth for receiving real-time communication. The method is performed in a receiver and comprises the steps of: determining an adaptive bitrate state of the receiver when a current capacity of a communication channel for receiving the real-time communication is below a maximum bitrate for receiving the real-time communication; and increasing a depth of a jitter buffer for receiving the real-time communication when the adaptive bitrate state is determined.
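The two steps can be sketched as below; the fixed depth increment and the function signature are illustrative assumptions.

```python
def manage_jitter_buffer(current_capacity, max_bitrate, depth, increment):
    """Sketch: the receiver enters an adaptive bitrate state when
    channel capacity drops below the maximum receive bitrate, and
    increases the jitter buffer depth while in that state."""
    adaptive_state = current_capacity < max_bitrate
    if adaptive_state:
        depth += increment  # a deeper buffer absorbs larger jitter
    return adaptive_state, depth
```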