Patent classifications
H04L49/9078
Forwarding Entry Update Method and Apparatus
A forwarding entry update method and apparatus. The method includes receiving a write operation packet that carries write operation information, where the information comprises write operation data and a write operation address: the write operation data indicates a forwarding entry, and the write operation address indicates the address in a memory to which the data is to be written. The method further includes obtaining the write operation information from the write operation packet and writing the write operation data into the memory according to the write operation address in the write operation information.
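As a rough illustration of the update described above, the operation can be modeled as parsing an address and a payload out of the packet and committing the payload to memory. The 4-byte big-endian address prefix is an assumed layout for the sketch, not the patent's actual packet format:

```python
import struct

def apply_write_operation(packet: bytes, memory: bytearray) -> None:
    """Obtain the write operation information from the packet and write
    the data into memory at the indicated address."""
    (write_address,) = struct.unpack_from(">I", packet, 0)  # write operation address
    write_data = packet[4:]                                 # forwarding entry bytes
    memory[write_address:write_address + len(write_data)] = write_data

# Example: install a 4-byte forwarding entry at offset 16.
memory = bytearray(64)
packet = struct.pack(">I", 16) + b"\x0a\x00\x00\x01"
apply_write_operation(packet, memory)
```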
System and Method of A High Buffered High Bandwidth Network Element
A method and apparatus of a network element that processes a packet is described. In an exemplary embodiment, the network element receives, with a packet switch unit, a packet that includes a destination address and was received by the network element on an ingress interface. The network element determines whether the packet is to be stored in an external queue and identifies that external queue based on one or more characteristics of the packet. The network element then forwards the packet to a packet storage unit, which includes storage for the external queue. The network element subsequently receives the packet back from the packet storage unit and forwards it to the egress interface corresponding to the external queue.
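A minimal sketch of that queuing path, with a traffic class standing in for the packet characteristics and the queue-to-egress names invented purely for illustration:

```python
from collections import deque

class PacketSwitchUnit:
    """Toy model: classify a packet to an external queue, buffer it in the
    packet storage unit, then drain it to the matching egress interface."""

    def __init__(self):
        self.external_queues = {"bulk": deque(), "video": deque()}
        self.egress_for_queue = {"bulk": "eth1", "video": "eth2"}

    def needs_external_queue(self, pkt: dict) -> bool:
        return pkt.get("traffic_class") in self.external_queues

    def store(self, pkt: dict) -> None:
        # Identify the external queue from a packet characteristic and
        # forward the packet to the storage backing that queue.
        self.external_queues[pkt["traffic_class"]].append(pkt)

    def drain(self, traffic_class: str):
        # Receive packets back from storage and hand each one to the
        # egress interface corresponding to its external queue.
        q = self.external_queues[traffic_class]
        while q:
            yield self.egress_for_queue[traffic_class], q.popleft()

switch = PacketSwitchUnit()
pkt = {"dst": "10.0.0.1", "traffic_class": "video"}
if switch.needs_external_queue(pkt):
    switch.store(pkt)
sent = list(switch.drain("video"))
```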
COMPUTER NETWORK PACKET TRANSMISSION TIMING
Establishing an expected transmit time at which a network interface controller (NIC) is expected to transmit a next packet. Enqueuing, with the NIC and before the expected transmit time, a packet P1 to be transmitted at the expected transmit time. Upon enqueuing P1, incrementing the expected transmit time by an expected transmit duration of P1. Transmitting at the NIC's line rate and timestamping enqueued P1 with its actual transmit time. Adjusting the expected transmit time by the difference between P1's actual transmit time and P1's expected transmit time. Requesting, before completion of transmitting P1, to transmit a P2 at time t(P2). Enqueuing, in sequence, zero or more filler packets P0, such that the current expected transmit time plus the duration of transmitting the P0s at the line rate equals t(P2). Transmitting at the line rate each enqueued P0. Upon enqueuing each P0, incrementing the expected transmit time by the expected transmit duration of that P0. Enqueuing P2 for transmission directly following the final P0. Transmitting, by the NIC, the enqueued P2 at t(P2).
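The filler-packet arithmetic above can be sketched as follows, using integer nanosecond timestamps and equal-sized P0 packets; the function name and the fixed filler duration are assumptions for the example:

```python
def schedule_paced_transmit(expected_ns: int, t_p2_ns: int,
                            filler_dur_ns: int, p2_dur_ns: int):
    """Enqueue zero or more filler packets P0 so that the expected transmit
    time advances, at line rate, exactly to t(P2), then enqueue P2."""
    timeline = []
    while expected_ns < t_p2_ns:
        timeline.append(("P0", expected_ns))
        expected_ns += filler_dur_ns  # increment per enqueued P0
    assert expected_ns == t_p2_ns, "gap must be a multiple of the P0 duration"
    timeline.append(("P2", expected_ns))
    # Return the schedule and the expected transmit time after P2 finishes.
    return timeline, expected_ns + p2_dur_ns

# P2 is requested for t = 3000 ns while the NIC expects to be free at t = 0:
timeline, next_expected = schedule_paced_transmit(0, 3000, 1000, 500)
```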
METHOD AND SYSTEM FOR OPTIMIZING SERVICE DEVICE TRAFFIC MANAGEMENT
A method and system for optimizing service device traffic management. Specifically, the method and system disclosed herein entail filtering network traffic flows directed to service devices, distributed throughout a network, for inspection. Through the aforementioned filtering, a targeted subset of network traffic flows may be identified and excluded from service device processing. The filtering thus alleviates traffic congestion and improves traffic throughput at the service device(s), thereby optimizing the management and/or processing of network traffic flows redirected to the service device(s).
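The filtering step might be expressed as a small bypass rule table consulted before redirection; the rule fields and flow attributes below are illustrative, not the patent's actual match criteria:

```python
def steer_flow(flow: dict, bypass_rules: list) -> str:
    """Return 'forward' for flows in the targeted subset excluded from
    service device processing, 'redirect' for flows still sent to the
    service device for inspection."""
    for rule in bypass_rules:
        if all(flow.get(field) == value for field, value in rule.items()):
            return "forward"   # bypass: relieves congestion at the service device
    return "redirect"          # everything else is inspected

bypass_rules = [{"proto": "tcp", "dst_port": 443, "trusted": True}]
trusted = {"proto": "tcp", "dst_port": 443, "trusted": True}
unknown = {"proto": "udp", "dst_port": 53, "trusted": False}
```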
System and method for executing native client code in a storage device
A system and method for executing user-provided code securely on a solid state drive (SSD) to perform data processing on the SSD. In one embodiment, a user uses a security-oriented cross-compiler to compile user-provided source code for a data processing task on a host computer containing, or otherwise connected to, an SSD. The resulting binary is combined with lists of input and output file identifiers and sent to the SSD. A central processing unit (CPU) on the SSD extracts the binary and the lists of file identifiers. The CPU obtains from the host file system the addresses of storage areas in the SSD containing the data in the input files, reads the input data, executes the binary using a container, and writes the results of the data processing task back to the SSD, in areas corresponding to the output file identifiers.
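The host-to-SSD hand-off described above can be mocked up as a length-prefixed payload combining the compiled binary with the file-identifier lists; this layout is an assumption for the sketch, not the actual format used by the patent:

```python
import json

def build_offload_request(binary: bytes, input_ids: list, output_ids: list) -> bytes:
    """Host side: combine the binary with lists of input and output file IDs."""
    header = json.dumps({"inputs": input_ids, "outputs": output_ids}).encode()
    return len(header).to_bytes(4, "big") + header + binary

def parse_offload_request(request: bytes):
    """SSD-side CPU: extract the binary and the lists of file identifiers."""
    hdr_len = int.from_bytes(request[:4], "big")
    header = json.loads(request[4:4 + hdr_len])
    return header["inputs"], header["outputs"], request[4 + hdr_len:]

req = build_offload_request(b"\x7fELF", [101, 102], [201])
inputs, outputs, binary = parse_offload_request(req)
```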
Queuing system to predict packet lifetime in a computing device
Techniques are disclosed for a queuing system for network devices. In one example, a network device includes a plurality of memories and processing circuitry connected to the plurality of memories. The plurality of memories includes a local memory of the processing circuitry and a memory external to the processing circuitry. The processing circuitry is configured to receive an incoming network packet to be processed, where the network packet is held in a queue prior to processing, and to determine a predicted lifetime of the network packet based on a dequeue rate for the queue. The processing circuitry is further configured to select a first memory from the plurality of memories based on the predicted lifetime and to store the network packet at the first memory in response to that selection.
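The lifetime prediction reduces to a simple division: the bytes queued ahead of the packet over the queue's dequeue rate. A sketch, with the threshold value chosen arbitrarily:

```python
def select_memory(queue_depth_bytes: int, dequeue_rate_bps: float,
                  local_threshold_s: float = 0.001) -> str:
    """Predict how long the packet will sit in the queue and pick fast
    local memory for short-lived packets, external memory otherwise."""
    predicted_lifetime_s = queue_depth_bytes * 8 / dequeue_rate_bps
    return "local" if predicted_lifetime_s <= local_threshold_s else "external"

# 1 KiB ahead of the packet, draining at 10 Gbit/s -> ~0.8 us -> local memory.
fast = select_memory(1024, 10e9)
# 10 MiB ahead at the same rate -> ~8 ms -> external memory.
slow = select_memory(10 * 1024 * 1024, 10e9)
```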
TECHNOLOGIES FOR JITTER-ADAPTIVE LOW-LATENCY, LOW POWER DATA STREAMING BETWEEN DEVICE COMPONENTS
Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
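A minimal producer-side sketch of the mode switching described above, with capacities and thresholds invented for the example; the consumer side and the catch-up mode are omitted, and the extra guard on local space is an assumption (the patent relies on the consumer draining the local buffer):

```python
class Producer:
    """Switch to the remote buffer when the local buffer fills; switch back
    once the remote buffer drains below the low threshold."""

    def __init__(self, local_capacity: int, low_threshold: int):
        self.local, self.remote = [], []
        self.local_capacity = local_capacity
        self.low_threshold = low_threshold
        self.mode = "local"

    def produce(self, item) -> None:
        if self.mode == "local" and len(self.local) >= self.local_capacity:
            self.mode = "remote"   # local buffer is full
        elif (self.mode == "remote" and len(self.remote) < self.low_threshold
              and len(self.local) < self.local_capacity):
            self.mode = "local"    # remote drained below the low threshold
        (self.local if self.mode == "local" else self.remote).append(item)

p = Producer(local_capacity=2, low_threshold=1)
for item in range(4):
    p.produce(item)
```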
IMPROVING PERFORMANCE OF MULTI-PROCESSOR COMPUTER SYSTEMS
Embodiments of the invention may improve the performance of multi-processor systems in processing information received via a network. For example, some embodiments may enable configuration of a system such that information received via a network may be distributed among multiple processors for efficient processing. A user (e.g., system administrator) may select from among multiple configuration options, each configuration option being associated with a particular mode of processing information received via a network. By selecting a configuration option, the user may specify how information received via the network is processed to capitalize on the system's characteristics, such as by aligning processors on the system with certain NICs. As such, the processor(s) aligned with a NIC may perform networking-related tasks associated with information received by that NIC. If initial alignment causes one or more processors to become over-burdened, processing tasks may be dynamically re-distributed to other processors so as to achieve a more even distribution of the overall processing burden across the system.
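One hypothetical way to express the alignment and re-distribution: a round-robin initial mapping of NICs to processors, plus a toy heuristic that moves a NIC off an over-burdened processor. Both functions and the 2x load ratio are assumptions, not the patent's policy:

```python
def align_nics(nics: list, cpus: list) -> dict:
    """Initial alignment: spread NICs across processors round-robin."""
    return {nic: cpus[i % len(cpus)] for i, nic in enumerate(nics)}

def rebalance(assignment: dict, load: dict) -> dict:
    """If the busiest processor carries more than twice the load of the
    idlest, move one of its NICs to the idlest processor."""
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if load[busiest] > 2 * load[idlest]:
        for nic, cpu in assignment.items():
            if cpu == busiest:
                assignment[nic] = idlest   # re-distribute that NIC's tasks
                break
    return assignment

assignment = align_nics(["nic0", "nic1", "nic2"], ["cpu0", "cpu1"])
# nic0 and nic2 land on cpu0, which then becomes over-burdened:
assignment = rebalance(assignment, {"cpu0": 90, "cpu1": 20})
```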
Network processors
The present disclosure is directed to a network processor for processing the high volumes of traffic provided by today's access networks at (or near) wireline speeds. The network processor can be implemented within a residential gateway to perform, among other functions, routing to deliver high-speed data services (e.g., data services with rates up to 10 Gbit/s) from a wide area network (WAN) to end user devices in a local area network (LAN).