Patent classifications
H04L49/90
Pipeline using match-action blocks
An apparatus includes an output bus configured to store data, a match table, one or more storage devices, and logic. The match table is configured to store a plurality of entries, each entry including a key value, wherein the match table specifies a matching entry in response to being queried with query data. The one or more storage devices are configured to store operation information for each of the plurality of entries stored in the match table. The operation information specifies one or more instructions associated with each respective entry in the plurality of entries stored in the match table. The logic is configured to receive one or more operands from the output bus, identify one or more instructions from the one or more storage devices, and generate, based on the one or more instructions and the one or more operands, processed data.
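The match-table lookup and instruction dispatch described above can be sketched as follows. This is an illustrative model, not the patented hardware: the exact-match dictionary, the entry-id indirection, and the tiny "add"/"set" instruction set are all assumptions made for the example.

```python
# Hypothetical sketch of one match-action stage: a match table maps a key
# to an entry, separate storage holds that entry's instructions, and logic
# applies the instructions to operands taken from the output bus.

class MatchActionStage:
    def __init__(self):
        self.match_table = {}   # key value -> entry id
        self.operations = {}    # entry id -> list of (instruction, argument)

    def add_entry(self, key, entry_id, instructions):
        self.match_table[key] = entry_id
        self.operations[entry_id] = instructions

    def process(self, query_key, operands):
        """Query the match table, fetch the matching entry's
        instructions, and generate processed data from the operands."""
        entry_id = self.match_table.get(query_key)
        if entry_id is None:
            return operands                      # no match: pass through
        result = list(operands)
        for op, arg in self.operations[entry_id]:
            if op == "add":                      # add a constant to each operand
                result = [v + arg for v in result]
            elif op == "set":                    # overwrite each operand
                result = [arg] * len(result)
        return result

stage = MatchActionStage()
stage.add_entry(key=0x0800, entry_id=1, instructions=[("add", 10)])
print(stage.process(0x0800, [1, 2, 3]))  # -> [11, 12, 13]
```

Separating the match table from the operation storage mirrors the indirection in the abstract: many keys can share one entry's instruction list.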
Multiplexing and congestion control
Methods, systems and devices for network congestion control exploit the inherent burstiness of network traffic, using a wave-based characterization of network traffic and corresponding multiplexing methods and approaches.
Position parameterized recursive network architecture with topological addressing
A digital data communications network that supports efficient, scalable routing of data and use of network resources by combining a recursive division of the network into hierarchical sub-networks with repeating parameterized general purpose link communication protocols and an addressing methodology that reflects the physical structure of the underlying network hardware. The sub-division of the network enhances security by reducing the amount of the network visible to an attack and by insulating the network hardware itself from attack. The fixed bandwidth range at each sub-network level allows quality of service to be assured and controlled. The routing of data is aided by a topological addressing scheme that allows data packets to be forwarded towards their destination based on only local knowledge of the network structure, with automatic support for mobility and multicasting. The repeating structures in the network greatly simplify network management and reduce the effort to engineer new network capabilities.
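The forwarding-with-only-local-knowledge idea can be illustrated with a small sketch. Here an address is a tuple of positions, one per sub-network level, and a node decides where to send a packet by finding the highest level at which the destination's address diverges from its own. The tuple encoding and function names are assumptions for illustration, not the patent's addressing format.

```python
# Illustrative sketch of topological (hierarchical) forwarding: because the
# address reflects the network's physical structure, a node needs only its
# own address to decide which sub-network level must route the packet.

def next_hop_level(local_addr, dest_addr):
    """Return the first (outermost) level at which the two addresses
    differ, i.e. the sub-network level that must carry the packet.
    Returns None if the destination is the local node itself."""
    for level, (a, b) in enumerate(zip(local_addr, dest_addr)):
        if a != b:
            return level
    return None

# A node at (region 2, switch 4, port 1) forwarding to various destinations:
print(next_hop_level((2, 4, 1), (2, 7, 3)))  # -> 1: route within region 2
print(next_hop_level((2, 4, 1), (5, 0, 0)))  # -> 0: route between regions
print(next_hop_level((2, 4, 1), (2, 4, 1)))  # -> None: deliver locally
```

No global routing table is consulted: the address comparison alone tells the node how far up the recursive hierarchy the packet must travel.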
DYNAMIC REDUNDANCY
A device implementing dynamic redundancy may include at least one processor configured to receive, from another device, packet reception data corresponding to video data previously provided for transmission from the device to the other device and determine, based at least in part on the packet reception data, an amount of redundancy to apply to video data provided for transmission to the other device. The at least one processor may be further configured to determine, based at least in part on the amount of redundancy, an encoding scheme for applying the redundancy to the video data. The at least one processor may be further configured to apply the amount of redundancy to the video data based at least in part on the encoding scheme to generate redundant data items and provide the video data and the redundant data items for transmission to the other device.
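The feedback loop in this abstract can be sketched as a simple policy: the receiver's reported loss rate determines how many redundant packets to add per block of video packets, and an encoding scheme generates them. The thresholds below and the XOR-parity stand-in (in place of a real erasure code such as Reed-Solomon) are illustrative assumptions, not the patent's method.

```python
# Hedged sketch of dynamic redundancy: packet reception feedback drives
# the amount of redundancy, which drives the encoding of redundant items.

def redundancy_amount(loss_rate, block_size=10):
    """Choose how many redundant packets to add per block, based on the
    receiver-reported loss rate (illustrative thresholds)."""
    if loss_rate < 0.01:
        return 0
    if loss_rate < 0.05:
        return 1
    return max(2, int(loss_rate * block_size) + 1)

def apply_redundancy(packets, n_redundant):
    """Generate n_redundant parity packets by XOR-ing the data packets
    together (a toy stand-in for a real erasure code)."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return packets + [parity] * n_redundant

block = [b"\x01\x02", b"\x04\x08"]
out = apply_redundancy(block, redundancy_amount(0.03))
print(len(out))  # -> 3: two data packets plus one parity packet
```

Because the redundancy amount is recomputed from fresh feedback, the sender spends extra bandwidth on protection only when the path is actually lossy.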
PACKET PROCESSING CONFIGURATIONS
Examples described herein relate to an interface and a network interface device coupled to the interface and comprising circuitry. In some examples, the circuitry is to receive packet data to be egressed, wherein the packet data does not specify a destination for the packet data, and to process the packet data to be egressed to generate a mapping of ingress packet to target based on a determination.
Improving performance of multi-processor computer systems
Embodiments of the invention may improve the performance of multi-processor systems in processing information received via a network. For example, some embodiments may enable configuration of a system such that information received via a network may be distributed among multiple processors for efficient processing. A user (e.g., system administrator) may select from among multiple configuration options, each configuration option being associated with a particular mode of processing information received via a network. By selecting a configuration option, the user may specify how information received via the network is processed to capitalize on the system's characteristics, such as by aligning processors on the system with certain NICs. As such, the processor(s) aligned with a NIC may perform networking-related tasks associated with information received by that NIC. If initial alignment causes one or more processors to become over-burdened, processing tasks may be dynamically re-distributed to other processors so as to achieve a more even distribution of the overall processing burden across the system.
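The alignment and re-distribution described above can be sketched in a few lines. The round-robin initial assignment, the load threshold, and the move-to-least-loaded policy are all illustrative assumptions standing in for whatever configuration option the administrator selects.

```python
# Sketch of aligning NICs with processors and dynamically re-distributing
# work when a processor becomes over-burdened. Names are illustrative.

def align(nics, cpus):
    """Initial alignment: assign NICs to CPUs round-robin, so each CPU
    handles networking tasks for the NICs aligned with it."""
    return {nic: cpus[i % len(cpus)] for i, nic in enumerate(nics)}

def rebalance(assignment, load, threshold=0.8):
    """If a CPU's load exceeds the threshold, move one of its NICs to
    the currently least-loaded CPU. (A real implementation would also
    update the load estimates as it moves work; this sketch does not.)"""
    for nic, cpu in list(assignment.items()):
        if load[cpu] > threshold:
            target = min(load, key=load.get)
            if target != cpu:
                assignment[nic] = target
    return assignment

mapping = align(["nic0", "nic1", "nic2"], ["cpu0", "cpu1"])
print(mapping)  # -> {'nic0': 'cpu0', 'nic1': 'cpu1', 'nic2': 'cpu0'}
```

The key property is that the initial alignment is only a starting point: the measured per-processor load, not the static configuration, drives where work ends up.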
Expandable queue
A network device includes packet processing circuitry and queue management circuitry. The packet processing circuitry is configured to transmit and receive packets to and from a network. The queue management circuitry is configured to store, in a memory, a queue for queuing data relating to processing of the packets, the queue including a primary buffer and an overflow buffer, to choose between a normal mode and an overflow mode based on a defined condition, to queue the data only in the primary buffer when operating in the normal mode, and, when operating in the overflow mode, to queue the data in a concatenation of the primary buffer and the overflow buffer.
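The two-mode queue described above can be sketched as follows. The "defined condition" here is simply the primary buffer filling up, and mode reset on emptiness is an assumption; the abstract leaves both policies open.

```python
# Minimal sketch of an expandable queue: data goes only to the primary
# buffer in normal mode, and to the concatenation of primary + overflow
# in overflow mode. FIFO order is preserved by draining primary first.

class ExpandableQueue:
    def __init__(self, primary_size, overflow_size):
        self.primary, self.overflow = [], []
        self.primary_size, self.overflow_size = primary_size, overflow_size
        self.overflow_mode = False

    def enqueue(self, item):
        if not self.overflow_mode and len(self.primary) < self.primary_size:
            self.primary.append(item)
            return True
        self.overflow_mode = True        # condition met: switch modes
        if len(self.overflow) < self.overflow_size:
            self.overflow.append(item)
            return True
        return False                     # both buffers full: drop

    def dequeue(self):
        """Dequeue in FIFO order across the concatenated buffers."""
        if self.primary:
            item = self.primary.pop(0)
        elif self.overflow:
            item = self.overflow.pop(0)
        else:
            return None
        if not self.primary and not self.overflow:
            self.overflow_mode = False   # empty again: back to normal mode
        return item
```

Keeping the overflow buffer out of the normal path means the common case pays nothing for the extra capacity; the concatenation only comes into play under pressure.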