Access node for data centers

An access node that can be configured and optimized to perform input and output (I/O) tasks, such as storage and retrieval of data to and from storage devices (such as solid state drives), networking, data processing, and the like. For example, the access node may be configured to receive data to be processed, wherein the access node includes a plurality of processing cores, a data network fabric, and a control network fabric; receive, over the control network fabric, a work unit message indicating a processing task to be performed by a processing core; and process the work unit message, wherein processing the work unit message includes retrieving data associated with the work unit message over the data network fabric.
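
To make the control/data split concrete, the following is a minimal Python sketch of the flow the abstract describes. The class and attribute names (AccessNode, WorkUnit, and the fabric objects) are illustrative assumptions, not the patent's actual interfaces.

    from dataclasses import dataclass

    @dataclass
    class WorkUnit:
        core_id: int          # processing core designated for the task
        data_location: str    # where the associated data lives (e.g. an SSD address)

    class AccessNode:
        def __init__(self, num_cores, control_fabric, data_fabric):
            self.cores = list(range(num_cores))
            self.control_fabric = control_fabric  # carries work unit messages
            self.data_fabric = data_fabric        # carries the bulk data

        def run_once(self):
            # 1. Receive a work unit message over the control network fabric.
            wu = self.control_fabric.receive()
            # 2. Retrieve the data associated with the work unit over the
            #    data network fabric, kept separate from control traffic.
            payload = self.data_fabric.read(wu.data_location)
            # 3. Process the work unit on the designated processing core.
            return self.process_on_core(wu.core_id, payload)

        def process_on_core(self, core_id, payload):
            # Placeholder for the actual per-core processing task.
            return (core_id, len(payload))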

Data processing unit for compute nodes and storage nodes

A new processing architecture is described in which a data processing unit (DPU) is utilized within a device. Unlike conventional compute models that are centered around a central processing unit (CPU), example implementations described herein leverage a DPU that is specially designed and optimized for a data-centric computing model in which the data processing tasks are centered around, and the primary responsibility of, the DPU. For example, various data processing tasks, such as networking, security, and storage, as well as related work acceleration, distribution and scheduling, and other such tasks are the domain of the DPU. The DPU may be viewed as a highly programmable, high-performance input/output (I/O) and data-processing hub designed to aggregate and process network and storage I/O to and from multiple other components and/or devices. This frees resources of the CPU, if present, for computing-intensive tasks.
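
As a rough illustration of the data-centric split described above, the hypothetical Python sketch below routes I/O-centric work to the DPU and leaves compute-intensive work to the CPU. The task names and routing set are assumptions for illustration only, not interfaces from the patent.

    # Tasks named in the abstract as the DPU's domain.
    DPU_TASKS = {"networking", "security", "storage", "work_scheduling"}

    def dispatch(task_kind, payload, dpu, cpu):
        # The DPU acts as the I/O and data-processing hub; everything
        # else (compute-intensive work) stays on the CPU, if present.
        if task_kind in DPU_TASKS:
            return dpu.process(task_kind, payload)
        return cpu.process(task_kind, payload)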

Method and apparatus for order entry in an electronic trading system

Orders received by an electronic trading system are processed in batches based on the instrument to which an order relates. An incoming order is assigned to a queue of a queue set that makes up the batch according to a random process. Where orders are received from related trading parties, they are assigned to the same queue of the queue set according to their time of receipt. The batch has a random duration within defined minimum and maximum durations, and at the end of the batch the orders held in the queues are transferred to a matching thread of the trading system sequentially, with one order being removed from each queue per pass and passes of the queues completed until all orders have been removed.
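
The batching scheme lends itself to a short sketch. The Python below is a minimal, hypothetical rendering of it; the queue-set size, duration bounds, and party-to-queue mapping are illustrative assumptions rather than values from the patent.

    import random
    from collections import deque

    class OrderBatch:
        def __init__(self, num_queues=8, min_duration=0.5, max_duration=2.0):
            self.queues = [deque() for _ in range(num_queues)]
            # Each batch runs for a random duration within defined bounds.
            self.duration = random.uniform(min_duration, max_duration)

        def add_order(self, order, party_key=None):
            if party_key is not None:
                # Orders from related trading parties go to the same queue,
                # so their time-of-receipt ordering is preserved.
                idx = hash(party_key) % len(self.queues)
            else:
                # Otherwise the queue is chosen by a random process.
                idx = random.randrange(len(self.queues))
            self.queues[idx].append(order)

        def drain(self):
            # At the end of the batch, pass over the queues repeatedly,
            # removing one order per queue per pass, until all orders
            # have been handed to the matching thread.
            while any(self.queues):
                for q in self.queues:
                    if q:
                        yield q.popleft()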

Technologies for jitter-adaptive low-latency, low power data streaming between device components

Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
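
A minimal Python sketch of the threshold-driven mode switching may help; buffer capacities, thresholds, and class names are illustrative assumptions rather than details from the claims.

    from collections import deque

    class Producer:
        def __init__(self, local, remote, local_capacity, low_threshold):
            self.local, self.remote = local, remote
            self.local_capacity = local_capacity
            self.low_threshold = low_threshold
            self.mode = "local"

        def produce(self, item):
            if self.mode == "local":
                if len(self.local) >= self.local_capacity:
                    # Local buffer full: switch to remote-buffer producer mode.
                    self.mode = "remote"
            elif len(self.remote) < self.low_threshold:
                # Remote buffer below the low threshold: return to the
                # low-latency local-buffer producer mode.
                self.mode = "local"
            (self.local if self.mode == "local" else self.remote).append(item)

    class Consumer:
        def __init__(self, local, remote, high_threshold):
            self.local, self.remote = local, remote
            self.high_threshold = high_threshold
            self.mode = "local"

        def consume(self):
            # Transitions between local- and remote-buffer consumer modes
            # would mirror the producer's; only the catch-up trigger is shown.
            if self.mode == "local" and len(self.local) > self.high_threshold:
                # Local buffer above the high threshold: enter catch-up mode
                # (e.g. drain in larger batches) to avoid overflow.
                self.mode = "catch_up"
            source = self.remote if self.mode == "remote" else self.local
            return source.popleft() if source else None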

Combined input and output queue for packet forwarding in network devices

An apparatus for switching network traffic includes an ingress packet forwarding engine and an egress packet forwarding engine. The ingress packet forwarding engine is configured to determine, in response to receiving a network packet, an egress packet forwarding engine for outputting the network packet and enqueue the network packet in a virtual output queue. The egress packet forwarding engine is configured to output, in response to a first scheduling event and to the ingress packet forwarding engine, information indicating the network packet in the virtual output queue and that the network packet is to be enqueued at an output queue for an output port of the egress packet forwarding engine. The ingress packet forwarding engine is further configured to dequeue, in response to receiving the information, the network packet from the virtual output queue and enqueue the network packet to the output queue.
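
The handoff between the two engines can be sketched as follows. The Python class and method names (IngressPFE, EgressPFE, on_grant) are hypothetical stand-ins for the forwarding engines and the scheduling event described above.

    from collections import deque

    class IngressPFE:
        def __init__(self):
            # One virtual output queue (VOQ) per egress engine.
            self.voqs = {}

        def receive(self, packet, egress):
            # Determine the egress engine for the packet and enqueue it
            # in the corresponding virtual output queue.
            self.voqs.setdefault(id(egress), deque()).append(packet)

        def on_grant(self, egress, output_queue):
            # On the egress engine's signal, dequeue the packet from the
            # VOQ and enqueue it at the designated output queue.
            packet = self.voqs[id(egress)].popleft()
            output_queue.append(packet)

    class EgressPFE:
        def __init__(self, num_ports):
            self.output_queues = {port: deque() for port in range(num_ports)}

        def schedule(self, ingress, port):
            # First scheduling event: tell the ingress engine which VOQ to
            # drain and which output queue (port) the packet belongs on.
            ingress.on_grant(self, self.output_queues[port])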

Multipath traffic management

One embodiment provides an apparatus. The apparatus includes client traffic management (CTM) logic. The CTM logic is to trigger implementation of a selected traffic policy related to a client device, the triggering based, at least in part, on a network traffic flow related to the client device. The network traffic flow is associated with a connection and includes at least one subflow. Each subflow is carried by a respective path associated with the connection. The implementation includes at least one of constraining and/or adjusting an allowable throughput at a service provider for one or more of the at least one subflow. The selected traffic policy is to be implemented in a transport layer.
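
As a loose illustration only, the sketch below caps per-subflow throughput for a connection's traffic flow. The flow model and the cap parameter are assumptions; a real implementation would enforce the policy at the transport layer (e.g. per-subflow rate limits in an MPTCP-like stack).

    from dataclasses import dataclass

    @dataclass
    class Subflow:
        path: str             # the path carrying this subflow
        allowed_mbps: float   # current allowable throughput

    def apply_policy(subflows, cap_mbps):
        # Constrain/adjust the allowable throughput for each subflow
        # of the connection, per the selected traffic policy.
        for sf in subflows:
            sf.allowed_mbps = min(sf.allowed_mbps, cap_mbps)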

Technologies for balancing throughput across input ports of a multi-stage network switch

Technologies for balancing throughput across input ports include a network switch. The network switch is to generate, for an arbiter unit in a first stage of a hierarchy of stages of arbiter units, turn data indicative of a set of turns in which to transfer packet data from devices connected to input ports of the arbiter unit. The network switch is also to transfer, with the arbiter unit, the packet data from the devices in the set of turns. Additionally, the network switch is to determine weight data indicative of the number of turns represented in the set and provide the weight data from the arbiter unit in the first stage to another arbiter unit in a subsequent stage to cause the arbiter unit in the subsequent stage to allocate a number of turns for the transfer of the packet data from the arbiter unit in the first stage.
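
A compact Python sketch of the two-stage weighting, with illustrative names and a list-based queue model, might look like this; none of the structure is taken from the claims beyond the turn-set and weight logic described above.

    class Arbiter:
        def __init__(self, inputs):
            # inputs: one packet queue (list) per input port.
            self.inputs = inputs

        def turn_set(self):
            # Turn data: one turn for each input port with packets waiting.
            return [p for p, q in enumerate(self.inputs) if q]

        def weight(self):
            # Weight = number of turns represented in the set.
            return len(self.turn_set())

        def transfer_one_round(self):
            # Transfer packet data from the devices in the set of turns.
            return [self.inputs[p].pop(0) for p in self.turn_set()]

    def second_stage_round(first_stage_arbiters):
        # The subsequent-stage arbiter allocates each first-stage arbiter
        # a number of turns equal to its reported weight, so a heavily
        # loaded arbiter is not starved down to a single turn per round.
        out = []
        for arb in first_stage_arbiters:
            granted_turns = arb.weight()      # weight data passed upstream
            out.extend(arb.transfer_one_round()[:granted_turns])
        return out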