Patent classifications
H04L12/873
SCALABLE MEMORY SYSTEM PROTOCOL SUPPORTING PROGRAMMABLE NUMBER OF LEVELS OF INDIRECTION
A memory device includes a memory component that stores data and a processor. The processor may receive requests from a requesting component to perform a plurality of data operations, generate a plurality of packets associated with the plurality of data operations, and continuously transmit each of the plurality of packets until each of the plurality of packets is transmitted. Each of the plurality of packets after the first packet of the plurality of packets is transmitted on a subsequent clock cycle immediately after a previous packet is transmitted.
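The back-to-back transmission the abstract describes can be sketched as follows; the function name and cycle-counting model are illustrative, not from the patent:

```python
# Sketch of back-to-back packet transmission: after the first packet,
# each subsequent packet is sent on the immediately following clock cycle,
# with no idle cycles in between.
def transmit_back_to_back(packets, start_cycle=0):
    """Return (packet, cycle) pairs for a gapless transmission schedule."""
    schedule = []
    cycle = start_cycle
    for pkt in packets:
        schedule.append((pkt, cycle))
        cycle += 1  # next packet leaves on the very next cycle
    return schedule
```

For three packets starting at cycle 0, the schedule occupies cycles 0, 1, and 2 with no gaps.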
Intelligent multiplexing using class-based, multi-dimensioned decision logic for managed networks
A server system determines, for a group of user sessions assigned to a single modulator, that an aggregate bandwidth for a first frame time exceeds a specified budget for the modulator. The user sessions comprise data in a plurality of classes, each class having a respective priority. In response to a determination that the aggregate bandwidth exceeds a specified budget, the server system allocates a portion of the aggregate bandwidth, including allocating a first portion of the data for a first user session in the group of user sessions and allocating a second portion of the data for a second user session in the group of user sessions, where both the first portion and the second portion are allocated in accordance with the class priorities. The server system transmits the allocated portions of the data for the group of user sessions through the modulator during the first frame time.
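One plausible reading of the class-based allocation is a priority-ordered sweep over sessions within the frame budget; the data layout and the convention that a lower number means higher priority are assumptions for illustration:

```python
def allocate_frame(sessions, budget):
    """Allocate bytes to (session, class) pairs in class-priority order
    until the modulator's frame budget is spent.

    sessions: {session_id: {class_priority: bytes_pending}}, where a lower
    priority number means higher priority (an assumption, not from the patent).
    """
    alloc = {sid: {} for sid in sessions}
    remaining = budget
    # Walk classes from highest priority to lowest, visiting every session
    # at each priority level before moving on.
    priorities = sorted({p for classes in sessions.values() for p in classes})
    for p in priorities:
        for sid, classes in sessions.items():
            granted = min(classes.get(p, 0), remaining)
            if granted:
                alloc[sid][p] = granted
                remaining -= granted
    return alloc
```

With a budget of 100 and two sessions, all high-priority data across both sessions is served before any lower-priority data from either session.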
Event processing with enhanced throughput
The present systems and methods allow for rapid processing of large volumes of events. A producer node in a cluster determines a sharding key for a received event from an event stream. The producer node uses a sharding map to correlate the sharding key for the event with a producer channel, and provides the event to a producer event buffer associated with the producer channel. The producer event buffer transmits the event to a corresponding consumer event buffer associated with a consumer channel on a consumer node. The event processing leverages a paired relationship between producer channels on the producer node and consumer channels on the consumer node, so as to generate enhanced throughput. The event processing also supports dynamic rebalancing of the system in response to adding or removing producer or consumer nodes, or adding or removing producer or consumer channels to or from producer or consumer nodes.
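The sharding-key lookup might look like the sketch below; the event's `key` field, the hash choice, and the list-shaped sharding map are all illustrative assumptions:

```python
import hashlib

def route_event(event, sharding_map):
    """Derive a sharding key from the event, look up its producer channel in
    the sharding map, and return the channel index. Because producer and
    consumer channels are paired, the same index identifies the consumer
    channel on the consumer node (sketch; field names are assumptions)."""
    key = event["key"]
    digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    shard = digest % len(sharding_map)
    return sharding_map[shard]
```

Because the mapping is a pure function of the key, every event with the same sharding key lands on the same producer channel, which preserves ordering per key across the paired channels.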
METHODS TO STRENGTHEN CYBER-SECURITY AND PRIVACY IN A DETERMINISTIC INTERNET OF THINGS
Methods to strengthen the cyber-security and privacy in a proposed deterministic Internet of Things (IoT) network are described. The proposed deterministic IoT consists of a network of simple deterministic packet switches under the control of a low-complexity ‘Software Defined Networking’ (SDN) control-plane. The network can transport ‘Deterministic Traffic Flows’ (DTFs), where each DTF has a source node, a destination node, a fixed path through the network, and a deterministic or guaranteed rate of transmission. The SDN control-plane can configure millions of distinct interference-free ‘Deterministic Virtual Networks’ (DVNs) into the IoT, where each DVN is a collection of interference-free DTFs. The SDN control-plane can configure each deterministic packet switch to store several deterministic periodic schedules, defined for a scheduling-frame which comprises F time-slots. The schedules of a network determine which DTFs are authorized to transmit data over each fiber-optic link of the network. These schedules also ensure that each DTF will receive a deterministic rate of transmission through every switch it traverses, with full immunity to congestion, interference and Denial-of-Service (DoS) attacks. Any unauthorized transmissions by a cyber-attacker can also be detected quickly, since the schedules also identify unauthorized transmissions. Each source node and destination node of a DTF, and optionally each switch in the network, can have a low-complexity private-key encryption/decryption unit. The SDN control-plane can configure the source and destination nodes of a DTF, and optionally the switches in the network, to encrypt and decrypt the packets of a DTF using these low-complexity encryption/decryption units. To strengthen security and privacy and to lower the energy use, the private keys can be very large, for example several thousands of bits. 
The SDN control-plane can configure each DTF to achieve a desired level of security well beyond what is possible with existing schemes such as AES, by using very long keys. The encryption/decryption units also use a new serial permutation unit with very low hardware cost, which allows for exceptional security and very high throughputs in FPGA hardware.
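The per-link periodic schedules over a scheduling frame of F time-slots can be sketched as a simple authorization table; the frame length, schedule contents, and function names below are illustrative, not from the patent:

```python
F = 8  # time-slots per scheduling frame (illustrative value)

# schedule[t] = the DTF id authorized to transmit on this link in slot t,
# or None if the slot is idle. The schedule repeats every F slots.
schedule = [1, 1, 2, None, 3, 1, 2, None]

def is_authorized(dtf_id, time_slot):
    """A transmission is authorized only if the periodic schedule names this
    DTF in the given slot; any other transmission can be flagged immediately
    as unauthorized, e.g. a possible DoS attempt."""
    return schedule[time_slot % F] == dtf_id
```

Because the schedule is deterministic and periodic, each DTF's share of slots yields a guaranteed rate, and any packet observed in a slot not assigned to its DTF is detected on the spot.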
Load balancing among network links using an efficient forwarding scheme
A network element includes multiple output ports and circuitry. The multiple output ports are configured to transmit packets over multiple respective network links of a communication network. The circuitry is configured to receive from the communication network, via one or more input ports of the network element, packets that are destined for transmission via the multiple output ports, to monitor multiple data-counts, each data-count corresponding to a respective output port, and is indicative of a respective data volume of the packets forwarded for transmission via the respective output port, to select for a given packet, based on the data-counts, an output port among the multiple output ports, and to forward the given packet for transmission via the selected output port.
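A minimal sketch of the data-count forwarding decision, under the assumption (one plausible reading of "based on the data-counts") that the least-loaded port is chosen:

```python
def select_output_port(data_counts):
    """Pick the output port whose forwarded-data count is currently lowest
    (assumed selection rule; the patent only says 'based on the data-counts')."""
    return min(range(len(data_counts)), key=lambda p: data_counts[p])

def forward(packet_len, data_counts):
    """Select a port for the packet and update that port's data-count."""
    port = select_output_port(data_counts)
    data_counts[port] += packet_len  # account for the bytes just forwarded
    return port
```

Over time this keeps the per-port byte counts close together, spreading load across the network links.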
Logical router comprising disaggregated network elements
A logical router includes disaggregated network elements that function as a single router and that are not coupled to a common backplane. The logical router includes spine elements and leaf elements implementing a network fabric, with front panel ports being defined by leaf elements. Control plane elements program the spine and leaf elements to function as a logical router. The control plane may define operating system interfaces mapped to front panel ports of the leaf elements and referenced by tags associated with packets traversing the logical router. Redundancy and checkpoints may be implemented for a route database implemented by the control plane elements. The logical router may include a standalone fabric and may implement label tables that are used to label packets according to egress port and path through the fabric.
Adaptive polling in software-defined networking (SDN) environments
Example methods and systems for adaptive polling. One example may comprise operating in a polling mode to poll, from a network interface, zero or more packets that require packet processing by a network device. The method may also comprise: in response to detecting a non-zero polling round, adjusting a polling parameter to delay switching from the polling mode to a sleep mode. The method may further comprise: in response to detecting a zero polling round and determining that a switch condition is satisfied, adjusting a sleep parameter associated with the sleep mode based on traffic characteristic information associated with one or more polling rounds; and switching from the polling mode to the sleep mode in which polling from the network interface is halted based on the sleep parameter.
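The mode-switching logic can be sketched as below; the round budget, the exponential back-off of the sleep parameter, and all thresholds are illustrative choices, not taken from the patent:

```python
class AdaptivePoller:
    """Sketch of the described adaptive scheme: a non-zero polling round
    refills a stay-awake budget (delaying the switch to sleep mode); a run
    of zero rounds exhausts the budget, which satisfies the switch condition
    and adapts the sleep parameter. Numbers here are illustrative."""

    def __init__(self, stay_awake_rounds=4):
        self.stay_awake_rounds = stay_awake_rounds
        self.budget = stay_awake_rounds
        self.sleeping = False
        self.sleep_us = 1  # sleep parameter, tuned when switching modes

    def record_round(self, packets_polled):
        if packets_polled > 0:
            # Non-zero round: delay switching by refilling the budget.
            self.budget = self.stay_awake_rounds
            self.sleeping = False
        else:
            self.budget -= 1
            if self.budget <= 0:
                # Switch condition satisfied: enter sleep mode and adjust
                # the sleep parameter (here, simple exponential back-off).
                self.sleeping = True
                self.sleep_us = min(self.sleep_us * 2, 1000)
```

Under bursty traffic this keeps the device spinning while packets arrive, then backs off progressively when the interface goes quiet.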
Highly deterministic latency in a distributed system
A distributed computing system, such as may be used to implement an electronic trading system, supports a notion of fairness in latency. The system does not favor any particular client. Thus, being connected to a particular access point into the system (such as via a gateway) does not give any particular device an unfair advantage or disadvantage over another. That end is accomplished by precisely controlling latency, that is, the time between when request messages arrive at the system and a time at which corresponding response messages are permitted to leave. The precisely controlled, deterministic latency can be fixed over time, or it can vary according to some predetermined pattern, or vary randomly within a predetermined range of values.
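The fixed-latency hold-and-release idea can be sketched with a deadline heap; the latency constant and function names are illustrative assumptions:

```python
import heapq

FIXED_LATENCY_NS = 500_000  # illustrative deterministic latency

def schedule_response(hold_queue, arrival_ns, response):
    """Every response is released exactly FIXED_LATENCY_NS after its request
    arrived, regardless of which gateway the client entered through, so no
    client gains a timing edge (sketch of the fixed-latency variant)."""
    heapq.heappush(hold_queue, (arrival_ns + FIXED_LATENCY_NS, response))

def release_due(hold_queue, now_ns):
    """Pop and return all responses whose release deadline has passed."""
    released = []
    while hold_queue and hold_queue[0][0] <= now_ns:
        released.append(heapq.heappop(hold_queue)[1])
    return released
```

The varying-latency modes the abstract mentions would replace the constant with a value drawn from a predetermined pattern or a bounded random range at scheduling time.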
ROUTE SELECTION BASED ON BUFFER CONGESTION
A switch includes a plurality of ingress ports, a plurality of egress ports, and a plurality of buffers comprising a buffer coupled to each ingress port, egress port pair. An ingress port is to determine a plurality of potential egress ports for a packet. The ingress port is to select an egress port of the plurality of potential egress ports based on congestion of the corresponding buffers coupled to the ingress port and to each of the plurality of potential egress ports. The ingress port is to place the packet into the corresponding buffer coupled to the ingress port and the selected egress port.
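The per-pair buffer selection can be sketched as follows; the occupancy metric (bytes queued) and the least-congested selection rule are assumptions for illustration:

```python
def choose_egress(ingress, candidates, buffer_occupancy):
    """Among the potential egress ports for a packet, pick the one whose
    (ingress, egress) buffer is least congested (sketch).

    buffer_occupancy: {(ingress_port, egress_port): bytes_queued}, one entry
    per ingress/egress buffer pair as described in the abstract."""
    return min(candidates, key=lambda egress: buffer_occupancy[(ingress, egress)])
```

After selection, the packet would be placed into the chosen pair's buffer, increasing its occupancy and steering subsequent packets away from the building congestion.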