Patent classifications
H04L49/9078
Forwarding entry update method and apparatus in a memory
A forwarding entry update method and apparatus. The method includes receiving a write operation packet that carries write operation information, where the write operation information includes write operation data and a write operation address: the write operation data indicates a forwarding entry, and the write operation address indicates the address in a memory to which the write operation data is to be written. The method further includes obtaining the write operation information from the write operation packet and writing the write operation data into the memory according to the write operation address in the write operation information.
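The mechanism above can be sketched in Python. The packet layout (a 4-byte big-endian address followed by the forwarding-entry bytes) is an illustrative assumption, not the format claimed in the patent:

```python
import struct

# Hypothetical packet layout (an assumption, not from the patent):
# 4-byte big-endian write address followed by the write operation
# data (the forwarding entry) as raw bytes.
def parse_write_packet(packet: bytes):
    """Obtain the write operation information from the packet."""
    (address,) = struct.unpack_from(">I", packet, 0)
    data = packet[4:]
    return address, data

def apply_write(memory: bytearray, packet: bytes) -> None:
    """Write the forwarding-entry data at the indicated memory address."""
    address, data = parse_write_packet(packet)
    memory[address:address + len(data)] = data

memory = bytearray(64)
packet = struct.pack(">I", 8) + b"\x0a\x0b\x0c\x0d"  # entry at offset 8
apply_write(memory, packet)
```

The sketch treats the memory as a flat byte array; a real forwarding table would validate the address range before writing.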
SYSTEM AND METHOD FOR REGULATING NVMe-oF COMMAND REQUESTS AND DATA FLOW ACROSS A NETWORK WITH MISMATCHED RATES
One embodiment can provide a method and system for implementing flow control. During operation, a switch identifies a command from a host to access a storage device coupled to the switch. The switch queues the command in a command queue corresponding to the host. In response to determining that an amount of data pending transmission to the host from the storage device is below a predetermined threshold, the switch removes a command from the command queue and forwards the removed command to the storage device.
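The command-gating behavior described above can be sketched as follows; class names, the threshold value, and the command representation are illustrative assumptions, not details from the patent:

```python
from collections import deque

class SwitchFlowControl:
    """Sketch of per-host command gating at the switch (names illustrative)."""
    def __init__(self, threshold: int):
        self.threshold = threshold
        self.command_queues = {}   # host -> deque of queued commands
        self.pending_bytes = {}    # host -> data pending transmission to host

    def enqueue(self, host, command):
        """Queue a host's command instead of forwarding it immediately."""
        self.command_queues.setdefault(host, deque()).append(command)

    def maybe_forward(self, host):
        """Forward one queued command only if pending data is below threshold."""
        queue = self.command_queues.get(host)
        if queue and self.pending_bytes.get(host, 0) < self.threshold:
            return queue.popleft()
        return None

fc = SwitchFlowControl(threshold=4096)
fc.enqueue("hostA", "READ cmd 1")
fc.enqueue("hostA", "READ cmd 2")
fc.pending_bytes["hostA"] = 8192          # above threshold: hold commands
held = fc.maybe_forward("hostA")
fc.pending_bytes["hostA"] = 1024          # drained below threshold: release one
forwarded = fc.maybe_forward("hostA")
```

Gating on pending data rather than on raw command count is what lets the switch absorb the rate mismatch between the host link and the storage device.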
Updating no sync technique for ensuring continuous storage service in event of degraded cluster state
An Updating No Sync (UNS) technique ensures continuous data protection for content driven distribution of data served by storage nodes of a fault tolerant cluster in the event of a degraded cluster state. A storage service implemented in each node includes one or more slice services (SSs) configured to process and store metadata describing the data served by the storage nodes and one or more block services (BSs) configured to process and store the data on storage devices of the node. A bin assignment service may co-opt one or more healthy BSs to serve as an overflow data path that temporarily stores updates of data and metadata received at the SS while the cluster is in the degraded state (hence the term “Updating No Sync,” which denotes updating without synchronizing, i.e., not distributing the data within the cluster but only accumulating an overflow of SS information). Once the cluster is no longer degraded, the accumulated overflow SS information at the BSs may be synchronized back to the restored BSs, i.e., according to write path determination in the absence of node failure/unavailability.
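The overflow-then-sync-back idea can be sketched as below. The class names, the placement rule, and the single-overflow-target choice are illustrative assumptions; the patent's bin assignment and write path determination are considerably richer:

```python
class BlockService:
    """Sketch of a block service (BS) with a normal store and an overflow log."""
    def __init__(self):
        self.store = {}        # normal data path
        self.overflow = []     # SS updates accumulated while degraded

class Cluster:
    """Sketch of the UNS overflow/sync-back idea (names illustrative)."""
    def __init__(self, services):
        self.services = services
        self.degraded = False

    def write(self, key, value):
        if self.degraded:
            # Co-opt a healthy BS: accumulate the update, don't distribute it.
            healthy = self.services[0]
            healthy.overflow.append((key, value))
        else:
            # Normal write path determination (placement rule is illustrative).
            target = self.services[hash(key) % len(self.services)]
            target.store[key] = value

    def recover(self):
        """Once no longer degraded, sync overflow back via the normal path."""
        self.degraded = False
        for svc in self.services:
            for key, value in svc.overflow:
                self.write(key, value)
            svc.overflow.clear()

services = [BlockService(), BlockService()]
cluster = Cluster(services)
cluster.degraded = True
cluster.write("volume1/block7", b"payload")   # accumulated, not distributed
cluster.recover()                             # replayed along the normal path
```

The essential property is that the degraded-state path only appends; redistribution happens solely after recovery, when normal write path determination applies again.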
System and method of a high buffered high bandwidth network element
A method and apparatus of a network element that processes a packet are described. In an exemplary embodiment, the network element receives, with a packet switch unit, a data packet that includes a destination address, where the packet was received by the network element on an ingress interface. The network element determines whether the packet is to be stored in an external queue and identifies that queue based on one or more characteristics of the packet. The network element then forwards the packet to a packet storage unit, which includes storage for the external queue, later receives the packet back from the packet storage unit, and forwards it to the egress interface corresponding to the external queue.
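The decision of whether to buffer a packet externally can be sketched as follows; the capacity check and the choice of "traffic class" as the classifying characteristic are illustrative assumptions:

```python
class NetworkElement:
    """Sketch: decide whether to buffer a packet in an external queue."""
    def __init__(self, internal_capacity: int):
        self.internal_capacity = internal_capacity
        self.internal = []         # on-chip buffer (fast path)
        self.external_queues = {}  # queue id -> packets held in storage unit

    def classify(self, packet) -> str:
        # Illustrative: pick an external queue from a packet characteristic.
        return packet.get("traffic_class", "default")

    def ingress(self, packet):
        if len(self.internal) < self.internal_capacity:
            self.internal.append(packet)          # keep on chip
        else:
            qid = self.classify(packet)           # identify external queue
            self.external_queues.setdefault(qid, []).append(packet)

ne = NetworkElement(internal_capacity=1)
ne.ingress({"dst": "10.0.0.1"})                           # fits internally
ne.ingress({"dst": "10.0.0.2", "traffic_class": "bulk"})  # spills to storage
```

Spilling to a packet storage unit trades latency for depth: the element keeps line rate on the fast path while deep bursts land in external memory.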
Technologies for jitter-adaptive low-latency, low power data streaming between device components
Technologies for low-latency data streaming include a computing device having a processor that includes a producer and a consumer. The producer generates a data item, and in a local buffer producer mode adds the data item to a local buffer, and in a remote buffer producer mode adds the data item to a remote buffer. When the local buffer is full, the producer switches to the remote buffer producer mode, and when the remote buffer is below a predetermined low threshold, the producer switches to the local buffer producer mode. The consumer reads the data item from the local buffer while operating in a local buffer consumer mode and reads the data item from the remote buffer while operating in a remote buffer consumer mode. When the local buffer is above a predetermined high threshold, the consumer may switch to a catch-up operating mode. Other embodiments are described and claimed.
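The producer's mode switching can be sketched as below (the consumer side is analogous). Buffer capacities and threshold values are illustrative assumptions, not figures from the patent:

```python
from collections import deque

class Producer:
    """Sketch of local/remote buffer mode switching (thresholds illustrative)."""
    def __init__(self, local_capacity: int, remote_low: int):
        self.local = deque()
        self.remote = deque()
        self.local_capacity = local_capacity
        self.remote_low = remote_low
        self.mode = "local"

    def produce(self, item):
        if self.mode == "local":
            if len(self.local) >= self.local_capacity:
                self.mode = "remote"      # local buffer full: switch to remote
        else:
            if len(self.remote) < self.remote_low:
                self.mode = "local"       # remote below low threshold: back
        target = self.local if self.mode == "local" else self.remote
        target.append(item)

p = Producer(local_capacity=2, remote_low=1)
for item in ["a", "b", "c", "d"]:
    p.produce(item)
```

In the patent's setting the local buffer is the low-latency path and the remote buffer absorbs jitter; the thresholds hysteresis keeps the producer from flapping between modes.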
Method and system for optimizing service device traffic management
A method and system for optimizing service device traffic management. Specifically, the method and system disclosed herein entail filtering network traffic flows directed to service devices, distributed throughout a network, for inspection. Through the aforementioned filtering, a targeted subset of network traffic flows may be identified and excluded from service device processing. The filtering thus alleviates traffic congestion and improves traffic throughput at the service device(s), thereby optimizing the management and/or processing of network traffic flows redirected to the service device(s).
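The filtering step can be sketched as a simple partition of flows; the predicate, flow representation, and function name are illustrative assumptions:

```python
def filter_for_service(flows, exclude):
    """Sketch: exclude a targeted subset of flows from service-device processing.

    `exclude` is a predicate over a flow; excluded flows bypass the service
    device, and the remainder are redirected to it (names illustrative).
    """
    redirected, bypassed = [], []
    for flow in flows:
        (bypassed if exclude(flow) else redirected).append(flow)
    return redirected, bypassed

flows = [
    {"src": "10.0.0.1", "trusted": True},
    {"src": "10.0.0.2", "trusted": False},
]
# Illustrative policy: trusted flows skip inspection entirely.
redirected, bypassed = filter_for_service(flows, exclude=lambda f: f["trusted"])
```

Every flow moved to the bypass list is load the service device never sees, which is the throughput gain the abstract describes.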
METHODS, SYSTEMS AND DEVICES FOR PARALLEL NETWORK INTERFACE DATA STRUCTURES WITH DIFFERENTIAL DATA STORAGE AND PROCESSING SERVICE CAPABILITIES
Systems, methods and devices relating to a network-accessible data storage system for processing data transactions received over the network. The system comprises a communication interface to the network and one or more data storage devices configured to respond to the data transactions (received or sent) communicated via the communication interface, the data storage devices providing at least two data storage resources distinctly designated to accommodate respective data processing characteristics. A resource allocation engine operatively associated with the communication interface receives as input a data processing characteristic automatically identifiable from each data transaction and allocates a designated one of the data storage resources according to that characteristic in responding to each transaction.
Methods, systems and devices for parallel network interface data structures with differential data storage and processing service capabilities
Systems, methods and devices relating to a network-accessible data storage device comprising a network interface in data communication with a network, the network interface for receiving and sending data units, the data units being assigned to at least one of a plurality of network data queues depending on at least one data unit characteristic; a data storage component communicatively coupled with the network interface, the data storage component comprising a plurality of data storage resources for receiving and responding to data transactions communicated in data units; and a queue mapping component for mapping each network data queues to at least one data storage resource for processing of data transactions.
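The queue mapping component shared by the two abstracts above can be sketched as follows; the queue names, resource names, and the "latency sensitive" characteristic are illustrative assumptions:

```python
class QueueMapper:
    """Sketch: map network data queues to differential storage resources."""
    def __init__(self):
        self.mapping = {}   # queue id -> storage resource name

    def map_queue(self, queue_id: str, resource: str):
        """Bind a network data queue to a data storage resource."""
        self.mapping[queue_id] = resource

    def route(self, data_unit):
        """Assign a data unit to a queue by a characteristic, then resolve it."""
        # Illustrative characteristic: latency-sensitive units get the
        # fast-tier queue, everything else the bulk queue.
        queue_id = "low_latency" if data_unit.get("latency_sensitive") else "bulk"
        return queue_id, self.mapping[queue_id]

qm = QueueMapper()
qm.map_queue("low_latency", "nvme_fast_tier")   # names are hypothetical
qm.map_queue("bulk", "hdd_capacity_tier")
q1, r1 = qm.route({"latency_sensitive": True})
q2, r2 = qm.route({"latency_sensitive": False})
```

Keeping the queue-to-resource binding in one table is what lets the service capabilities differ per queue without the transaction path knowing about tiers.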
Protocol Data Unit End Handling with Fractional Data Alignment and Arbitration Fairness
In at least one embodiment, a method for handling data units in a multi-user system includes granting a shared resource to a user of a plurality of users for a transaction associated with an entry of a transaction data structure. The method includes determining whether the transaction stored the last partial data of a data unit associated with the user in an alignment register associated with the user. The method includes asserting a request for arbitration of a plurality of transactions associated with the plurality of users. The request is asserted for an additional transaction associated with the entry in response to determining that the transaction stored the last partial data in the alignment register. The method may include flushing the last partial data from the alignment register to a target memory in response to detecting an additional grant of the shared resource to the user for the additional transaction.
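The alignment-register handling can be sketched as below. The register width, byte-level modeling, and method names are illustrative assumptions; arbitration itself is reduced to the boolean the write returns:

```python
class AlignmentRegister:
    """Sketch: per-user partial-data staging with a follow-up flush request."""
    def __init__(self, width: int):
        self.width = width      # alignment granularity of the target memory
        self.partial = b""      # residue left behind by the last transaction

    def write(self, data: bytes, memory: bytearray, offset: int) -> bool:
        """Write aligned data; return True if partial data remains (i.e. the
        user must re-request arbitration for an additional flush transaction)."""
        buf = self.partial + data
        aligned = len(buf) - (len(buf) % self.width)
        memory[offset:offset + aligned] = buf[:aligned]
        self.partial = buf[aligned:]
        return bool(self.partial)

    def flush(self, memory: bytearray, offset: int):
        """On the additional grant, flush the last partial data to memory."""
        memory[offset:offset + len(self.partial)] = self.partial
        self.partial = b""

memory = bytearray(8)
reg = AlignmentRegister(width=4)
# A 6-byte data unit: 4 bytes land aligned, 2 bytes stay in the register.
needs_flush = reg.write(b"\x01\x02\x03\x04\x05\x06", memory, 0)
reg.flush(memory, 4)   # models the additional granted transaction
```

The fairness angle in the title comes from asserting the flush request through the same arbiter as everyone else's transactions, rather than holding the shared resource until the residue drains.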