H04L69/26

Scalable in-network computation for massively-parallel shared-memory processors

A network device configured to perform scalable, in-network computations is described. The network device is configured to process pull requests and/or push requests from a plurality of endpoints connected to the network. A collective communication primitive from a particular endpoint can be received at a network device. The collective communication primitive is associated with a multicast region of a shared global address space and is mapped to a plurality of participating endpoints. The network device is configured to perform an in-network computation based on information received from the participating endpoints before forwarding a response to the collective communication primitive back to one or more of the participating endpoints. The endpoints can inject pull requests (e.g., load commands) and/or push requests (e.g., store commands) into the network. A multicast capability enables tasks, such as a reduction operation, to be offloaded to hardware in the network device.
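
The aggregation step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, region identifiers, and the choice of sum as the reduction are all assumptions. It shows a switch-resident reducer collecting one contribution per participating endpoint for a multicast region and answering only once all contributions have arrived.

```python
# Hypothetical sketch of an in-network reduction: the device combines one
# contribution per participating endpoint for a multicast region (here: sum)
# before forwarding a single response back.

class InNetworkReducer:
    def __init__(self, participants):
        # participants: endpoint ids mapped to one multicast region
        self.participants = set(participants)
        self.pending = {}          # region_id -> {endpoint_id: value}

    def on_push(self, region_id, endpoint_id, value):
        """Accumulate a contribution; return the reduced value once all
        participants have pushed, else None (still waiting)."""
        slot = self.pending.setdefault(region_id, {})
        slot[endpoint_id] = value
        if set(slot) == self.participants:
            result = sum(slot.values())   # the offloaded reduction
            del self.pending[region_id]
            return result                 # forwarded back to endpoints
        return None

reducer = InNetworkReducer(participants=["ep0", "ep1", "ep2"])
assert reducer.on_push(0x10, "ep0", 4) is None
assert reducer.on_push(0x10, "ep1", 6) is None
assert reducer.on_push(0x10, "ep2", 10) == 20
```

Keeping the partial sums in the device rather than at an endpoint is what makes the reduction "in-network": no endpoint ever sees the individual contributions, only the combined result.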

Verifying revision levels while storing data in a storage network

A method includes receiving, by a storage unit of a set of storage units of a storage network, a write request regarding an encoded data slice, where the write request includes a slice payload and a corresponding revision level of the encoded data slice. The method further includes determining whether the corresponding revision level is the next revision level. The method further includes generating a write response message that includes a status message for the encoded data slice based on that determination, where, when the corresponding revision level is the next revision level, the status message includes an operation-succeeded message. The method further includes sending the write response message to a computing device of the storage network.
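
The revision-level check at the heart of the method can be reduced to a few lines. This is a hedged sketch under assumptions not stated in the abstract: that revision levels are consecutive integers and that the status strings take these particular names.

```python
# Minimal sketch of the revision-level verification: a storage unit accepts
# a write only when the slice's revision is exactly one past the revision
# it currently holds. Status names are illustrative.

def handle_write(current_revision, write_revision):
    """Return the status carried in the write response message."""
    if write_revision == current_revision + 1:
        return "OPERATION_SUCCEEDED"
    return "REVISION_CHECK_FAILED"

assert handle_write(current_revision=3, write_revision=4) == "OPERATION_SUCCEEDED"
assert handle_write(current_revision=3, write_revision=3) == "REVISION_CHECK_FAILED"
assert handle_write(current_revision=3, write_revision=6) == "REVISION_CHECK_FAILED"
```

The strict "next revision" rule rejects both stale writers (replays of an old revision) and writers that skipped a revision, which keeps the slice history gap-free across the set of storage units.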

Method and device for processing data packets

The invention proposes a method of encoding a data packet in which the type information and the size information of the data packet are encoded into the same field, as well as a method of processing received data packets. The data packet comprises a header part and a message part, and the header part comprises at least one bit indicating the type of the data packet; the processing method comprises a step of obtaining the size information of the data packet based on that at least one bit.
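
One way such a shared type/size field could work is sketched below. The widths are assumptions (a one-byte field, top bit for type, remaining seven bits for size) and so is the rule that the fixed-size type implies a size of 4; the abstract only says type and size share a field and that the size follows from the type bit.

```python
# Illustrative bit-packing of type and size into one byte: if the type bit
# is clear the packet is a fixed-size type with an implied size; if set,
# the low 7 bits carry the size directly.

TYPE_BIT = 0x80
FIXED_SIZE = 4   # assumed implied size for the fixed-size packet type

def encode_field(is_variable, size):
    """Pack type (1 bit) and size (7 bits) into a single byte."""
    if not is_variable:
        assert size == FIXED_SIZE, "fixed-size type carries an implied size"
        return 0x00
    assert 0 <= size < 128
    return TYPE_BIT | size

def decode_field(field):
    """Recover (is_variable, size) from the shared field."""
    if field & TYPE_BIT:
        return True, field & 0x7F
    return False, FIXED_SIZE

assert decode_field(encode_field(True, 100)) == (True, 100)
assert decode_field(encode_field(False, 4)) == (False, 4)
```

The payoff of the shared field is that the receiver learns the packet size without a dedicated length field, saving header bytes on every packet.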

Reliable communications using a point to point protocol

This disclosure describes techniques for performing communications between devices using various aspects of Ethernet standards. As further described herein, a protocol is disclosed that may be used for communications between devices, where the communications take place over a physical connection complying with Ethernet standards. Such a protocol may enable reliable and in-order delivery of frames between devices, while following Ethernet physical layer rules, Ethernet symbol encoding, Ethernet lane alignment, and/or Ethernet frame formats.
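
A sketch of the in-order delivery such a protocol could layer on Ethernet framing is given below. The per-frame sequence number and the cumulative-ack convention are assumptions; the abstract commits only to reliable, in-order delivery over standard Ethernet physical-layer rules and frame formats.

```python
# Hedged sketch of reliable in-order delivery: each frame carries a
# sequence number, the receiver buffers out-of-order frames, and delivery
# resumes once the gap is filled. The return value is a cumulative ack.

class InOrderReceiver:
    def __init__(self):
        self.next_seq = 0
        self.buffer = {}           # seq -> payload, held until in order
        self.delivered = []

    def on_frame(self, seq, payload):
        """Buffer the frame, deliver any in-order run, return the ack:
        the next sequence number the receiver expects."""
        if seq >= self.next_seq:       # drop duplicates of old frames
            self.buffer[seq] = payload
        while self.next_seq in self.buffer:
            self.delivered.append(self.buffer.pop(self.next_seq))
            self.next_seq += 1
        return self.next_seq

rx = InOrderReceiver()
rx.on_frame(0, "a")
rx.on_frame(2, "c")                # out of order: held back
assert rx.delivered == ["a"]
rx.on_frame(1, "b")                # fills the gap; "b" and "c" deliver
assert rx.delivered == ["a", "b", "c"]
```

Because the reliability logic lives above the frame format, the sender can retransmit any frame whose sequence number stays below the ack, all while the wire traffic remains plain Ethernet.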

Network device and conversion apparatus
11765102 · 2023-09-19

A network device includes a switch chip and a CPU, wherein the switch chip includes at least a CPU interface, and the CPU includes at least a media access controller and a buffer. The network device further includes a conversion apparatus. The conversion apparatus receives a first packet uploaded by the switch chip to the CPU through the CPU interface and obtains a second packet by migrating a private information header in the Ethernet header of the first packet to a specified position of the first packet, where the specified position is a position other than the Ethernet header. It then calculates a Cyclic Redundancy Check (CRC) code of the second packet, obtains a third packet by replacing the CRC code already carried in the second packet with the calculated CRC code, and sends the third packet to the buffer on the CPU for buffering.
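
The conversion flow can be illustrated with byte slicing. The offsets are assumptions made for the sketch: a private header of 4 bytes placed right after the two MAC addresses, migrated to the tail of the packet, with CRC-32 standing in for the frame check code; the patent fixes none of these specifics.

```python
import zlib

# Illustrative reconstruction of the three-packet conversion: migrate the
# private header out of the Ethernet header, recompute the CRC over the
# reordered bytes, and replace the stale CRC carried over from the first
# packet. MACS/PRIV_LEN offsets are assumptions.

MACS = 12      # destination MAC (6) + source MAC (6)
PRIV_LEN = 4   # assumed private information header length

def convert(first_packet):
    body, _stale_crc = first_packet[:-4], first_packet[-4:]
    priv = body[MACS:MACS + PRIV_LEN]
    # second packet: private header migrated to the tail, a position
    # outside the Ethernet header
    second = body[:MACS] + body[MACS + PRIV_LEN:] + priv
    # third packet: recomputed CRC replaces the one already carried
    return second + zlib.crc32(second).to_bytes(4, "little")

pkt = bytes(12) + b"PRIV" + b"\x08\x00" + b"payload" + bytes(4)
out = convert(pkt)
assert out[-8:-4] == b"PRIV"   # private header now at the tail
assert out[-4:] == zlib.crc32(out[:-4]).to_bytes(4, "little")
```

Recomputing the CRC is the essential step: moving bytes within the frame invalidates the original check code, so forwarding the packet without replacement would make the CPU's media access controller discard it.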

Storage-optimized data-atomic systems and techniques for handling erasures and errors in distributed storage systems

Described are devices, systems and techniques for implementing atomic memory objects in a multi-writer, multi-reader setting. In an embodiment, the devices, systems and techniques use maximum distance separable (MDS) codes, and may be specifically designed to optimize a total storage cost for a given fault-tolerance requirement. Also described is an embodiment to handle the case where some of the servers can return erroneous coded elements during a read operation.
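
The MDS property the abstract relies on can be shown in its smallest form: a (3, 2) XOR code, in which any 2 of the 3 coded elements reconstruct the data. Real deployments would use a general MDS code such as Reed-Solomon; this toy code is only for illustration and is not the construction in the patent.

```python
# A (3, 2) XOR code is MDS: 2 data elements, 1 parity element, and any
# 2 of the 3 coded elements suffice to recover the data.

def encode(a, b):
    """Produce 3 coded elements from data elements a and b."""
    return {0: a, 1: b, 2: a ^ b}

def decode(elements):
    """Recover (a, b) from any 2 surviving coded elements."""
    if 0 in elements and 1 in elements:
        return elements[0], elements[1]
    if 0 in elements:                      # lost b: b = a ^ parity
        return elements[0], elements[0] ^ elements[2]
    return elements[1] ^ elements[2], elements[1]   # lost a

coded = encode(0x5A, 0x3C)
for lost in range(3):
    survivors = {i: v for i, v in coded.items() if i != lost}
    assert decode(survivors) == (0x5A, 0x3C)
```

The storage-cost optimization in the abstract comes from this trade: an MDS code tolerates the same number of server failures as full replication while storing far less than a full copy per server.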

NETWORK DEVICE AND CONVERSION APPARATUS
20220014479 · 2022-01-13

A network device includes a switch chip and a CPU, wherein the switch chip includes at least a CPU interface, and the CPU includes at least a media access controller and a buffer. The network device further includes a conversion apparatus. The conversion apparatus receives a first packet uploaded by the switch chip to the CPU through the CPU interface and obtains a second packet by migrating a private information header in the Ethernet header of the first packet to a specified position of the first packet, where the specified position is a position other than the Ethernet header. It then calculates a Cyclic Redundancy Check (CRC) code of the second packet, obtains a third packet by replacing the CRC code already carried in the second packet with the calculated CRC code, and sends the third packet to the buffer on the CPU for buffering.

METHOD AND SYSTEM FOR DATA TRANSFER ON AN AVIONICS BUS

A method for transmitting a set of conforming data frames in a specialized data network, the method comprising providing, from a data source, at least one specialized header frame to a data destination by way of the specialized data network, generating, at the data source, a set of conforming data frames, and providing at least a subset of the conforming data frames to the data destination by way of the specialized data network.
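
One speculative reading of the header-frame mechanism is sketched below: the destination records the fields fixed by the specialized header frame and accepts only data frames that conform to them. The field names and the conformance rule are assumptions; the abstract does not define what makes a frame "conforming".

```python
# Hypothetical conformance check at the data destination: a data frame
# conforms when it repeats the fields fixed by the previously provided
# specialized header frame (here: a label and a payload length).

def conforms(header_frame, data_frame):
    """True when the data frame matches the header frame's template."""
    return (data_frame["label"] == header_frame["label"]
            and len(data_frame["payload"]) == header_frame["payload_len"])

hdr = {"label": 0x2A, "payload_len": 8}
assert conforms(hdr, {"label": 0x2A, "payload": bytes(8)})
assert not conforms(hdr, {"label": 0x2B, "payload": bytes(8)})
assert not conforms(hdr, {"label": 0x2A, "payload": bytes(6)})
```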

Fabric control protocol for data center networks with packet spraying over multiple alternate data paths

A fabric control protocol is described for use within a data center in which a switch fabric provides full mesh interconnectivity among servers, such that any server may communicate packet data for a given packet flow to any other server using any of a number of parallel data paths within the data center switch fabric. The fabric control protocol enables spraying of individual packets of a given packet flow across some or all of the multiple parallel data paths in the data center switch fabric and, optionally, reordering of the packets for delivery to the destination. The fabric control protocol may provide end-to-end bandwidth scaling and flow fairness within a single tunnel based on endpoint-controlled requests and grants for flows. In some examples, the fabric control protocol packet structure is carried over an underlying protocol, such as the User Datagram Protocol (UDP).
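
The spray-and-reorder idea can be sketched in a few lines. Round-robin path selection is an assumption made for the example (the abstract allows any spraying policy over some or all paths); the key points it illustrates are that the path choice is per packet rather than per flow, and that per-flow sequence numbers let the destination restore order.

```python
import itertools

# Sketch of packet spraying with destination-side reordering: the sender
# stamps each packet of a flow with a sequence number and sprays successive
# packets across the parallel paths; the receiver restores flow order from
# the sequence numbers, regardless of per-path arrival order.

def spray(flow_packets, paths):
    """Yield (path, seq, payload): one path choice per packet, not per flow."""
    ring = itertools.cycle(paths)
    return [(next(ring), seq, p) for seq, p in enumerate(flow_packets)]

def reorder(received):
    """Restore the original flow order at the destination."""
    return [payload for _, _, payload in sorted(received, key=lambda t: t[1])]

sent = spray(["p0", "p1", "p2", "p3"], paths=["A", "B", "C"])
assert [path for path, _, _ in sent] == ["A", "B", "C", "A"]

shuffled = [sent[2], sent[0], sent[3], sent[1]]   # paths delivered unevenly
assert reorder(shuffled) == ["p0", "p1", "p2", "p3"]
```

Spraying per packet is what distinguishes this from conventional flow-hashed ECMP: a single large flow can use the aggregate bandwidth of all parallel paths instead of being pinned to one.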