Patent classifications
H04L49/102
STREAMING COMMUNICATION BETWEEN DEVICES
In accordance with implementations of the subject matter described herein, a solution for streaming communication between devices is provided. In this solution, a ring buffer in the memory of a first device is allocated and dedicated to storing a data stream of an application to be transmitted to a second device. The application of the first device writes data to be transmitted into the ring buffer, forming a portion of the data stream, and the write pointer of the ring buffer is updated accordingly. An interface device reads a portion of the data from the ring buffer based on a source memory address and transmits that portion to the second device, where it is stored in a dedicated ring buffer in the second device's memory. In accordance with this solution, an efficient streaming communication interface is provided between devices.
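The write-pointer mechanism described above can be sketched as a minimal single-producer/single-consumer ring buffer. The class and method names (`RingBuffer`, `write`, `read`) are illustrative, not taken from the patent, and the sketch models the pointer updates only, not the actual DMA transfer by the interface device:

```python
class RingBuffer:
    """Fixed-size ring buffer with explicit read/write pointers.

    The producer (the application) advances write_ptr; the consumer
    (modelling the interface device) advances read_ptr. One slot is
    kept empty to distinguish a full buffer from an empty one.
    """

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.write_ptr = 0  # next slot the producer writes
        self.read_ptr = 0   # next slot the consumer reads

    def write(self, item):
        """Producer: store one data portion and advance the write pointer."""
        if (self.write_ptr + 1) % self.capacity == self.read_ptr:
            return False  # buffer full; caller must retry later
        self.buf[self.write_ptr] = item
        self.write_ptr = (self.write_ptr + 1) % self.capacity
        return True

    def read(self):
        """Consumer: read one data portion and advance the read pointer."""
        if self.read_ptr == self.write_ptr:
            return None  # buffer empty
        item = self.buf[self.read_ptr]
        self.read_ptr = (self.read_ptr + 1) % self.capacity
        return item
```

In the patented scheme the consumer side would be the interface device reading at a source memory address and forwarding the portion to the mirror ring buffer on the second device.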
Fast scheduling and optimization of multi-stage hierarchical networks
Significantly optimized multi-stage networks, including scheduling methods for faster scheduling of connections, are disclosed. They are useful in a wide range of target applications and have VLSI layouts that use only horizontal and vertical wires to route large-scale partial multi-stage hierarchical networks having inlet and outlet links, laid out in an integrated circuit device as a two-dimensional grid arrangement of blocks. The optimized multi-stage networks in each block employ one or more slices of rings of stages of switches, with the inlet and outlet links of the partial multi-stage hierarchical networks connecting to the rings from either the left-hand side or the right-hand side. They also employ hop wires or multi-drop hop wires, which are connected from switches of stages of rings of slices of a first partial multi-stage hierarchical network to switches of stages of rings of slices of the first or a second partial multi-stage hierarchical network.
GPU-NATIVE PACKET I/O METHOD AND APPARATUS FOR GPU APPLICATION ON COMMODITY ETHERNET
The disclosure relates to a method and device for inputting and outputting packets inside a GPU based on a commodity Ethernet device. According to embodiments of the disclosure, a method for commodity-Ethernet-device-based graphics processing unit (GPU) internal packet input/output, performed by a GPU-internal packet input/output device, comprises: being allocated an available packet buffer from a memory pool inside the GPU; after packets received from a network interface controller (NIC) are transferred directly to the allocated packet buffer, processing the transferred packets through a reception (Rx) kernel; transmitting a transmission packet to the network through the NIC according to an operation of a transmission (Tx) kernel; and returning the allocated packet buffer to the pool.
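The allocate/return cycle of the GPU-resident memory pool can be sketched as a simple free-list of fixed-size buffers. This is a host-side model only (names like `PacketBufferPool` are illustrative); in the actual scheme the pool lives in GPU memory and the NIC DMAs packets into the allocated buffer:

```python
class PacketBufferPool:
    """Free-list of fixed-size packet buffers, modelling a GPU-resident pool."""

    def __init__(self, num_buffers, buf_size):
        self.buf_size = buf_size
        self.free = list(range(num_buffers))  # indices of available buffers
        self.storage = [bytearray(buf_size) for _ in range(num_buffers)]

    def allocate(self):
        """Hand out an available buffer index, or None if the pool is exhausted."""
        return self.free.pop() if self.free else None

    def release(self, idx):
        """Return a buffer to the pool once the Rx/Tx kernel is done with it."""
        self.free.append(idx)
```

A reception path would then allocate a buffer, let the NIC write into `storage[idx]`, process it in the Rx kernel, and finally call `release(idx)`.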
Techniques to operate a time division multiplexing (TDM) media access control (MAC)
Techniques to operate a time division multiplexing (TDM) media access control (MAC) module include examples of facilitating use of shared resources allocated to ports of a network interface based on a time-slot mechanism. The shared resources are allocated to process packet data received or sent through the ports of the network interface.
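The time-slot mechanism can be sketched as a round-robin arbiter that grants the shared resource to exactly one port per slot. This is a minimal illustration (the class name `TdmArbiter` and the fixed round-robin slot assignment are assumptions, not from the abstract):

```python
class TdmArbiter:
    """Grants a shared resource to one port per time slot, round-robin.

    Each call to tick() models one TDM time slot elapsing: the port
    returned is the only one allowed to use the shared packet-processing
    resource during that slot.
    """

    def __init__(self, ports):
        self.ports = ports
        self.slot = 0

    def tick(self):
        """Advance one time slot and return the port granted access."""
        granted = self.ports[self.slot % len(self.ports)]
        self.slot += 1
        return granted
```

With three ports, slots 0, 1, 2, 3 grant ports 0, 1, 2, 0 in turn, so every port gets a deterministic share of the resource.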
UNICAST ADDRESSING FOR REDUNDANT COMMUNICATION PATHS
In an example, a node in a network includes four ports coupled to respective nodes via respective links. A first port and a third port are coupled to respective nodes via respective near links and a second port and a fourth port are coupled to respective nodes via respective skip links. The node further includes at least one processor configured to send a first message in a first direction via the second port, and the first message includes a first destination address that corresponds to the second side of the node. The at least one processor is further configured to send a second message in a second direction via the fourth port, and the second message includes a second destination address that corresponds to the first side of the node.
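The dual-direction addressing above can be sketched as building two copies of the same payload, each sent out a different skip-link port with a destination address naming the opposite side of the destination node. The `(node, side)` address tuples and field names are illustrative assumptions, not the patent's actual address format:

```python
def send_redundant(payload, dest_node):
    """Build the two redundant unicast messages a node sends in
    opposite directions over its two skip-link ports."""
    first_msg = {
        "port": 2,                         # second port, first direction
        "direction": "first",
        "dest": (dest_node, "second-side"),  # addresses the node's second side
        "payload": payload,
    }
    second_msg = {
        "port": 4,                         # fourth port, second direction
        "direction": "second",
        "dest": (dest_node, "first-side"),   # addresses the node's first side
        "payload": payload,
    }
    return first_msg, second_msg
```

Because the two copies travel opposite directions toward opposite sides of the destination, a single link or node failure on one path still leaves the other copy deliverable.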
SELF-CHECKING NODE
In an example, a method includes forming a first self-checking pair including a self-checking node and a first node adjacent to the self-checking node in a network. The method further includes forming a second self-checking pair including the self-checking node and a second node adjacent to the self-checking node in the network, wherein the self-checking node is between the first node and the second node. The method further includes transmitting a first paired broadcast with the first self-checking pair and transmitting a second paired broadcast with the second self-checking pair.
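The pair-forming step can be sketched for a ring topology, where the self-checking node's two adjacent nodes are its ring neighbours. The ring assumption and the function names are illustrative; the abstract only requires adjacency:

```python
def form_self_checking_pairs(ring, checker):
    """Form two self-checking pairs: the checker with each of its two
    adjacent nodes in a ring, so the checker sits between them."""
    i = ring.index(checker)
    first_neighbor = ring[(i - 1) % len(ring)]
    second_neighbor = ring[(i + 1) % len(ring)]
    return (checker, first_neighbor), (checker, second_neighbor)

def paired_broadcast(pair, data):
    """Both members of a pair transmit the same data; receivers can
    compare the two copies to detect a faulty member (sketch only)."""
    return [{"sender": member, "data": data} for member in pair]
```

A receiver that sees the two copies of a paired broadcast disagree can conclude that one member of that pair is faulty, which is the point of pairing the checker with each neighbour separately.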
HYPERSCALAR PACKET PROCESSING
The disclosed systems and methods provide hyperscalar packet processing. A method includes receiving a plurality of network packets from a plurality of data paths. The method also includes arbitrating, based at least in part on an arbitration policy, the plurality of network packets to a plurality of packet processing blocks comprising one or more full processing blocks and one or more limited processing blocks. The method also includes processing, in parallel, the plurality of network packets via the plurality of packet processing blocks, wherein each of the one or more full processing blocks processes a first quantity of network packets during a clock cycle, and wherein each of the one or more limited processing blocks processes a second quantity of network packets during the clock cycle that is greater than the first quantity of network packets. The method also includes sending the processed network packets through data buses.
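The arbitration step can be sketched as a greedy per-cycle fill: each processing block accepts up to its per-cycle capacity, with limited blocks having a *higher* capacity than full blocks (per the abstract, the second quantity is greater than the first, since limited processing does less work per packet). The greedy policy and the `(name, capacity)` representation are assumptions for illustration:

```python
def arbitrate(packets, blocks):
    """Distribute packets to processing blocks for one clock cycle.

    blocks: list of (name, capacity) tuples, where capacity is the
    number of packets that block can process in one cycle.
    Returns (assigned, leftover): packets per block, and packets that
    must wait for the next cycle.
    """
    assigned = {name: [] for name, _ in blocks}
    it = iter(packets)
    # Greedy fill: each block takes up to its per-cycle capacity.
    for name, cap in blocks:
        for _ in range(cap):
            try:
                assigned[name].append(next(it))
            except StopIteration:
                break
    leftover = list(it)
    return assigned, leftover
```

With one full block (1 packet/cycle) and one limited block (4 packets/cycle), five packets are processed in parallel each cycle even though no single block handles them all.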
Switch device and communication control method
This switch device is mounted on a vehicle and comprises: a switch unit that relays communication data communicated between a plurality of communication devices; a buffer that holds the communication data to be relayed; and a control unit that transmits a stop request to at least one of the communication devices if communication data addressed to a communication device in which an abnormality has been detected is held in the buffer. The stop request asks the receiving communication device to stop transmitting communication data to the switch device and to hold the communication data it would have transmitted to the switch device.
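The control unit's decision can be sketched as a check over the held frames: if any buffered frame is destined for the abnormal device, issue a stop request. The frame dictionary shape and function name are illustrative assumptions, not the patent's actual message format:

```python
def check_and_request_stop(held_frames, abnormal_device):
    """Model the control unit: if the buffer holds data destined for the
    abnormal device, build a stop request for the other devices asking
    them to stop sending to the switch and hold their outbound data."""
    if any(frame["dst"] == abnormal_device for frame in held_frames):
        return {
            "type": "stop_request",
            "stop_transmission_to_switch": True,  # stop sending to the switch
            "hold_outbound_data": True,           # hold data locally instead
        }
    return None  # nothing queued toward the abnormal device; no request
```

The effect is back-pressure: rather than letting the buffer fill with undeliverable frames for the failed device, senders hold their data at the source until the abnormality is resolved.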
Shared resources for multiple communication traffics
Systems, methods, and computer-readable media are disclosed for an apparatus coupled to a communication bus, where the apparatus includes a queue and a controller to manage operations of the queue. The queue includes a first space to store a first information for a first traffic type, with a first flow class, and for a first virtual channel of communication between a first communicating entity and a second communicating entity. The queue further includes a second space to store a second information for a second traffic type, with a second flow class, and for a second virtual channel of communication between a third communicating entity and a fourth communicating entity. The first traffic type is different from the second traffic type, the first flow class is different from the second flow class, or the first virtual channel is different from the second virtual channel. Other embodiments may be described and/or claimed.
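The partitioning above can be sketched as one queue object whose spaces are keyed by the triple (traffic type, flow class, virtual channel), so traffics differing in any one of the three get separate storage. The class name, per-space capacity, and FIFO ordering are illustrative assumptions:

```python
class SharedQueue:
    """One queue partitioned into spaces, each keyed by
    (traffic_type, flow_class, virtual_channel)."""

    def __init__(self, capacity_per_space):
        self.cap = capacity_per_space
        self.spaces = {}  # key -> list of stored information

    def enqueue(self, traffic_type, flow_class, vc, info):
        """Store info in the space for this traffic; False if that space is full."""
        key = (traffic_type, flow_class, vc)
        space = self.spaces.setdefault(key, [])
        if len(space) >= self.cap:
            return False
        space.append(info)
        return True

    def dequeue(self, traffic_type, flow_class, vc):
        """Pop the oldest entry from the matching space, or None if empty."""
        space = self.spaces.get((traffic_type, flow_class, vc))
        return space.pop(0) if space else None
```

Because each space is addressed by the full triple, filling one virtual channel's space cannot block traffic of a different type, class, or channel sharing the same physical queue.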