De-duplicating remote procedure calls

A method, computer program product, and a computing system are provided for de-duplicating remote procedure calls at a client. In an implementation, the method may include generating a plurality of local pending remote procedure calls. The method may also include identifying a set of duplicate remote procedure calls among the plurality of remote procedure calls. The method may also include associating each remote procedure call within the set of duplicate remote procedure calls with one another. The method may also include executing one remote procedure call of the set of duplicate remote procedure calls. The method may further include providing the response of the executed remote procedure call to the other remote procedure calls of the set of duplicate remote procedure calls.
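The abstract's steps (group duplicate calls, execute one, fan the response out to the rest) can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name, the `(method, args)` duplicate key, and the call-ID scheme are all assumptions.

```python
def dedupe_and_execute(pending_calls, execute):
    """pending_calls: list of (call_id, method, args) tuples representing
    local pending RPCs. execute: function that performs one RPC and returns
    its response. Returns {call_id: response}, executing each unique
    (method, args) pair only once."""
    # Identify duplicates: calls with the same method and arguments.
    groups = {}
    for call_id, method, args in pending_calls:
        groups.setdefault((method, tuple(args)), []).append(call_id)
    responses = {}
    for (method, args), ids in groups.items():
        result = execute(method, args)   # one execution per duplicate set
        for cid in ids:                  # share the response with all duplicates
            responses[cid] = result
    return responses
```

A duplicate set of any size thus costs a single remote round trip.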

TRANSMITTING CREDITS BETWEEN ACCOUNTING CHANNELS
20190238485 · 2019-08-01

Example implementations relate to transmitting credits between accounting channels. A first number of credits may be transmitted to a source accounting buffer over a first accounting channel that is inactive. A second accounting channel may be inactivated and the first accounting channel may be activated. Any remaining credits received via the second accounting channel may be transmitted from the source accounting buffer to a destination accounting buffer.
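The switchover described above can be sketched as below. This assumes credits are plain integer counts and that the channels are simply named "first" and "second"; both assumptions are illustrative, not from the publication.

```python
def switch_accounting_channels(channels, source_buffer, dest_buffer):
    """channels: name -> "active"/"inactive". source_buffer: per-channel
    credit counts received so far. dest_buffer: {"credits": total}.
    Performs the switchover and forwards the leftover credits."""
    # Credits have been arriving at the source buffer over the first
    # (currently inactive) channel; now swap which channel is active.
    channels["second"] = "inactive"
    channels["first"] = "active"
    # Transmit any remaining credits received via the second channel
    # from the source accounting buffer to the destination buffer.
    remaining = source_buffer.pop("second", 0)
    dest_buffer["credits"] = dest_buffer.get("credits", 0) + remaining
    return remaining
```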

Distributed contiguous reads in a network on a chip architecture
10346049 · 2019-07-09

Systems and techniques for network on a chip based computer architectures and distributing data without shared pointers therein are described. A described system includes computing resources; and a memory resource configured to maintain a dedicated memory region of the memory resource for distributed read operations requested by the computing resources. The computing resources can generate a packet to fetch data from the dedicated memory region without using memory addresses of respective data elements. The memory resource can receive the first packet, determine whether the first packet indicates the distributed read operation, and determine that the dedicated memory region is non-empty. Further, the memory resource can fetch one or more data elements from the dedicated memory region based on the first packet indicating the distributed read operation and the dedicated memory region being non-empty, and send a packet that includes the one or more fetched data elements.
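The read path described above can be sketched as a dedicated region that requesters drain without supplying element addresses. Treating the region as a FIFO, and the packet field names, are assumptions for illustration only.

```python
from collections import deque

class MemoryResource:
    """Sketch of a memory resource serving distributed reads from a
    dedicated region, modeled here as a FIFO of data elements."""

    def __init__(self, elements):
        self.region = deque(elements)   # dedicated memory region

    def handle_packet(self, packet):
        # Determine whether the packet indicates a distributed read.
        if packet.get("op") != "distributed_read":
            return None
        # Determine that the dedicated region is non-empty.
        if not self.region:
            return None
        # Fetch elements without the requester knowing their addresses.
        n = min(packet.get("count", 1), len(self.region))
        data = [self.region.popleft() for _ in range(n)]
        return {"dest": packet["src"], "data": data}
```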

CONTROLLER
20190199680 · 2019-06-27

The controller has a communication unit that receives, from a plurality of clients, read/write requests specifying an address of the same virtual area, and an actual area that is read from and written to via the communication unit. The controller also has a management table that associates an identifier of each client with an address of an actual area that differs for each client, and an address conversion unit that, referring to the management table, reads from and writes to the address of the actual area associated with the identifier of the requesting client.
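The per-client translation can be sketched as below. The class and method names, and the use of a per-client base address as the table entry, are illustrative assumptions; the abstract only requires that the same virtual address map to a different actual area per client.

```python
class Controller:
    """Sketch of per-client virtual-to-actual address conversion via a
    management table keyed by client identifier."""

    def __init__(self, table):
        self.table = table     # management table: client_id -> actual base
        self.storage = {}      # actual address -> stored value

    def write(self, client_id, virtual_addr, value):
        # Convert the shared virtual address to this client's actual area.
        actual = self.table[client_id] + virtual_addr
        self.storage[actual] = value

    def read(self, client_id, virtual_addr):
        return self.storage.get(self.table[client_id] + virtual_addr)
```

Two clients addressing the same virtual area thus never collide in the actual area.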

METHODS AND APPARATUSES FOR DYNAMIC RESOURCE AND SCHEDULE MANAGEMENT IN TIME SLOTTED CHANNEL HOPPING NETWORKS

The present application is at least directed to an apparatus operating on a network. The apparatus includes a non-transitory memory including an interface queue designated for a neighboring device and having instructions stored thereon for enqueuing a received packet. The apparatus also includes a processor, operably coupled to the non-transitory memory, configured to perform a set of instructions. The instructions include receiving the packet in a cell from the neighboring device. The instructions also include checking whether a track ID is in the received packet. The instructions also include checking a table stored in the memory to find a next hop address. Further, the instructions include inserting the packet into a subqueue of the interface queue. The application is also directed to a computer-implemented apparatus configured to dequeue a packet. The application is also directed to a computer-implemented apparatus configured to adjust a bundle of a device. The application is further directed to a computer-implemented apparatus configured to process a bundle adjustment request from a device.
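The enqueue path (check for a track ID, look up the next hop, insert into a subqueue of the neighbor's interface queue) can be sketched as below. The packet field names, the routing-table shape, and keying subqueues by track ID are assumptions for illustration, not the application's own encoding.

```python
def enqueue(packet, routing_table, interface_queues):
    """Sketch of the TSCH enqueue step. packet: dict with "dest" and an
    optional "track_id". routing_table: dest -> next hop address.
    interface_queues: next hop -> {subqueue_key: [packets]}."""
    # Check whether a track ID is present in the received packet.
    track_id = packet.get("track_id")
    # Check the stored table to find the next hop address.
    next_hop = routing_table[packet["dest"]]
    # Insert into a subqueue of the interface queue designated for
    # that neighboring device (per-track, or best-effort without a track).
    queue = interface_queues[next_hop]
    subqueue = track_id if track_id is not None else "best_effort"
    queue.setdefault(subqueue, []).append(packet)
    return next_hop, subqueue
```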

CONTROL DEVICE AND METHOD OF VEHICLE MULTI-MASTER MODULE BASED ON RING COMMUNICATION TOPOLOGY
20190182164 · 2019-06-13 · ·

Provided is a collision prevention system of a multi-master including: a plurality of external modules; and an integrated device. The integrated device includes: a plurality of interfaces connected respectively to the plurality of external modules and respectively controlled by the corresponding external modules; a plurality of internal modules; a plurality of dedicated buffers connected respectively to the plurality of interfaces and the plurality of internal modules; and a common block connected to the plurality of dedicated buffers and controlled by the plurality of interfaces and the plurality of internal modules. The plurality of dedicated buffers includes a GBU and a plurality of LBUs. Each of the GBU and the LBUs is connected to its two neighboring buffers to form a ring communication topology, which transmits ring communication data in one direction. The common block is connected to the ring communication topology through the GBU.
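One-directional delivery around such a ring of buffers can be sketched as below. The buffer names and the hop-list return value are illustrative; the abstract only fixes that the GBU and LBUs form a ring carrying data in a single direction.

```python
def ring_deliver(ring, start, target):
    """Sketch of one-directional ring communication among dedicated
    buffers. ring: ordered list of buffer names (e.g. one GBU plus LBUs)
    forming the ring. Returns the buffers traversed until delivery."""
    path = []
    i = ring.index(start)
    while ring[i] != target:
        i = (i + 1) % len(ring)   # data moves in one direction only
        path.append(ring[i])
    return path
```

Because every buffer forwards in the same direction, no two masters contend for a link, which is the collision-prevention property the abstract claims.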

PACKET PROCESSING METHOD AND ROUTER
20190166058 · 2019-05-30

Embodiments of the application describe a packet processing method and a router. The method includes: receiving, by an input line card, at least one packet; obtaining, by the input line card, information about an available first buffer block in a third buffer module, where the third buffer module is a first buffer module that includes an available first buffer block; allocating, by the input line card, a third buffer block to each of the at least one packet based on at least one buffer information block stored in the input line card and the information about an available first buffer block; and buffering, by the input line card, each packet into the third buffer block. Distributed packet buffering can be implemented by using the method.
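Stripped of the claim numbering, the allocation step is: find a buffer module that still has an available block, allocate a block to each arriving packet, and buffer the packet there. A sketch, with illustrative data structures that are not the embodiment's own:

```python
def buffer_packets(packets, buffer_modules):
    """Sketch of the input line card's distributed buffering step.
    buffer_modules: list of dicts with "name", "free_blocks" (available
    block indices), and "blocks" (block index -> buffered packet).
    Returns {packet_id: (module_name, block_index)}."""
    placements = {}
    for pkt in packets:
        # Obtain information about a module that has an available block.
        module = next((m for m in buffer_modules if m["free_blocks"]), None)
        if module is None:
            raise MemoryError("no buffer module has an available block")
        block = module["free_blocks"].pop(0)   # allocate a block
        module["blocks"][block] = pkt          # buffer the packet into it
        placements[pkt["id"]] = (module["name"], block)
    return placements
```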

Deadlock-free multicast routing on a dragonfly network

Systems and methods are provided for managing multicast data transmission in a network having a plurality of switches arranged in a Dragonfly network topology, including: receiving a multicast transmission at an edge port of a switch and identifying the transmission as a network multicast transmission; creating an entry in a multicast table within the switch; routing the multicast transmission across the network to a plurality of destinations via a plurality of links, wherein at each of the links the multicast table is referenced to determine to which ports the multicast transmission should be forwarded; and changing, when necessary, the virtual channel used by each copy of the multicast transmission as the copy progresses through the network.
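One forwarding step of this scheme can be sketched as below: the switch's multicast table lists the output ports for the transmission's multicast ID, and the virtual channel is changed on links marked as requiring it, which is the deadlock-avoidance rule. The table shapes and field names are assumptions, not the patent's encoding.

```python
def forward_multicast(switch, packet, mcast_table, vc_change_links):
    """mcast_table: switch -> {mcast_id: [output ports]}.
    vc_change_links: set of (switch, port) links where the VC must change.
    Returns the (port, packet copy) pairs to emit from this switch."""
    copies = []
    for port in mcast_table[switch][packet["mcast_id"]]:
        copy = dict(packet)   # each link gets its own copy
        if (switch, port) in vc_change_links:
            copy["vc"] += 1   # change VC to break a potential dependency cycle
        copies.append((port, copy))
    return copies
```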

Allocation of shared reserve memory
20240195754 · 2024-06-13 ·

A device includes ports, a packet processor, and a memory management circuit. The ports communicate packets over a network. The packet processor processes the packets using queues. The memory management circuit maintains a shared buffer in a memory and adaptively allocates memory resources from the shared buffer to the queues. In addition to the shared buffer, it maintains in the memory a shared-reserve memory pool for use by the queues. The circuit identifies, among the queues, a queue that requires additional memory resources, the queue having an occupancy that is (i) above a current value of a dynamic threshold, rendering the queue ineligible for additional allocation from the shared buffer, and (ii) no more than a defined margin above that current value, rendering the queue eligible for allocation from the shared-reserve memory pool, and allocates memory resources to the identified queue from the shared-reserve memory pool.
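The eligibility rule reduces to a three-way comparison against the dynamic threshold and the margin. A minimal sketch, with an illustrative function name:

```python
def allocation_source(occupancy, threshold, margin):
    """Decide where a queue may draw additional memory from.
    Returns "shared_buffer", "shared_reserve", or None (no allocation)."""
    if occupancy <= threshold:
        # At or below the dynamic threshold: normal shared-buffer allocation.
        return "shared_buffer"
    if occupancy <= threshold + margin:
        # Above the threshold but within the margin: shared-reserve pool.
        return "shared_reserve"
    # More than the margin above the threshold: no further allocation.
    return None
```

The shared-reserve pool thus acts as a narrow grace band for queues that have just crossed the adaptive threshold.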