H04L49/901

Resource sharing in a telecommunications environment
11543979 · 2023-01-03

A transceiver is designed to share memory and processing power among a plurality of transmitter and/or receiver latency paths, in a communications transceiver that carries or supports multiple applications. For example, the transmitter and/or receiver latency paths of the transceiver can share an interleaver/deinterleaver memory. This allocation can be based on the data rate, latency, bit error rate (BER), or impulse noise protection requirements of the application, on the data or information being transported over each latency path, or in general on any parameter associated with the communications system.
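A minimal sketch of the allocation idea, assuming a proportional policy: the shared interleaver memory is split among latency paths by each path's rate-latency product (an interleaver must buffer roughly rate × delay). All names (`LatencyPath`, `allocate_interleaver_memory`) and the proportional rule are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class LatencyPath:
    name: str
    data_rate_kbps: int   # application data rate for this path
    max_latency_ms: int   # latency budget for this path

def allocate_interleaver_memory(paths, total_bytes):
    """Split a shared interleaver memory proportionally to each path's
    rate-latency product (hypothetical policy; the patent allows any
    parameter of the communications system to drive the split)."""
    weights = [p.data_rate_kbps * p.max_latency_ms for p in paths]
    total_w = sum(weights)
    return {p.name: total_bytes * w // total_w
            for p, w in zip(paths, weights)}

# Example: a high-rate video path and a low-rate, low-latency voice path
# sharing a 64 KiB interleaver memory.
paths = [LatencyPath("video", 4000, 8), LatencyPath("voice", 64, 2)]
alloc = allocate_interleaver_memory(paths, 65536)
```

Integer division keeps the total allocation within the shared budget; a real transceiver would also round to its memory-block granularity.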

Communication input-output device

A recording device includes a data memory with a DRAM that has a write pointer for each of its banks, and a queue control memory that stores an active flag per bank. When frame data is written into a write-target queue, a bank whose active flag indicates an activated state is selected as the write-target bank. If no bank's active flag indicates an activated state, a bank whose active flag indicates a deactivated state is selected as the write-target bank instead; the row address at that bank's write pointer is activated, and the frame data is then written.
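The bank-selection rule can be modeled as below: prefer a bank whose row is already active, and only pay the row-activation cost when no bank is active. The class and counter are illustrative stand-ins, not the patent's structures.

```python
class BankedQueueMemory:
    """Toy model of per-bank active flags and write pointers (illustrative)."""

    def __init__(self, num_banks):
        self.active = [False] * num_banks   # per-bank active flag
        self.write_ptr = [0] * num_banks    # per-bank write pointer
        self.activations = 0                # row activations performed (cost)

    def write_frame(self, frame):
        # 1. Prefer any bank whose active flag indicates an activated row.
        for bank, is_active in enumerate(self.active):
            if is_active:
                return self._store(bank, frame)
        # 2. Otherwise pick a deactivated bank, activate its row first.
        bank = self.active.index(False)
        self.active[bank] = True
        self.activations += 1
        return self._store(bank, frame)

    def _store(self, bank, frame):
        # Write at the bank's write pointer, then advance it.
        self.write_ptr[bank] += 1
        return bank
```

The point of the flag check is visible in the counter: consecutive writes reuse the already-open row, so only the first write triggers an activation.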

Low latency small message communication channel using remote write

A method implemented by an electronic device for sending data over a low-latency communication channel includes: remote-writing, to a receiver, the sequence number of the message to be sent next; determining, using a local tracking mechanism, whether there is an open position in the receiver's receive buffer; writing the message data to an area of the sending application's address space that is mapped onto the receive buffer, as a result of determining that there is an open position in the receive buffer; incrementing a local sequence counter; and updating position information in the local tracking mechanism.
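The sender-side steps can be sketched as follows. This is a hedged model, not a real RDMA implementation: the remote write is represented by a plain attribute, the mapped receive buffer by a local list, and the class name `Sender` is invented for illustration.

```python
class Sender:
    """Toy model of the sender side of a remote-write message channel."""

    def __init__(self, buffer_slots):
        self.seq = 0                          # local sequence counter
        self.slots = buffer_slots
        self.free = buffer_slots              # local tracking of open positions
        self.mapped = [None] * buffer_slots   # stands in for the mapped receive buffer

    def send(self, data):
        # Step 1: remote-write the sequence number of the message to be sent
        # next (in a real system this lands directly in the receiver's memory).
        self.remote_seq = self.seq
        # Step 2: check for an open receive-buffer position via local tracking,
        # with no round trip to the receiver.
        if self.free == 0:
            return False                      # no open position; caller retries
        # Step 3: write the payload into the region mapped onto the receive buffer.
        self.mapped[self.seq % self.slots] = data
        # Steps 4-5: increment the local sequence counter and update tracking.
        self.seq += 1
        self.free -= 1
        return True
```

The key property is that the fullness check uses only sender-local state, which is what keeps the send path low latency.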

Connection management in a network adapter

A network adapter includes a network interface, a host interface and processing circuitry. The network interface connects to a communication network for communicating with remote targets. The host interface connects to a host that accesses a Multi-Channel Send Queue (MCSQ) storing Work Requests (WRs) originating from client processes running on the host. The processing circuitry is configured to retrieve WRs from the MCSQ and distribute them among multiple Send Queues (SQs) accessible by the processing circuitry, and to retrieve WRs from the multiple SQs and execute the data transmission operations specified in those WRs.
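The queueing flow can be sketched as below: work requests from all client processes land in one multi-channel queue, which the adapter drains and spreads across per-connection send queues. The round-robin policy is an assumption for illustration; the abstract does not specify a distribution rule.

```python
from collections import deque

def distribute(mcsq, num_sqs):
    """Drain a Multi-Channel Send Queue and spread its Work Requests
    across several Send Queues (round-robin chosen for the sketch)."""
    sqs = [deque() for _ in range(num_sqs)]
    i = 0
    while mcsq:
        sqs[i % num_sqs].append(mcsq.popleft())
        i += 1
    return sqs

# Five WRs from client processes arrive in the shared MCSQ,
# then get distributed over two SQs.
mcsq = deque(f"WR{i}" for i in range(5))
sqs = distribute(mcsq, 2)
```

In the described adapter the second stage (executing the transmissions specified by the WRs retrieved from the SQs) would then drain each SQ independently.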

Classification of encrypted internet traffic

A method includes: obtaining a first plurality of encrypted traffic flows traversing a communication network; performing a first classification, whose result identifies a traffic type for each encrypted traffic flow of the first plurality, based on that flow's traffic pattern; performing a second classification, whose result identifies a traffic type for each server name indication with which the encrypted traffic flows are associated, based on the result of the first classification; and performing a third classification that identifies a traffic type for each encrypted traffic flow of the first plurality, based on a combination of the results of the first classification and the second classification.
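The three-stage pipeline can be sketched as follows. The per-flow classifier here is a trivial packet-size heuristic and the combination rule simply adopts the per-SNI majority vote; both are stand-ins for whatever classifiers the method actually uses.

```python
from collections import Counter

def classify_flows(flows):
    """Three-stage classification of encrypted flows (illustrative stubs)."""
    # Stage 1: traffic type per flow, from its traffic pattern
    # (stub heuristic: large average packets suggest video).
    stage1 = {f["id"]: ("video" if f["avg_pkt"] > 1000 else "web")
              for f in flows}
    # Stage 2: traffic type per server name indication (SNI),
    # aggregated from the stage-1 per-flow results.
    by_sni = {}
    for f in flows:
        by_sni.setdefault(f["sni"], Counter())[stage1[f["id"]]] += 1
    stage2 = {sni: c.most_common(1)[0][0] for sni, c in by_sni.items()}
    # Stage 3: final per-flow type, combining both results
    # (stub rule: the SNI-level majority overrides the per-flow guess).
    return {f["id"]: stage2[f["sni"]] for f in flows}

# Three flows to the same SNI; flow 3 looks like web traffic in isolation,
# but the SNI-level result reclassifies it.
flows = [
    {"id": 1, "sni": "cdn.example", "avg_pkt": 1200},
    {"id": 2, "sni": "cdn.example", "avg_pkt": 1300},
    {"id": 3, "sni": "cdn.example", "avg_pkt": 200},
]
result = classify_flows(flows)
```

The example shows why the second stage matters: per-flow patterns can be ambiguous, but flows sharing a server name tend to share a traffic type.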

System and method for providing a dynamic cloud with subnet administration (SA) query caching

A system and method can support subnet management in a cloud environment. During a virtual machine migration in a cloud environment, a subnet manager can become a bottleneck point that delays efficient service. A system and method can alleviate this bottleneck by ensuring a virtual machine retains a plurality of addresses after migration. The system and method can further allow each host node within the cloud environment to be associated with a local cache that virtual machines can utilize when re-establishing communication with a migrated virtual machine.
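The caching idea can be sketched as below: each host keeps a local cache of subnet administration answers, so peers re-establishing contact with a migrated VM hit the cache instead of querying the subnet manager again. The class, method names, and counter are all illustrative.

```python
class SAQueryCache:
    """Toy per-host cache of subnet administration (SA) query results."""

    def __init__(self, subnet_manager_lookup):
        self.lookup = subnet_manager_lookup   # fallback: query the subnet manager
        self.cache = {}
        self.sm_queries = 0                   # how often the SM was actually hit

    def resolve(self, vm_id):
        # Only a cache miss reaches the subnet manager, which is the
        # bottleneck the local cache is meant to relieve.
        if vm_id not in self.cache:
            self.sm_queries += 1
            self.cache[vm_id] = self.lookup(vm_id)
        return self.cache[vm_id]

    def invalidate(self, vm_id):
        # On a migration notice, drop the stale entry; because the VM
        # retains its addresses, one repopulating lookup suffices.
        self.cache.pop(vm_id, None)

# Example: two resolutions, one SM query; after a migration notice,
# exactly one more SM query repopulates the cache.
cache = SAQueryCache(lambda vm: f"addr-{vm}")
cache.resolve("vm1")
cache.resolve("vm1")
```

Because the migrated VM keeps its addresses, cached entries stay valid across most lookups, which is what makes this caching effective.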

Multi-stride packet payload mapping for robust transmission of data
11522816 · 2022-12-06

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include: receiving, using a network interface, packets that each include a primary frame and one or more preceding frames from the sequence of frames of data, the preceding frames separated from the primary frame in the sequence by respective multiples of a stride parameter; storing the frames of the packets in a buffer whose entries each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame of one of the entries; determining that a packet whose primary frame is the next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame of one of the entries.
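A minimal sketch of the mapping, assuming a stride of 4 and two preceding frames per packet: each packet carries redundant copies of earlier frames offset by multiples of the stride, so a frame whose own packet is lost can still be read from a later packet that carries it as a preceding frame. `make_packet` and `recover` are invented names for illustration.

```python
STRIDE = 4   # assumed stride parameter
DEPTH = 2    # assumed number of preceding frames per packet

def make_packet(frames, n):
    """Packet for primary frame n, plus frames n-STRIDE, n-2*STRIDE
    (the preceding frames, where they exist)."""
    preceding = [frames[n - k * STRIDE]
                 for k in range(1, DEPTH + 1) if n - k * STRIDE >= 0]
    return {"primary_seq": n, "primary": frames[n], "preceding": preceding}

def recover(packets, n):
    """Read frame n: as a primary frame if its packet arrived, otherwise
    as a preceding frame of a later packet at a stride multiple."""
    for p in packets:
        if p["primary_seq"] == n:
            return p["primary"]
    for p in packets:
        off = p["primary_seq"] - n
        if off > 0 and off % STRIDE == 0 and off // STRIDE <= DEPTH:
            return p["preceding"][off // STRIDE - 1]
    return None   # beyond the redundancy window; frame is unrecoverable
```

Spacing the redundant copies by a stride (rather than duplicating adjacent frames) is what makes the scheme robust to bursts of consecutive packet losses.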