H04L47/30

System and method of suppressing inbound payload to an integration flow of an orchestration based application integration

Described herein are systems and methods for suppressing inbound payload to an integration flow of an orchestration based application integration. The systems and methods described herein can, based upon a scan of an integration, identify and exclude from memory certain portions of one or more payloads that are received at the integration flow.
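The idea above can be sketched in a few lines: a prior scan of the integration flow yields the set of field paths the flow actually references, and only those portions of an inbound payload are retained in memory. The path format and helper name are illustrative assumptions, not taken from the abstract.

```python
# Hypothetical sketch: prune an inbound payload to the fields an integration
# flow references, so unused portions are never held in memory.
# `referenced_fields` holds dot-separated paths assumed to come from the scan.

def suppress_payload(payload, referenced_fields, prefix=""):
    """Recursively keep only parts of `payload` whose path is referenced."""
    kept = {}
    for key, value in payload.items():
        path = prefix + key
        if path in referenced_fields:
            kept[key] = value  # whole subtree is used by the flow
        elif isinstance(value, dict) and any(
            f.startswith(path + ".") for f in referenced_fields
        ):
            # Some nested field is referenced; recurse and keep only that part.
            kept[key] = suppress_payload(value, referenced_fields, path + ".")
    return kept
```

For example, with a scan result of `{"order.id"}`, an inbound payload `{"order": {"id": 1, "debug": "x"}, "trace": "y"}` is reduced to `{"order": {"id": 1}}`.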

IMPROVING COMMUNICATION EFFICIENCY

There is provided a method comprising: preparing a data packet for transmission on a bearer, wherein at least one of a first network node and a second network node is configured for the transmission; checking whether at least one predetermined criterion is met; selecting, based at least partly on the checking, whether to transmit the data packet via the first network node, via the second network node, or via both the first and second network nodes; and transmitting the data packet according to the selecting.
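A minimal sketch of the selection step, assuming the predetermined criterion is a latency threshold (the criterion, node names, and threshold value are illustrative, not from the claim):

```python
# Hedged sketch: decide whether to transmit via the first node, the second,
# or both, based on a hypothetical per-link latency criterion.

def select_route(first_latency_ms, second_latency_ms, duplicate_threshold_ms=50):
    """Return the set of nodes the data packet should be transmitted through."""
    # If both links exceed the threshold, duplicate over both for reliability.
    if (first_latency_ms > duplicate_threshold_ms
            and second_latency_ms > duplicate_threshold_ms):
        return {"first", "second"}
    # Otherwise pick the faster link.
    return {"first"} if first_latency_ms <= second_latency_ms else {"second"}
```

The transmitting step then simply sends one copy of the packet per selected node.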

FACILITATION OF HANDOVER COORDINATION BASED ON VOICE ACTIVITY DATA
20180007595 · 2018-01-04 ·

A more efficient network can be achieved by leveraging an adaptive dejitter buffer. The dejitter buffer can be dynamically adjusted based on an analysis of network data. A communication handover can be adjusted or shifted based on voice inactivity data related to a forecasted punctuation. The dejitter buffer memory/depth of a mobile device can also be adjusted upon receiving a delay interruption length associated with another mobile device. Thereafter, the dejitter buffer memory can be filled with voice packet data to decrease packet delay variation at the mobile device.
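The depth adjustment might look like the following sketch, where the buffer is resized to cover a peer's reported delay-interruption length. The sizing policy, headroom, and bounds are assumptions for illustration:

```python
# Illustrative adaptive dejitter buffer: depth grows to cover a reported
# delay-interruption length and is clamped between a floor and a ceiling.

class DejitterBuffer:
    def __init__(self, depth_ms=60, floor_ms=20, ceil_ms=200):
        self.depth_ms = depth_ms
        self.floor_ms = floor_ms
        self.ceil_ms = ceil_ms

    def adjust(self, reported_interruption_ms):
        """Resize the buffer depth to cover the reported interruption."""
        target = reported_interruption_ms + 10  # headroom (assumed constant)
        self.depth_ms = max(self.floor_ms, min(self.ceil_ms, target))
        return self.depth_ms
```

A deeper buffer absorbs more arrival-time variation at the cost of added playout latency, which is why the depth is bounded rather than tracking the report exactly.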

SYSTEM AND METHOD FOR DETERMINING A CAUSE OF NETWORK CONGESTION

A method and apparatus of a device that determines a cause and effect of congestion in the device is described. In an exemplary embodiment, the device measures the queue group occupancy of a queue group for a port in the device, where the queue group stores a plurality of packets to be communicated through that port. In addition, the device determines whether the measurement indicates potential congestion of the queue group, where the congestion prevents a packet from being communicated within a time period. If potential congestion exists on that queue group, the device further gathers information regarding packets to be transmitted through that port. For example, the device can gather statistics on packets that are stored in the queue group and/or newly enqueued packets.
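The occupancy check and the trigger for gathering statistics can be sketched as follows; the percentage threshold and the fields collected are assumptions, since the abstract does not specify them:

```python
# Rough sketch: when a queue group's occupancy crosses a threshold, start
# collecting information about packets destined for that port.

def check_queue_group(occupancy, capacity, threshold_pct=80):
    """Return (congested, stats) for one queue-group occupancy measurement."""
    congested = occupancy * 100 >= threshold_pct * capacity
    # Only gather packet statistics once potential congestion is detected.
    stats = {"occupancy": occupancy, "capacity": capacity} if congested else None
    return congested, stats
```

Deferring the statistics gathering until the threshold trips keeps the measurement path cheap in the common, uncongested case.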

Fair Distribution Of Radio Resources Between Guaranteed Bit Rate (GBR) And Non-GBR Data Bearers

User equipments (UEs) may be scheduled by determining relative priorities of data radio bearers (DRBs), each DRB associated with a respective UE. A limit is established dividing radio resources available for allocation in the cell during a scheduling period into at least a first limited portion and a second remaining portion. According to the determined relative priorities: a) up to the first limited portion of the radio resources is allocated to only the DRBs that have a guaranteed bit rate (GBR), and thereafter b) the second remaining portion of the radio resources is allocated to only the DRBs which have not been fully allocated from the first limited portion. Schedules indicating this allocation are transmitted to the respective UEs. In carrier aggregation, where each carrier-aggregated cell has a respective plurality of DRBs, relative priorities are determined separately for the plurality of DRBs of each carrier-aggregated cell.
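The two-phase allocation can be sketched as below, assuming each DRB is described by a (priority, is_GBR, demand) tuple with lower priority values scheduled first; the resource unit and data layout are illustrative:

```python
# Sketch of the two-phase GBR / non-GBR allocation over one scheduling period.
# drbs: list of (priority, is_gbr, demand) tuples; lower priority value wins.

def schedule(total_prbs, gbr_limit, drbs):
    """Return a dict mapping DRB index -> allocated resource blocks."""
    alloc = {i: 0 for i in range(len(drbs))}
    order = sorted(range(len(drbs)), key=lambda i: drbs[i][0])

    # Phase a): the first limited portion goes only to GBR bearers.
    budget = min(gbr_limit, total_prbs)
    for i in order:
        _, is_gbr, demand = drbs[i]
        if is_gbr and budget > 0:
            give = min(demand, budget)
            alloc[i] += give
            budget -= give

    # Phase b): the remaining portion goes to bearers not yet fully served.
    remaining = total_prbs - sum(alloc.values())
    for i in order:
        _, _, demand = drbs[i]
        unmet = demand - alloc[i]
        if unmet > 0 and remaining > 0:
            give = min(unmet, remaining)
            alloc[i] += give
            remaining -= give
    return alloc
```

For instance, with 10 resource blocks, a GBR limit of 4, a GBR bearer demanding 6 and a non-GBR bearer demanding 5, the GBR bearer receives 4 in phase a) and 2 more in phase b), and the non-GBR bearer receives the remaining 4.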

Packet processing at a server

A server processes received real-time transport protocol packets from a first device to obtain sequentially ordered packets at a first buffer. The server decodes the sequentially ordered packets to obtain decoded packets at a decoder. The server encodes the decoded packets to obtain encoded packets at an encoder. The server transmits the encoded packets from the encoder to a storage unit. The server fetches the encoded packets from the storage unit at a first interval using a second buffer. The server transmits the encoded packets from the second buffer to a second device at a second interval.
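The first stage, turning out-of-order RTP arrivals into sequentially ordered packets, can be sketched as a sort on the sequence number. The packet representation is an assumption, and real RTP sequence numbers wrap at 2**16, which this sketch ignores:

```python
# Minimal sketch of the first buffer's job: yield buffered RTP packets in
# sequence-number order (no wraparound or loss handling).

def sequentially_order(packets):
    """Sort buffered RTP packets by their 'seq' field."""
    return sorted(packets, key=lambda p: p["seq"])
```

The ordered packets then flow through the decode, encode, store, and fetch stages described above, with the second buffer pacing delivery to the second device.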

CIRCUIT AND METHOD FOR CREDIT-BASED FLOW CONTROL
20180013689 · 2018-01-11 ·

A receiving circuit of a communications link comprises: a first data buffer configured to input, under control of a first clock signal, data of a first data stream transmitted by a transmitting circuit, and to generate a credit trigger signal indicating when a data value is read from the first data buffer, wherein data is read from the first data buffer, or from a further data buffer coupled to the output of the first data buffer, under control of a second clock signal; and a credit generation circuit configured to generate, based on the credit trigger signal, a credit signal for transmission to the transmitting circuit under control of the first clock signal, the credit signal indicating that one or more further data values of the first data stream can be transmitted by the transmitting circuit.
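The credit loop can be modeled as below: the transmitter spends one credit per data value sent, and each read from the receiving buffer triggers a credit back to the transmitter. This software sketch omits the two clock domains and the buffer chaining described in the abstract:

```python
# Hedged software model of credit-based flow control: the transmitter may
# send only while it holds credits; each receiver-side read returns one.

class CreditLink:
    def __init__(self, initial_credits):
        self.credits = initial_credits  # credits held by the transmitter
        self.buffer = []                # receiver-side data buffer

    def transmit(self, value):
        """Transmitter side: send a value only when a credit is available."""
        if self.credits == 0:
            return False                # would overrun the receiver's buffer
        self.credits -= 1
        self.buffer.append(value)
        return True

    def read(self):
        """Receiver side: reading a value generates the credit trigger."""
        value = self.buffer.pop(0)
        self.credits += 1               # credit signal back to the transmitter
        return value
```

Because the transmitter can never hold more outstanding data than the receiver has buffer slots, the link is lossless by construction.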

Memory allocator for I/O operations

Some embodiments provide a novel method for sharing data between user-space processes and kernel-space processes without copying the data. The method dedicates, by a driver of a network interface controller (NIC), a memory address space for a user-space process. The method allocates a virtual region of the memory address space for zero-copy operations. The method maps the virtual region to a memory address space of the kernel. The method allows access to the virtual region by both the user-space process and a kernel-space process.
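As a loose user-space analogy (not the patent's kernel mechanism), Python's `multiprocessing.shared_memory` maps one region into two handles so both sides see the same bytes without copying, similar in spirit to mapping the NIC driver's virtual region into both the user-space and kernel address spaces:

```python
# Analogy only: two handles onto one shared region; writes through one handle
# are visible through the other with no copy in between.

from multiprocessing import shared_memory

producer = shared_memory.SharedMemory(create=True, size=4096)
producer.buf[:5] = b"hello"

# A second handle attaches to the same region by name; no data is copied.
consumer = shared_memory.SharedMemory(name=producer.name)
assert bytes(consumer.buf[:5]) == b"hello"

consumer.close()
producer.close()
producer.unlink()
```

In the described embodiments the same effect is achieved by the NIC driver mapping a dedicated virtual region into the kernel's address space, so I/O buffers need not be copied across the user/kernel boundary.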

Accurate time-stamping of outbound packets

A network device includes a port, a transmission pipeline and a time-stamping circuit. The port is configured for connecting to a network. The transmission pipeline includes multiple pipeline stages and is configured to process packets and to send the packets to the network via the port. The time-stamping circuit is configured to temporarily suspend at least some processing of at least a given packet in the transmission pipeline, to verify whether a pipeline stage having a variable processing delay, located downstream from the time-stamping circuit, meets an emptiness condition, and, only when the pipeline stage meets the emptiness condition, to time-stamp the given packet and resume the processing of the given packet.
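The stamping decision can be modeled simply: processing of the packet stays suspended until the variable-delay stage downstream reports empty, so the stamp is applied only when the residual path delay is predictable. The queue-depth representation is an assumption:

```python
# Simplified model of the time-stamping circuit's gate: suspend until the
# downstream variable-delay stage meets the emptiness condition, then stamp.

def stamp_when_empty(packet, downstream_queue_depth, now_ns):
    """Return False (stay suspended) unless the downstream stage is empty;
    otherwise time-stamp the packet so its processing can resume."""
    if downstream_queue_depth > 0:       # emptiness condition not met
        return False
    packet["tx_timestamp_ns"] = now_ns   # stamp just before a fixed-delay path
    return True
```

Gating on emptiness is what makes the timestamp accurate: once the variable-delay stage is drained, the remaining time from stamping to the wire is constant and can be compensated for.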