H04L12/863

MODELLING INTERFERENCE

This disclosure relates to communicating on a wireless channel in the presence of an interference source. A receiver accesses the channel to perform a sequence of energy detections. The interference source is modelled as cyclically transitioning into and out of an inactive state and as cyclically transitioning, when out of the inactive state, between a first active state, in which the interference source is active and creating interference on the channel, and a second active state, in which the interference source is active but creating a substantially lower level of interference on the channel. Based on the sequence of energy detections, large and small time scale metrics are estimated. A transmitter then transmits data in dependence on the estimated metrics.
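
As an illustrative sketch (not the claimed method), the two metrics could be estimated from a thresholded sequence of energy detections: a large time scale metric as the interferer's duty cycle, and a small time scale metric as the mean length of its active bursts. The function name and threshold parameter are assumptions for illustration.

```python
def estimate_metrics(energy, threshold):
    """Estimate a large time scale metric (duty cycle) and a small time
    scale metric (mean active-burst length) from energy detections.
    Detections above `threshold` are treated as the first active state."""
    active = [e > threshold for e in energy]
    duty_cycle = sum(active) / len(active)   # large time scale: fraction active
    bursts, run = [], 0
    for a in active:
        if a:
            run += 1
        elif run:
            bursts.append(run)
            run = 0
    if run:
        bursts.append(run)
    mean_burst = sum(bursts) / len(bursts) if bursts else 0.0
    return duty_cycle, mean_burst
```

A transmitter could then defer transmission while the estimated duty cycle is high, or size its packets to fit the gaps implied by the mean burst length.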

PACKET ORDER IDENTIFICATION WITH REDUCED OVERHEAD IN PACKETIZED DATA TRANSMISSION
20170272376 · 2017-09-21

A transmitting device comprising: a transmitter for transmitting data to a receiving device; and a controller for formatting the data to be transmitted from the transmitter, by dividing the data amongst a plurality of packets. The controller is configured to package each respective one of the packets with only a respective portion of an index sequence as an identifier field for distinguishing between the packets within the sequence, wherein at least one of the portions is alone insufficient to identify its respective packet. The controller is further configured to control the transmitter to transmit the packets including the respective portions of the index sequence, ordered such that the index sequence repeats cyclically over the transmission of the packets; thereby enabling the receiving device to determine a respective position in the index sequence for each of the packets by referencing a plurality of the portions together, and to thereby identify the packets.
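
One way to realize such a cyclic index sequence (an illustrative choice, not stated in the abstract) is a binary de Bruijn sequence, in which every window of k consecutive symbols is unique over the cycle. Each packet then carries a single bit as its portion, and the receiver recovers a packet's position by referencing three consecutive portions together:

```python
SEQ = "00010111"  # de Bruijn sequence B(2,3): every cyclic 3-bit window is unique

def portion(i):
    """The one-bit portion of the index sequence packaged with packet i."""
    return SEQ[i % len(SEQ)]

def position(window):
    """Receiver side: identify a packet's position in the cycle from
    three consecutive portions referenced together."""
    return (SEQ + SEQ[:2]).index(window)
```

A single one-bit portion is alone insufficient to identify its packet, as the abstract requires; only a run of three portions, taken together, pins down a position in the cycle.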

System and method for improving infrastructure to infrastructure communications

Systems and methods are provided for improving communications between infrastructures using RPCs. An authoritative endpoint in a first infrastructure receives a registration request from a non-authoritative server in a second infrastructure through a transport layer on which a remote procedure call (RPC) layer depends. This request establishes a connection with the authoritative endpoint. The authoritative entity authenticates and registers the non-authoritative entity, and receives RPCs from client devices through the non-authoritative entity. The authoritative entity provides responses to the RPCs through the non-authoritative entity over the established connection. The authoritative entity also performs load-shedding operations, such as notifying the non-authoritative entity of a time to live of the connection. The RPC requests and responses sent over the connection may be chunked into frames, each frame identifying a stream to which it belongs.
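
The chunking described in the last sentence can be sketched as follows; the frame fields (`stream`, `data`, `end`) are hypothetical names, not the patent's wire format:

```python
def chunk(stream_id, payload, frame_size):
    """Split an RPC request or response into frames, each identifying
    the stream to which it belongs."""
    frames = []
    for off in range(0, len(payload), frame_size):
        frames.append({"stream": stream_id,
                       "data": payload[off:off + frame_size],
                       "end": off + frame_size >= len(payload)})
    return frames

def reassemble(frames):
    """Regroup possibly interleaved frames by stream; a stream completes
    when its final frame arrives."""
    parts, done = {}, {}
    for f in frames:
        parts.setdefault(f["stream"], []).append(f["data"])
        if f["end"]:
            done[f["stream"]] = b"".join(parts[f["stream"]])
    return done
```

Because every frame names its stream, frames of concurrent RPCs can be interleaved over the single established connection.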

CONTROL APPARATUS, DATA TRANSMITTING SYSTEM, AND METHOD OF DATA TRANSMITTING
20170324669 · 2017-11-09

A control apparatus controls a first data transfer apparatus including a first port. The control apparatus includes a memory and a processor configured to: detect that a first transmission rate of first data transmitted using the first port is equal to or higher than a first value; when it is detected that the first transmission rate is equal to or higher than the first value, request the first data transfer apparatus to change a destination of a first packet from a first information processing apparatus to a data buffer; after the first packet transmitted from the first data transfer apparatus is stored for a first period in the data buffer, cause the data buffer to transmit the first packet to the first data transfer apparatus; and request the first data transfer apparatus to change the destination of the first packet from the data buffer to the first information processing apparatus.
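
A minimal sketch of the control logic, assuming a simple time-based hold (the class and method names are illustrative):

```python
class RedirectController:
    """Divert packets to a data buffer while a rate threshold is exceeded,
    hold them for a fixed period, then restore the original destination."""
    def __init__(self, first_value, first_period):
        self.first_value = first_value    # rate threshold
        self.first_period = first_period  # how long packets sit in the buffer
        self.divert_until = None

    def destination(self, rate, now):
        if self.divert_until is None and rate >= self.first_value:
            self.divert_until = now + self.first_period  # start diverting
        if self.divert_until is not None:
            if now < self.divert_until:
                return "buffer"           # destination changed to the data buffer
            self.divert_until = None      # period elapsed: buffer drains back
        return "server"                   # the first information processing apparatus
```

Holding a burst in the buffer for the first period smooths the transmission rate seen by the information processing apparatus.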

PROVIDING QUEUEING IN A LOG STREAMING MESSAGING SYSTEM
20170272516 · 2017-09-21

Queueing is provided in a log streaming system. A state of each of a set of queues of messages is maintained by sending messages to a state topic in the log streaming system. Responsive to a client writing a message to one of the queues, the message is written to a message topic for the queue in the log streaming system. Responsive to the client reading from one of the queues, a message is read from the message topic for the queue, and property types relating to the availability of the message are stored in the state topic for the queue by sending messages to the state topic referencing the message in the message topic.
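
A toy model of the two-topic scheme, with topics as append-only lists (the record shape is an assumption for illustration):

```python
class LogStreamQueue:
    """A queue built from two append-only logs: a message topic holding
    the payloads and a state topic recording availability changes."""
    def __init__(self):
        self.message_topic = []  # appended to on enqueue
        self.state_topic = []    # appended to on dequeue, referencing an offset

    def write(self, msg):
        self.message_topic.append(msg)

    def read(self):
        consumed = {s["offset"] for s in self.state_topic}
        for offset, msg in enumerate(self.message_topic):
            if offset not in consumed:
                # record the availability change by referencing the message
                self.state_topic.append({"offset": offset, "state": "consumed"})
                return msg
        return None
```

Because both topics are append-only, the queue's state can be rebuilt at any time by replaying the logs, which is exactly the property a log streaming system provides.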

METHOD AND APPARATUS FOR PROGRAMMABLE BUFFERS IN MOBILE NETWORKS

It is possible to dynamically configure buffers in network devices by sending software defined network (SDN) instructions to Control-to-Data-Plane Interface (CDPI) agents on the network devices. An SDN instruction may instruct a CDPI agent to configure a buffer in a network device to store a specific type of traffic flow in accordance with a traffic handling policy. In some embodiments, the SDN instruction instructs the CDPI agent to directly configure a buffer by, for example, associating a virtual port with a new/existing buffer, binding a virtual port associated with the buffer to a switch, and/or installing a flow control rule in a flow table of the switch. In other embodiments, the SDN instruction may instruct the CDPI agent to reconfigure the buffer by transitioning the buffer to a different state.
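
A toy dispatch for such instructions (the instruction fields and switch structure are assumptions for illustration, not a CDPI or OpenFlow wire format):

```python
def apply_instruction(switch, instr):
    """Sketch of a CDPI agent applying an SDN instruction to a switch."""
    if instr["op"] == "configure":
        # associate a virtual port with the buffer and bind it to the switch
        switch["buffers"][instr["buffer_id"]] = {
            "vport": instr["vport"],
            "policy": instr["policy"],   # traffic handling policy
            "state": "active",
        }
        # install a flow rule steering the matched flow into the buffer
        switch["flow_table"].append(
            {"match": instr["match"], "action": ("output", instr["vport"])})
    elif instr["op"] == "reconfigure":
        # transition the buffer to a different state
        switch["buffers"][instr["buffer_id"]]["state"] = instr["state"]
    return switch
```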

System for transmitting concurrent data flows on a network
09813348 · 2017-11-07

A system for transmitting concurrent data flows on a network includes: a memory containing data of the data flows; a plurality of queues assigned respectively to the data flows, organized to receive the data as atomic transmission units; a flow regulator to poll the queues in sequence and, if the polled queue contains a full transmission unit, transmit the unit on the network at a nominal flow-rate of the network; a sequencer to poll the queues in a round-robin manner and enable a data request signal when the filling level of the polled queue is below a threshold common to all queues, which threshold is greater than the size of the largest transmission unit; and a direct memory access controller configured to receive the data request signal and respond thereto by transferring data from the memory to the corresponding queue at a nominal speed of the system, up to the common threshold.
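
The interplay of the flow regulator, sequencer and DMA can be sketched as one polling round (the structure names are illustrative):

```python
def service_round(queues, unit_size, threshold, memory):
    """One round: the flow regulator transmits one full transmission unit
    from each queue that holds one; the sequencer then polls round-robin
    and triggers a DMA refill for every queue below the common threshold."""
    sent = []
    for q in queues:                          # flow regulator pass
        if len(q["buf"]) >= unit_size:        # queue holds a full unit
            sent.append(q["buf"][:unit_size])
            del q["buf"][:unit_size]
    for q in queues:                          # sequencer pass (round-robin)
        if len(q["buf"]) < threshold and memory[q["flow"]]:
            need = threshold - len(q["buf"])  # DMA fills up to the threshold
            q["buf"].extend(memory[q["flow"]][:need])
            del memory[q["flow"]][:need]
    return sent
```

Because the common threshold exceeds the largest unit size, a refilled queue ends the round holding at least one full transmission unit, provided enough data remains in memory.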

Advertising network layer reachability information specifying a quality of service for an identified network flow

Methods, apparatus and articles of manufacture for advertising network layer reachability information specifying a quality of service for an identified network flow are disclosed. Example methods disclosed herein to specify quality of service for network flows include receiving network layer reachability information including a first quality of service class specified for a first network flow, the network layer reachability information having been advertised by a first network element that is to receive the first network flow. Such example methods can also include updating an incoming packet determined to belong to the first network flow to indicate that the incoming packet belongs to the first quality of service class, the incoming packet being received from a second network element. Such example methods can further include, after updating the incoming packet, routing the incoming packet towards the first network element.
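
A minimal sketch of the updating and routing steps (the field names are assumptions):

```python
def mark_and_route(packet, advertisements):
    """Update an incoming packet with the quality of service class
    advertised for its flow, then route it toward the advertising element."""
    for adv in advertisements:
        if adv["flow"] == packet["flow"]:
            packet["qos_class"] = adv["qos_class"]  # update before routing
            packet["next_hop"] = adv["advertiser"]  # the first network element
            break
    return packet
```

The marking happens at the second network element's side of the path, so downstream routers can honor the class the receiving element asked for.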

Method, device and system for establishing label switched path
09813332 · 2017-11-07

The present application discloses a method, a device and a system for establishing an LSP. The method includes: allocating, by a proxy node device, a label for a destination node device, generating a label mapping message carrying the label, an address of the destination node device and an address of the proxy node device, and sending the label mapping message to an upstream node device to initiate establishment of a first LSP from an entry node device to the proxy node device; stitching, by the proxy node device, the first LSP with a second LSP to form a third LSP from the entry node device to the destination node device, where the second LSP is an LSP established between the proxy node device and the destination node device.
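
The stitching step can be sketched as joining the two paths at the proxy, where a cross-connect maps the label the proxy allocated upstream to the label of the second LSP (the list-of-hops representation is an assumption for illustration):

```python
def stitch(first_lsp, second_lsp):
    """Stitch a first LSP (entry node -> proxy) with a second LSP
    (proxy -> destination) into a third, end-to-end LSP.
    Each LSP is a list of (node, incoming_label) hops."""
    proxy, in_label = first_lsp[-1]
    proxy2, out_label = second_lsp[0]
    assert proxy == proxy2, "both LSPs must meet at the proxy node"
    # the proxy swaps the label it allocated upstream for the second LSP's label
    cross_connect = {proxy: (in_label, out_label)}
    third_lsp = first_lsp + second_lsp[1:]
    return third_lsp, cross_connect
```

The entry node never learns that the path is stitched; it simply pushes the label the proxy advertised in its label mapping message.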

Low latency device interconnect using remote memory access with segmented queues
09811500 · 2017-11-07

A writing application on a computing device can reference a tail pointer to write messages to message buffers that a peer-to-peer data link replicates in memory of another computing device. The message buffers are divided into at least two queue segments, where each segment has several buffers. Messages are read from the buffers by a reading application on one of the computing devices using an advancing head pointer by reading a message from a next message buffer when determining that the next message buffer has been newly written. The tail pointer is advanced from one message buffer to another within a same queue segment after writing messages. The tail pointer is advanced from a message buffer of a current queue segment to a message buffer of a next queue segment when determining that the head pointer does not indicate any of the buffers of the next queue segment.
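
A single-process sketch of the segmented queue discipline (the two-segment layout and slot scheme are illustrative; in the patent the buffers are replicated to the peer device by the remote-memory-access link):

```python
class SegmentedQueue:
    """Message buffers split into segments. The tail (writer) advances
    freely within a segment but crosses into the next segment only when
    the head (reader) is not positioned in any of that segment's buffers."""
    def __init__(self, segments=2, per_segment=2):
        self.n = segments * per_segment
        self.per = per_segment
        self.slots = [None] * self.n   # None marks a buffer not newly written
        self.head = 0
        self.tail = 0

    def _seg(self, i):
        return i // self.per

    def write(self, msg):
        nxt = (self.tail + 1) % self.n
        if self._seg(nxt) != self._seg(self.tail) and self._seg(self.head) == self._seg(nxt):
            return False               # reader still in the next segment: wait
        if self.slots[self.tail] is not None:
            return False               # current buffer not yet consumed
        self.slots[self.tail] = msg
        self.tail = nxt
        return True

    def read(self):
        msg = self.slots[self.head]
        if msg is None:
            return None                # next buffer has not been newly written
        self.slots[self.head] = None
        self.head = (self.head + 1) % self.n
        return msg
```

Checking occupancy a whole segment at a time means the writer needs only a coarse view of the reader's progress, which suits a remote-memory-access link where fine-grained pointer synchronization is expensive.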