H04L47/2433

System and method for low latency network switching
11658911 · 2023-05-23

A network switch, and associated method of operation, for establishing a low latency transmission path through the network that bypasses the packet queue and scheduler of the switch fabric. For each of a plurality of data packets, the network switch transmits the data packet to the identified destination egress port over the low latency transmission path if the data packet is identified to be transmitted over the low latency transmission path from the ingress port to the destination egress port, and transmits the data packet to the destination egress port through the packet queue and scheduler otherwise.
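The forwarding decision the abstract describes can be sketched as follows. This is a minimal illustration, not the patented implementation; the packet fields (`low_latency`, `egress_port`, `data`) and the FIFO scheduler are assumptions for the example.

```python
from collections import deque

class Switch:
    """Sketch of a switch with a low latency bypass path (hypothetical field names)."""

    def __init__(self):
        self.queue = deque()   # packet queue feeding the scheduler
        self.delivered = []    # (egress_port, data) pairs actually transmitted

    def is_low_latency(self, packet):
        # Assumed classification rule: a flag set when the packet is identified
        # at the ingress port as eligible for the bypass path.
        return packet.get("low_latency", False)

    def forward(self, packet):
        if self.is_low_latency(packet):
            # Bypass the queue and scheduler: transmit straight to the egress port.
            self.delivered.append((packet["egress_port"], packet["data"]))
        else:
            # Normal path: enqueue for the scheduler.
            self.queue.append(packet)

    def run_scheduler(self):
        # The scheduler drains queued packets in arrival order (FIFO here).
        while self.queue:
            packet = self.queue.popleft()
            self.delivered.append((packet["egress_port"], packet["data"]))
```

A bypassing packet reaches its egress port immediately in `forward`, while a normal packet waits for the next `run_scheduler` cycle.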

Mesh network system and communication method of the same having data flow transmission sorting mechanism
20230155946 · 2023-05-18

The present invention discloses a mesh network system having a data flow transmission sorting mechanism that includes station devices, mesh network devices and a portal network device. The portal network device is configured to manage the station devices so that they communicate with an external network through the mesh network devices and the portal network device; to detect outward data flows and inward data flows related to the station devices; to perform priority ranking on the outward and inward data flows according to device priority information and relative flow amount information, generating outward priority ranking information and inward priority ranking information; and to transmit the outward and inward priority ranking information to the mesh network devices, such that the mesh network system transmits the outward and inward data flows according to the respective ranking information.
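The ranking step combines two inputs the abstract names: device priority information and relative flow amount information. A minimal sketch, assuming device priority dominates and the flow's share of total traffic breaks ties (the exact combination rule is not specified in the abstract):

```python
def rank_flows(flows, device_priority):
    """Rank data flows by device priority, then by relative flow amount.

    `flows` is a list of dicts with 'station', 'direction', and 'bytes' keys
    (hypothetical representation). A higher `device_priority` value means a
    more important station device.
    """
    total = sum(f["bytes"] for f in flows) or 1  # avoid division by zero

    def key(f):
        # Lower tuple sorts first: higher device priority wins, then the
        # larger relative flow amount.
        return (-device_priority.get(f["station"], 0), -f["bytes"] / total)

    return sorted(flows, key=key)
```

Outward and inward ranking information would be produced by calling this separately on the flows with `direction == "out"` and `direction == "in"`.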

UTILIZING SERVICE TAGGING FOR ENCRYPTED FLOW CLASSIFICATION

In one embodiment, a device in a network receives domain name system (DNS) information for a domain. The DNS information includes one or more service tags indicative of one or more services offered by the domain. The device detects an encrypted traffic flow associated with the domain. The device identifies a service associated with the encrypted traffic flow based on the one or more service tags. The device prioritizes the encrypted traffic flow based on the identified service associated with the encrypted traffic flow.
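The mechanism can be illustrated with a small sketch: service tags learned from DNS are cached per domain, and a later encrypted flow to that domain is prioritized by its tags without inspecting the payload. The tag names and priority table below are invented for the example; the abstract does not define them.

```python
# Hypothetical tag-to-priority table (0 = highest priority).
SERVICE_PRIORITY = {"voice": 0, "video": 1, "web": 2, "bulk": 3}

dns_cache = {}  # domain -> list of service tags seen in DNS information

def record_dns(domain, service_tags):
    """Store service tags received with DNS information for a domain."""
    dns_cache[domain] = list(service_tags)

def classify_flow(domain, default=3):
    """Prioritize an encrypted flow to `domain` by its cached service tags.

    Returns the best (numerically lowest) priority among known tags, or the
    default best-effort priority when no tags are known.
    """
    tags = dns_cache.get(domain, [])
    priorities = [SERVICE_PRIORITY[t] for t in tags if t in SERVICE_PRIORITY]
    return min(priorities, default=default)
```

The key point is that classification needs only the domain association (e.g., from the DNS lookup preceding the flow), so it works even though the flow itself is encrypted.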

DATA TRAFFIC CONTROL

As an example, a method includes storing, in non-transitory memory, prioritization rules that establish a priority preference for egress of data traffic for a first location. The first location includes a first location apparatus to control egress of data traffic for the first location apparatus, and a second location apparatus at a second location, different from the first location, receives data traffic and cooperates with the first location apparatus to measure bandwidth with respect to the first location. The first location apparatus is coupled with the second location apparatus via at least one bidirectional network connection. The method also includes estimating capacity of the at least one network connection for the egress of data traffic with respect to the first location. The method also includes categorizing each packet in egress data traffic from the first location based on an evaluation of each packet with respect to the prioritization rules. The method also includes placing each packet in one of a plurality of egress queues associated with the at least one network connection at the first location apparatus according to the categorization of each respective packet and the estimated capacity. The method also includes sending the packets from the first location apparatus to the second location apparatus via a respective network connection according to a priority of the respective egress queue into which each packet is placed, such that the first location apparatus transmits at the estimated capacity for the egress of data traffic.
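The categorize-queue-send pipeline above can be sketched as a simple shaper. This is an illustrative reduction, not the claimed method: the rules are modeled as predicates, the estimated capacity as a per-cycle byte budget, and the scheduler as strict priority with a FIFO per queue.

```python
class EgressShaper:
    """Sketch: categorize packets by rules, queue per priority, send at capacity."""

    def __init__(self, rules, capacity_bytes):
        self.rules = rules              # list of (predicate, priority) pairs
        self.capacity = capacity_bytes  # estimated link capacity per send cycle
        self.queues = {}                # priority -> FIFO list of packets

    def categorize(self, packet):
        """Evaluate the packet against the prioritization rules in order."""
        for predicate, priority in self.rules:
            if predicate(packet):
                return priority
        return max(p for _, p in self.rules) + 1  # fallback: lowest priority

    def enqueue(self, packet):
        self.queues.setdefault(self.categorize(packet), []).append(packet)

    def send_cycle(self):
        """Drain higher-priority queues first, up to the estimated capacity."""
        sent, budget = [], self.capacity
        for prio in sorted(self.queues):
            q = self.queues[prio]
            while q and q[0]["size"] <= budget:
                pkt = q.pop(0)
                budget -= pkt["size"]
                sent.append(pkt)
        return sent
```

Capping each cycle at the capacity estimate models the claim that the first location apparatus "transmits at the estimated capacity" rather than oversubscribing the connection.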

PREAMBLE DESIGN ON A SHARED COMMUNICATION MEDIUM

Techniques for managing preamble transmission and processing on a shared communication medium are disclosed. An access point or an access terminal, for example, may generate a preamble for silencing communication on a communication medium with respect to an upcoming data transmission, configure the preamble to identify one or more target devices for the silencing, and transmit the preamble over the communication medium in advance of the data transmission. Conversely, the access point or the access terminal may receive a preamble (as a receiving device) over a communication medium, identify one or more target devices for silencing communication on the communication medium with respect to an upcoming data transmission based on the preamble, and selectively silence communication over the communication medium based on itself (as the receiving device) being among the one or more target devices.
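Both sides of the exchange can be sketched in a few lines: the sender builds a preamble naming the devices to silence, and each receiver defers only if it is named. The frame fields below are assumptions for illustration; the abstract does not specify the preamble format.

```python
def make_preamble(sender, targets, duration_us):
    """Sender side: build a preamble that names the devices to be silenced."""
    return {"type": "silence", "sender": sender,
            "targets": set(targets), "duration_us": duration_us}

def should_silence(preamble, my_id):
    """Receiver side: defer only if this device is among the named targets."""
    return preamble["type"] == "silence" and my_id in preamble["targets"]
```

The selectivity is the point: devices not named in the preamble may continue to contend for the medium during the protected data transmission.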

Edge synchronization systems and methods

The present invention relates to IoT devices existing in a deployed ecosystem. Each computer in the deployed ecosystem can respond to requests from a device directly associated with it in a particular hierarchy, or it may seek a response to the request from a higher-order logic/data source (its parent). The parent logic/data source then repeats this process: it either provides the necessary response to the child logic/data source, which replies to the device, or it in turn asks its own parent logic/data source for the appropriate response. This architecture allows a single device to make one request to a single known source and potentially receive a response drawn from the entire ecosystem of distributed servers.
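The escalation pattern can be shown with a small sketch, under the assumption that each node answers from local data when it can and otherwise recursively asks its parent (caching the answer so later requests are served locally):

```python
class Node:
    """An edge logic/data source that answers locally or escalates to its parent."""

    def __init__(self, name, data=None, parent=None):
        self.name = name
        self.data = data or {}
        self.parent = parent

    def handle(self, key):
        # Answer from local data when possible.
        if key in self.data:
            return self.data[key]
        # Otherwise repeat the process one level up the hierarchy.
        if self.parent is not None:
            answer = self.parent.handle(key)
            self.data[key] = answer  # cache so the next request is local
            return answer
        raise KeyError(key)
```

The device only ever talks to its single known source (the leaf node), yet an answer may originate anywhere up the chain of parents.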

Rate limit and burst limit enhancements for request processing
11683254 · 2023-06-20

A method that includes establishing an open connection for responding to requests from clients supported by an application server. The method may further include establishing a set of queues configured for storing requests received from the clients via the open connection. The method may further include selecting requests from the queues based on a rate limit threshold and a burst limit threshold of the application server. The rate limit threshold may refer to a number of requests that the application server can process within a first time duration, while the burst limit threshold may refer to a number of requests that the application server can process within a second time duration that is shorter than the first time duration. The method may further include transmitting the requests to a set of data processing servers connected to the application server and receiving an indication that the requests have been processed.
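The two thresholds can be modeled as sliding-window counters over a long window (rate limit) and a short window (burst limit); selection proceeds round-robin across the queues while both limits allow. This is a sketch of one plausible reading of the abstract, not the claimed implementation.

```python
import time

class LimitedDispatcher:
    """Sketch: drain queued requests subject to rate and burst limits."""

    def __init__(self, rate_limit, rate_window, burst_limit, burst_window):
        # burst_window is the shorter duration; burst_limit caps short spikes.
        self.rate_limit, self.rate_window = rate_limit, rate_window
        self.burst_limit, self.burst_window = burst_limit, burst_window
        self.sent = []  # (timestamp, request) history

    def _count_since(self, now, window):
        return sum(1 for t, _ in self.sent if now - t <= window)

    def select(self, queues, now=None):
        """Pull requests round-robin from the queues while both limits allow."""
        now = time.monotonic() if now is None else now
        picked, progress = [], True
        while progress:
            progress = False
            for q in queues:
                if not q:
                    continue
                if (self._count_since(now, self.rate_window) >= self.rate_limit or
                        self._count_since(now, self.burst_window) >= self.burst_limit):
                    return picked  # a threshold is reached: stop selecting
                req = q.pop(0)
                self.sent.append((now, req))
                picked.append(req)
                progress = True
        return picked
```

Because the burst window is shorter, it is the limit that typically trips first under a spike, while the rate limit governs sustained load.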

DYNAMIC QUALITY OF SERVICE FOR OVER-THE-TOP CONTENT
20170353389 · 2017-12-07

A method, non-transitory computer readable medium and apparatus for changing a quality of service for data packets that are delivered over-the-top are disclosed. For example, the method includes a processor that identifies the data packets as video data packets that are delivered over-the-top in a communication network, changes the quality of service associated with the data packets from a best-effort quality of service level to a higher-priority quality of service level, monitors the data packets until no video data packet is identified among them, and then changes the quality of service associated with the data packets back to the best-effort level from the higher-priority level.
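The behavior reduces to a two-state controller: upgrade while video packets are observed, revert when none are. A minimal sketch, with `is_video` standing in for whatever classifier identifies over-the-top video packets (the abstract does not specify one):

```python
BEST_EFFORT, PRIORITY = "best_effort", "priority"

class OttQosController:
    """Sketch: raise QoS while OTT video packets are seen, then fall back."""

    def __init__(self):
        self.level = BEST_EFFORT

    def observe(self, packets, is_video):
        """Update the QoS level from the latest batch of monitored packets."""
        if any(is_video(p) for p in packets):
            self.level = PRIORITY      # video identified: upgrade
        else:
            self.level = BEST_EFFORT   # no video packet identified: revert
        return self.level
```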

Orchestrating Network Usage Under the Threat of Unexpected Bandwidth Shortages
20230189060 · 2023-06-15

A method for orchestrating use of a communications network for conveying a plurality of data streams transmitted by a plurality of applications includes attributing levels of importance to individual data streams or groups of data streams, and determining, based at least in part on the levels of importance and on the network capacity requirements of the data streams, an ordered list of data streams to be curtailed or stopped in case of a shortage of bandwidth in the communications network. The list is provided to a management entity that monitors available bandwidth and compares it to a combined bandwidth requirement. In response to determining that the available bandwidth is, or is about to become, less than the combined bandwidth requirement, the management entity curtails or stops data streams in the order given by the list so as to bring the combined bandwidth requirement back to or below the available bandwidth.
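The two halves of the method, building the ordered list and enforcing it under shortage, can be sketched as follows. The policy shown (least important stopped first, stopping whole streams until demand fits) is one plausible reading; the abstract also allows curtailing streams rather than stopping them.

```python
def build_curtail_list(streams, importance):
    """Order streams so the least important are curtailed or stopped first."""
    return sorted(streams, key=lambda s: importance[s["name"]])

def enforce(streams, available_bw):
    """Stop streams in list order until combined demand fits available bandwidth.

    `streams` must already be in curtailment order (from build_curtail_list).
    Returns the names of stopped streams and the remaining combined demand.
    """
    demand = sum(s["bw"] for s in streams)
    stopped = []
    for s in streams:
        if demand <= available_bw:
            break
        demand -= s["bw"]
        stopped.append(s["name"])
    return stopped, demand
```

Separating list construction from enforcement mirrors the abstract's split of roles: the orchestrator decides the order in advance, while the management entity merely walks the list when bandwidth runs short.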

Radio base station and mobile station

Priority control is performed using FPIs. A radio base station eNB according to the present invention includes: a bearer management unit 12 configured to manage FPIs assigned to data flows received from a core network device S-GW via an S1 bearer; and a priority control unit 13 configured to perform priority control over those data flows. The bearer management unit 12 establishes a radio bearer with a mobile station UE for each of the FPIs, and the priority control unit 13 transfers each data flow received from the core network device S-GW via the S1 bearer to the radio bearer corresponding to the FPI assigned to that data flow.
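The per-FPI bearer mapping can be sketched as a dictionary keyed by FPI: one radio bearer is established per FPI, and each arriving flow is forwarded to the bearer matching its FPI. The flow representation below is an assumption for the example.

```python
class BearerManager:
    """Sketch: one radio bearer per FPI; flows are forwarded by their FPI."""

    def __init__(self):
        self.bearers = {}  # fpi -> list of payloads delivered on that bearer

    def establish(self, fpi):
        """Establish a radio bearer with the UE for this FPI (modeled as a list)."""
        self.bearers.setdefault(fpi, [])

    def transfer(self, flow):
        """Forward a flow from the S1 bearer to the radio bearer for its FPI."""
        fpi = flow["fpi"]
        if fpi not in self.bearers:
            self.establish(fpi)  # new FPI seen: set up its radio bearer first
        self.bearers[fpi].append(flow["payload"])
```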