Patent classifications
H04L47/30
ETHERNET PAUSE AGGREGATION FOR A RELAY DEVICE
A relay device is provided that may identify a quantity of empty data byte locations in a data buffer of the relay device. The relay device may receive an indicator associated with transmitting data packets. The relay device may pause or enable a lossless flow of data between the relay device, a host device, and a peer device based on the quantity of empty data byte locations, the indicator, or both. The relay device may include a first data interface coupled with a peer device, a second data interface coupled with a host device, a data buffer configured to store data packets received from the host device, and a state machine that enables a lossless transmission of data between the host device and peer device. The state machine may transmit a pause frame to the host device based on utilization of the data buffer reaching its data storage capacity.
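The pause/resume decision described above can be sketched as a small state machine. This is a minimal illustration, not the patented implementation: the `PauseState` names and the pause/resume thresholds are assumptions introduced for the example.

```python
from enum import Enum

class PauseState(Enum):
    FLOWING = "flowing"
    PAUSED = "paused"

class RelayBuffer:
    """Sketch of a relay-device buffer that emits pause frames toward the
    host when it nears capacity, enabling lossless flow to the peer."""

    def __init__(self, capacity_bytes, pause_threshold=0.9, resume_threshold=0.5):
        self.capacity = capacity_bytes       # total data byte locations
        self.used = 0                        # occupied byte locations
        self.pause_threshold = pause_threshold
        self.resume_threshold = resume_threshold
        self.state = PauseState.FLOWING

    def empty_bytes(self):
        # Quantity of empty data byte locations in the buffer.
        return self.capacity - self.used

    def on_enqueue(self, nbytes):
        """Store a packet from the host; decide whether to emit a pause frame."""
        self.used += nbytes
        if self.state is PauseState.FLOWING and self.used >= self.pause_threshold * self.capacity:
            self.state = PauseState.PAUSED
            return "PAUSE_FRAME"   # tell the host to stop transmitting
        return None

    def on_dequeue(self, nbytes):
        """Forward a packet to the peer; decide whether to resume the flow."""
        self.used = max(0, self.used - nbytes)
        if self.state is PauseState.PAUSED and self.used <= self.resume_threshold * self.capacity:
            self.state = PauseState.FLOWING
            return "RESUME_FRAME"  # e.g. a pause frame with zero pause quanta
        return None
```

In Ethernet terms, "RESUME_FRAME" would typically be a PAUSE frame with a pause time of zero; the hysteresis between the two thresholds avoids oscillating between paused and flowing states.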
Method and system for data management in an edge server
Example implementations relate to a method and system for data management in a computing system, such as an edge server having a processing resource. During operation, the processing resource collects data from a plurality of smart devices and processes a portion of the data at each edge-stage of a plurality of first edge-stages to generate partially processed data. Further, the processing resource evaluates a data processing load at an edge-stage of the plurality of first edge-stages based on a throughput of the edge-stage or a size of a data processing queue of a next edge-stage of the plurality of first edge-stages. Based on the data processing load at the edge-stage, the processing resource further pushes either the partially processed data to the next edge-stage, or a portion of the partially processed data to an external computing system and the remaining portion of the partially processed data to the next edge-stage.
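The load-based routing decision between edge-stages can be illustrated as follows. This is a hedged sketch: the queue-size trigger and the offload split ratio are assumptions for the example, not values from the abstract.

```python
def route_partial_data(data, next_queue_size, next_queue_limit, offload_fraction=0.5):
    """Push all partially processed data to the next edge-stage, or split it
    between an external computing system and the next edge-stage when the
    next stage's data processing queue indicates a high load."""
    if next_queue_size < next_queue_limit:
        # Next edge-stage has headroom: forward everything locally.
        return {"next_stage": data, "external": []}
    # Next edge-stage is loaded: offload a portion to the external system
    # and push only the remaining portion to the next edge-stage.
    cut = int(len(data) * offload_fraction)
    return {"next_stage": data[cut:], "external": data[:cut]}
```

A throughput-based trigger, as the abstract also mentions, would simply replace the queue-size comparison with a throughput comparison.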
DETERMINISTIC QUALITY OF SERVICE
According to a first embodiment, a method may include receiving at least one time sensitive communications (TSC) request from at least one application function (AF). The method may further include determining at least one hold and forward (HnF) parameter based on the at least one TSC request. The method may further include transmitting the at least one HnF parameter to at least one user equipment (UE).
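One plausible reading of the hold-and-forward mechanism is that packets arriving early are held and released at a deterministic time. The sketch below is hypothetical: the TSC request fields (`burst_arrival_time`, `max_jitter`, `period`) and the HnF parameter names are assumptions, not taken from the abstract.

```python
def derive_hnf_parameters(tsc_request):
    """Derive hold-and-forward (HnF) parameters from a time-sensitive
    communications (TSC) request, to be transmitted to a UE."""
    # Hold packets until the latest expected arrival, so forwarding
    # happens at a deterministic release time regardless of jitter.
    release_time = tsc_request["burst_arrival_time"] + tsc_request["max_jitter"]
    return {
        "hold_until": release_time,       # UE holds packets until this time
        "period": tsc_request["period"],  # deterministic forwarding period
    }
```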
PASSIVE MEASUREMENT OF COMMUNICATION FLOWS
Methods, systems, and devices for communications are described. One or more flows between a node and one or more other nodes in a communication network may be monitored over a time period. During the monitoring, it may be identified that, during a subset of the time period, communications over at least one of the flows were restricted by the communication network based on receiving at least one indicator of congestion for the at least one flow. A quantity of traffic communicated over the one or more flows during the subset of the time period may then be determined, and respective flow rates of the one or more flows may be obtained. The obtained flow rates may be used to calculate a data rate of one or more connections between the node and the one or more other nodes.
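The measurement described above is passive: it only uses intervals in which the network itself restricted the flows, since during those intervals the flows run at the network-limited rate. A minimal sketch, with assumed sample structure:

```python
def estimate_connection_rate(samples):
    """Estimate a connection's data rate from passive flow observations.

    samples: list of (bytes_sent, duration_s, congestion_indicated) tuples,
    one per monitoring interval. Only intervals during which a congestion
    indicator was received (the network was restricting the flow) are used.
    """
    restricted = [(b, d) for b, d, congested in samples if congested]
    if not restricted:
        return None  # flows were never network-limited; rate unknown
    total_bytes = sum(b for b, _ in restricted)
    total_time = sum(d for _, d in restricted)
    return total_bytes / total_time  # bytes per second
```

The key point the abstract makes is the filtering step: unrestricted intervals reflect application demand, not network capacity, so they are excluded from the estimate.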
Method and apparatus for flow control in a wireless communication system
A method performed by a first node for flow control in a wireless communication system is provided. The method includes identifying a triggering event for transmitting downlink flow control feedback information, generating the downlink flow control feedback information including an available buffer size based on the identified triggering event, and transmitting, to a second node, a backhaul adaptation protocol (BAP) layer message including the downlink flow control feedback information.
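The three steps in the abstract (identify a trigger, generate feedback with the available buffer size, transmit a BAP-layer message) can be sketched as below. The trigger condition and message layout here are illustrative assumptions; the actual BAP flow-control feedback format is defined elsewhere.

```python
def maybe_send_flow_control_feedback(buffer_capacity, buffer_used,
                                     threshold_fraction=0.8):
    """Identify a triggering event (buffer occupancy crossing a threshold)
    and, if triggered, build a BAP-layer downlink flow control feedback
    message carrying the available buffer size."""
    if buffer_used < threshold_fraction * buffer_capacity:
        return None  # no triggering event; nothing to transmit
    return {
        "type": "BAP_FLOW_CONTROL_FEEDBACK",
        "available_buffer_size": buffer_capacity - buffer_used,
    }
```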
MANAGING NETWORK LATENCY USING BUFFER FILL CONTROL
A method of managing a fill state of a buffer in an external device includes monitoring, via a managing device, the latency of a network connection to an external device having a network buffer. A state of fill of the network buffer is determined based on at least the monitored latency of the network connection, and the effective network speed is estimated based on the state of fill of the network buffer. One or more network traffic scheduling parameters are adjusted in response to the estimated effective network speed, such as a maximum currently usable network speed that is lower than a maximum possible speed of the network. The maximum currently usable network speed of the network connection is periodically increased if the monitored latency is in a normal state and the maximum currently usable network speed is lower than the maximum possible speed of the network.
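The control loop above can be sketched as a simple latency-driven rate controller: rising latency implies the external device's buffer is filling, so the usable speed is reduced; normal latency allows periodic probing back toward the line rate. The constants (backoff factor, probe step, bloat threshold) are assumptions for illustration.

```python
def adjust_usable_speed(current_speed, max_speed, latency_ms,
                        baseline_latency_ms, backoff=0.8, probe_step=1.1,
                        bloat_threshold_ms=50.0):
    """Return the new maximum currently usable network speed.

    Infers the network buffer's fill state from the excess of monitored
    latency over the baseline: excess latency is queueing delay, which
    means the buffer is filling and the effective speed is below the
    rate we are sending at.
    """
    queueing_delay = latency_ms - baseline_latency_ms
    if queueing_delay > bloat_threshold_ms:
        # Buffer filling: cut the usable speed below the effective speed.
        return current_speed * backoff
    # Latency in a normal state: periodically increase toward the
    # maximum possible speed of the network.
    return min(max_speed, current_speed * probe_step)
```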
System and method of suppressing inbound payload to an integration flow of an orchestration based application integration
Described herein are systems and methods for suppressing inbound payload to an integration flow of an orchestration based application integration. The systems and methods described herein can, based upon a scan of an integration, identify and exclude from memory certain portions of one or more payloads that are received at the integration flow.
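The exclusion step can be illustrated as follows. This is a sketch under assumptions: the scan is represented by a precomputed list of exclusion paths, and the dot-separated path syntax is invented for the example.

```python
import copy

def suppress_payload(payload, excluded_paths):
    """Return a copy of an inbound payload with the portions identified by
    the integration scan removed, so they are never held in memory by the
    integration flow.

    excluded_paths: dot-separated key paths into a nested dict payload
    (an assumed representation, e.g. "order.attachment").
    """
    result = copy.deepcopy(payload)
    for path in excluded_paths:
        keys = path.split(".")
        node = result
        for key in keys[:-1]:
            node = node.get(key, {})  # missing intermediate: nothing to drop
        node.pop(keys[-1], None)
    return result
```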