Patent classifications
H04L12/873
Adaptive bandwidth throttling
Apparatuses, systems, methods, and computer program products are disclosed for adaptive bandwidth throttling. A monitor module determines a network bandwidth and/or a historical bandwidth for a data transfer between a storage source and a storage target. A target module adjusts a target bandwidth for the data transfer using a weighting factor. The target bandwidth may be based on at least one of the network bandwidth and the historical bandwidth, and the weighting factor may be based on a priority for the data transfer. A transfer module transfers at least a block of data of the data transfer from the storage source to the storage target in a manner configured to satisfy the target bandwidth. A delay before transferring the block and/or a block size for the block may be selected based on the target bandwidth.
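As a rough illustration of the weighted target-bandwidth idea above, the sketch below blends a measured network bandwidth with a historical one and derives a pacing delay for a block. The function names, the linear blend, and the priority-as-weight mapping are illustrative assumptions, not the disclosed method.

```python
def target_bandwidth(network_bw, historical_bw, priority):
    """Blend live and historical bandwidth using a priority-derived weight.

    `priority` is assumed to lie in [0, 1]; higher-priority transfers lean
    more heavily on the live network-bandwidth measurement.
    """
    w = max(0.0, min(1.0, priority))              # weighting factor
    return w * network_bw + (1.0 - w) * historical_bw

def pacing_delay(block_size, target_bw):
    """Seconds to wait before sending a block so the transfer averages
    out to roughly `target_bw` bytes per second."""
    return block_size / target_bw

# A high-priority transfer: 100 MB/s measured, 40 MB/s historical.
bw = target_bandwidth(network_bw=100e6, historical_bw=40e6, priority=0.75)
delay = pacing_delay(block_size=1 << 20, target_bw=bw)
```

Instead of (or in addition to) delaying, an implementation could hold the delay fixed and pick a block size of `target_bw * delay` bytes, matching the abstract's alternative.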
Method, device and system for establishing label switched path
The present application discloses a method, a device, and a system for establishing a label switched path (LSP). In the method, a proxy node device allocates a label for a destination node device, generates a label mapping message carrying the label, the address of the destination node device, and the address of the proxy node device, and sends the label mapping message to an upstream node device to initiate establishment of a first LSP from an entry node device to the proxy node device. The proxy node device then stitches the first LSP to a second LSP, an LSP established between the proxy node device and the destination node device, to form a third LSP from the entry node device to the destination node device.
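The stitching step can be sketched with LSPs modeled as hop lists that meet at the proxy; the message fields and the list representation are assumptions for illustration, not the claimed message formats.

```python
def label_mapping_message(label, dest_addr, proxy_addr):
    """Message the proxy sends upstream to trigger setup of the first LSP
    (entry node -> proxy)."""
    return {"label": label, "destination": dest_addr, "proxy": proxy_addr}

def stitch(first_lsp, second_lsp):
    """Join the entry->proxy LSP with the proxy->destination LSP into a
    single entry->destination LSP; the shared proxy hop appears once."""
    assert first_lsp[-1] == second_lsp[0], "LSPs must meet at the proxy"
    return first_lsp + second_lsp[1:]

msg = label_mapping_message(label=1001, dest_addr="D", proxy_addr="P")
third = stitch(["E", "A", "P"], ["P", "B", "D"])
```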
Methods to improve online diagnostics of valve assemblies on a process line and implementation thereof
Embodiments of a method, and of a system configured to implement the method, process data from one or more valve assemblies found, e.g., on a process line. These embodiments can generate a listing that identifies how network/system bandwidth is allocated for the collection of data from the valve assemblies. They can also process the data to re-arrange the valve assemblies in the listing, so as to allocate more of the network/system bandwidth to those valve assemblies that require more data to properly assess their operation. In this way, further diagnostics using the data can identify any changes in operation of the valve assemblies that might be detrimental to a valve assembly and/or the process line in general.
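The listing/re-ranking idea can be sketched as a sort by data need followed by a proportional split of the shared bandwidth; the field names and the proportional rule are assumptions for illustration.

```python
def allocate_bandwidth(valves, total_bw):
    """Re-arrange valve assemblies by how much data they need, then
    allocate the shared bandwidth proportionally to that need."""
    ranked = sorted(valves, key=lambda v: v["data_need"], reverse=True)
    total_need = sum(v["data_need"] for v in ranked)
    return [(v["id"], total_bw * v["data_need"] / total_need) for v in ranked]

# V2 needs three times the data of V1, so it is listed first and
# receives the larger share of the 400-unit budget.
listing = allocate_bandwidth(
    [{"id": "V1", "data_need": 1}, {"id": "V2", "data_need": 3}],
    total_bw=400,
)
```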
Method for prioritizing network packets at high bandwidth speeds
The embodiments are directed to methods and appliances for scheduling packet transmission. The methods and appliances can assign received data packets, or representations of data packets, to one or more connection nodes of a classification tree via one or more semi-sorted queues, the tree having a link node and first and second intermediary nodes associated with the link node, wherein the one or more connection nodes correspond to the first intermediary node. The methods and appliances can process the one or more connection nodes using a credit-based round-robin queue, and can authorize the sending of the received data packets based on that processing.
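The credit-based round-robin step can be sketched as a deficit-round-robin loop over per-connection queues of packet sizes; the quantum value and queue layout are illustrative assumptions, not the appliance's actual data structures.

```python
from collections import deque

def credit_round_robin(queues, quantum, rounds):
    """Each round, every non-empty queue earns `quantum` credits and may
    send head-of-line packets whose sizes fit its accumulated credit."""
    credits = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                credits[name] = 0       # idle queues accrue no credit
                continue
            credits[name] += quantum
            while q and q[0] <= credits[name]:
                size = q.popleft()
                credits[name] -= size
                sent.append((name, size))
    return sent

# Queue "b" holds one large packet; it must wait a round to earn
# enough credit, while "a" drains its smaller packets.
order = credit_round_robin(
    {"a": deque([300, 300, 300]), "b": deque([700])}, quantum=500, rounds=2,
)
```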
Dynamic thresholds for congestion control
Communication apparatus includes multiple interfaces configured for connection to a packet data network. A memory, coupled to the interfaces, is configured as a shared buffer to hold packets in multiple sets of queues for transmission to the network. Each set of queues receives in the shared buffer a respective allocation whose size varies over time in response to the amount of space in the shared buffer that is unused at any given time. A controller applies congestion control to a respective fraction of the packets queued for transmission from each set of queues, with that fraction set for each set at any given time according to the relation between the length of the queues in the set and the size of the set's allocation at that time.
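The dynamic-threshold relationship can be sketched as follows: each set's allocation tracks the currently unused buffer space, and the congestion-controlled fraction grows as the set's queues fill that allocation. The `alpha` factor and the linear fraction rule are assumptions for illustration.

```python
def allocation_size(free_space, alpha=0.5):
    """Per-set allocation varies with the currently unused shared-buffer
    space, so allocations shrink as the buffer fills."""
    return alpha * free_space

def controlled_fraction(queue_length, allocation):
    """Fraction of queued packets subject to congestion control, based on
    how full the set's allocation is at this instant."""
    if allocation <= 0:
        return 1.0
    return min(1.0, queue_length / allocation)

alloc = allocation_size(free_space=8000)
frac = controlled_fraction(queue_length=1000, allocation=alloc)
```

As other queue sets consume the buffer, `free_space` drops, the allocation shrinks, and the same queue length yields a larger controlled fraction.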
Feed-forward filtering device and associated method
A filtering device includes a low-pass filter (LPF), a noise estimation circuit, and a first combining circuit. The LPF receives and filters a pre-filtering signal to generate the output signal of the filtering device. The noise estimation circuit produces an estimated noise signal according to the output signal and the pre-filtering signal. The first combining circuit subtracts the estimated noise signal from the input signal of the filtering device to generate the pre-filtering signal.
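The signal flow above can be sketched per sample: a one-pole LPF produces the output, the noise estimate is taken as the part of the pre-filtering signal the LPF removed, and that estimate is fed forward and subtracted from the next input. The coefficient and the estimator are assumptions, not the disclosed circuit.

```python
def feed_forward_filter(samples, alpha=0.5):
    """Sample-by-sample sketch of the feed-forward structure."""
    lpf_state = 0.0
    noise_est = 0.0
    out = []
    for x in samples:
        pre = x - noise_est                    # first combining circuit
        lpf_state += alpha * (pre - lpf_state) # one-pole low-pass filter
        y = lpf_state                          # filtering-device output
        noise_est = pre - y                    # noise estimation circuit
        out.append(y)
    return out

ys = feed_forward_filter([1.0, 1.0, 1.0, 1.0])
```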
Profile virtual conference attendees to enhance meeting interactions
An embodiment for profiling virtual conference attendees to enhance meeting interactions is provided. The embodiment may include receiving permission from one or more users to monitor one or more IoT devices for data associated with each user. The embodiment may also include selecting an initial weight for the IoT devices, and analyzing the data for a trigger event. In response to determining that at least one of the one or more users intends to participate, the embodiment may add that user to a dynamic participation queue, and may assign a time interval during which each user added to the dynamic participation queue is able to participate. The embodiment may also include creating a dynamic profile for each user in attendance.
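A minimal sketch of the dynamic participation queue, assuming a per-user weight scales the assigned speaking interval; the class name, the base interval, and the weight rule are hypothetical.

```python
from collections import deque

class ParticipationQueue:
    """FIFO queue of users detected (via their IoT data) as intending
    to participate, each with an assigned time interval in seconds."""

    def __init__(self, base_interval=60):
        self.base_interval = base_interval
        self.queue = deque()

    def add(self, user, weight=1.0):
        # Weight scales the user's allotted interval.
        self.queue.append((user, self.base_interval * weight))

    def next_speaker(self):
        return self.queue.popleft()

pq = ParticipationQueue()
pq.add("alice", weight=1.5)
pq.add("bob")
speaker, seconds = pq.next_speaker()
```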
Cross-layer and cross-access-technology traffic splitting and retransmission mechanisms
The present disclosure relates to Multi-Access Management Services (MAMS), a programmable framework that provides mechanisms for the flexible selection of network paths in a multi-access (MX) communication environment based on an application's needs. The disclosure discusses dynamic traffic splitting mechanisms, cross-layer and cross-access-technology traffic splitting and retransmission mechanisms, multi-link packet reordering mechanisms, and link-aware packet duplication mechanisms. Generic Multi-Access (GMA) data plane functions are also integrated into the MAMS framework.
Communication apparatus, control method, and storage medium
When a communication apparatus is to transmit data to another communication apparatus but communication via a communication unit included in the other apparatus is not currently performable, the apparatus decides whether or not to transmit a frame that causes the other apparatus to transition to a state in which communication via that communication unit is performable. The decision is based on the amount of data accumulated in the transmission queue in which the data is stored.
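The queue-based decision can be sketched as a simple threshold test: wake the peer's communication unit only when enough data has accumulated to justify it. The threshold value is an illustrative assumption.

```python
def should_send_wake_frame(queued_bytes, threshold=4096):
    """Decide whether to send the frame that transitions the peer's
    communication unit into a performable state, based on how much
    data is waiting in the transmission queue."""
    return queued_bytes >= threshold

# A nearly empty queue does not justify waking the peer; a full one does.
wake_small = should_send_wake_frame(queued_bytes=100)
wake_large = should_send_wake_frame(queued_bytes=10_000)
```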
Online task dispatching and scheduling system and method thereof
The present disclosure relates to an online task dispatching and scheduling system. The system includes an end device; an access point (AP) configured to receive a task from the end device; and one or more edge servers configured to receive the task from the AP, each edge server including a task waiting queue, a processing pool, a task completion queue, and a scheduler. The AP further includes a dispatcher that uses Online Learning (OL) to determine the real-time state of network conditions and server loads, and the AP selects, from the one or more edge servers, a target edge server to which the task is to be dispatched. The scheduler uses Deep Reinforcement Learning (DRL) to generate a task scheduling policy for the one or more edge servers.
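A toy epsilon-greedy dispatcher in the spirit of the OL component: it keeps running-average latency estimates per edge server, learned from feedback, and dispatches new tasks to the best estimate. The DRL scheduler is omitted, and the class name, epsilon-greedy rule, and numbers are all illustrative assumptions.

```python
import random

class OLDispatcher:
    def __init__(self, servers, epsilon=0.1):
        self.epsilon = epsilon
        self.estimates = {s: 0.0 for s in servers}  # estimated latency
        self.counts = {s: 0 for s in servers}

    def dispatch(self):
        if random.random() < self.epsilon:            # explore
            return random.choice(list(self.estimates))
        return min(self.estimates, key=self.estimates.get)  # exploit

    def feedback(self, server, latency):
        """Running-average update of the server's observed latency."""
        self.counts[server] += 1
        n = self.counts[server]
        self.estimates[server] += (latency - self.estimates[server]) / n

d = OLDispatcher(["edge1", "edge2"], epsilon=0.0)  # pure exploitation
d.feedback("edge1", 20.0)
d.feedback("edge2", 5.0)
target = d.dispatch()  # picks the server with the lowest estimated latency
```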