Patent classifications
H04L47/2433
Surgical network determination of prioritization of communication, interaction, or processing based on system or device needs
A surgical hub within a surgical hub network may include a controller having a processor, in which the controller may determine a priority of a communication, an interaction, or a processing of information based on a requirement of a device communicating with the hub. The device may be a smart surgical device. The requirement of the surgical device may comprise data processed by a device component of an associated system. The controller may prioritize communication of the data processed by the device component of the associated system with the surgical device. A network of surgical hubs may include a plurality of surgical hubs. Each hub may have one of a plurality of controllers, in which a first of the plurality of controllers is configured to distribute an execution of a process and data used by the process among at least a subset of the plurality of surgical hubs.
Network management system, management device, relay device, method, and program
The present invention provides a network management technique that enables efficient acquisition of various data transmitted from terminals while reducing network congestion. In a network management system that includes a plurality of terminals that can transmit data, a plurality of destination devices that perform respective predetermined processes based on the data, a relay device arranged between them, and a management device communicable with the terminals, the destination devices, and the relay device, the management device receives a request regarding required data from the destination devices and, in response to the request, instructs the relay device to integrate data items that are to be relayed to individual destination devices. Upon receipt of the instruction, the relay device identifies and integrates the data items that are to be relayed to the individual destination devices from among the data items transmitted from the terminals and, based on the instruction, transmits the integrated data to the corresponding destination devices.
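The relay-side integration step this abstract describes might be sketched as follows. The data shapes here (an instruction mapping destinations to required item kinds, and a flat list of terminal data items) are assumptions for illustration, not the patent's actual encoding:

```python
from collections import defaultdict

def integrate_for_destinations(instruction, data_items):
    """Group terminal data items per destination so each destination
    receives one integrated transmission instead of one per item.

    instruction: dict mapping destination -> set of required item kinds
    data_items:  list of (kind, payload) tuples sent by terminals
    Returns a dict mapping destination -> list of integrated payloads.
    """
    integrated = defaultdict(list)
    for kind, payload in data_items:
        for dest, wanted_kinds in instruction.items():
            if kind in wanted_kinds:
                integrated[dest].append(payload)
    return dict(integrated)
```

Batching per destination is what reduces congestion here: the relay sends one combined message per destination rather than forwarding every terminal item individually.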
Data transmission method, computing device, network device, and data transmission system
A data transmission method implemented by a network device, where the data transmission method includes receiving a first data packet sent by a transmit end, buffering the first data packet to a low-priority queue when the first data packet is sent by the transmit end during a first round-trip time (RTT) of a data transmission phase between the transmit end and a receive end, receiving a second data packet from the transmit end, buffering the second data packet to a high-priority queue when the second data packet is not sent by the transmit end during the first RTT, and forwarding the second data packet in the high-priority queue before the first data packet in the low-priority queue.
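The two-queue discipline in this abstract can be sketched in a few lines. How the network device decides that a packet belongs to a flow's first RTT is outside the abstract's scope, so this hypothetical sketch simply takes that decision as an input flag:

```python
from collections import deque

class FirstRttScheduler:
    """Sketch of the abstract's queueing rule: packets sent during a
    flow's first RTT are buffered at low priority, later packets at
    high priority, and the high-priority queue is always drained first.
    """
    def __init__(self):
        self.low = deque()
        self.high = deque()

    def enqueue(self, packet, in_first_rtt):
        # First-RTT traffic is deprioritized relative to established flows.
        (self.low if in_first_rtt else self.high).append(packet)

    def dequeue(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None
```

The effect is that a later-arriving packet from an established flow is forwarded ahead of an earlier-arriving first-RTT packet, matching the abstract's final clause.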
Communication path control device, communication path control method, and communication path control system
Provided is a communication path control device that transmits path information for controlling a path for transmitting data to a plurality of communication devices which are connected by a wired path and through which data addressed to the plurality of communication devices is sequentially forwarded.
Methods and apparatus for low latency operation in user space networking
Methods and apparatus for low latency operation in user space networking architectures. In one embodiment, an apparatus configured to enable low latency data transfer is disclosed. The exemplary embodiment provides a multiplexer that allocates a fixed portion of network bandwidth for low latency traffic. Low latency traffic is routed without the benefit of general-purpose packet processing. In one embodiment, network extensions for low latency operations are described. Specifically, an agent is described that enables low latency applications to negotiate for low latency access. In one embodiment, mechanisms for providing channel event notifications are described. Channel event notifications enable corrective action/packet processing by the low latency application. In one embodiment, mechanisms for providing interface advisory information are described. Interface advisory information may be provided asynchronously to assist in low latency operation.
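The fixed-bandwidth allocation for low-latency traffic might look roughly like the following per-tick scheduler. The class name, the `reserve` fraction, and modeling packets as byte sizes are all assumptions of this sketch, not details from the disclosure:

```python
class LowLatencyMux:
    """Sketch: reserve a fixed share of per-tick link capacity for the
    low-latency class; bulk traffic is limited to the remainder."""

    def __init__(self, capacity_bytes_per_tick, reserve=0.25):
        self.capacity = capacity_bytes_per_tick
        self.reserve = int(capacity_bytes_per_tick * reserve)

    def schedule_tick(self, ll_queue, bulk_queue):
        """Consume packet sizes from the two queues for one tick and
        return the (class, size) pairs sent."""
        sent = []
        ll_budget = self.reserve
        bulk_budget = self.capacity - self.reserve
        while ll_queue and ll_queue[0] <= ll_budget:
            ll_budget -= ll_queue[0]
            sent.append(("ll", ll_queue.pop(0)))
        while bulk_queue and bulk_queue[0] <= bulk_budget:
            bulk_budget -= bulk_queue[0]
            sent.append(("bulk", bulk_queue.pop(0)))
        return sent
```

The point of the fixed reserve is that bulk traffic can never starve the low-latency class, regardless of queue depth.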
Supervised quality of service change deduction
Systems and methods are provided for monitoring traffic flow using a trained machine learning (ML) model. For example, to maintain a stable level of connectivity and network experience for the devices in a network, the ML model can monitor the data flow of each device and label each data flow based on its behavior and properties. The system can take various actions based on the labeled data flow, including generating an alert, automatically changing network settings, or otherwise adjusting the data flow from the device.
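The label-then-act loop this abstract describes might be sketched as below. The trivial rule standing in for the trained classifier, the feature names (`pps`, `mean_size`), and the label/action vocabulary are all illustrative assumptions:

```python
def label_flow(features):
    """Stand-in for the trained ML model: a hand-written rule over two
    flow features. A real system would call a fitted classifier here."""
    if features["pps"] > 1000:
        return "bursty"
    if features["mean_size"] < 100:
        return "chatty"
    return "normal"

def handle_flow(features):
    """Map the predicted label to one of the abstract's actions:
    adjust the flow, generate an alert, or leave it alone."""
    label = label_flow(features)
    if label == "bursty":
        return ("rate_limit", label)   # automatically change network settings
    if label == "chatty":
        return ("alert", label)        # generate an alert
    return ("pass", label)
```

Separating labeling from action dispatch is the useful structural point: the classifier can be retrained or swapped without touching the action policy.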
Unification sublayer for multi-connection communication
Managing Internet Protocol (IP) flows to produce multi-connection communication is contemplated, such as, but not necessarily limited to, managing a single IP flow simultaneously through disparate physical layers (PHYs). A unification sublayer may be configured as a logical interface between a network layer and a data link layer and/or the disparate PHYs to facilitate partitioning of the IP packets included in the IP flow.
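The partitioning step might be sketched as a weighted round-robin split of one flow's packets across PHYs. The weight encoding and the omission of far-end reassembly/reordering are simplifications of this sketch:

```python
def partition_flow(packets, phys):
    """Split one IP flow's packets across disparate PHYs.

    packets: ordered list of packets in the flow
    phys:    dict mapping PHY name -> relative capacity weight
    Returns a dict mapping PHY name -> packets assigned to it.
    """
    assignment = {phy: [] for phy in phys}
    # Expand weights into a repeating round-robin schedule,
    # e.g. {"wifi": 2, "lte": 1} -> ["wifi", "wifi", "lte"].
    schedule = [phy for phy, weight in phys.items() for _ in range(weight)]
    for i, pkt in enumerate(packets):
        assignment[schedule[i % len(schedule)]].append(pkt)
    return assignment
```

In the patent's framing this logic would live in the unification sublayer, below the network layer, so IP above sees a single flow.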
Congestion Control Method and Apparatus
A congestion control method and apparatus are disclosed. The method includes, when usage information of a buffer in a device satisfies a preset first condition, determining at least one first queue from the buffer and performing congestion control on the at least one first queue, where the first condition includes a first threshold corresponding to the usage information of the buffer, and the at least one first queue is a queue, among a plurality of queues in the buffer, whose queue delay is greater than or equal to a queue delay threshold or whose queue length is greater than or equal to a queue length threshold. In this way, the device triggers congestion control based on the buffer status.
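The two-level trigger in this abstract (overall buffer usage first, then per-queue delay or length) can be sketched directly. The parameter names and the dict representation of queues are assumptions for illustration:

```python
def queues_to_control(buffer_usage, usage_threshold, queues,
                      delay_threshold, length_threshold):
    """Return the queues to apply congestion control to.

    Control is triggered only when overall buffer usage reaches its
    threshold (the abstract's 'first condition'); it then selects each
    queue whose delay OR length reaches its own threshold.

    queues: dict mapping queue name -> (queue_delay, queue_length)
    """
    if buffer_usage < usage_threshold:
        return []  # first condition not satisfied; no control
    return [name for name, (delay, length) in queues.items()
            if delay >= delay_threshold or length >= length_threshold]
```

Gating on overall buffer usage first avoids penalizing individual long queues while the device as a whole still has headroom.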
Mitigation of network attacks by prioritizing network traffic
A computer method and system for prioritizing network traffic flow to a protected computer network. Network traffic flowing from one or more external hosts to the protected computer network is intercepted, and an intercepted data packet is dropped if forwarding it to the protected network would cause the bandwidth of network traffic flow to the protected network to exceed a configured overall traffic bandwidth threshold value associated with the protected network. If not dropped, the intercepted data packet is analyzed to determine a classification type for the packet based upon prescribed criteria, wherein each classification type has an assigned classification bandwidth threshold value that is less than the overall traffic bandwidth threshold value for the protected network. The intercepted data packet is then dropped if forwarding it would cause the bandwidth of traffic flow to the protected network to exceed the bandwidth threshold value assigned to its determined classification type.
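The two-tier drop decision described above might be sketched as a simple policer. Tracking usage as bytes within one measurement interval, and taking the classification as a caller-supplied argument (the abstract's "prescribed criteria"), are simplifications of this sketch:

```python
class TwoTierPolicer:
    """Sketch of the abstract's two-tier check: a packet must fit under
    the overall bandwidth threshold AND under the threshold assigned to
    its classification type, or it is dropped."""

    def __init__(self, overall_limit, class_limits):
        self.overall_limit = overall_limit   # bytes per interval, all traffic
        self.class_limits = class_limits     # bytes per interval, per class
        self.overall_used = 0
        self.class_used = {cls: 0 for cls in class_limits}

    def admit(self, size, cls):
        """Return True to forward the packet, False to drop it."""
        if self.overall_used + size > self.overall_limit:
            return False  # exceeds overall traffic bandwidth threshold
        if self.class_used[cls] + size > self.class_limits[cls]:
            return False  # exceeds this classification's threshold
        self.overall_used += size
        self.class_used[cls] += size
        return True
```

Because each class threshold is below the overall threshold, no single traffic class (for example, one an attacker floods) can consume the protected network's entire bandwidth budget.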