H04L12/70

Bi-directional negotiation for dynamic data chunking

Systems and techniques for bi-directional negotiation for dynamic data chunking are described herein. A set of available features for a memory subsystem may be identified, the set of available features including latency of buffer locations of the memory subsystem. An indication of a first latency requirement of a first data consumer and a second latency requirement of a second data consumer may be obtained. A first buffer location of the memory subsystem for a data stream based on the first latency requirement may be negotiated with the first data consumer. A second buffer location of the memory subsystem for the data stream based on the second latency requirement may be negotiated with the second data consumer. An indication of the first buffer location may be provided to the first data consumer and an indication of the second buffer location may be provided to the second data consumer.
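The negotiation described above can be sketched as a simple matching step: each consumer states a latency requirement, and the memory subsystem offers the slowest buffer location that still satisfies it, keeping faster locations free for stricter consumers. This is a minimal illustration; the function and buffer names are hypothetical, not taken from the patent.

```python
def negotiate_buffer(available_buffers, latency_requirement):
    """Pick a buffer location whose latency satisfies the requirement.

    available_buffers: dict mapping location name -> access latency (ns).
    Returns the highest-latency location that still meets the requirement,
    leaving faster buffers available for stricter consumers.
    """
    candidates = {loc: lat for loc, lat in available_buffers.items()
                  if lat <= latency_requirement}
    if not candidates:
        return None  # negotiation fails; consumer must relax its requirement
    return max(candidates, key=candidates.get)

# Assumed feature set: buffer locations and their latencies in nanoseconds.
features = {"sram": 10, "dram": 100, "cxl": 400}
first = negotiate_buffer(features, 50)    # strict consumer -> fast buffer
second = negotiate_buffer(features, 500)  # relaxed consumer -> slow buffer
```

Under this placement policy the strict consumer is assigned the on-chip buffer while the relaxed consumer is pushed to the slowest location that still meets its bound.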

Network control device, communication system, network control method, program, and recording medium

Disclosed is a network control device for performing control of a system including a plurality of terminals and a plurality of gateway devices that are coupled to a predetermined network, the network control device including a selection unit that selects a first gateway device used by a first terminal based on communication quality between the first terminal and the plurality of gateway devices, a tunnel establishing unit that connects, via tunnels, the first terminal and each of the other terminals that use the first gateway device, and a path control unit that performs path control such that, when a second gateway device used by a second terminal that is a communication destination of the first terminal is the same as the first gateway device, traffic from the first terminal to the second terminal is routed through a tunnel between the first terminal and the second terminal, and when the second gateway device is different from the first gateway device, traffic from the first terminal to the second terminal is routed through the predetermined network.
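The selection and path-control rules above reduce to two small decisions: pick the gateway with the best measured quality, and tunnel directly only when both endpoints share that gateway. A minimal sketch, with hypothetical names and a quality score where higher is better:

```python
def select_gateway(qualities):
    """Selection unit: choose the gateway with the best measured quality.

    qualities: dict mapping gateway name -> quality score (higher is better).
    """
    return max(qualities, key=qualities.get)

def route_traffic(first_gateway, second_gateway):
    """Path control unit: tunnel directly when source and destination
    terminals use the same gateway, otherwise traverse the network."""
    if first_gateway == second_gateway:
        return "tunnel"
    return "predetermined_network"

# Assumed per-terminal quality measurements toward each gateway.
qualities = {"gw-a": 0.9, "gw-b": 0.6}
gw1 = select_gateway(qualities)
```

In practice the quality metric might combine loss and round-trip time; the scalar score here is just a stand-in.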

Cognitive automation-based engine to propagate data across systems

Aspects of the disclosure relate to cognitive automation-based engine processing to propagate data across multiple systems via a private network to overcome technical system, resource consumption, and architecture limitations. Data to be propagated can be manually input or extracted from a digital file. The data can be parsed by analyzing for correct syntax, normalized into first through sixth normal forms, segmented into packets for efficient data transmission, validated to ensure that the data satisfies defined formats and input criteria, and distributed into a plurality of data stores coupled to the private network, thereby propagating data without repetitive manual entry. The data may also be enriched by, for example, correcting for any errors or linking with other potentially related data. Based on data enrichment, recommendations of additional target(s) for propagation of data can be identified. Reports may also be generated. The cognitive automation may be performed in real-time to expedite processing.
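The parse/validate/distribute stages of the pipeline above can be sketched in a few lines. This is a hypothetical minimal pipeline; a real engine would also normalize, segment into packets, and enrich the data as described.

```python
def propagate(record, stores):
    """Parse, validate, and distribute a record to each coupled data store.

    record: dict of raw field -> raw value strings.
    stores: list of data-store sinks (lists here, for illustration).
    """
    # Parse: strip stray whitespace from keys and values.
    parsed = {k.strip(): v.strip() for k, v in record.items()}
    # Validate: enforce a defined input criterion (an 'id' field, assumed).
    if not parsed.get("id"):
        raise ValueError("record missing required 'id' field")
    # Distribute: write one copy into every coupled data store.
    for store in stores:
        store.append(dict(parsed))
    return parsed

store_a, store_b = [], []
propagate({" id ": " 42 ", "name": " alice "}, [store_a, store_b])
```

A single call fans the cleaned record out to every store, which is what removes the repetitive manual entry the abstract refers to.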

Adjustable receive queue for processing packets in a network device

A network device, such as a Network Interface Card (NIC), can have a receive queue (RxQ) that changes size based on whether the network device is in a normal operating mode or in a maintenance mode. In a normal operating mode, it is desirable that the receive queue has a smaller number of free buffers, to increase cache locality in a processor subsystem. However, there can be known periods when the receive queue can be overloaded. During a maintenance period, it is desirable that the receive queue absorbs a large burst of network packets while the processor subsystem is not processing the packets. A solution is to maintain a receive queue at a smaller percentage of its maximum during the normal operation mode, but then before or upon entering the maintenance mode, expand the receive queue to a larger size.
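The sizing policy described above can be captured as a small state machine: the queue exposes a fraction of its maximum capacity in normal mode and its full capacity in maintenance mode. The class, the 25% normal-mode fraction, and the method names are illustrative assumptions, not values from the patent.

```python
class ReceiveQueue:
    """Sketch of a mode-dependent receive queue (hypothetical sizing policy)."""

    def __init__(self, max_buffers, normal_fraction=0.25):
        self.max_buffers = max_buffers
        self.normal_fraction = normal_fraction
        self.mode = "normal"

    @property
    def size(self):
        # Small in normal mode to improve cache locality in the processor
        # subsystem; full size to absorb packet bursts during maintenance.
        if self.mode == "maintenance":
            return self.max_buffers
        return int(self.max_buffers * self.normal_fraction)

    def enter_maintenance(self):
        self.mode = "maintenance"  # expand before/upon entering maintenance

    def exit_maintenance(self):
        self.mode = "normal"       # shrink back for cache locality

rxq = ReceiveQueue(max_buffers=1024)
normal_size = rxq.size
rxq.enter_maintenance()
maintenance_size = rxq.size
```

The key point is that resizing happens on the mode transition, before the processor subsystem stops draining packets.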

Technologies for balancing throughput across input ports of a multi-stage network switch

Technologies for balancing throughput across input ports include a network switch. The network switch is to generate, for an arbiter unit in a first stage of a hierarchy of stages of arbiter units, turn data indicative of a set of turns in which to transfer packet data from devices connected to input ports of the arbiter unit. The network switch is also to transfer, with the arbiter unit, the packet data from the devices in the set of turns. Additionally, the network switch is to determine weight data indicative of the number of turns represented in the set and provide the weight data from the arbiter unit in the first stage to another arbiter unit in a subsequent stage to cause the arbiter unit in the subsequent stage to allocate a number of turns for the transfer of the packet data from the arbiter unit in the first stage.
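The turn-and-weight mechanism above can be sketched as follows: a first-stage arbiter builds a turn schedule from the demand at its input ports, and the size of that schedule becomes the weight passed to the next stage. The turn policy (one turn per queued packet) is an assumption for illustration.

```python
def first_stage_turns(port_demands):
    """Build turn data for a first-stage arbiter: one turn per packet
    queued at each input port (hypothetical turn policy)."""
    turns = []
    for port, demand in port_demands.items():
        turns.extend([port] * demand)
    return turns

def weight_for_next_stage(turns):
    """Weight data provided upward: the number of turns in the set, so a
    subsequent-stage arbiter can allocate a proportional number of turns."""
    return len(turns)

# Two first-stage arbiters feeding one second-stage arbiter.
stage1a = first_stage_turns({"p0": 2, "p1": 1})
stage1b = first_stage_turns({"p2": 4})
weights = [weight_for_next_stage(stage1a), weight_for_next_stage(stage1b)]
```

Because the second stage sees weights of 3 and 4 rather than treating both upstream arbiters equally, input ports with more queued traffic are not starved, which is the throughput-balancing effect the abstract describes.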

Television key phrase detection

Images of key phrases or hashtags appear in televised feeds. Image processing techniques, such as feature-locating algorithms, can be used to locate these key-phrase images within frames of the feed. Character recognition algorithms can then be used to generate a list of candidate key phrases for the key phrase in image format. However, identification of a key phrase in image format is not completely accurate with conventional methods. Social media content items associated with the televised feed are therefore used to filter the list of candidate key phrases. Using known information about the televised feed as well as about key phrases in text format in the social media content items, candidate key phrases in the list can be scored and a final candidate key phrase selected based on the scores.
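The scoring step above can be sketched by counting how often each OCR candidate appears verbatim in social media items tied to the feed; the real system would use richer known information, so this substring-count score is a deliberate simplification with made-up data.

```python
def score_candidates(candidates, social_items):
    """Score each OCR candidate by its occurrence count across social
    media content items associated with the televised feed."""
    return {c: sum(c.lower() in item.lower() for item in social_items)
            for c in candidates}

def select_key_phrase(candidates, social_items):
    """Pick the final candidate key phrase with the highest score."""
    scores = score_candidates(candidates, social_items)
    return max(scores, key=scores.get)

# Typical OCR confusions of one on-screen hashtag (hypothetical data).
ocr_candidates = ["#GameDay", "#GarneDay", "#GameDey"]
social = ["Watching #GameDay live!", "#GameDay is on", "best #GameDay ever"]
best = select_key_phrase(ocr_candidates, social)
```

The misread variants score zero against the social stream, so the correct hashtag wins even though OCR alone could not distinguish them.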

DNS (domain name server)-based application-aware routing on SD-WAN (software-defined wide area network)
11057304 · 2021-07-06

It is determined whether a received network data packet comprises a DNS packet or a non-DNS packet. Responsive to the network data packet comprising a DNS packet, a second-level domain database is updated in real time by storing the destination IP address in association with the second-level domain, the second-level domain being associated with an application. Responsive to the network data packet comprising a non-DNS packet, the application associated with the packet is identified by parsing the packet to identify a second-level domain from the destination IP address and searching the second-level domain database for the application associated with that second-level domain. A network policy for enforcement on the identified application is then identified, and the network data packet is routed in accordance with the network policy for the application.
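The two packet paths above can be sketched with a shared table: DNS responses populate a destination-IP-to-application mapping, and subsequent data packets consult it to pick a policy. Names, addresses, and policies here are hypothetical.

```python
# Second-level domain database: destination IP -> (SLD, application).
sld_db = {}

def handle_dns_packet(dst_ip, sld, app_for_sld):
    """DNS path: update the database in real time as resolutions arrive."""
    sld_db[dst_ip] = (sld, app_for_sld)

def handle_data_packet(dst_ip, policies):
    """Non-DNS path: identify the application from the destination IP and
    return the network policy to enforce for that application."""
    entry = sld_db.get(dst_ip)
    if entry is None:
        return "default_route"  # unknown destination: no app-aware policy
    _, app = entry
    return policies.get(app, "default_route")

handle_dns_packet("203.0.113.7", "example.com", "video")
route = handle_data_packet("203.0.113.7", {"video": "low_latency_path"})
```

Keeping the database updated from live DNS traffic is what lets a later flow to a bare IP address still be classified by application.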

Data sharing system and data sharing method therefor

A data sharing system may include a storage module and at least two processing modules. The at least two processing modules may share the storage module and the at least two processing modules communicate to implement data sharing. A data sharing method for the data sharing system is provided. According to the disclosure, a storage communication overhead may be reduced, and a data access delay may be effectively reduced.

Flow-based local egress in a multisite datacenter

A method for a hypervisor to implement flow-based local egress in a multisite datacenter is disclosed. The method comprises determining whether a first data packet of a first data flow has been received. If the first data packet has been received, the hypervisor determines a MAC address of a first local gateway in a first site of the multisite datacenter that communicated the first data packet, and stores the MAC address of the first local gateway together with a 5-tuple for the first data flow. Upon determining that a response for the first data flow has been received, the hypervisor determines whether the response includes the MAC address of the first local gateway. If the response instead includes a MAC address of another local gateway, the hypervisor replaces, in the response, that MAC address with the MAC address of the first local gateway.
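The mechanism above amounts to a per-flow table keyed by 5-tuple plus a MAC rewrite on the response path; a minimal sketch with hypothetical addresses:

```python
# Flow table: 5-tuple -> MAC of the local gateway that carried the flow.
flow_table = {}

def record_first_packet(five_tuple, gateway_mac):
    """On a flow's first packet, remember which local gateway delivered it."""
    flow_table[five_tuple] = gateway_mac

def fix_response_mac(five_tuple, response_mac):
    """If the response carries another gateway's MAC, rewrite it so the
    reply egresses through the gateway local to the first site."""
    local_mac = flow_table.get(five_tuple)
    if local_mac is not None and response_mac != local_mac:
        return local_mac
    return response_mac

# 5-tuple: (src IP, dst IP, protocol, src port, dst port).
flow = ("10.0.0.1", "10.0.1.2", 6, 40000, 443)
record_first_packet(flow, "aa:aa:aa:aa:aa:01")
rewritten = fix_response_mac(flow, "bb:bb:bb:bb:bb:02")
```

The rewrite keeps return traffic pinned to the same site's gateway, avoiding an asymmetric path through a remote site.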

Switch with controlled queuing for multi-host endpoints

Communication apparatus includes multiple ports configured to serve as ingress and egress ports, such that the ingress ports receive packets from a packet data network for forwarding to respective egress ports. The ports include an egress port configured for connection to a network interface controller (NIC) serving multiple physical computing units, which have different, respective destination addresses and are connected to the NIC by different, respective communication channels. Control and queuing logic is configured to queue the packets that are received from the packet data network for forwarding to the multiple physical computing units in different, respective queues according to the destination addresses, and to arbitrate among the queues so as to convey the packets from the queues via the same egress port to the NIC, for distribution to the multiple physical computing units over the respective communication channels.
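The control and queuing logic above can be sketched as per-destination queues drained by a round-robin arbiter toward one egress port. The class and a simple round-robin policy are assumptions for illustration; the patent does not specify the arbitration scheme.

```python
from collections import deque
from itertools import cycle

class EgressScheduler:
    """Queue packets per destination computing unit and arbitrate among the
    queues toward a single egress port (hypothetical round-robin policy)."""

    def __init__(self, destinations):
        self.queues = {d: deque() for d in destinations}
        self._order = cycle(destinations)

    def enqueue(self, dst, packet):
        """Queue a received packet by its destination address."""
        self.queues[dst].append(packet)

    def drain(self):
        """Convey packets in arbitrated order until all queues are empty."""
        out = []
        while any(self.queues.values()):
            dst = next(self._order)
            if self.queues[dst]:
                out.append(self.queues[dst].popleft())
        return out

sched = EgressScheduler(["host0", "host1"])
sched.enqueue("host0", "a0")
sched.enqueue("host0", "a1")
sched.enqueue("host1", "b0")
order = sched.drain()
```

Even though host0 has more queued traffic, arbitration interleaves host1's packet, so one busy computing unit cannot monopolize the shared egress port to the NIC.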