H04L41/083

Systems and methods for jointly optimizing WAN and LAN network communications

Described are systems and methods for jointly optimizing Wide Area Network (WAN) and Local Area Network (LAN) network communications. In one embodiment, a management device communicatively interfaced with a WAN and a LAN includes a collection module to collect LAN information from the LAN and WAN information from the WAN; an analysis module to jointly analyze the collected WAN information and the collected LAN information to identify an operational condition; and an implementation module to initiate a management event responsive to the operational condition being identified. In one embodiment, the management event includes generating and transmitting a diagnostics report responsive to a fault being identified. The management device may further generate and execute instructions to remedy the diagnosed fault.
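The collect/analyze/act pipeline in this abstract can be sketched as follows. This is an illustrative toy, not the patented implementation; the class name, metric fields, and the 200 ms fault threshold are all assumptions made for the example.

```python
# Sketch of a management device that jointly analyzes WAN and LAN metrics
# and initiates a management event (a diagnostics report) on a fault.
from dataclasses import dataclass


@dataclass
class ManagementDevice:
    fault_latency_ms: float = 200.0  # hypothetical fault threshold

    def collect(self):
        # In practice these would be polled from WAN and LAN equipment;
        # hard-coded sample values stand in here.
        wan_info = {"latency_ms": 250.0, "loss_pct": 0.5}
        lan_info = {"latency_ms": 2.0, "loss_pct": 0.0}
        return wan_info, lan_info

    def analyze(self, wan_info, lan_info):
        # Joint analysis: end-to-end latency across both segments.
        total = wan_info["latency_ms"] + lan_info["latency_ms"]
        return {"fault": total > self.fault_latency_ms, "total_ms": total}

    def initiate_event(self, condition):
        # Management event: emit a diagnostics report when a fault is found.
        if condition["fault"]:
            return f"DIAGNOSTICS: end-to-end latency {condition['total_ms']} ms exceeds threshold"
        return "OK"


device = ManagementDevice()
wan, lan = device.collect()
report = device.initiate_event(device.analyze(wan, lan))
print(report)
```

The point of the joint analysis is that neither segment alone need breach the threshold; only the combined WAN+LAN view reveals the fault.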

Detecting and quantifying latency components in accessing cloud services

A latency processing system detects traffic at a cloud service end point and analyzes packets in the detected traffic to identify a network configuration of a client that is accessing the cloud service. Latency components corresponding to different parts of the network configuration are identified and quantified. A recommendation engine is controlled to generate and surface an output indicative of recommendations for reducing network latency.
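One way to picture the quantification step is to attribute each part of the client's network path its share of the observed latency and surface the dominant segment. A minimal sketch, with invented segment names and numbers:

```python
# Attribute each path segment its fraction of total observed latency,
# then recommend acting on a segment that dominates.
def quantify_latency(components):
    total = sum(components.values())
    return {name: ms / total for name, ms in components.items()}


def recommend(shares, threshold=0.5):
    worst = max(shares, key=shares.get)
    if shares[worst] >= threshold:
        return f"Reduce latency in segment: {worst}"
    return "No dominant latency component"


# Hypothetical measurements (ms) for one client's path to the cloud service.
shares = quantify_latency({"last_mile": 80.0, "backbone": 15.0, "cloud_ingress": 5.0})
print(recommend(shares))  # the last mile carries 80% of the latency
```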

Low delay networks for interactive applications including gaming

Aspects of the subject disclosure may include, for example, determining delay data for respective network delays through a communication network from respective gaming stations to a gaming provider database for implementing a multi-user online video game, determining available cloud nodes in a potential path in the communication network between a respective gaming station and the gaming provider database, determining potential network configurations for data communication between the respective gaming stations and the gaming provider database using the available cloud nodes and available communication links, identifying an optimum configuration for data communication between the respective gaming stations and the gaming provider database, wherein the optimum configuration provides a minimum fair delay for the respective gaming stations, and configuring the communication network according to the optimum configuration. Other embodiments are disclosed.
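One plausible reading of "minimum fair delay" is min-max fairness: choose the configuration whose worst-off gaming station sees the smallest delay. The sketch below assumes that reading; the configuration names and delay figures are invented.

```python
# Pick the configuration minimizing the maximum per-station delay
# (min-max fairness over gaming stations).
def optimum_configuration(configs):
    # configs: mapping of configuration name -> per-station delays (ms)
    return min(configs, key=lambda name: max(configs[name]))


configs = {
    "via_node_A": [30, 45, 50],  # best average, but one station sees 50 ms
    "via_node_B": [40, 42, 41],  # worst-case station only sees 42 ms
}
best = optimum_configuration(configs)
print(best)
```

Note that `via_node_A` has the lower mean delay; the fairness criterion still prefers `via_node_B` because no station is left with the 50 ms worst case.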

Method for enhancing throughput in blockchain network

In a Hyperledger-based blockchain network system, the latency and throughput desired by a user are maintained by adjusting the block size, the endorsement policy, the number of channels, and the number of allocated vCPUs to match the latency and throughput required by a specific Hyperledger-based network.
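The tuning described above can be sketched as a search over the adjustable parameters until a latency/throughput target holds. The performance model below (linear in block size, channels, and vCPUs) is purely illustrative, not a real Hyperledger model:

```python
# Search block size / channel count / vCPU allocation for a combination
# that satisfies a latency ceiling and a throughput floor.
from itertools import product


def meets_target(block_size, channels, vcpus, max_latency_s, min_tps):
    # Toy model: larger blocks add batching latency; channels and vCPUs
    # scale throughput (vCPU gains assumed to saturate at 4).
    latency = 0.5 + block_size / 100.0
    tps = block_size * channels * min(vcpus, 4)
    return latency <= max_latency_s and tps >= min_tps


def find_config(max_latency_s=2.0, min_tps=500):
    for bs, ch, vc in product([50, 100, 150], [1, 2, 4], [2, 4]):
        if meets_target(bs, ch, vc, max_latency_s, min_tps):
            return {"block_size": bs, "channels": ch, "vcpus": vc}
    return None


print(find_config())
```

In a real deployment the model would be replaced by measurements of the actual network, but the control loop (propose parameters, check targets, keep the first satisfying combination) stays the same.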

Quarantine for cloud-based services
11652790 · 2023-05-16

A quarantine system could be disposed between an outer firewall and an inner firewall. The quarantine system may include persistent storage containing mappings between computing devices disposed within the inner firewall and data sources disposed outside the outer firewall. The quarantine system may include one or more processors configured to perform operations that include requesting and receiving, based on the mappings, a software-related update from a data source, the software-related update being targeted for deployment on the computing devices. The operations may also include assigning the software-related update for review by a group of one or more agents authorized to approve or reject the software-related update. The operations may also include receiving an indication that the software-related update has been approved by the one or more agents and, responsive to receiving the indication, transmitting, based on the mappings, the software-related update to a recipient device within the inner firewall.
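The fetch → review → transmit flow can be sketched as a small state machine. Class, method, and mapping names here are assumptions for illustration, not the patented design:

```python
# Quarantine flow: fetch an update on behalf of an inner device, hold it
# pending agent review, and forward it only after approval.
class Quarantine:
    def __init__(self, mappings):
        self.mappings = mappings  # inner device -> outside data source
        self.pending = {}

    def fetch_update(self, device):
        source = self.mappings[device]
        update = f"update-from-{source}"
        self.pending[device] = {"update": update, "approved": False}
        return update

    def review(self, device, approve):
        # One or more authorized agents approve or reject the update.
        self.pending[device]["approved"] = approve

    def transmit(self, device):
        entry = self.pending[device]
        if not entry["approved"]:
            raise PermissionError("update not approved by agents")
        return (device, entry["update"])


q = Quarantine({"server-1": "vendor.example"})
q.fetch_update("server-1")
q.review("server-1", approve=True)
print(q.transmit("server-1"))
```

The mappings play a double role, as in the abstract: they tell the quarantine system where to fetch from and, after approval, where to deliver to.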

COMPUTER NETWORK PLANNING

The disclosure is directed to a network planning tool for planning a topology of a computer network, e.g., for provisioning network capacity. The network planning tool evaluates various factors, e.g., demand projections between a pair of nodes, existing network topology, existing circuits, failure scenarios, and other constraints, and generates a set of circuits that satisfies various demand projections. The set of circuits is robust under failure scenarios and minimizes latency, costs and/or power consumption involved in satisfying the demand projections. The tool assigns each of the circuits to a spectral resource of a physical communication link, e.g., a wavelength of a fiber optic cable, using which it can propagate data traffic between the pair of nodes.
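The last step, assigning each circuit a spectral resource, can be illustrated with a simple greedy wavelength assignment: give each circuit the lowest-indexed wavelength not already in use on any fiber link it traverses. This is a stand-in example, not the tool's actual algorithm:

```python
# Greedy wavelength assignment: each circuit gets the first wavelength
# free on every physical link along its path.
def assign_wavelengths(circuits):
    # circuits: mapping of circuit name -> set of links traversed
    used = {}        # link -> set of wavelengths already assigned on it
    assignment = {}  # circuit -> wavelength index
    for name, links in circuits.items():
        w = 0
        while any(w in used.get(link, set()) for link in links):
            w += 1
        assignment[name] = w
        for link in links:
            used.setdefault(link, set()).add(w)
    return assignment


plan = assign_wavelengths({
    "c1": {"A-B", "B-C"},
    "c2": {"B-C", "C-D"},  # shares link B-C with c1, needs a new wavelength
    "c3": {"A-B"},         # A-B already carries wavelength 0
})
print(plan)
```

Real planners add the failure-scenario and cost/latency/power objectives from the abstract on top of this feasibility core.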

SYNCHRONIZATION OF ARTIFICIAL INTELLIGENCE BASED MICROSERVICES

Aspects of the subject disclosure may include, for example, receiving network-related information associated with a first RAN that includes a first RIC, obtaining, from an artificial intelligence (AI) model synchronization system associated with a second RAN, data relating to an AI model deployed by a second RIC of the second RAN, determining, based on the data relating to the AI model and the network-related information associated with the first RAN, that the AI model can be leveraged by the first RAN to improve network performance of the first RAN, performing synchronization with the AI model synchronization system to obtain the AI model, responsive to the determining that the AI model can be leveraged by the first RAN to improve the network performance of the first RAN, and causing the first RIC to deploy the AI model in the first RAN after the performing the synchronization. Other embodiments are disclosed.
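The leverage decision can be pictured as comparing the first RAN's network profile against metadata describing the conditions the neighbor's AI model was trained under. The similarity rule, field names, and model id below are invented for illustration:

```python
# Decide whether a neighboring RIC's AI model can be leveraged locally,
# then "synchronize" (fetch by id) and deploy it.
def can_leverage(local_profile, model_metadata, tolerance=0.2):
    # Leverage only if the RANs' traffic conditions are close enough.
    delta = abs(local_profile["load"] - model_metadata["trained_on_load"])
    return delta <= tolerance


def sync_and_deploy(local_profile, model_metadata):
    if not can_leverage(local_profile, model_metadata):
        return None  # keep the locally trained model instead
    model_id = model_metadata["model_id"]
    return f"deployed:{model_id}"


result = sync_and_deploy(
    {"load": 0.6},
    {"trained_on_load": 0.55, "model_id": "traffic-steering-v2"},
)
print(result)
```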

SYSTEM AND METHOD FOR DATA FLOW OPTIMIZATION
20170366398 · 2017-12-21

The disclosure provides a networked computing system, comprising at least one network communication interface connected to at least one network, the at least one network communication interface being configured to receive data from and to send data to the at least one network, and a control component, wherein the control component is adapted to configure routes and to provide current input parameters on the routes, wherein an application component is configured to output predicted configuration parameters for future route configurations based on predictions, and wherein the routes are configured based on the predicted configuration parameters output by the application component.
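The predict-then-configure loop can be sketched as below. The moving-average predictor, the 80%-of-capacity rule, and the route names are illustrative assumptions standing in for the application and control components:

```python
# "Application component": predict a future route parameter (expected load)
# from recent samples. "Control component": configure the route accordingly.
def predict_load(samples, window=3):
    recent = samples[-window:]
    return sum(recent) / len(recent)


def configure_route(predicted_load, capacity=100.0):
    # Pre-provision a wider route when predicted load nears capacity.
    return "wide-route" if predicted_load > 0.8 * capacity else "default-route"


samples = [60.0, 75.0, 85.0, 95.0]  # hypothetical recent load measurements
pred = predict_load(samples)        # (75 + 85 + 95) / 3 = 85.0
print(configure_route(pred))
```

The design point is that routes are reconfigured ahead of the predicted demand rather than reactively after congestion is observed.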