H04L12/919

Scalable edge computing

There is disclosed in one example a communication apparatus, including: a telemetry interface; a management interface; and an edge gateway configured to: identify diverted traffic, wherein the diverted traffic includes traffic to be serviced by an edge microcloud configured to provide a plurality of services; receive telemetry via the telemetry interface; use the telemetry to anticipate a future per-service demand within the edge microcloud; compute a scale for a resource to meet the future per-service demand; and operate the management interface to instruct the edge microcloud to perform the scale before the future per-service demand occurs.
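
The gateway's anticipate-then-scale loop can be sketched as below. This is a minimal illustration, not the claimed implementation: the linear extrapolation over telemetry samples and the names `forecast_demand` and `compute_scale` are assumptions for the sake of the example.

```python
import math

def forecast_demand(samples, horizon=1):
    """Anticipate future per-service demand (requests/sec) from recent
    telemetry samples using a naive linear extrapolation."""
    if not samples:
        return 0.0
    if len(samples) < 2:
        return samples[-1]
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return max(0.0, samples[-1] + slope * horizon)

def compute_scale(predicted_rps, capacity_per_replica):
    """Compute the replica count needed to meet the predicted demand,
    so the scale instruction can be issued before the demand occurs."""
    return max(1, math.ceil(predicted_rps / capacity_per_replica))
```

With samples of 100, 120, and 140 requests/sec, the forecast is 160, which at 50 requests/sec per replica yields a scale of 4 replicas.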

METHODS, SYSTEMS AND APPARATUSES FOR OPTIMIZING TIME-TRIGGERED ETHERNET (TTE) NETWORK SCHEDULING BY USING A DIRECTIONAL SEARCH FOR BIN SELECTION

Methods, systems and apparatuses for scheduling a plurality of Virtual Links (VLs) in a Time-Triggered Ethernet (TTE) network by a network scheduling and configuration tool (NST) by establishing a collection of bins that corresponds to the smallest harmonic period allowing full network traversal of a time-triggered traffic packet in the network for determining available bin sets for sending the VL data by the NST; processing by a scheduling algorithm the VLs to be sent in accordance with a strict order comprising scheduling all the highest rate VLs prior to scheduling lower rate VLs; and scheduling reservations for the VLs in bins by tracking the time available in each bin and optionally spreading the VL data across available bin sets by sorting a list of available bins by ascending bin utilization and by specifying a left-to-right or right-to-left sort order when searching for available bins based on a position in the timeline between the transmitter and receiver end stations.
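
The core bin-selection step can be sketched as follows. This is an illustrative simplification under assumed data shapes (each VL as a name, rate, duration, and search direction), not the NST's actual algorithm: candidate bins are sorted by ascending utilization, ties are broken left-to-right or right-to-left per the directional search, and higher-rate VLs are scheduled strictly before lower-rate ones.

```python
def pick_bin(bins, capacity, needed, direction="ltr"):
    """Sort bins by ascending utilization; break ties left-to-right or
    right-to-left depending on the VL's position on the timeline."""
    order = sorted(range(len(bins)),
                   key=lambda i: (bins[i], i if direction == "ltr" else -i))
    for i in order:
        if capacity - bins[i] >= needed:
            return i
    return None

def schedule(vls, num_bins, capacity):
    """vls: list of (name, rate, duration, direction). Strict order:
    all higher-rate VLs are scheduled before lower-rate VLs."""
    bins = [0.0] * num_bins          # time already reserved in each bin
    placement = {}
    for name, rate, duration, direction in sorted(vls, key=lambda v: -v[1]):
        b = pick_bin(bins, capacity, duration, direction)
        if b is None:
            raise RuntimeError(f"no bin has room for VL {name}")
        bins[b] += duration
        placement[name] = b
    return placement, bins
```

Sorting by utilization spreads VL data across the available bin set; the directional tie-break steers same-utilization choices toward one end of the timeline.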

DISTRIBUTION FROM MULTIPLE SERVERS TO MULTIPLE NODES
20210042048 · 2021-02-11

The embodiments of the present disclosure disclose a computer-implemented method, a system, and a computer program product for distributing data on multiple servers to multiple nodes in a cluster. In the method, each of M servers is instructed to divide data thereon into N data segments. M and N are integers greater than one. The M servers are instructed to send NM data segments on the M servers to N nodes in a cluster concurrently. For each of the M servers, the N data segments are sent respectively to the N nodes. When any given node in the cluster receives a data piece of a data segment from a server of the M servers, the given node is instructed to transmit the received data piece to remaining nodes in the cluster other than the given node.
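
A minimal in-memory sketch of the distribution pattern follows; the helper names and the dictionary-based bookkeeping are assumptions for illustration, not the disclosed implementation. Each of M servers divides its data into N segments, sends segment i to node i, and every node then relays its received pieces to the remaining nodes.

```python
def split(data, n):
    """Divide a server's data into n roughly equal segments."""
    k, r = divmod(len(data), n)
    segs, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < r else 0)
        segs.append(data[start:end])
        start = end
    return segs

def distribute(server_data, n_nodes):
    """server_data: one blob per server (M servers). Each server sends its
    i-th segment to node i; nodes then relay pieces to the other nodes."""
    received = [dict() for _ in range(n_nodes)]
    for s, data in enumerate(server_data):
        for i, seg in enumerate(split(data, n_nodes)):
            received[i][(s, i)] = seg          # direct server-to-node send
    full = [dict(r) for r in received]
    for i, pieces in enumerate(received):      # peer relay step
        for j in range(n_nodes):
            if j != i:
                full[j].update(pieces)
    return full
```

After the relay step every node holds all M×N segments, so each can reassemble the complete data set locally.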

Virtual workspace experience visualization and optimization
10924590 · 2021-02-16

A computer system to track and enhance performance of a virtual workspace system is provided. The computer system receives requests to profile phases of a distributed process executed by hosts coupled to one another via a network. Each phase includes operations executed by processes hosted by the hosts. Each phase either starts with receipt of a request via a user interface of a virtualization client or ends with provision of a response to the request via the user interface. The computer system identifies event log entries that each include an identifier of an event marking a start or an end of one of the operations, constructs a performance profile based on the event log entries, and transmits the performance profile to the user interface.
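
The profile-construction step can be sketched as below. The tuple shape of a log entry and the function name are assumptions for the example: pair each operation's start and end events and record the elapsed time as that operation's contribution to the phase's performance profile.

```python
def build_profile(log_entries):
    """log_entries: iterable of (timestamp, op_id, 'start'|'end').
    Returns {op_id: duration} for operations with matched start/end events."""
    starts, profile = {}, {}
    for ts, op, kind in sorted(log_entries):   # process in time order
        if kind == "start":
            starts[op] = ts
        elif op in starts:
            profile[op] = ts - starts.pop(op)
    return profile
```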

Multiplexed resource allocation architecture

A device configured to receive a data set and instructions for processing the data set from a network device. The device is further configured to identify data flow paths within the instructions and to parse the data set into data segments that correspond with the identified data flow paths. The device is further configured to generate an instruction segment for each data flow path by associating each data segment with a corresponding subset of commands for each data flow path, to assign each instruction segment to a resource unit, and to generate control information with instructions for combining processed data segments from the resource units. The device is further configured to receive processed data segments from the resource units, to generate the processed data set by combining the received processed data segments, and to output the processed data set to the network device.
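
A toy version of the segment-and-recombine flow follows; representing a data flow path as a (predicate, transform) pair and a resource unit as a callable are illustrative assumptions, not the disclosed architecture. The data set is parsed into segments matching each flow path, each segment is handed with its command subset to a resource unit, and control information records the order for recombination.

```python
def run_pipeline(data_set, flow_paths, units):
    """flow_paths: list of (predicate, transform) pairs, one per flow path.
    units: resource units, each a callable taking (transform, segment)."""
    segments = [[x for x in data_set if pred(x)] for pred, _ in flow_paths]
    control, processed = [], []
    for i, (seg, (_, transform)) in enumerate(zip(segments, flow_paths)):
        unit = units[i % len(units)]       # assign instruction segment to a unit
        processed.append(unit(transform, seg))
        control.append(i)                  # control info for recombination
    # combine processed segments in control-info order
    return [y for i in control for y in processed[i]]
```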

Gateway address spoofing for alternate network utilization

Methods and systems for alternate network utilization are provided. Exemplary methods include: broadcasting by a hub an unsolicited announcement over a network to a plurality of devices coupled to a router, the unsolicited announcement being configured to cause at least some of the plurality of devices to store in a table a link-layer address of the hub as a link-layer address of the router; receiving by the hub a data packet from a device of the plurality of devices; and selectively directing by the hub the received packet to a first broadband network or a second broadband network using predetermined criteria.
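
The mechanism can be simulated in a few lines; the dictionary-per-device model of a link-layer table and the two helper names are assumptions for illustration only. The unsolicited announcement causes each device to map the router's address to the hub's link-layer address, after which the hub steers packets to either broadband network by its predetermined criteria.

```python
def gratuitous_announce(device_tables, router_ip, hub_mac):
    """Simulate the hub's unsolicited announcement: each device overwrites
    the router's entry with the hub's link-layer address."""
    for table in device_tables:
        table[router_ip] = hub_mac

def select_uplink(packet, criteria, default="primary"):
    """Direct a packet to the first broadband network whose criterion
    matches; criteria is a list of (network_name, predicate) pairs."""
    for network, pred in criteria:
        if pred(packet):
            return network
    return default
```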

Systems and methods for providing predicted web page resources

Systems, methods, and non-transitory computer-readable media can receive a web page request associated with a user ID from a client computing device. A set of gatekeeper conditions is determined based on the user ID. A set of predicted resources is determined based on the set of gatekeeper conditions. An initial package of resources is transmitted to the client computing device in response to the web page request. The initial package of resources comprises the set of predicted resources.
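
A minimal sketch of the gatekeeper-to-predicted-resources flow follows. The CRC-based user bucketing, the `RESOURCE_MAP` table, and every resource name here are invented for illustration; only the overall shape (user ID → gatekeeper conditions → predicted resources → initial package) comes from the abstract.

```python
import zlib

def gatekeeper_conditions(user_id, rollout):
    """rollout: {condition_name: percent of users enabled} (hypothetical
    gating scheme). Buckets users deterministically by a CRC of the ID."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return {name for name, pct in rollout.items() if bucket < pct}

RESOURCE_MAP = {  # condition -> resources it predicts (illustrative)
    "new_feed": ["feed.js", "feed.css"],
    "dark_mode": ["dark.css"],
}

def initial_package(user_id, rollout, base=("app.js",)):
    """Build the initial package: base resources plus predicted resources."""
    conds = gatekeeper_conditions(user_id, rollout)
    predicted = [r for c in sorted(conds) for r in RESOURCE_MAP.get(c, [])]
    return list(base) + predicted
```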

Method, apparatus, and computer program product for processing computing task

Implementations of the present disclosure relate to a method, apparatus and computer program product for processing a computing task. According to one example implementation of the present disclosure, there is provided a method for processing a computing task, comprising: in response to usage of multiple computing resources indicating that at least one part of computing resources among the multiple computing resources is used, determining a direction of a communication ring between the at least one part of computing resources; in response to receiving a request for processing the computing task, determining the number of computing resources associated with the request; and based on the usage and the direction of the communication ring, selecting from the multiple computing resources a sequence of computing resources which satisfies the number to process the computing task. Other example implementations include an apparatus for processing a computing task and a computer program product thereof.
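
The selection step can be sketched as below under simplifying assumptions: usage is a boolean list arranged on the ring, the direction is +1 or -1, and the walk starts from the first used resource. This is an illustrative reading of the abstract, not the disclosed algorithm.

```python
def select_resources(usage, count, direction=1):
    """usage: list of bools on a communication ring (True = in use).
    Walk the ring in the given direction from the first used resource
    and return the first run of `count` consecutive free resources."""
    n = len(usage)
    start = next((i for i, u in enumerate(usage) if u), 0)
    run = []
    for step in range(1, n + 1):
        i = (start + direction * step) % n
        if usage[i]:
            run = []                       # a used resource breaks the run
        else:
            run.append(i)
            if len(run) == count:
                return run
    return None                            # no sequence satisfies the count
```

Walking in the ring's existing direction keeps the selected sequence contiguous with the resources already in use, which is one plausible reason to track that direction.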

METHODS, APPARATUS AND SYSTEMS TO SHARE COMPUTE RESOURCES AMONG EDGE COMPUTE NODES USING AN OVERLAY MANAGER

Methods, systems and apparatus disclosed herein create an overlay of nodes to permit the nodes to engage in a peer-to-peer resource bidding process. An example apparatus at an edge of a network includes a first configurer to configure a network interface of a first node of the network in a first configuration, the first configuration to permit the first node to participate in a peer-to-peer resource bidding process with a plurality of other nodes of the network. The apparatus further includes a second configurer to configure the network interface of the first node of the network in a second configuration, the second configuration to prevent the first node from participation in the peer-to-peer resource bidding process.
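
The first/second configurer pattern and a trivial bidding round can be sketched as follows; the `Node` class, the spare-capacity bid, and the highest-bid-wins rule are assumptions for illustration, not the disclosed overlay manager.

```python
class Node:
    """An edge compute node with some spare capacity to offer peers."""
    def __init__(self, name, spare):
        self.name, self.spare, self.bidding = name, spare, False

def configure(node, participate):
    """First configurer (participate=True) permits the node to join the
    peer-to-peer bidding process; second configurer (False) prevents it."""
    node.bidding = participate

def run_auction(nodes, required):
    """Among participating peers with enough spare capacity, the node
    offering the most spare capacity wins the workload."""
    bidders = [n for n in nodes if n.bidding and n.spare >= required]
    return max(bidders, key=lambda n: n.spare, default=None)
```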