H04L69/30

LAYER FOUR OPTIMIZATION FOR A VIRTUAL NETWORK DEFINED OVER PUBLIC CLOUD

Some embodiments establish for an entity a virtual network over several public clouds of several public cloud providers and/or in several regions. In some embodiments, the virtual network is an overlay network that spans across several public clouds to interconnect one or more private networks (e.g., networks within branches, divisions, or departments of the entity or their associated datacenters), mobile users, SaaS (Software as a Service) provider machines, and other web applications of the entity. The virtual network in some embodiments can be configured to optimize the routing of the entity's data messages to their destinations for best end-to-end performance, reliability, and security, while trying to minimize the routing of this traffic through the Internet. Also, the virtual network in some embodiments can be configured to optimize the layer 4 processing of the data message flows passing through the network.
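The abstract above describes selecting overlay routes across public clouds for best end-to-end performance. The following is a minimal illustrative sketch, not the patent's actual method: candidate paths are scored by measured latency and loss, and the lowest-cost path is chosen. The `Path` class, the candidate datacenter names, and the scoring weights are all assumptions for illustration.

```python
# Hypothetical sketch: choosing the best overlay path between two endpoints
# across public clouds, scoring each candidate by measured latency and loss.
from dataclasses import dataclass, field

@dataclass
class Path:
    hops: list = field(default_factory=list)  # cloud datacenters traversed
    latency_ms: float = 0.0                   # measured end-to-end latency
    loss_pct: float = 0.0                     # measured packet loss

def pick_best_path(paths):
    # Lower cost is better: weight latency directly and penalize loss heavily.
    return min(paths, key=lambda p: p.latency_ms + 100.0 * p.loss_pct)

candidates = [
    Path(["aws-us-east", "gcp-eu-west"], latency_ms=95.0, loss_pct=0.1),
    Path(["azure-us-east", "azure-eu-west"], latency_ms=88.0, loss_pct=0.5),
]
best = pick_best_path(candidates)
```

Here the first path wins despite higher latency because the second path's loss penalty dominates, illustrating that "best end-to-end performance" need not mean lowest latency alone.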

Smart device control method and apparatus
10291713 · 2019-05-14

Methods and apparatuses are provided for controlling smart devices in a smart home. In the method, a control device receives a control instruction sent by an instruction sending device and including a set of working mode information, the set including working mode information for at least one smart device type. The control device sets a working mode of a target smart device connected to the control device via a local area network (LAN) based on the set of working mode information, where the set of working mode information is pre-stored in the instruction sending device.
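The control flow above can be sketched as follows. This is a minimal illustration, not the patented implementation: the instruction maps device types to modes, and the control device applies each mode to matching devices on its LAN. The device types and mode names are invented for the example.

```python
# Hypothetical sketch: a control device applies a pre-stored set of
# working-mode entries (one per smart-device type) to devices on its LAN.
def apply_working_modes(mode_set, lan_devices):
    """mode_set: {device_type: mode}; lan_devices: list of device dicts."""
    for device in lan_devices:
        mode = mode_set.get(device["type"])
        if mode is not None:          # only types named in the instruction
            device["mode"] = mode
    return lan_devices

instruction = {"thermostat": "eco", "lamp": "night"}
devices = [{"type": "thermostat", "mode": "normal"},
           {"type": "camera", "mode": "armed"}]
apply_working_modes(instruction, devices)
```

Devices whose type is absent from the instruction (the camera here) keep their current mode.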

UNIFIED REAL-TIME AND NON-REAL-TIME DATA PLANE
20190141000 · 2019-05-09

According to some embodiments, systems and methods are provided, comprising at least one asset; a computer programmed with a data share module for the asset, the data share module for controlling data flow in the asset; the computer including a data share processor and a memory in communication with the data share processor, the memory storing the data share module and additional program instructions, wherein the data share processor is operative with the data share module and additional program instructions to perform functions as follows: receiving a message from a source at the data share module; determining, via the data share module, whether the source is one of a non-real-time domain and a real-time domain of the asset; determining, via the data share module, when a destination is able to respond to the message, wherein the destination is one of the non-real-time domain and the real-time domain, and wherein the destination is different from the source; transmitting, via the data share module, the message directly to the destination when the destination is able to respond to the message; receiving a response to the message; and generating an operating response of the asset based on the response. Numerous other aspects are provided.
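The forwarding decision described above can be sketched as follows. This is an illustrative assumption, not the patented data share module: a message from one domain is delivered to the opposite domain only when that destination reports that it can respond. The domain labels and the readiness callback are invented names.

```python
# Hypothetical sketch: route a message between the real-time ("rt") and
# non-real-time ("non_rt") domains of an asset; the destination is always
# the domain opposite the source, and delivery requires destination readiness.
def forward(message, source_domain, destination_ready):
    assert source_domain in ("rt", "non_rt")
    destination = "non_rt" if source_domain == "rt" else "rt"
    if destination_ready(destination):
        return {"delivered_to": destination, "message": message}
    return None  # destination cannot respond; message is not delivered

result = forward({"cmd": "read_sensor"}, "non_rt", lambda d: d == "rt")
```

A response received from the destination would then drive the asset's operating response, which is omitted here.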

Real-Time Adaptive Receive Side Scaling Key Selection

Selecting a receive side scaling (RSS) key is provided. It is determined whether a defined time interval has expired. In response to determining that the defined time interval has expired, it is determined whether one or more keys in a set of randomly generated candidate RSS keys have a higher packet distribution score than an active RSS key. In response to determining that one or more keys in the set of randomly generated candidate RSS keys have a higher packet distribution score than the active RSS key, an RSS key having a highest packet distribution score is selected from the one or more keys in the set of randomly generated candidate RSS keys that have a higher packet distribution score than the active RSS key. The RSS key having the highest packet distribution score is used to distribute incoming network packets across a plurality of processors.

ACCOUNTING AND ENFORCING NON-PROCESS EXECUTION BY CONTAINER-BASED SOFTWARE RECEIVING DATA OVER A NETWORK

Utilizing a computing device to determine and enforce limits on cloud computing containers receiving data over a network. A determination is made of total container time remaining available for a first container to execute in a computing environment. Processor packet receipt time is determined for receiving and processing of a packet or a batch of packets via a network stack associated with the computing device. An updated total container time remaining is calculated for the first container accounting for the processor packet receipt time. The updated total container time remaining is enforced by dropping a subsequent packet or batch of packets received at the network stack if the updated total container time remaining is insufficient.
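The accounting loop above can be sketched as follows. This is an illustrative assumption, not the patented enforcement mechanism: processor time spent receiving packets for a container is charged against its remaining budget, and a packet is dropped when the budget cannot cover its receipt cost. The field names and time values are invented.

```python
# Hypothetical sketch: charge per-packet receipt time to a container's
# remaining execution budget; drop packets once the budget is insufficient.
def handle_packet(container, receipt_cost_us):
    """Charge receipt_cost_us of packet-processing time to the container."""
    if container["remaining_us"] < receipt_cost_us:
        container["dropped"] += 1
        return False                   # packet dropped, no time charged
    container["remaining_us"] -= receipt_cost_us
    return True                        # packet accepted and accounted

c = {"remaining_us": 250, "dropped": 0}
accepted = [handle_packet(c, 100) for _ in range(4)]
# 250 -> 150 -> 50; the third and fourth packets (100 us each) are dropped.
```

The same function would handle a batch of packets by passing the batch's aggregate receipt cost.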

RPS support for NFV by system call bypass
10230608 · 2019-03-12

A system for Receive Packet Steering (RPS) support for Network Function Virtualization (NFV) by system call bypass includes a memory, a plurality of central processing units (CPUs) in communication with the memory, an operating system, and a Network Interface Controller (NIC) including a receive queue. The system also includes a driver thread and a plurality of forwarding threads. The driver thread handles the receive queue of the NIC. In an example, a first forwarding thread of the plurality of forwarding threads executes a system call. The first forwarding thread executes on a first CPU. The system call, when executed, executes a monitor instruction on the first CPU to monitor for updates to a designated memory location and checks a condition. Checking the condition includes reading the designated memory location and determining whether information in the designated memory location indicates that a new packet for the first forwarding thread has arrived.
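The condition check above can be sketched as follows. This is only an illustration of the logic: the monitor/mwait hardware instructions and the actual shared memory cannot be expressed in Python, so the designated memory location is modeled as a shared dict slot written by the driver thread. All names are assumptions.

```python
# Hypothetical sketch: a forwarding thread reads a designated memory location
# (here, a shared dict slot the driver thread writes) to learn whether a new
# packet has arrived for it, instead of being woken by a per-packet system call.
def check_condition(designated_slot, thread_id):
    """Return True if the slot indicates a new packet for this thread."""
    return designated_slot.get("new_packet_for") == thread_id

slot = {"new_packet_for": None}
idle = check_condition(slot, thread_id=1)   # no packet posted yet
slot["new_packet_for"] = 1                  # driver thread posts a packet
ready = check_condition(slot, thread_id=1)  # condition now satisfied
```

In the described system, a false condition would put the CPU into a monitored wait rather than busy-polling, and the driver thread's write to the location would wake it.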

Orchestrating resources in a multilayer computing environment by sending an orchestration message between layers

Software that generates a message containing operations for multiple layers in a multi-layer environment, by performing the following operations: (i) receiving an operation to perform across a multilayer computing environment; (ii) generating a message for performing the operation across the multilayer computing environment, wherein the message includes a plurality of layer portions that include sub-operation(s) of the operation, wherein each layer portion corresponds to a respective layer in the multilayer computing environment; and (iii) orchestrating performance of the operation by sending the message between layers in the multilayer computing environment according to a sequence for performing sub-operation(s) indicated in the message, wherein when the message is located at a respective layer, the layer performs a respective set of sub-operation(s) according to the respectively corresponding layer portion for the layer in the message.
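The message structure and orchestration above can be sketched as follows. This is an illustrative assumption, not the patented format: one message carries a per-layer portion of sub-operations plus a sequence, and each layer executes only its own portion as the message passes through. The layer names and sub-operations are invented.

```python
# Hypothetical sketch: orchestrate an operation by sending one message through
# the layers in its sequence; each layer performs the sub-operations in the
# layer portion that corresponds to it.
def orchestrate(message, layers):
    log = []
    for layer_name in message["sequence"]:
        portion = message["portions"][layer_name]   # this layer's sub-ops
        handler = layers[layer_name]
        for sub_op in portion:
            log.append(handler(sub_op))
    return log

message = {
    "sequence": ["infrastructure", "platform", "application"],
    "portions": {
        "infrastructure": ["allocate_vm"],
        "platform": ["deploy_runtime"],
        "application": ["start_service"],
    },
}
layers = {name: (lambda op, n=name: f"{n}:{op}")
          for name in ("infrastructure", "platform", "application")}
result = orchestrate(message, layers)
```

Because the sequence lives in the message itself, reordering the sub-operations requires changing only the message, not the layers.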

SYSTEMS AND METHODS FOR GENERATING, DEPLOYING, AND MANAGING DATA INFRASTRUCTURE STACKS

Generating, by a cloud-based system, a plurality of data infrastructure slices, each of the plurality of data infrastructure slices including a respective service; storing, by the cloud-based system, the plurality of data infrastructure slices; selecting, by the cloud-based system, at least two data infrastructure slices of the plurality of stored data infrastructure slices; generating, by the cloud-based system in response to the selection of the at least two data infrastructure slices of the plurality of data infrastructure slices, a data infrastructure stack comprising the selected stored data infrastructure slices, the data infrastructure stack capable of being executed in different third-party entity accounts of an on-demand cloud-computing platform; and deploying, by the cloud-based system, the data infrastructure stack in a particular third-party entity account of the on-demand cloud-computing platform.
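The slice-to-stack flow above can be sketched as follows. This is an illustrative assumption, not the actual system's API: stored slices each wrap a service, selecting at least two of them produces a stack, and the stack is then deployed into a chosen third-party account. All slice, service, and account names are invented.

```python
# Hypothetical sketch: build a data infrastructure stack from stored slices
# and deploy it into a third-party entity account.
stored_slices = {
    "ingest": {"service": "message-queue"},
    "storage": {"service": "object-store"},
    "query": {"service": "sql-engine"},
}

def generate_stack(slice_names):
    if len(slice_names) < 2:
        raise ValueError("a stack requires at least two slices")
    return {"slices": {n: stored_slices[n] for n in slice_names}}

def deploy(stack, account_id):
    # Stand-in for deployment into an on-demand cloud-computing platform account.
    return {"account": account_id, "deployed": sorted(stack["slices"])}

stack = generate_stack(["ingest", "storage"])
deployment = deploy(stack, account_id="acct-123")
```

The same stack object could be deployed into different accounts by calling `deploy` again with another `account_id`, matching the "executable in different third-party entity accounts" property.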

Data transmission method and apparatus for terminal
10178051 · 2019-01-08

Embodiments of the present invention disclose a data transmission method and apparatus for a terminal. The terminal exchanges data of an application with a server through a first port by using a first access node; when one port in a second port set is in an enabled state, the terminal accesses one access node in a candidate access node set through the enabled port in the second port set, and exchanges, based on Multipath TCP, the data of the application with the server by using the access node corresponding to the enabled port.
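The terminal-side logic above can be sketched as follows. This is an illustration of the selection rule only, not a Multipath TCP implementation (actual MPTCP subflow setup happens in the transport stack): traffic always runs over the first port and access node, and each enabled port in the second port set contributes an additional subflow through its corresponding candidate access node. The port numbers and access-node names are invented.

```python
# Hypothetical sketch: compute the terminal's active subflows. The first
# port/access node is always used; ports in the second port set add subflows
# through their candidate access nodes only while enabled.
def active_subflows(first_port, first_node, second_ports, candidates):
    """second_ports: {port: enabled?}; candidates: {port: access_node}."""
    subflows = [(first_port, first_node)]
    for port, enabled in sorted(second_ports.items()):
        if enabled:
            subflows.append((port, candidates[port]))
    return subflows

flows = active_subflows(
    first_port=1, first_node="lte",
    second_ports={2: True, 3: False},
    candidates={2: "wifi", 3: "ethernet"},
)
```

Disabling a second-set port simply removes its subflow on the next computation, while application data continues over the remaining subflows.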