Patent classifications
H04L49/20
Parallel data processing for service function chains spanning multiple servers
Systems, computer-readable media, and methods are disclosed for parallel data processing for service function chains with network functions spanning multiple servers. An example system includes: a first server hosting a first network function of a service function chain; a second server hosting a second network function of the service function chain; a mirror function deployed in a first switch to replicate a plurality of packets received by the system and to send respective copies of the plurality of packets to the first network function and to at least one of the second network function and a third network function of the service function chain; and a merge function deployed in a second switch to merge respective outputs of the first network function and the at least one of the second network function and the third network function.
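A minimal Python sketch of the mirror-and-merge idea described above, not the patented implementation: a mirror step hands a copy of each packet to two network functions that run in parallel (here, hypothetical firewall and monitoring functions), and a merge step combines their outputs into a single record.

```python
# Sketch only: function and field names are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

def nf_firewall(pkt):          # hypothetical first network function
    pkt["fw_ok"] = pkt.get("port") != 23
    return pkt

def nf_monitor(pkt):           # hypothetical second network function
    pkt["bytes_seen"] = len(pkt.get("payload", b""))
    return pkt

def mirror(pkt, nfs, pool):
    """Send a copy of the packet to every network function concurrently."""
    return [pool.submit(nf, dict(pkt)) for nf in nfs]

def merge(futures):
    """Combine the per-NF outputs back into a single packet record."""
    merged = {}
    for fut in futures:
        merged.update(fut.result())
    return merged

if __name__ == "__main__":
    packet = {"port": 80, "payload": b"hello"}
    with ThreadPoolExecutor(max_workers=2) as pool:
        out = merge(mirror(packet, [nf_firewall, nf_monitor], pool))
    print(out)
```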
Network traffic disruptions
Apparatus including a network switch which includes: switching circuitry to switch packets; packet drop decision circuitry to identify a packet that is to be dropped; packet duplication circuitry to duplicate the packet that is to be dropped, producing a first packet and a second packet; and packet exporting circuitry to export the first packet to a memory external to the switch via direct memory access (DMA). Related apparatus and methods are also provided.
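The drop-and-export flow can be illustrated with a short sketch; the drop rule and the list standing in for DMA-reachable external memory are assumptions for illustration, not the claimed hardware.

```python
# Sketch only: when the drop decision marks a packet, one duplicate is "exported"
# to external memory (modeled as a plain list) while the other copy is discarded.
EXTERNAL_MEMORY = []          # stand-in for memory reached via DMA

def should_drop(pkt):
    # Hypothetical drop rule: drop anything without a valid destination.
    return pkt.get("dst") is None

def switch_packet(pkt):
    if should_drop(pkt):
        first_copy, second_copy = dict(pkt), dict(pkt)   # duplicate the doomed packet
        EXTERNAL_MEMORY.append(first_copy)               # export one copy for analysis
        return None                                      # the other copy is dropped
    return pkt                                           # normal forwarding path

if __name__ == "__main__":
    switch_packet({"dst": None, "payload": b"lost"})
    print(len(EXTERNAL_MEMORY), "dropped packet(s) exported")
```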
Technologies for accelerating edge device workloads
Technologies for accelerating edge device workloads at a device edge network include a network computing device which includes a processor platform that includes at least one processor which supports a plurality of non-accelerated function-as-a-service (FaaS) operations and an accelerated platform that includes at least one accelerator which supports a plurality of accelerated FaaS (AFaaS) operations. The network computing device is configured to receive a request to perform a FaaS operation, determine whether the received request indicates that an AFaaS operation is to be performed on the received request, and identify compute requirements for the AFaaS operation to be performed. The network computing device is further configured to select an accelerator platform to perform the identified AFaaS operation and forward the received request to the selected accelerator platform to perform the identified AFaaS operation. Other embodiments are described and claimed.
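A rough sketch of the request-routing decision, under assumed request fields ("accelerated", "requirements") and an illustrative accelerator inventory; the real device's interfaces are not specified here.

```python
# Sketch only: accelerator names and capacities are illustrative assumptions.
ACCELERATORS = [
    {"name": "fpga0", "mem_mb": 4096, "busy": False},
    {"name": "gpu0",  "mem_mb": 8192, "busy": True},
]

def select_accelerator(requirements):
    """Pick the first idle accelerator that meets the memory requirement."""
    for acc in ACCELERATORS:
        if not acc["busy"] and acc["mem_mb"] >= requirements.get("mem_mb", 0):
            return acc
    return None

def handle_request(request):
    if not request.get("accelerated"):
        return "run on general-purpose processor (non-accelerated FaaS)"
    acc = select_accelerator(request.get("requirements", {}))
    if acc is None:
        return "no suitable accelerator; fall back to processor platform"
    return f"forwarded to {acc['name']} for AFaaS execution"

if __name__ == "__main__":
    print(handle_request({"accelerated": True, "requirements": {"mem_mb": 2048}}))
```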
NETWORK INTERFACE DEVICE
A network interface device has data path circuitry configured to cause data to be moved into and/or out of the network interface device. The data path circuitry comprises: first circuitry for providing one or more data processing operations; and interface circuitry supporting channels. The channels comprise: command channels receiving command information from a plurality of data path circuitry user instances; event channels providing respective command completion information to the plurality of data path circuitry user instances; and data channels providing the associated data.
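A minimal sketch of the three channel types, assuming simple queues: a user instance posts a command on a command channel, the data path moves the associated payload over a data channel, and a completion is reported back on an event channel. All names are illustrative.

```python
# Sketch only: queues stand in for the hardware channels.
from collections import deque

class Channels:
    def __init__(self):
        self.command = deque()   # command channel: user instance -> data path
        self.event = deque()     # event channel: data path -> user instance (completions)
        self.data = deque()      # data channel: carries the associated payloads

def user_submit(ch, cmd_id, payload):
    ch.data.append((cmd_id, payload))
    ch.command.append({"id": cmd_id, "op": "send"})

def data_path_step(ch):
    """One pass of the data path: consume a command, move its data, signal completion."""
    if not ch.command:
        return
    cmd = ch.command.popleft()
    cmd_id, payload = ch.data.popleft()
    assert cmd_id == cmd["id"]
    # ... the payload would be moved out of the device here ...
    ch.event.append({"id": cmd_id, "status": "complete", "bytes": len(payload)})

if __name__ == "__main__":
    ch = Channels()
    user_submit(ch, cmd_id=1, payload=b"frame")
    data_path_step(ch)
    print(ch.event.popleft())
```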
Service process control method and network device
A service process control method includes selecting, according to an execution policy of at least one service deployed on a network device, M data processors for processing a packet received by the network device, determining a processing sequence for the selected M data processors to process the packet, and invoking the selected M data processors to process the packet sequentially according to the processing sequence. An execution sequence for a data processor to process the packet is dynamically generated according to a policy set corresponding to the service.
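A small sketch of the selection-and-sequencing step, assuming a policy set is just an ordered list of processor names; the processors shown are illustrative, not part of the method.

```python
# Sketch only: toy processors that annotate the packet as they run.
def decrypt(pkt):   return pkt + ["decrypted"]
def inspect(pkt):   return pkt + ["inspected"]
def compress(pkt):  return pkt + ["compressed"]

PROCESSORS = {"decrypt": decrypt, "inspect": inspect, "compress": compress}

def select_and_order(policy_set):
    """Select the M data processors named by the policy and fix their sequence."""
    return [PROCESSORS[name] for name in policy_set]

def process_packet(pkt, policy_set):
    for proc in select_and_order(policy_set):   # invoke sequentially, in policy order
        pkt = proc(pkt)
    return pkt

if __name__ == "__main__":
    print(process_packet(["raw"], policy_set=["decrypt", "inspect"]))
```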
Prepopulation of caches
A system, process, and computer-readable medium for updating an application cache using a stream listening service are described. A stream listening service may monitor one or more data streams for content relating to a user. The stream listening service may forward the content along with time-to-live values to an application cache. A user may use an application to obtain information regarding the user's account, where the application obtains information from a data store and/or cached information from the application cache. By forwarding current account information obtained from listening to one or more streams to the application cache, the stream listening service reduces traffic at the data store while keeping the cached information current.
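A minimal sketch of cache prepopulation with time-to-live values, assuming an in-memory cache keyed by user ID and a simple event format; both are assumptions for illustration.

```python
# Sketch only: the stream is modeled as a list of events.
import time

class AppCache:
    def __init__(self):
        self._store = {}                       # key -> (value, expiry_timestamp)

    def put(self, key, value, ttl_seconds):
        self._store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        value, expiry = self._store.get(key, (None, 0))
        return value if time.time() < expiry else None

def stream_listener(events, cache):
    """Forward account updates seen on the stream into the cache, with a TTL."""
    for event in events:
        cache.put(event["user_id"], event["account"], ttl_seconds=event.get("ttl", 60))

if __name__ == "__main__":
    cache = AppCache()
    stream_listener([{"user_id": "u1", "account": {"balance": 42}, "ttl": 30}], cache)
    print(cache.get("u1"))                     # served from cache, data store untouched
```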
Congestion control method and related device
Embodiments of this application disclose a congestion control method and a related device. A Transmission Control Protocol offload engine (TOE) sends a congestion control notification to a central processing unit (CPU), where the congestion control notification instructs the CPU to obtain a target parameter, and the target parameter is used by the CPU to generate a congestion control calculation result. The TOE obtains the congestion control calculation result returned by the CPU, where the congestion control calculation result includes a congestion control window value. The TOE sends a packet based on the congestion control window value. In this application, the TOE and the CPU implement congestion control together. When a new congestion control algorithm emerges, the new algorithm may be applied without changing the structure of the TOE. Therefore, in this application, the upgrade period of the congestion control algorithm can be shortened, and flexibility can be improved.
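A sketch of the TOE/CPU division of labor, with plain function calls standing in for the notification path between them; the AIMD-style algorithm and parameter names are illustrative and swappable, which is the point of keeping the calculation on the CPU.

```python
# Sketch only: the TOE-to-CPU notification is modeled as a function call.
def cpu_congestion_control(target_params, algorithm):
    """CPU side: run a (replaceable) congestion control algorithm on the parameters."""
    return {"cwnd": algorithm(target_params)}

def simple_aimd(params):                       # hypothetical, swappable algorithm
    if params["loss_detected"]:
        return max(1, params["cwnd"] // 2)     # multiplicative decrease
    return params["cwnd"] + 1                  # additive increase

def toe_send(packets, cwnd):
    """TOE side: transmit at most cwnd packets based on the CPU's result."""
    return packets[:cwnd]

if __name__ == "__main__":
    params = {"cwnd": 8, "loss_detected": True}           # TOE notifies CPU with these
    result = cpu_congestion_control(params, simple_aimd)  # CPU returns calculation result
    print(toe_send(list(range(10)), result["cwnd"]))      # TOE sends within the window
```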