Patent classifications
H04L67/566
Method and system for packet processing according to a table lookup
The present invention provides a method for packet processing according to a lookup table, comprising: receiving a packet, wherein the packet includes a packet header, and the packet header includes control information; providing a lookup table with M entries, wherein each entry includes N conditions and a result/action indicator, and the M entries are sorted in a priority order; matching the control information against the N conditions; and applying, to the packet, the result/action indicator of the matched entry with the highest priority.
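The priority-ordered lookup described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the entry fields, the wildcard convention (`None` matches anything), and the default action are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    priority: int
    conditions: dict  # header field name -> required value (None = wildcard)
    action: str       # result/action indicator

def lookup(table, header):
    """Return the action of the highest-priority entry whose N
    conditions all match the packet header's control information."""
    for entry in sorted(table, key=lambda e: e.priority, reverse=True):
        if all(v is None or header.get(k) == v
               for k, v in entry.conditions.items()):
            return entry.action
    return "default-drop"  # assumed behavior when no entry matches

table = [
    Entry(10, {"dst_port": 80, "proto": "tcp"}, "forward-to-web"),
    Entry(5,  {"proto": "tcp", "dst_port": None}, "forward-default"),
]
print(lookup(table, {"proto": "tcp", "dst_port": 80}))  # forward-to-web
print(lookup(table, {"proto": "tcp", "dst_port": 22}))  # forward-default
```

Sorting by priority and returning on the first full match guarantees that, when several entries match, the result/action indicator of the highest-priority entry is the one applied.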
METHODS, SYSTEMS, AND COMPUTER READABLE MEDIA FOR GENERATING AND USING NETWORK FUNCTION (NF) SET OVERLOAD CONTROL INFORMATION (OCI) AND LOAD CONTROL INFORMATION (LCI) AT SERVICE COMMUNICATIONS PROXY (SCP)
A method for generating and using network function (NF) set load information includes, at a service communications proxy (SCP), receiving service based interface (SBI) requests from consumer NFs. The method further includes forwarding the SBI requests to producer NF instances that are members of an NF set. The method further includes receiving responses to the SBI requests from the producer NF instances. The method further includes determining NF instance load control information (LCI) for the producer NF instances using the responses. The method further includes computing, by the SCP and from the NF instance LCI for the producer NF instances, NF set LCI for the NF set. The method further includes communicating the NF set LCI for the NF set to at least one of the consumer NFs or using the NF set LCI for the NF set to select a producer NF instance within the NF set to provide a service for one of the consumer NFs.
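The aggregation and selection steps can be sketched as below. The aggregation rule (mean member load) and the selection policy (least-loaded instance) are assumptions for illustration; the abstract does not fix a specific formula.

```python
def compute_nf_set_lci(instance_lci):
    """Aggregate per-instance LCI into NF set LCI.
    instance_lci: dict mapping NF instance id -> load percentage (0-100)."""
    loads = list(instance_lci.values())
    return {"load": sum(loads) / len(loads), "members": len(loads)}

def select_producer(instance_lci):
    """Pick the producer NF instance reporting the lowest load."""
    return min(instance_lci, key=instance_lci.get)

# Toy per-instance LCI as the SCP might derive it from SBI responses.
lci = {"nf-a": 80, "nf-b": 35, "nf-c": 60}
set_lci = compute_nf_set_lci(lci)
print(set_lci["members"])    # 3
print(select_producer(lci))  # nf-b
```

The SCP would advertise `set_lci` toward consumer NFs (per the abstract, via SBI responses) or use the per-instance figures directly when routing a request to a member of the set.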
Compatibility-based feature management for data prep applications
A method executes at a computing device having a display, processors, and memory. The device displays a user interface for a data preparation application, including icons in a flow element palette, each icon representing a parameterized operation that can be inserted into data preparation flows in a flow pane of the user interface. A user places icons into the flow pane, visually defining flow elements for a flow that extracts data from selected data sources, transforms the extracted data, and exports the transformed data. The device retrieves the version number of a corresponding server application running on a server. Using a feature matrix, the device determines which flow elements are not supported by the data prep server application according to the version number. When there are flow elements not supported by the data prep server application running on the server, the device indicates this to the user.
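The feature-matrix check can be sketched as a version comparison. The flow element names and minimum-version cutoffs below are invented for illustration; only the mechanism (look up each flow element's requirement and compare it with the retrieved server version) comes from the abstract.

```python
# Hypothetical feature matrix: flow element -> minimum server version
# (major, minor) that supports it.
FEATURE_MATRIX = {
    "pivot":       (2019, 1),
    "union":       (2019, 3),
    "script_step": (2020, 2),
}

def unsupported_elements(flow_elements, server_version):
    """Return the flow elements whose minimum required version
    exceeds the version of the server application."""
    return [el for el in flow_elements
            if FEATURE_MATRIX.get(el, (0, 0)) > server_version]

flow = ["pivot", "script_step", "union"]
print(unsupported_elements(flow, (2019, 4)))  # ['script_step']
```

A device would run this check after retrieving the server's version number, then flag the returned elements in the flow pane to the user.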
Distributed data stream programming and processing
Techniques are described herein for distributed data stream programming and processing. The techniques include sending a request indicating one or more regions of program code to access a stream in a stream pool and to execute on a processing node in a pool of processing nodes. The techniques also include accessing the stream defined in the one or more regions of the program code to service the request. Thereafter, a processing node is selected for execution of the one or more regions of the program code, and the selected node executes one or more instances of those regions.
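A toy sketch of the request flow, under stated assumptions: the stream pool and node pool contents are invented, and the node-selection policy (least-busy) is an assumption, since the abstract does not specify one.

```python
# Stream pool: named streams; node pool: node id -> running instances.
stream_pool = {"clicks": iter([1, 2, 3])}
node_pool = {"node-1": 0, "node-2": 0}

def select_node():
    """Assumed policy: pick the least-busy node in the pool."""
    return min(node_pool, key=node_pool.get)

def execute_region(region_fn, stream_name):
    """Service a request: access the stream named by the code region,
    select a processing node, and run an instance of the region on it."""
    node = select_node()
    node_pool[node] += 1
    stream = stream_pool[stream_name]
    result = region_fn(stream)   # the region consumes the stream
    node_pool[node] -= 1
    return node, result

node, total = execute_region(lambda s: sum(s), "clicks")
print(node, total)  # node-1 6
```

In a real system the region would be shipped to the remote node and possibly replicated as multiple instances; here both are collapsed into a local function call to keep the sketch self-contained.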
LOW ENTROPY BROWSING HISTORY FOR ADS QUASI-PERSONALIZATION
The present disclosure provides systems and methods for content quasi-personalization or anonymized content retrieval via aggregated browsing history of a large plurality of devices, such as millions or billions of devices. A sparse matrix may be constructed from the aggregated browsing history, and dimensionally reduced, reducing entropy and providing anonymity for individual devices. Relevant content may be selected via quasi-personalized clusters representing similar browsing histories, without exposing individual device details to content providers.
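The matrix construction and dimensionality reduction can be sketched with a truncated SVD. The matrix contents are random stand-ins for aggregated browsing histories, and the rank `k` and the crude clustering step are assumptions; the point is only that devices end up represented by low-rank profiles rather than raw histories.

```python
import numpy as np

rng = np.random.default_rng(0)
# Sparse device-by-site matrix: 1 where a device visited a site.
history = (rng.random((100, 50)) < 0.05).astype(float)

k = 5  # assumed reduced dimensionality
U, s, Vt = np.linalg.svd(history, full_matrices=False)
embedding = U[:, :k] * s[:k]  # low-entropy device embeddings

# Crude stand-in for clustering: split on the first component.
cluster = (embedding[:, 0] > np.median(embedding[:, 0])).astype(int)
print(embedding.shape)  # (100, 5)
```

Because each device is reduced to `k` coordinates shared with many similar devices, content can be selected per cluster without exposing any individual device's browsing history to content providers.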
METHOD FOR DELEGATING THE DELIVERY OF CONTENT ITEMS TO A CACHE SERVER
The advent of end-to-end encryption systems has put an end to the use of “caching” methods which consisted of replicating and storing data flows relating to content items in a “cache” hosted on one or more intermediate devices. However, the disappearance of these “caching” solutions affects the management of the resources of different communication devices, particularly by bringing about an increase in the number of connections between communication devices that is necessary for delivering content items to the user terminals. Unlike known “caching” techniques in which the content itself is stored in at least one cache memory of a cache server, the method relies on storing in a cache server all of the messages exchanged between the origin server hosting the content and the cache server, leading to the delivery of the content to the cache server.
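The delegation idea, storing the message exchange rather than the content itself, can be sketched as below. The message formats and class interface are invented; only the store-the-exchange mechanism comes from the abstract.

```python
class DelegatingCache:
    """Cache server that records the messages exchanged with the
    origin server instead of caching content directly."""

    def __init__(self):
        self.exchanges = {}  # content id -> list of (request, response)

    def fetch(self, content_id, origin):
        """First delivery: log every message exchanged with the origin."""
        request = {"GET": content_id}
        response = origin(request)
        self.exchanges[content_id] = [(request, response)]
        return response

    def redeliver(self, content_id):
        """Later deliveries reuse the stored exchange, without a new
        connection to the origin server."""
        _, response = self.exchanges[content_id][-1]
        return response

origin = lambda req: {"body": "data-for-" + req["GET"]}
cache = DelegatingCache()
cache.fetch("video42", origin)
print(cache.redeliver("video42")["body"])  # data-for-video42
```

Replaying the recorded exchange is what lets the cache server serve subsequent requests without re-contacting the origin, which is the connection-count reduction the abstract motivates.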
Cloud assisted machine learning
A method for training an analytics engine hosted by an edge server device is provided. The method includes determining a classification for data in an analytics engine hosted by an edge server and computing a confidence level for the classification. The confidence level is compared to a threshold. The data is sent to a cloud server if the confidence level is less than the threshold. A reclassification is received from the cloud server and the analytics engine is trained based, at least in part, on the data and the reclassification.
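The edge/cloud loop above can be sketched directly. The classifier internals, labels, and threshold value are toy stand-ins; the structure (classify at the edge, escalate below-threshold samples to the cloud, collect reclassified samples for retraining) follows the abstract.

```python
THRESHOLD = 0.8  # assumed confidence threshold

def edge_classify(x):
    # Toy stand-in for the edge analytics engine: (label, confidence).
    return ("cat", 0.95) if x > 0.5 else ("dog", 0.40)

def cloud_reclassify(x):
    # Toy stand-in for the cloud model, assumed more accurate.
    return "cat" if x > 0.3 else "dog"

def process(sample, training_buffer):
    label, confidence = edge_classify(sample)
    if confidence < THRESHOLD:
        label = cloud_reclassify(sample)          # send to cloud server
        training_buffer.append((sample, label))   # retraining material
    return label

buf = []
print(process(0.9, buf), process(0.4, buf))  # cat cat
print(len(buf))                              # 1
```

Only the uncertain sample (confidence 0.40 < 0.8) crosses to the cloud, and its reclassification lands in the buffer the edge engine would later retrain on, so cloud bandwidth and retraining effort concentrate on exactly the cases the edge model handles worst.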