Patent classifications
H04L67/2866
Systems and methods for data linkage and entity resolution of continuous and un-synchronized data streams
The present disclosure is directed to a scalable, extensible, fault-tolerant system for the stateful joining of two or more streams that are not fully synchronized, in which event ordering is not guaranteed and some events arrive late. The system combines events and links data in near real-time with low latency to mitigate impacts on downstream applications, such as ML models that detect suspicious behavior. Beyond combining events, the system can propagate the required entities to other product streams and assist with entity resolution. If any required data has not yet arrived, a user can configure a small set of parameters to achieve the desired eventual and attribute consistency. The architecture is agnostic of the stream processing framework and works with both streaming and batch paths.
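The join-with-configurable-wait behavior described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, the two-sided buffering scheme, and the `max_wait_seconds` knob are all assumptions chosen to show how a stateful joiner can trade latency for completeness when one side of a join arrives late.

```python
import time


class StatefulStreamJoiner:
    """Buffers events from two un-synchronized streams keyed by a shared
    entity ID; joins them when both sides have arrived, or emits a partial
    record once a configurable wait window expires (eventual consistency)."""

    def __init__(self, max_wait_seconds=5.0):
        self.max_wait_seconds = max_wait_seconds   # latency/completeness knob
        self.pending = {}                          # key -> (arrival_time, side, event)

    def on_event(self, key, side, event, now=None):
        now = now if now is not None else time.monotonic()
        if key in self.pending:
            _, other_side, other_event = self.pending.pop(key)
            if other_side != side:
                # Both sides present: emit the fully joined record.
                return {"key": key, side: event, other_side: other_event,
                        "complete": True}
            # Same side arrived twice: fall through and keep the newest event.
        self.pending[key] = (now, side, event)
        return None

    def flush_expired(self, now=None):
        """Emit partial records for keys whose join partner never arrived
        within the wait window."""
        now = now if now is not None else time.monotonic()
        out = []
        for key in list(self.pending):
            arrived, side, event = self.pending[key]
            if now - arrived >= self.max_wait_seconds:
                del self.pending[key]
                out.append({"key": key, side: event, "complete": False})
        return out
```

A downstream consumer would treat `complete: False` records according to the user-configured consistency parameters, for example enriching them later when the missing attributes arrive.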
FLEXIBLE RESOURCE SHARING IN A NETWORK
A network processing device connects to one or more devices in a computing node and connects to one or more other network processing devices of other computing nodes. The network processing device identifies a policy for allowing devices in other computing nodes to access a particular resource of one of the devices in its computing node. The network processing device receives an access request to access the particular resource from another network processing device and sends a request to the device hosting the particular resource based on the access request and the policy.
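The policy-mediated flow in this abstract can be sketched roughly as below. This is an illustrative model only: the class, the policy shape (resource name mapped to a set of allowed node IDs), and the return values are assumptions, not the claimed apparatus.

```python
class NetworkProcessingDevice:
    """Mediates access to resources hosted on devices in its own computing
    node, according to a sharing policy that says which other nodes may
    access each resource."""

    def __init__(self, policy, local_resources):
        self.policy = policy                    # resource -> set of allowed node IDs
        self.local_resources = local_resources  # resource -> hosting device ID

    def handle_access_request(self, requesting_node, resource):
        """Check the policy for an access request from a peer network
        processing device, then forward to the hosting device if allowed."""
        if requesting_node not in self.policy.get(resource, set()):
            return {"status": "denied"}
        host = self.local_resources.get(resource)
        if host is None:
            return {"status": "unknown-resource"}
        # Send a request to the device hosting the resource.
        return {"status": "forwarded", "to_device": host}
```

The key point of the abstract, that the network processing device both enforces the policy and performs the forwarding, corresponds to the two steps inside `handle_access_request`.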
FLEXIBLE RESOURCE SHARING IN A NETWORK
A network processing device connects to one or more devices in a computing node and connects to one or more other network processing devices of other computing nodes. The network processing device identifies a policy for allowing devices in other computing nodes to access a particular resource of one of the devices in its computing node. The network processing device receives an access request to access the particular resource from another network processing device and sends a request to the device hosting the particular resource based on the access request and the policy.
METHOD FOR CONFIGURING A COMMUNICATION NETWORK FOR THE CYCLICAL TRANSMISSION OF MESSAGES
The invention relates to a method for configuring an industrial real-time-capable communications network for the cyclical transmission of messages (NWM), each comprising one or more data sets (DS1, . . . , DS4). The communications network (1) comprises a message source (10) for generating and cyclically sending the messages (NWM), at least one message sink (16, 17, 18) for receiving and processing the messages (NWM), and at least one network component (12, 14) that forwards messages (NWM) from the message source to the message sinks. The configuration comprises determining (S1) a network topology of the communications network (1) for transmitting a data stream to be sent from the message source (10) in the future, in which the messages (NWM) are each cyclically sent with all data sets. It is then determined (S2) which of the data sets in the data stream each message sink wants to receive. In addition, a respective filter (12F, 14F) is determined (S3) and configured (S4) for each network component: from the network topology, the components connected to each network component and the data sets required by the connected message sinks are determined. In this way, during operation of the communications network (1), only the required data sets are transmitted in the messages (NWM) in the downstream direction of the data stream.
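The filter-derivation step (S3) can be illustrated with a small sketch: a component's filter is the union of the data sets subscribed to by all message sinks reachable downstream of it. The function name, the adjacency-list topology, and the subscription map are assumptions made for illustration; the patent does not prescribe this representation.

```python
def derive_filters(topology, sink_subscriptions):
    """For each network component, keep only the data sets required by the
    message sinks reachable downstream of it.

    topology:           component -> list of downstream neighbors
                        (nodes with no entry are treated as message sinks)
    sink_subscriptions: sink -> set of data set names it wants to receive
    """
    def reachable_sinks(node):
        children = topology.get(node, [])
        if not children:               # leaf: a message sink
            return {node}
        sinks = set()
        for child in children:
            sinks |= reachable_sinks(child)
        return sinks

    filters = {}
    for component in topology:
        needed = set()
        for sink in reachable_sinks(component):
            needed |= sink_subscriptions.get(sink, set())
        filters[component] = needed    # forward only these data sets
    return filters
```

With the filters in place, each component drops the data sets no downstream sink has asked for, which is exactly the bandwidth-saving behavior the abstract describes.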
Methods and apparatus to schedule service requests in a network computing system using hardware queue managers
An example system to schedule service requests in a network computing system using hardware queue managers includes: a gateway-level hardware queue manager in an edge gateway to schedule the service requests received from client devices in a queue; a rack-level hardware queue manager in a physical rack in communication with the edge gateway, the rack-level hardware queue manager to send a pull request to the gateway-level hardware queue manager for a first one of the service requests; and a drawer-level hardware queue manager in a drawer of the physical rack, the drawer-level hardware queue manager to send a second pull request to the rack-level hardware queue manager for the first one of the service requests, the drawer including a resource to provide a function as a service specified in the first one of the service requests.
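The three-tier pull chain in this abstract (drawer pulls from rack, rack pulls from gateway) can be modeled minimally as chained queues. This is a software analogy for what the patent describes as hardware queue managers; the class and tier names are illustrative assumptions.

```python
from collections import deque


class QueueManagerTier:
    """One tier of a pull-based scheduling hierarchy: requests are enqueued
    at an upper tier, and lower tiers issue pull requests to draw work down
    toward the resource that will serve it."""

    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream
        self.queue = deque()

    def enqueue(self, request):
        self.queue.append(request)

    def pull(self):
        """Satisfy a pull from the local queue, else pull from the tier above."""
        if self.queue:
            return self.queue.popleft()
        if self.upstream is not None:
            return self.upstream.pull()
        return None


# Gateway -> rack -> drawer, mirroring the hierarchy in the abstract.
gateway = QueueManagerTier("gateway")
rack = QueueManagerTier("rack", upstream=gateway)
drawer = QueueManagerTier("drawer", upstream=rack)
gateway.enqueue({"service": "function-as-a-service-request"})
```

A `drawer.pull()` call then propagates a pull request up through the rack tier to the gateway, and the service request flows down to the drawer that hosts the resource providing the function as a service.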
Method and system for transparent TCP proxy to containerized applications
Example aspects include techniques for implementing a transparent TCP proxy for containerized applications. These techniques may include receiving an outgoing packet from the containerized application via a container bridge and determining, based on a connection associated with the outgoing packet, whether the outgoing packet corresponds to an incoming packet identified by a first marking as being redirected through the TCP proxy. In addition, the techniques may include in response to determining that the outgoing packet corresponds to the incoming packet, adding a second marking to the outgoing packet to indicate that the outgoing packet is to be routed through the TCP proxy, sending the outgoing packet to the TCP proxy based on the second marking, and transmitting an outgoing processed packet to an external device having the destination address, the outgoing processed packet resulting from a performance of a proxy operation by the TCP proxy on the outgoing packet.
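The two-marking scheme above can be sketched as connection tracking plus a routing decision. This is a simplified model under stated assumptions: the mark values, the class name, and the dict-based packet representation are invented for illustration, and a real implementation would sit at the packet-filtering layer rather than in application code.

```python
class TransparentProxyRouter:
    """Remembers which connections had incoming packets marked for proxy
    redirection (first marking), and applies a second marking to matching
    outgoing packets so they are routed through the TCP proxy instead of
    straight out via the container bridge."""

    MARK_INBOUND = 0x1    # hypothetical first-marking value
    MARK_OUTBOUND = 0x2   # hypothetical second-marking value

    def __init__(self):
        self.proxied_connections = set()   # (src, dst) of expected replies

    def on_incoming(self, packet):
        """Record a connection whose incoming packet was redirected."""
        if packet.get("mark") == self.MARK_INBOUND:
            # Replies will travel in the reverse direction.
            self.proxied_connections.add((packet["dst"], packet["src"]))

    def route_outgoing(self, packet):
        """Decide where an outgoing packet from the container bridge goes."""
        conn = (packet["src"], packet["dst"])
        if conn in self.proxied_connections:
            packet["mark"] = self.MARK_OUTBOUND
            return "tcp-proxy"   # proxy processes it, then sends it onward
        return "direct"
```

The essential correspondence to the abstract: `on_incoming` observes the first marking, and `route_outgoing` adds the second marking so the reply traverses the same proxy.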
Curating proxy server pools
A system and method of forming proxy server pools is provided. The method comprises several steps, such as requesting a pool to execute the user's request and retrieving an initial group of proxy servers. The system checks the service history of the initial group, including whether any of its proxy servers are exclusive to existing pools. Exclusive proxy servers in the initial group are replaced with eligible proxy servers when needed, and new proxy server pools are formed. The system also records the service history of proxy servers and pools before and after the pools are created. The method can also involve predicting pool health relative to predefined thresholds and replacing proxy servers that fall below a threshold.
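The curation step can be sketched as a single pass over the initial group: replace proxies that are exclusive to existing pools or predicted to fall below the health threshold. The function signature, the health-score map, and the spare list are illustrative assumptions, not the claimed method.

```python
def curate_pool(initial_group, exclusive_to_other_pools,
                eligible_spares, health_scores, threshold):
    """Form a proxy pool from an initial group.

    Proxies that are exclusive to existing pools, or whose predicted health
    score is below the threshold, are replaced with eligible spares; if no
    spare is available, the slot is dropped rather than filled badly.
    """
    spares = list(eligible_spares)
    pool = []
    for proxy in initial_group:
        unfit = (proxy in exclusive_to_other_pools
                 or health_scores.get(proxy, 0.0) < threshold)
        if not unfit:
            pool.append(proxy)
        elif spares:
            pool.append(spares.pop(0))   # replace with an eligible proxy
    return pool
```

In the described system, the same health prediction would also run after the pool is formed, so that degrading members are swapped out during operation, not only at creation time.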