Patent classifications
G06F2209/547
METHOD AND SYSTEM FOR INTERACTION SERVICING
A system and a method for servicing user interactions are provided. The method includes: receiving, from each respective user, a respective request for a corresponding interaction; obtaining, for each interaction, request-specific information that relates to the received respective request and user-specific information that relates to the respective user; analyzing the request-specific information to determine at least one corresponding microservice that is usable for handling the interaction; and routing the request-specific information and the user-specific information to a respective destination that relates to the determined microservice. For any particular interaction, several corresponding microservices and several corresponding routes and destinations may be determined, and workload distribution metrics may be used to select optimum routes.
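The routing step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the request-type map, the workload metric, and all service names are assumptions made for the example.

```python
# Hypothetical sketch: route an interaction request to a microservice
# destination, preferring the least-loaded candidate route.

# Map of request types to candidate microservices (illustrative).
MICROSERVICES = {
    "chat": ["chat-svc-a", "chat-svc-b"],
    "voice": ["voice-svc-a"],
}

# Current workload metric per destination; lower is better (illustrative).
WORKLOAD = {"chat-svc-a": 0.8, "chat-svc-b": 0.3, "voice-svc-a": 0.5}

def route_interaction(request_info: dict, user_info: dict) -> str:
    """Analyze the request-specific information to find candidate
    microservices, then pick the destination with the lowest workload."""
    candidates = MICROSERVICES[request_info["type"]]
    destination = min(candidates, key=lambda svc: WORKLOAD[svc])
    # A real system would forward request_info and user_info here;
    # we only return the chosen destination for illustration.
    return destination

print(route_interaction({"type": "chat"}, {"user": "alice"}))  # chat-svc-b
```

The `min` over a workload metric stands in for the patent's "workload distribution metrics ... used to select optimum routes"; any scoring function could be substituted.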
METHOD AND SYSTEM FOR RESOLVING PRODUCER AND CONSUMER AFFINITIES IN INTERACTION SERVICING
A system and a method for processing a message on a processing platform, such as a Kafka processing platform, are provided. The method includes: acquiring a plurality of partitions from the messaging platform; designating a first partition from among the plurality of partitions as a sticky partition; generating a plurality of routing keys that are configured to route messages to the sticky partition; using a first routing key from among the plurality of routing keys to identify a first service subscription; subscribing to a second service using the first routing key; and receiving a message transmitted by the second service.
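The core trick, generating routing keys that all map to one designated "sticky" partition, can be sketched like this. The hash function below is a stand-in for the messaging platform's real partitioner (Kafka uses murmur2), and the partition count is an assumption.

```python
# Sketch: search for routing keys whose hash maps to one designated
# partition, so every message carrying such a key lands on it.

import hashlib

NUM_PARTITIONS = 8
STICKY_PARTITION = 3  # the partition designated as "sticky"

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministic key -> partition mapping (stand-in partitioner)."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

def generate_routing_keys(count: int) -> list:
    """Generate `count` distinct keys that all route to STICKY_PARTITION."""
    keys = []
    i = 0
    while len(keys) < count:
        candidate = f"rk-{i}"
        if partition_for(candidate) == STICKY_PARTITION:
            keys.append(candidate)
        i += 1
    return keys

keys = generate_routing_keys(3)
print(keys)
```

Because the mapping is deterministic, any party holding one of these keys can publish to, or be identified on, the sticky partition, which is what lets a routing key double as a service-subscription identifier.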
METHOD AND SYSTEM FOR PROVIDING HIGH EFFICIENCY, BIDIRECTIONAL MESSAGING FOR LOW LATENCY APPLICATIONS
A system and a method for routing a message to an application over a connection oriented session in a Kafka messaging platform environment are provided. The method includes: acquiring a plurality of partitions from the Kafka messaging platform; designating a first partition from among the plurality of partitions as a sticky partition; generating a plurality of routing keys that are configured to route to the sticky partition; receiving a subscription from a service that corresponds to a first application; transmitting, to the first application, a first routing key that identifies the subscription from among the plurality of routing keys; and receiving messages from Kafka services that are routed by the first routing key to the first application. For any particular application or set of applications, a plurality of connection oriented sessions may be used to achieve load balancing and high availability.
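The return path, delivering messages back over the right connection-oriented session and balancing load across several sessions, can be sketched as below. The router class, and modeling a session as a plain list, are assumptions made for illustration.

```python
# Sketch: a routing key identifies one application's subscription, so
# inbound messages carrying that key are dispatched over one of the
# application's connection-oriented sessions, round-robin for balance.

from collections import defaultdict
from itertools import cycle

class SubscriptionRouter:
    def __init__(self):
        self._sessions = defaultdict(list)  # routing key -> sessions
        self._round_robin = {}              # routing key -> rotation

    def subscribe(self, routing_key: str, session) -> None:
        """Register another session for the application behind this key."""
        self._sessions[routing_key].append(session)
        self._round_robin[routing_key] = cycle(self._sessions[routing_key])

    def deliver(self, routing_key: str, message: str):
        """Route a message to the next session in rotation."""
        session = next(self._round_robin[routing_key])
        session.append(message)  # a session is modeled as a list here
        return session

router = SubscriptionRouter()
s1, s2 = [], []                       # two sessions for one application
router.subscribe("rk-42", s1)
router.subscribe("rk-42", s2)
router.deliver("rk-42", "m1")
router.deliver("rk-42", "m2")
print(s1, s2)  # each session received one message
```

Two sessions subscribed under one routing key share the traffic, which illustrates the abstract's point about using a plurality of sessions for load balancing and high availability.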
STREAMING RESOURCE MANAGEMENT
Systems and methods for allocating processes to queues are provided, which in various embodiments enable more efficient execution of batch jobs. Queue priorities are assigned to queues, while process priorities and queue limits are assigned to processes. A set of queues is determined by matching the queue priority to the process priority of a process. Batch numbers for the set of queues are determined, each batch number indicating groups of messages to be processed. First queues and second queues are determined from the set of queues, the first queues having higher batch numbers than the second queues and numbering up to the queue limit of the process. The first queues are processed using the process. The queue priority of the second queues is decremented, and the second queues are processed by another process whose process priority matches the decremented queue priority of the second queues.
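The selection-and-demotion step reads as follows in a minimal sketch. The queue records and the priority/batch values are invented for the example; only the logic (take the highest-batch queues up to the limit, decrement the rest) follows the abstract.

```python
# Sketch of the queue-selection step: among queues whose priority
# matches a process's priority, take the ones with the most pending
# batches (up to the process's queue limit) and demote the remainder
# by decrementing their queue priority.

def select_queues(queues, process_priority, queue_limit):
    """Return (first_queues, second_queues); second queues have their
    priority decremented so a lower-priority process picks them up."""
    matching = [q for q in queues if q["priority"] == process_priority]
    matching.sort(key=lambda q: q["batches"], reverse=True)
    first, second = matching[:queue_limit], matching[queue_limit:]
    for q in second:
        q["priority"] -= 1
    return first, second

queues = [
    {"name": "q1", "priority": 2, "batches": 5},
    {"name": "q2", "priority": 2, "batches": 9},
    {"name": "q3", "priority": 2, "batches": 1},
    {"name": "q4", "priority": 1, "batches": 7},
]
first, second = select_queues(queues, process_priority=2, queue_limit=2)
print([q["name"] for q in first])   # highest batch counts, up to limit
print([q["name"] for q in second])  # demoted to priority 1
```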
Location Transparency and Routing Slip Extensibility for Business Process Execution Language
In order to achieve location transparency and routing slip extensibility, a system and a method for orchestrating a web service using Business Process Execution Language are disclosed. The method includes: receiving a message, wherein the message comprises an address identifying an extension element; determining, from the address, a location of the extension element identified by the address; responsive to determining the location of the extension element, directing the message to an appropriate location; and storing the message in a computer readable storage medium.
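The dispatch flow, resolving an extension element's address to a location and directing the message there, can be sketched as below. The registry contents, the storage stand-in, and all addresses are hypothetical.

```python
# Sketch: a message carries an address naming an extension element; a
# registry resolves the address to a location, the message is directed
# there, and the message is stored.

# Hypothetical address -> location registry.
EXTENSION_LOCATIONS = {
    "ext:audit": "http://audit.internal/handler",
    "ext:transform": "http://transform.internal/handler",
}

STORE = []  # stand-in for a computer-readable storage medium

def dispatch(message: dict) -> str:
    """Determine the extension element's location from the message's
    address, persist the message, and return the target location."""
    location = EXTENSION_LOCATIONS[message["address"]]
    STORE.append(message)
    return location

loc = dispatch({"address": "ext:audit", "body": "order-123"})
print(loc)
```

Because the sender only names the extension element, not its location, the registry can be updated without changing callers, which is the location-transparency property the abstract targets.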
METHOD AND APPARATUS FOR AUTHORIZING API CALLS
Some embodiments of the invention provide a system for defining, distributing and enforcing policies for authorizing API (Application Programming Interface) calls to applications executing on one or more sets of associated machines (e.g., virtual machines, containers, computers, etc.) in one or more datacenters. This system has a set of one or more servers that acts as a logically centralized resource for defining and storing policies and parameters for evaluating these policies. The server set in some embodiments also enforces these API-authorizing policies. Conjunctively, or alternatively, the server set in some embodiments distributes the defined policies and parameters to policy-enforcing local agents that execute near the applications that process the API calls. From an associated application, a local agent receives API-authorization requests to determine whether API calls received by the application are authorized. In response to such a request, the local agent uses one or more parameters associated with the API call to identify a policy stored in its local policy storage to evaluate whether the API call should be authorized. To evaluate this policy, the agent might also retrieve one or more parameters from the local policy storage.
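The local agent's evaluation loop can be sketched as below. Policies are modeled as plain callables and the store contents are invented; a real system would use a distributed policy language rather than Python functions.

```python
# Sketch of the local-agent flow: use a parameter of the API call (its
# path) to look up a policy in the local policy storage, then evaluate
# it with further locally stored parameters.

# Hypothetical local policy storage: policy keyed by the API path.
LOCAL_POLICY_STORE = {
    "/orders": lambda call, params: call["role"] in params["allowed_roles"],
}
# Hypothetical locally stored evaluation parameters.
LOCAL_PARAMS = {"/orders": {"allowed_roles": {"admin", "trader"}}}

def authorize(api_call: dict) -> bool:
    """Decide whether the application may serve this API call, using
    only the agent's local policy and parameter stores."""
    policy = LOCAL_POLICY_STORE[api_call["path"]]
    params = LOCAL_PARAMS[api_call["path"]]
    return policy(api_call, params)

print(authorize({"path": "/orders", "role": "admin"}))  # True
print(authorize({"path": "/orders", "role": "guest"}))  # False
```

Keeping the policy and its parameters in agent-local storage is what lets the authorization decision happen near the application, without a round trip to the central server set.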
OS OPTIMIZED WORKFLOW ALLOCATION
A computer-implemented method, executed on an allocation computing unit, for distributing a pre-defined workflow comprising a nonempty set of workflow components, the workflow components being ordered in a directed acyclic precedence graph, onto a set of general purpose computing units comprising at least two general purpose computing units.
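One way to read this claim is: topologically order the components of the precedence graph, then assign them across the computing units. The sketch below uses a simple round-robin assignment; the graph, component names, and unit names are all invented for the example.

```python
# Sketch: topologically order the workflow components of a directed
# acyclic precedence graph, then allocate them round-robin onto at
# least two general purpose computing units.

from graphlib import TopologicalSorter

# component -> set of components that must complete before it (DAG).
precedence = {
    "load": set(),
    "clean": {"load"},
    "train": {"clean"},
    "report": {"train"},
    "archive": {"load"},
}

units = ["unit-0", "unit-1"]  # at least two general purpose units

order = list(TopologicalSorter(precedence).static_order())
allocation = {comp: units[i % len(units)] for i, comp in enumerate(order)}
print(allocation)
```

Round-robin is only a placeholder scheduling policy; the patent's actual allocation strategy is not specified in this abstract, so any OS-aware cost model could replace it.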
DEFINING AND DISTRIBUTING API AUTHORIZATION POLICIES AND PARAMETERS
Some embodiments of the invention provide a system for defining, distributing and enforcing policies for authorizing API (Application Programming Interface) calls to applications executing on one or more sets of associated machines (e.g., virtual machines, containers, computers, etc.) in one or more datacenters. This system has a set of one or more servers that acts as a logically centralized resource for defining and storing policies and parameters for evaluating these policies. The server set in some embodiments also enforces these API-authorizing policies. Conjunctively, or alternatively, the server set in some embodiments distributes the defined policies and parameters to policy-enforcing local agents that execute near the applications that process the API calls. From an associated application, a local agent receives API-authorization requests to determine whether API calls received by the application are authorized. In response to such a request, the local agent uses one or more parameters associated with the API call to identify a policy stored in its local policy storage to evaluate whether the API call should be authorized. To evaluate this policy, the agent might also retrieve one or more parameters from the local policy storage.
Template driven approach to deploy a multi-segmented application in an SDDC
A simplified mechanism to deploy and control a multi-segmented application by using application-based manifests that express how application segments of the multi-segment application are to be defined or modified, and how the communication profiles between these segments are to be defined or modified. These manifests are application specific. Also, in some cases, deployment managers in a software defined datacenter (SDDC) provide these manifests as templates to administrators, who can use the templates to express their intent when deploying multi-segment applications in the datacenter. Application-based manifests can also be used to control previously deployed multi-segmented applications in the SDDC. Using such manifests enables administrators to manage fine-grained micro-segmentation rules based on endpoint and network attributes.
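A manifest of this kind might look like the structure below. The field names and schema are illustrative assumptions, not a real SDDC manifest format; only the idea (segments plus allowed communication profiles between them) follows the abstract.

```python
# Sketch of an application-based manifest: it names the segments of a
# multi-segment application and the communication profiles permitted
# between them, which a micro-segmentation check can then enforce.

manifest = {
    "application": "web-shop",
    "segments": ["web", "app", "db"],
    "communication_profiles": [
        {"from": "web", "to": "app", "port": 8080, "protocol": "tcp"},
        {"from": "app", "to": "db", "port": 5432, "protocol": "tcp"},
    ],
}

def allowed(manifest: dict, src: str, dst: str, port: int) -> bool:
    """Micro-segmentation check: is src -> dst traffic on this port
    permitted by the application's manifest?"""
    return any(
        p["from"] == src and p["to"] == dst and p["port"] == port
        for p in manifest["communication_profiles"]
    )

print(allowed(manifest, "web", "app", 8080))  # True
print(allowed(manifest, "web", "db", 5432))   # False: web may not reach db
```

Denying `web -> db` by omission is the fine-grained micro-segmentation behavior the abstract describes: anything not expressed in a communication profile is not permitted.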
Controlling transaction requests between applications and servers
Concepts are provided for controlling transaction requests delivered between applications and servers via a decentralized architecture. In such concepts, the delivery of transaction requests is controlled in consideration of information regarding groups of transaction requests that may cause transaction collisions if processed in parallel. Such groupings of transaction requests may be defined, modified and updated at run time, based on previously or currently observed transaction collisions.
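The grouping idea can be sketched as below: requests in the same collision group are serialized into one lane, while different lanes may run in parallel. The group table and request names are invented for the example; at run time the table would be updated as collisions are observed.

```python
# Sketch: transaction requests in the same collision group must be
# processed sequentially; requests in different groups may proceed in
# parallel. Group membership is a mutable, run-time-updatable table.

from collections import defaultdict

# Hypothetical run-time table: request -> collision group.
collision_groups = {
    "debit-acct-7": "acct-7",
    "credit-acct-7": "acct-7",
    "quote-fx": "fx",
}

def schedule(requests):
    """Partition requests into per-group serial lanes; each lane runs
    sequentially, and distinct lanes may run in parallel."""
    lanes = defaultdict(list)
    for req in requests:
        group = collision_groups.get(req, req)  # ungrouped: its own lane
        lanes[group].append(req)
    return dict(lanes)

lanes = schedule(["debit-acct-7", "quote-fx", "credit-acct-7"])
print(lanes)
```

Here the two requests touching account 7 end up in the same lane and stay ordered, while the FX quote is free to run concurrently, which is the collision-avoidance behavior the abstract describes.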