G06F2209/548

Cognitive computing using a plurality of model structures
11610137 · 2023-03-21

A method of populating a data set includes generating a plurality of models that model the behavior of an agent process, where the plurality of models includes a first model, a second model, and a third model. The method also includes using the plurality of models to generate one or more requests to one or more external data sources, and using the plurality of models to select a plurality of queries from a data store of predefined queries. The plurality of queries are selected by the plurality of models to request information that is missing from the data set. The method also includes populating at least a portion of the data set using information received in response to the one or more requests to the one or more external data sources and in response to the plurality of queries.
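The core loop described here, detecting which fields of a data set are missing and having each model nominate predefined queries to fill them, can be sketched as follows. This is a minimal illustration: the query store, field names, and lambda "models" are invented stand-ins, not the patented system.

```python
# Minimal sketch: each "model" proposes which predefined query best fills
# a missing field; the data set is then populated from the responses.
# QUERY_STORE, the field names, and the three models are illustrative.

QUERY_STORE = {
    "address": "SELECT address FROM agents WHERE id = ?",
    "phone":   "SELECT phone FROM agents WHERE id = ?",
    "email":   "SELECT email FROM agents WHERE id = ?",
}

def missing_fields(record):
    """Fields whose value is still unknown."""
    return [k for k, v in record.items() if v is None]

def select_queries(record, models):
    """Each model proposes queries for missing fields; keep the union."""
    chosen = set()
    for model in models:
        for field in model(record):
            if field in QUERY_STORE:
                chosen.add(field)
    return {f: QUERY_STORE[f] for f in chosen}

def populate(record, responses):
    """Fill only the fields that are still missing."""
    for field, value in responses.items():
        if record.get(field) is None:
            record[field] = value
    return record

# Three simple "models", each flagging a subset of the missing fields.
models = [
    lambda r: missing_fields(r),                               # first model
    lambda r: [f for f in missing_fields(r) if f != "email"],  # second model
    lambda r: ["phone"] if r.get("phone") is None else [],     # third model
]

record = {"name": "agent-7", "address": None, "phone": None, "email": "a@b.c"}
queries = select_queries(record, models)
record = populate(record, {"address": "12 Elm St", "phone": "555-0100"})
```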

PACKET PROCESSING LOAD BALANCER

Examples described herein include a device interface; a first set of one or more processing units; and a second set of one or more processing units. In some examples, the first set of one or more processing units is to perform heavy flow detection for packets of a flow and the second set of one or more processing units is to perform processing of packets of a heavy flow. In some examples, the first set of one or more processing units and the second set of one or more processing units are different. In some examples, the first set of one or more processing units is to allocate pointers to packets associated with the heavy flow to a first set of one or more queues of a load balancer, and the load balancer is to allocate the packets associated with the heavy flow to one or more processing units of the second set of one or more processing units based, at least in part, on a packet receive rate of the packets associated with the heavy flow.
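The two-stage split described above can be illustrated in software: a first stage counts packets per flow to detect heavy flows and enqueues pointers to their packets, and a load balancer spreads those packets over second-stage units based on the receive rate. The threshold, unit count, and rate-to-units rule below are invented for the example.

```python
from collections import defaultdict, deque

# Illustrative sketch: first-stage units count packets per flow to detect
# heavy flows; pointers to heavy-flow packets go to a load-balancer queue,
# and the balancer picks second-stage units based on packet receive rate.

HEAVY_THRESHOLD = 3          # packets before a flow counts as "heavy"
SECOND_STAGE_UNITS = 2

flow_counts = defaultdict(int)
lb_queue = deque()           # pointers (indices) to heavy-flow packets

def first_stage(packets):
    """Detect heavy flows and enqueue pointers to their packets."""
    heavy = set()
    for flow_id, _payload in packets:
        flow_counts[flow_id] += 1
        if flow_counts[flow_id] >= HEAVY_THRESHOLD:
            heavy.add(flow_id)
    for idx, (flow_id, _payload) in enumerate(packets):
        if flow_id in heavy:
            lb_queue.append(idx)
    return heavy

def load_balance(packets, rate_pps):
    """Spread heavy-flow packets over second-stage units; a higher
    receive rate engages more units (simple round-robin here)."""
    units = min(SECOND_STAGE_UNITS, max(1, rate_pps // 1000))
    assignment = defaultdict(list)
    for i, ptr in enumerate(lb_queue):
        assignment[i % units].append(packets[ptr])
    return assignment

packets = [("A", b"p0"), ("B", b"p1"), ("A", b"p2"),
           ("A", b"p3"), ("B", b"p4")]
heavy = first_stage(packets)
assignment = load_balance(packets, rate_pps=2000)
```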

PROCESSING OF CONTROLLER-STATE-MESSAGE QUERIES
20230079551 · 2023-03-16

A computer system that processes state messages is described. During operation, the computer system receives the state messages associated with communication network devices in a network, where the state messages include different types of state messages having associated priorities. Then, the computer system computes identifiers of the state messages based at least in part on second identifiers of clients associated with or connected to the communication network devices; for a given state message, the computer system computes an identifier of the given state message based at least in part on a second identifier of a given client associated with information in the given state message. Next, the computer system may selectively assign the state messages to dedicated message queues having associated processing priorities based at least in part on the computed identifiers and/or the types of state messages.
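The derive-then-assign step can be sketched as follows: compute a stable message identifier from the client identifier carried in the message, then place the message in a dedicated queue keyed by both an identifier bucket and the message type's priority. The priority table, bucket count, and message fields are assumptions for illustration.

```python
import hashlib
from collections import defaultdict

# Sketch of the described flow: an identifier is derived from the client's
# identifier in the message, and the message lands in a dedicated queue
# chosen by (identifier bucket, type priority). Layout is illustrative.

PRIORITY = {"alarm": 0, "status": 1, "telemetry": 2}   # lower = higher
NUM_BUCKETS = 4

def message_identifier(client_id: str) -> int:
    """Derive a stable identifier from the client's identifier."""
    digest = hashlib.sha256(client_id.encode()).digest()
    return int.from_bytes(digest[:4], "big")

queues = defaultdict(list)

def assign(message):
    ident = message_identifier(message["client_id"])
    bucket = ident % NUM_BUCKETS
    prio = PRIORITY[message["type"]]
    queues[(bucket, prio)].append(message)
    return bucket, prio

b1, p1 = assign({"client_id": "client-42", "type": "alarm"})
b2, p2 = assign({"client_id": "client-42", "type": "telemetry"})
```

Because the identifier is a hash of the client identifier, all state messages for the same client map to the same bucket, while the message type still separates them by processing priority.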

METHODS AND SYSTEMS OF SCHEDULING COMPUTER PROCESSES OR TASKS IN A DISTRIBUTED SYSTEM
20230130644 · 2023-04-27

A cloud computer system is provided that includes a plurality of computer devices and a database. The plurality of computer devices execute a plurality of virtual machines, with one of the virtual machines serving as a controller node and the remainder serving as worker instances. The controller node is programmed to accept a request to initiate a distributed process that includes a plurality of data jobs, determine a number of worker instances to create across the plurality of computer devices, and cause that number of worker instances to be created on the plurality of computer devices. Each worker instance is programmed to create a unique message queue for itself and to store, in the database, a reference to the unique message queue that was created for it. The controller node retrieves the references to the unique message queues and posts jobs to the message queues for execution by the worker instances.
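The controller/worker pattern above can be shown with in-process stand-ins: each worker creates its own queue and registers a reference in a shared "database" (a dict here), and the controller looks the references up to post jobs. A real deployment would use VMs and a durable store; this sketch models only the control flow.

```python
import queue

# Toy sketch of the controller/worker pattern: a worker creates a unique
# message queue and stores a reference in the shared "database"; the
# controller retrieves the references and round-robins jobs onto them.

database = {}                           # worker name -> queue reference

def create_worker(name):
    q = queue.Queue()                   # unique queue per worker instance
    database[name] = q                  # store the reference
    return q

def controller_post(jobs):
    """Controller: retrieve queue references and post jobs round-robin."""
    refs = list(database.values())
    for i, job in enumerate(jobs):
        refs[i % len(refs)].put(job)

def run_worker(name):
    """Worker: drain its own queue and 'execute' the jobs."""
    q = database[name]
    done = []
    while not q.empty():
        done.append(q.get())
    return done

for n in ("worker-0", "worker-1"):
    create_worker(n)
controller_post(["job-a", "job-b", "job-c"])
results = {n: run_worker(n) for n in database}
```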

MANAGING DATABASE QUOTAS WITH A SCALABLE TECHNIQUE

A method and system for providing a scaling quota for a database system have been developed. The method defines a product specified by a client using a quota application programming interface (API). A report that is unique to the defined product is created with the quota API; the report specifies a product quota and a limit endpoint. The product quota is managed with a message broker by keeping an updated quota count for each report and product quota. An approval or rejection message is generated by the message broker for the client once the updated quota count reaches the limit endpoint. Finally, a response to the approval or rejection message is generated for the database client by a limit provider application programming interface (API).
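The broker's counting behavior, keep a running count per report and flip from approval to rejection once the limit endpoint is reached, can be sketched as follows. The class and method names are illustrative, not the patent's API.

```python
# Hedged sketch of the quota flow: a broker keeps an updated count per
# report and emits approvals until the report's limit endpoint is
# reached, then rejections. Names are invented for the example.

class QuotaBroker:
    def __init__(self):
        self.counts = {}     # report id -> current quota count
        self.limits = {}     # report id -> limit endpoint

    def create_report(self, report_id, limit):
        """Create a report with its product quota limit endpoint."""
        self.counts[report_id] = 0
        self.limits[report_id] = limit

    def consume(self, report_id):
        """Increment usage; return 'approved' or 'rejected'."""
        if self.counts[report_id] >= self.limits[report_id]:
            return "rejected"
        self.counts[report_id] += 1
        return "approved"

broker = QuotaBroker()
broker.create_report("product-x", limit=2)
outcomes = [broker.consume("product-x") for _ in range(3)]
```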

DATA TRANSFER PRIORITIZATION FOR SERVICES IN A SERVICE CHAIN
20230124885 · 2023-04-20

An apparatus comprises at least one processing device configured to monitor, by a first service in a service chain, a first set of processing queues comprising two or more different processing queues associated with two or more different priority levels. The processing device is also configured to process, by the first service, a given portion of data stored in at least one of the two or more different processing queues in the first set of processing queues. The processing device is further configured to determine prioritization information associated with the given portion of the data and to select, based on the prioritization information, a given one of two or more different processing queues in a second set of processing queues associated with a second service in the service chain, and to store the given portion of the data in the given processing queue in the second set of processing queues.
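The hand-off described above can be sketched as a pair of priority queue sets: the first service pops data, reads its prioritization tag, and pushes it into the next service's queue set at the matching priority level. The two-level layout and the classification rule are assumptions for illustration.

```python
import heapq

# Sketch of the data-transfer prioritization: each service in the chain
# owns a set of priority queues (modelled as one heap keyed by level);
# a service processes an item, then stores it in the next service's
# queue set at the level chosen from its prioritization information.

class QueueSet:
    """Two or more priority levels, modelled as a single heap."""
    def __init__(self):
        self._heap = []
        self._seq = 0          # tie-breaker keeps FIFO within a level
    def push(self, priority, item):
        heapq.heappush(self._heap, (priority, self._seq, item))
        self._seq += 1
    def pop(self):
        return heapq.heappop(self._heap)[2]
    def __len__(self):
        return len(self._heap)

def forward(src: QueueSet, dst: QueueSet, classify):
    """First service processes the highest-priority item, then stores it
    in the second service's queue selected by its prioritization info."""
    item = src.pop()
    dst.push(classify(item), item)
    return item

svc1, svc2 = QueueSet(), QueueSet()
svc1.push(1, {"data": "bulk", "urgent": False})
svc1.push(0, {"data": "control", "urgent": True})
moved = forward(svc1, svc2, classify=lambda d: 0 if d["urgent"] else 1)
```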

Fulfillment of requests stored on a message queue

According to examples, an apparatus may include a processor and a memory on which is stored machine readable instructions that may cause the processor to determine whether a request is stored in a message queue, in which the apparatus may be inside of a domain and the message queue may be outside of the domain. Based on a determination that a request is stored in the message queue, the processor may pull the request from the message queue through a domain boundary, fulfill the request to cause a response to the request to be generated, and forward the response to the message queue through the domain boundary.
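The pull-through-the-boundary pattern can be shown with an in-memory queue standing in for the external message queue: a processor inside the domain checks for a pending request, pulls it, fulfills it, and forwards the response back to the same queue. The handler and message shapes are hypothetical.

```python
import queue

# Sketch of the described flow: a processor inside a domain polls a
# message queue outside the domain, pulls any stored request through
# the boundary, fulfills it, and forwards the response back out.

outside_queue = queue.Queue()       # stands in for the external queue

def fulfill(request):
    """Hypothetical handler running inside the domain."""
    return {"request_id": request["id"], "status": "done"}

def poll_once():
    """Determine whether a request is stored; if so, pull and respond."""
    try:
        request = outside_queue.get_nowait()   # pull through the boundary
    except queue.Empty:
        return None                            # nothing stored
    response = fulfill(request)
    outside_queue.put(response)                # forward the response back
    return response

outside_queue.put({"id": 7, "action": "export"})
resp = poll_once()                   # pulls, fulfills, forwards response
echoed = outside_queue.get_nowait()  # the response now sits on the queue
idle = poll_once()                   # queue empty -> nothing to do
```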

Adaptive rule trigger thresholds for managing contact center interaction time

A method includes (a) receiving and storing interaction time data associated with processes of a communication distributor server for an end-user network having an associated contact center with agent instances; (b) determining whether a trigger specified in a first logical directive is initiated; (c) upon determining that the trigger specified in the first logical directive is initiated, determining whether a metric related to customer communications with the end-user network satisfies the condition in the first logical directive; and (d) upon determining that the metric related to customer communications with the end-user network satisfies the condition in the first logical directive, providing the operation specified in the first logical directive to at least one of the end-user network or the communication distributor server.
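Steps (a) through (d) can be sketched as a small trigger-condition-operation evaluator. The directive fields, the sample-count trigger, and the average-wait metric below are assumptions, chosen only to make the control flow concrete.

```python
# Sketch of steps (a)-(d): store interaction-time samples, check whether
# a directive's trigger is initiated, evaluate its metric condition, and,
# if satisfied, provide the operation. Fields and metric are assumptions.

interaction_times = []                 # step (a): received and stored

directive = {                          # hypothetical first logical directive
    "trigger_min_samples": 3,          # trigger: enough data collected
    "condition_max_avg_wait": 30.0,    # condition on the metric (seconds)
    "operation": "raise_staffing",
}

def record(seconds):
    interaction_times.append(seconds)

def evaluate(directive):
    # (b) is the trigger initiated?
    if len(interaction_times) < directive["trigger_min_samples"]:
        return None
    # (c) does the metric satisfy the condition?
    avg_wait = sum(interaction_times) / len(interaction_times)
    if avg_wait > directive["condition_max_avg_wait"]:
        # (d) provide the operation
        return directive["operation"]
    return None

record(20.0); record(45.0)
before = evaluate(directive)           # trigger not yet initiated
record(40.0)
after = evaluate(directive)            # average 35.0 s exceeds 30.0 s
```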

Cognitive Convergence Engine for Resource Optimization

Arrangements for using a cognitive convergence engine for resource optimization are provided. Requests for service, such as a loan, an account, or the like, may be received via different communication channels. The requests for service may be aggregated based on type of request and transferred to a cloud environment for evaluation. A request may be evaluated to determine whether it is eligible for bot processing. If so, the request may be transferred to a bot server for processing. If not, the request may be evaluated to identify a best fit resource for processing. Identifying the best fit resource may include evaluating scores computed by a plurality of computational instances or virtual machines configured to process the computations. A number of computational instances may be determined based on a volume of requests. The computational instances may then be deployed to determine a best fit resource for the request.
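The routing decision can be sketched as follows: check bot eligibility first, and otherwise score candidate resources across a number of computational instances that scales with request volume. The eligibility set, scoring scheme, and instance-count rule are invented for the example.

```python
# Illustrative sketch of the routing decision: bot-eligible requests go
# to the bot server; otherwise a volume-dependent number of
# "computational instances" (plain loop iterations here) score the
# resources and the best fit wins. All rules below are assumptions.

BOT_ELIGIBLE_TYPES = {"balance_inquiry", "address_change"}

def instances_for(volume):
    """One computational instance per 100 requests, at least one."""
    return max(1, volume // 100)

def route(request, resources):
    if request["type"] in BOT_ELIGIBLE_TYPES:
        return "bot-server"
    # Each instance scores every resource; sum scores, pick the best fit.
    n = instances_for(request["volume"])
    scores = {r["name"]: 0.0 for r in resources}
    for _ in range(n):
        for r in resources:
            scores[r["name"]] += r["skill"].get(request["type"], 0.0)
    return max(scores, key=scores.get)

resources = [
    {"name": "team-loans",    "skill": {"loan": 0.9, "account": 0.2}},
    {"name": "team-accounts", "skill": {"loan": 0.1, "account": 0.8}},
]
bot_route = route({"type": "balance_inquiry", "volume": 250}, resources)
loan_route = route({"type": "loan", "volume": 250}, resources)
```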

Low power and low latency GPU coprocessor for persistent computing

Systems, apparatuses, and methods for implementing a graphics processing unit (GPU) coprocessor are disclosed. The GPU coprocessor includes a SIMD unit with the ability to self-schedule sub-wave procedures based on input data flow events. A host processor sends messages targeting the GPU coprocessor to a queue. In response to detecting a first message in the queue, the GPU coprocessor schedules a first sub-task for execution. The GPU coprocessor includes an inter-lane crossbar and intra-lane biased indexing mechanism for a vector general purpose register (VGPR) file. The VGPR file is split into two files. The first VGPR file is a larger register file with one read port and one write port. The second VGPR file is a smaller register file with multiple read ports and one write port. The second VGPR file introduces the ability to co-issue more than one instruction per clock cycle.
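The message-driven self-scheduling behavior (host posts messages, coprocessor schedules a sub-task per detected message) can be loosely mirrored in software. This models the control flow only, not the SIMD hardware or the VGPR files; all names are illustrative.

```python
import queue

# Loose software analogy of the self-scheduling described above: the
# host posts messages to a queue, and the "coprocessor" schedules a
# sub-task whenever a message is detected, driven by input data flow
# rather than explicit host-side dispatch. Names are illustrative.

host_queue = queue.Queue()
scheduled = []

SUB_TASKS = {"frame": "decode_sub_wave", "audio": "mix_sub_wave"}

def coprocessor_poll():
    """Schedule one sub-task per detected message in the queue."""
    while not host_queue.empty():
        msg = host_queue.get_nowait()
        scheduled.append(SUB_TASKS.get(msg["kind"], "noop"))

host_queue.put({"kind": "frame"})    # host sends messages to the queue
host_queue.put({"kind": "audio"})
coprocessor_poll()                   # coprocessor detects and schedules
```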