G06F16/33295

MULTI-AGENT-BASED INFORMATION PROCESSING METHOD, ELECTRONIC DEVICE AND STORAGE MEDIUM

A multi-agent-based information processing method includes: receiving an information processing request, in which the information processing request includes input information; inputting the input information to a first agent and obtaining output information of the first agent, in which the first agent determines one or more second agents from a set of agents based on the input information; and obtaining response information corresponding to the input information based on the output information of the first agent and the one or more second agents.
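
The routing described above can be sketched in a few lines. This is an illustrative stand-in, not the patented implementation: the agent set, the keyword-based routing in `first_agent`, and the joining of outputs are all invented for the example.

```python
# Hypothetical sketch of the claimed flow: a first agent inspects the input
# information, selects one or more second agents from a set, and the second
# agents' outputs are combined into the response information.

def translate_agent(text):
    return f"translated({text})"

def summarize_agent(text):
    return f"summary({text})"

AGENT_SET = {"translate": translate_agent, "summarize": summarize_agent}

def first_agent(input_info):
    """Output of the first agent: names of the second agents to invoke."""
    selected = [name for name in AGENT_SET if name in input_info]
    return selected or ["summarize"]  # illustrative default route

def process_request(input_info):
    routes = first_agent(input_info)
    outputs = [AGENT_SET[name](input_info) for name in routes]
    return " | ".join(outputs)  # response information

print(process_request("please summarize this report"))
```

A real system would replace the keyword test with the first agent's own model-based decision; the point here is only the two-stage structure.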

USER PROFILING USING CHAIN-OF-THOUGHT KNOWLEDGE GRAPHS FOR QUERYING A MACHINE LEARNING SYSTEM
20260057254 · 2026-02-26
20260057254 · 2026-02-26

Techniques are disclosed for a machine learning model, such as a large language model (LLM), that incorporates a model of a chain of thought of a particular user when responding to a query from the user. In one example, a system generates a knowledge graph of a chain of thought of the user. The knowledge graph comprises nodes representing topics present within past queries by the user and edges representing a co-occurrence between the topics. The system determines, based on a topic present within a query from the user and the knowledge graph, a goal query comprising a goal topic. The system provides the query to a machine learning model to generate, by the machine learning model, a response. The machine learning model is constrained to include the goal topic of the goal query within the response. The system outputs, for display, the response to the query.
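
The graph construction and goal-topic selection can be sketched as follows. This is a minimal illustration under stated assumptions: topic extraction is taken as given, edge weights count co-occurrence within a single past query, and the "goal topic" is simply the strongest co-occurring neighbour; the constrained LLM call is omitted.

```python
# Illustrative sketch: nodes are topics from past queries; an edge weight
# counts how often two topics co-occurred in the same query.
from collections import defaultdict
from itertools import combinations

def build_graph(past_query_topics):
    edges = defaultdict(int)
    for topics in past_query_topics:
        for a, b in combinations(sorted(set(topics)), 2):
            edges[(a, b)] += 1
    return edges

def goal_topic(graph, topic):
    """Pick the most strongly co-occurring neighbour of the query's topic."""
    neighbours = {pair: w for pair, w in graph.items() if topic in pair}
    if not neighbours:
        return None
    (a, b), _ = max(neighbours.items(), key=lambda kv: kv[1])
    return b if a == topic else a

history = [["mortgage", "rates"], ["mortgage", "refinance"], ["mortgage", "rates"]]
g = build_graph(history)
print(goal_topic(g, "mortgage"))  # "rates": co-occurs most often
```

The selected goal topic would then be injected as a constraint on the model's response, per the abstract.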

SYSTEMS AND METHODS FOR ORCHESTRATING INTERACTION WITH AN ARTIFICIAL INTELLIGENCE APPLICATION

Systems and methods for orchestrating interaction with an artificial intelligence (AI) application receive, via an AI agent, a message from a user; modify the message to elicit identification of one or more plugins or functions required to respond; process the modified message in a first AI container to generate a list of the required plugins or functions; load the identified plugins or functions into a second AI container; generate, in the second AI container, an initial computational inference process based on the message and the plugins or functions; determine whether all information required to execute the inference process is available; when all required information is available, execute the inference process, generate a reply, and send the reply to the user via the AI agent; and when information is missing, generate a query requesting the unavailable information and send the query to the user via the AI agent.
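
The two-container flow above can be sketched with plain functions standing in for the AI containers. Everything here is hypothetical: the plugin registry, the keyword-based "identification" step, and the required-argument table are invented for illustration.

```python
# Hedged sketch of the claim: a first stage identifies the plugins a message
# needs, a second stage runs the inference with only those plugins loaded,
# and the user is queried for any information still missing.

PLUGIN_REGISTRY = {
    "weather": lambda args: f"sunny in {args['city']}",
    "calendar": lambda args: f"free on {args['date']}",
}
REQUIRED_ARGS = {"weather": ["city"], "calendar": ["date"]}

def identify_plugins(message):
    """First container: list the plugins or functions required to respond."""
    return [name for name in PLUGIN_REGISTRY if name in message]

def orchestrate(message, known_info):
    plugins = identify_plugins(message)
    loaded = {name: PLUGIN_REGISTRY[name] for name in plugins}  # second container
    missing = [arg for name in plugins
               for arg in REQUIRED_ARGS[name] if arg not in known_info]
    if missing:  # required information unavailable: query the user
        return "query: please provide " + ", ".join(missing)
    replies = [loaded[name](known_info) for name in plugins]
    return "reply: " + "; ".join(replies)

print(orchestrate("what is the weather today?", {"city": "Oslo"}))
print(orchestrate("what is the weather today?", {}))
```

In the patented design the identification is done by an AI model in the first container rather than by keyword matching; the sketch preserves only the control flow.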

SCANNER PERFORMING SCAN PROCESS BASED ON RECOMMENDED PARAMETER SET OUTPUTTED BY TRAINED MACHINE LEARNING MODEL
20260059060 · 2026-02-26

A scanner includes a scanning engine. The scanner outputs one or more questions related to generation of scan data through a user interface, and receives one or more answers through the user interface when a scan instruction is received. The scanner transmits question-and-answer information to a server through a communication interface. The question-and-answer information includes the questions and answers, and associates each question with a corresponding answer. The scanner performs a scan process when a recommended parameter set is received from the server through the communication interface. The recommended parameter set is outputted by a trained machine learning model based on the question-and-answer information. The scan process is based on the recommended parameter set. The scan process includes reading an original using the scanning engine to generate scan data. The scanner outputs the scan data or an object based on the scan data.
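
The server side of this exchange can be sketched as a mapping from question-and-answer pairs to a parameter set. A small rule table stands in for the trained machine learning model; the questions, answers, and parameter names are all invented for the example.

```python
# Hypothetical server-side step: question-and-answer information arrives as
# (question, answer) pairs and is mapped to a recommended scan parameter set.

def recommend_parameters(qa_info):
    answers = {q: a for q, a in qa_info}  # associate each question with its answer
    params = {"resolution_dpi": 300, "color_mode": "color", "file_format": "pdf"}
    if answers.get("What will you do with the scan?") == "archive text":
        params.update(resolution_dpi=200, color_mode="grayscale")
    if answers.get("Is the original a photo?") == "yes":
        params.update(resolution_dpi=600, file_format="jpeg")
    return params

qa = [("What will you do with the scan?", "archive text"),
      ("Is the original a photo?", "no")]
print(recommend_parameters(qa))
```

A trained model would replace the rule table, but the interface is the same: question-and-answer information in, recommended parameter set out, after which the scanner runs the scan process with those parameters.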

SYSTEMS AND METHODS FOR ORCHESTRATING INTERACTION WITH AN ARTIFICIAL INTELLIGENCE APPLICATION

Systems and methods for orchestrating interaction with an AI application include receiving a message from a user via an AI agent and generating an initial computational inference process based on the message. The system queries an API repository to determine whether all required information to execute the process is available. If available, the system executes the inference process, generates a reply based on the process and the API repository, and sends the reply to the user via the AI agent. If information is missing, the system determines whether secondary sources provide the necessary information. If available, the system executes the inference process, generates the reply, and sends it via the AI agent. If required information is not available through the API repository or secondary sources, the system generates a query requesting the missing information from the user and sends the query via the AI agent.
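
The fallback order in this variant — API repository first, then secondary sources, then the user — reduces to a short resolution loop. The dictionaries here are toy stand-ins for the real repository and sources.

```python
# Minimal sketch of the claimed fallback chain for required information.

def resolve(required_fields, api_repository, secondary_sources):
    info, missing = {}, []
    for field in required_fields:
        if field in api_repository:            # first: the API repository
            info[field] = api_repository[field]
        elif field in secondary_sources:       # then: secondary sources
            info[field] = secondary_sources[field]
        else:                                  # finally: ask the user
            missing.append(field)
    if missing:
        return ("query", "please provide: " + ", ".join(missing))
    return ("reply", info)

api = {"account_id": "A-17"}
secondary = {"billing_plan": "pro"}
print(resolve(["account_id", "billing_plan"], api, secondary))
print(resolve(["account_id", "ship_date"], api, secondary))
```

Only when both lookups fail does the flow fall through to generating a query for the user, matching the order stated in the abstract.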

SYSTEMS AND METHODS FOR ORCHESTRATING INTERACTION WITH AN ARTIFICIAL INTELLIGENCE APPLICATION

Systems and methods for orchestrating interaction with an artificial intelligence (AI) application receive, via an AI agent, a first message from a user; generate an initial computational inference process based at least in part on the first message; determine whether or not all information required to execute the initial computational inference process is available to the processor; when a determination is made that all information required to execute the initial computational inference process is available: execute the initial computational inference process; generate a first reply based on the initial computational inference process; and send the first reply to the user via the AI agent; and when a determination is made that not all information required to execute the initial computational inference process is available: generate a first query requesting the unavailable information from the user; and send the first query to the user via the AI agent.

SYSTEMS AND METHODS FOR ORCHESTRATING INTERACTION WITH AN ARTIFICIAL INTELLIGENCE APPLICATION

Systems and methods for orchestrating interaction with an artificial intelligence (AI) application in a contact center environment receive, via an AI agent, a voice message from a user; convert the message from voice to text; generate an initial computational inference process based on the text message; determine whether or not all information required to execute the initial computational inference process is available to the processor; when a determination is made that all information required is available: execute the initial computational inference process; generate a text reply based on the initial computational inference process; convert the text reply to a voice reply; and send the voice reply to the user via the AI agent; when a determination is made that information is unavailable: generate a text query requesting the information; convert the text query to a voice query; and send the voice query to the user via the AI agent.

INFORMATION RETRIEVAL SYSTEM
20260056988 · 2026-02-26

This information retrieval system provides an answer corresponding to a question using a large language model. A context information retrieving unit searches a document database with a characteristic vector of the question, thereby acquiring as context information a text group whose characteristic vector has a similarity level to the question's characteristic vector that satisfies a predetermined condition. In the document database, characteristic vectors and page numbers of text groups obtained by dividing a document are registered. A prompt generating unit generates a prompt that includes the question and the context information. An answer acquiring unit acquires an answer corresponding to the prompt using the large language model. An answer outputting unit outputs, as the answer corresponding to the question, the context information and the page number associated with the context information together with the answer corresponding to the prompt.
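
The retrieval step described above is a standard similarity search over registered text groups, each carrying a page number. The sketch below uses a deliberately toy letter-frequency embedding in place of a real encoder, and a fixed similarity threshold as the "predetermined condition"; the prompt is assembled but the LLM call itself is omitted.

```python
# Illustrative retrieval-augmented flow: text groups are registered with
# characteristic vectors and page numbers; groups similar enough to the
# question become context, and their page numbers accompany the answer.
import math

def embed(text):
    """Toy characteristic vector: 26-dimensional letter-frequency counts."""
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

DB = [  # (text group, characteristic vector, page number)
    (t, embed(t), p)
    for t, p in [("refund policy and returns", 12),
                 ("warranty coverage details", 34)]
]

def retrieve(question, threshold=0.75):
    """Predetermined condition: cosine similarity >= threshold."""
    qv = embed(question)
    return [(t, p) for t, v, p in DB if cosine(qv, v) >= threshold]

context = retrieve("what is the refund policy?")
prompt = ("Question: what is the refund policy?\nContext: "
          + "; ".join(t for t, _ in context))  # prompt generating unit
print(context)
```

The returned page numbers let the answer outputting unit cite where in the original document the context came from, which is the distinguishing feature of this abstract.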

Natural language search and knowledge management using deep learning
12561353 · 2026-02-24

An application executing on a processor may receive a natural language request. A large language model (LLM) may determine, for each of a plurality of data sources, a respective data source configuration. The LLM may generate, for each data source, a respective query based on the natural language request and the data source configuration. The application may, based on the configuration, process the queries against the plurality of data sources. The LLM may receive, based on the processing, a plurality of results from the plurality of data sources and generate a natural language response to the natural language request. The natural language response may include an indication of a first result of the plurality of results. The application may output the natural language response for display.
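
The per-source step above — one query per data source, shaped by that source's configuration — can be sketched as follows. The source configurations, query dialects, and the stubbed execution and LLM steps are all invented for illustration.

```python
# Hypothetical sketch: for each data source, a source-specific query is
# generated from the natural language request and the source configuration;
# the queries are executed and the results feed a natural language response.

SOURCES = {
    "tickets": {"dialect": "sql", "table": "tickets"},
    "wiki": {"dialect": "keyword", "index": "wiki"},
}

def generate_query(request, config):
    """Stand-in for the LLM's per-source query generation."""
    if config["dialect"] == "sql":
        return f"SELECT * FROM {config['table']} WHERE text LIKE '%{request}%'"
    return f"search {config['index']}: {request}"

def answer(request):
    queries = {name: generate_query(request, cfg) for name, cfg in SOURCES.items()}
    results = {name: f"result-of[{q}]" for name, q in queries.items()}  # stubbed execution
    first = next(iter(results.values()))  # indication of a first result
    return f"Based on your request '{request}', a top result is: {first}"

print(answer("login errors"))
```

The essential idea is that one natural language request fans out into differently shaped queries, then fans back in to a single natural language response.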

Generative interface for multi-platform content

Embodiments described herein relate to systems and methods for automatically generating content for a generative answer interface of a collaboration platform. The system receives a natural language user input and identifies corresponding blocks of text or snippets using a content extraction service. A prompt is generated using the blocks of text and is used to obtain a generative response. The generative response and links to corresponding content are displayed in the generative answer interface and can be inserted into content of the collaboration platform. The systems and methods described use a network architecture that includes a prompt generation service and a set of one or more purpose-configured large language model (LLM) instances and/or other trained classifiers or natural language processors used to provide generative responses for content collaboration platforms.
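
The prompt-generation step can be sketched minimally, assuming each extracted block carries a link back to its source page so that both the response and its source links can be shown in the answer interface. The block format and the stubbed LLM call are assumptions, not the patented design.

```python
# Minimal sketch: assemble a prompt from the user input and extracted blocks,
# obtain a (stubbed) generative response, and return it with source links.

def build_prompt(user_input, blocks):
    """Prompt generation service: user input plus extracted text blocks."""
    context = "\n".join(f"- {b['text']}" for b in blocks)
    return f"Answer using only the context.\nContext:\n{context}\nQuestion: {user_input}"

def generative_answer(user_input, blocks):
    prompt = build_prompt(user_input, blocks)
    response = f"stub-llm-response({len(prompt)} chars)"  # stand-in for the LLM
    links = [b["link"] for b in blocks]  # links to corresponding content
    return response, links

blocks = [{"text": "Deploys run nightly.", "link": "https://example.test/page/1"}]
resp, links = generative_answer("when do deploys run?", blocks)
print(links)
```

Returning the links alongside the response is what lets the interface both display the answer and let the user jump to, or insert, the underlying platform content.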