MULTI-AGENT SIMULATION SYSTEM AND METHOD
20260057331 · 2026-02-26
Inventors
- Timothy Zheng (Markham, CA)
- Hisham Ladha (Montreal, CA)
- Masha Solaja (Toronto, CA)
- Andrew Chen (Toronto, CA)
Abstract
Methods, systems, and techniques for performing a multi-agent simulation. A first large language model (LLM) is prompted to act as a sales agent of a financial institution to generate a sales pitch. A second LLM is prompted to act as a client of the financial institution to engage in a conversation with the first LLM in response to the sales pitch. A third LLM is prompted to act as a judge to generate and output a score of the conversation between the first and second LLMs. The score is saved and/or output to a display. The multi-agent simulation is used to create a digital twin of an actual conversation between the sales agent and client.
Claims
1. A multi-agent simulation method, comprising: (a) using a first large language model (LLM) prompted to act as a sales agent of a financial institution to generate a sales pitch; (b) using a second LLM prompted to act as a client of the financial institution to generate a response to the sales pitch; (c) using a third LLM prompted to act as a judge to generate and output a score of the response; and (d) outputting the score to a file or display.
2. The multi-agent simulation method of claim 1, (a) further comprising: initializing each of the first, second, and third LLM with a respective system prompt defining, for a respective LLM, a role, a background, a personality, one or more rules, or combinations thereof, (b) wherein the first LLM is prompted to sell a product, the second LLM is prompted to indicate a level of interest in the product in response to the sales pitch, and the third LLM is prompted to analyze the response to generate the score based on the level of interest.
3. The multi-agent simulation method of claim 2, (a) wherein the first LLM is initialized with a prompt comprising information on the product and/or one or more policies of the institution; and (b) wherein the second LLM is initialized with a prompt comprising a summary of information corresponding to the client.
4. The multi-agent simulation method of claim 3, (a) wherein the summary is generated using a fourth LLM by querying at least one knowledge source about the client; and (b) wherein the fourth LLM is initialized with a prompt comprising a name of the client.
5. The multi-agent simulation method of claim 4, further comprising: (a) querying a summary database for the summary; and (b) generating the summary using the fourth LLM if the summary for the client is not present in the summary database.
6. The multi-agent simulation method of claim 4, (a) wherein the at least one knowledge source comprises a database of the financial institution storing information on the client; and (b) wherein the summary comprises past interactions with the client and/or a portfolio of the client.
7. The multi-agent simulation method of claim 4, (a) wherein the at least one knowledge source is external to the financial institution; and (b) wherein the querying comprises a world wide web search comprising: (i) splitting a query string into first chunks, (ii) generating first embeddings from the first chunks; (iii) generating first vectors from the first embeddings; and (iv) querying a world wide web search engine using the first vectors; and (c) wherein the querying utilizes retrieval augmented generation.
8. The multi-agent simulation method of claim 7, (a) wherein the fourth LLM performs the world wide web search by iteratively querying the world wide web search engine a first predetermined number of times; and (b) wherein the fourth LLM generates each subsequent query based on results of previous queries.
9. The multi-agent simulation method of claim 8, wherein one or more queries correspond to one or more predefined questions.
10. The multi-agent simulation method of claim 8, wherein generating each subsequent query using the fourth LLM comprises: (a) splitting query results into second chunks; (b) generating second embeddings from the second chunks; and (c) generating second vectors from the second embeddings for processing by the fourth LLM.
11. The multi-agent simulation method of claim 1, further comprising: generating and outputting a reasoning for the score.
12. The multi-agent simulation method of claim 1, (a) wherein the third LLM generates multiple scores; (b) wherein a number of the multiple scores is a second predetermined number; (c) wherein the score is an average of the multiple scores; (d) wherein the multiple scores are iteratively generated over multiple scoring runs each associated with a respective reasoning, and (e) wherein the reasoning associated with the one of the multiple scores closest to the averaged score is output as the reasoning for the average score.
13. The multi-agent simulation method of claim 1, (a) wherein the first LLM generates a plurality of sales pitches using a plurality of LLMs, each configured to generate a respective sales pitch corresponding to a respective product; (b) wherein the second LLM generates a plurality of responses corresponding to the plurality of sales pitches; and (c) wherein the third LLM generates a plurality of scores corresponding to the plurality of sales pitches.
14. The multi-agent simulation method of claim 13, further comprising, after using the second LLM to generate the response, iteratively refining the sales pitch by: (a) generating a refined sales pitch by processing the response from the second LLM using the first LLM; and (b) generating an updated response to the refined sales pitch by processing the refined sales pitch using the second LLM; (c) wherein the updated response is used as the response for generating the refined sales pitch for a subsequent iteration.
15. The multi-agent simulation method of claim 14, (a) wherein the third LLM generates the score for each iteration based on the refined sales pitch and the updated response, the sales pitch being iteratively refined until the score is above a threshold value; or (b) wherein the sales pitch is iteratively refined for a third predetermined number of iterations.
16. The multi-agent simulation method of claim 2, (a) wherein an agent class object is used to initialize each LLM; (b) wherein each of the respective system prompt is provided to a constructor of the agent class object to initialize each LLM; and (c) wherein a function invoking agent class objects of the first and second LLM is called to cause the first and second LLMs to interact.
17. The multi-agent simulation method of claim 16, wherein each LLM inherits functionality of a shared LLM model and corresponds to a child agent class object of the agent class object.
18. The multi-agent simulation method of claim 4, (a) wherein each of the respective system prompt is generated according to a respective template; (b) wherein the respective template comprises at least one placeholder in addition to the role, the background, the personality, the one or more rules, or combinations thereof; and (c) wherein the at least one placeholder accepts: information on the product and/or one or more policies of the institution for prompting the first LLM; the summary for prompting the second LLM; the response for prompting the third LLM; and the name of the client for prompting the fourth LLM.
19. A multi-agent simulation system, comprising at least one processing unit configured to perform a method comprising: (a) using a first large language model (LLM) prompted to act as a sales agent of a financial institution to generate a sales pitch; (b) using a second LLM prompted to act as a client of the financial institution to generate a response to the sales pitch; (c) using a third LLM prompted to act as a judge to generate and output a score of the response; and (d) outputting the score to a file or display.
20. At least one non-transitory computer readable medium having stored thereon computer program code that is executable by at least one processor and that, when executed by the at least one processor, causes the at least one processor to perform a multi-agent simulation method comprising: (a) using a first large language model (LLM) prompted to act as a sales agent of a financial institution to generate a sales pitch; (b) using a second LLM prompted to act as a client of the financial institution to generate a response to the sales pitch; (c) using a third LLM prompted to act as a judge to generate and output a score of the response; and (d) outputting the score to a file or display.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0041] In the accompanying drawings, which illustrate one or more example embodiments:
DETAILED DESCRIPTION
[0050] A multi-agent system (MAS) is a system that comprises multiple intelligent agents that each fulfill a particular role. Each agent may be implemented using one or more large language models (each an LLM). Through appropriate prompting and context windows, the different intelligent agents may collectively form a MAS that mimics a real-world environment. Creating a multi-agent system that suitably mimics a real-world problem is technically challenging. Namely, the various LLMs used to implement the multi-agent system need to be appropriately selected and prompted, with suitably large context windows, in order to adequately mimic the real-world environment.
[0051] At least some of the embodiments described herein are directed at a multi-agent system that is designed to mimic the sales ecosystem of a financial institution. By creating a virtual world that is a digital twin of that ecosystem, interactions between clients (represented by client agents), researchers (represented by researcher agents), financial institutions (represented by sales agents) and managers (represented by judging/manager agents) can be simulated, and the outcomes analyzed to guide real-world behavior. In at least some embodiments, the interactions can be used to generate and provide product recommendations. Each agent is engineered with and ingests various prompts that help it output a prediction of expected, real-world behavior. The client agent, for example, is given prompts that represent the goals and needs of actual clients and that comprise information, often available to the financial institution, such as the financial products currently owned by the client and the client's trading history.
[0052] In at least some embodiments, these prompts may be enhanced by scraping a data source; the data source may be Internet content, such as data available on the World Wide Web. For example, clients may have websites that provide information such as the client's mission statement, or quarterly/annual financial statements together with related management discussion and analysis. This data may be scraped and used as a client summary that is provided as a prompt to the client agent. Similarly, the financial institution's website may be scraped to obtain information such as revenue targets and used to customize the sales agent's prompts. The sales agent can also be provided with specific goals, such as to create revenue for the financial institution and the client as well as to ensure that the client has a need for potential product offerings.
[0053] The client and sales agents interact with each other (they talk) in the context of the sales agent attempting to sell the client agent a product, which simulates their interactions in real life. The conversation is recorded and analyzed by the judging agent to gauge the client agent's receptiveness to onboarding/purchasing that product, which can be used to guide real-world interactions. In at least some embodiments, each product may have its own, dedicated sales agent. During the judging agent's deliberation, the judging agent may also consider previous conversations recorded between the sales and client agents and/or real-world salesperson-client interactions to increase accuracy.
[0054] In at least some embodiments, the judging agent scores each simulated conversation between a client agent and a sales agent for each product that the client agent may potentially purchase. This allows a real-world individual to take the ranked products and focus on those with the higher scores, thereby leveraging LLMs, and more particularly the context window of LLMs that permits them to maintain all or substantial portions of the conversation between the agents, to better simulate real-world performance between diverse parties with distinct interests.
[0055] In at least some embodiments, a research agent can also be implemented, responsible for collecting information pertaining to the client that is used to prompt the client agent.
[0056] A number of challenges exist for selling or recommending products to a client. Within a financial institution, the cross-selling process is largely manual, which places significant operational burden on the people involved in the process. Recommending suitable products can also be difficult as it depends on a large number of factors such as interests and financial capabilities/behaviors of the client as well as the details of the products themselves. Further, salespersons of the financial institution do not always have visibility into what documents the institution currently holds for a client, and hence which asset class creates minimal additional work to onboard a client onto. In addition, salespersons often do not have a way to simulate what a cross-selling conversation looks like for clients with specific sets of attributes and business activities.
[0057] Accordingly, the present disclosure can provide systems and methods for a multi-agent simulation system which can be used for product recommendations. For example, the parties involved in cross-selling and product recommendations can be simulated as agents in order to determine suitable products based on attributes of both the products and the clients.
[0058] Referring now to
[0059] Referring now to
[0061] In some embodiments, the user may also provide the system 300 with the potential product(s) they are interested in selling. The user can name the product(s) or select them from a list of available products. In particular, the system 300 can be implemented by a particular entity such as a financial institution and accordingly is coupled to or comprises database(s) storing proprietary information associated with the entity such as product offerings and details thereof. Once the client and product information is received by the system 300, agents 302a-c, 304, 306, and 310 are generated/initialized, as described below.
[0062] Referring to
[0063] The client agent 304 is prompted using a system prompt 312 and can simulate the role of the existing or prospective client, for example as identified by the user. The system prompt 312 comprises a first portion with instructions (in
[0064] The client summary 502 may be generated and supplied by a researcher agent 306, which as described above may scrape data sources such as search engines that search the World Wide Web or knowledge bases in the form of client databases internal to the financial institution. Those internal client databases may store information such as transactions the client has completed, cross-selling history for similar clients, and information generated by in-house research analysts. The researcher agent 306 is prompted to parse and retrieve data from external data sources, which is used to generate the client summary 502. The researcher agent 306 can also supplement the externally retrieved data with the internal data described above. To initialize the researcher agent 306, a system prompt can be input to supply the agent with its role and personality.
[0065] Following appropriate prompting, the system 300 prompts the client agent 304 to interact separately with each of the sales agents 302a-c, with the result being three respective simulated conversations 308. The system 300 may impose a limit on the number of exchanges for each conversation and/or impose a limit on the duration of conversation, both of which can be predefined or set by the user. Each of those conversations 308 is scored by a judging or analyst agent 310, which provides three respective scores 314. Scoring may comprise applying tone/sentiment analysis to the conversations, which the judging agent 310 may be instructed to take into account through a corresponding prompt. For example, the judging agent 310 can, based on the conversations 308, gauge the reception of the client agent 304 to the product offered by each of the sales agents 302a-c (e.g., interested in trying vs. not trying the product). Additionally, the judging agent 310 may also be prompted or required to provide reasoning for the generated score.
[0066] As shown in
[0067] The scores generated by the judging agent 310 and/or the conversations 308 are returned to the user. The user can accordingly use the scores to determine which product(s) to recommend or sell to the given client (e.g., which product(s) may be more likely to be purchased). The user can also use the conversation 308 to formulate the selling strategy, for example by using the talking/selling points of sales agents 302a-c based on how receptive the client agent 304 is.
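By way of illustration, the use of the scores to focus on the most promising products can be sketched in Python as follows. The product names and scores below are hypothetical stand-ins; a real system would take the scores 314 produced by the judging agent 310.

```python
# Illustrative sketch (not the actual system): ranking candidate products
# by the judging agent's scores so the user can focus on the highest-
# scoring offerings first.

def rank_products_by_score(scores):
    """Return (product, score) pairs sorted from highest to lowest score."""
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical scores for three simulated conversations 308.
example_scores = {
    "FX forwards": 7.7,
    "Commodity swaps": 4.0,
    "Equity derivatives": 9.1,
}
ranked = rank_products_by_score(example_scores)  # highest score first
```

The sorted list gives the user both an ordering of products to recommend and, via the associated conversations 308, the talking points that produced each score.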
[0069] As shown in
[0070] In particular, the sales agent 302 receives context materials such as policies 406 in the form of sales mandates, financial institution policies, and desk overviews, which are summaries of products sold by desks directed at selling products of various types (e.g., a commodities desk for making commodities trades). These are provided as context to the LLM used for the sales agent 302 following which the sales agent 302 creates and delivers a sales pitch to the client agent 304, which responds. Both the pitch and the response are expressed in a string format.
[0071] In some embodiments, the conversations 308 may each be a single exchange. For example, the sales agent 302 can generate a sales pitch for its corresponding product, output as a string. Correspondingly, the sales pitch is ingested by the client agent 304, which outputs, as a string, a response comprising an indication of interest or non-interest in the product and the reasoning for that indication. The judging agent 310 can analyze this conversation 308 and output the corresponding score and reasoning (if applicable), for example as a string. In this embodiment, the sales pitch may be omitted from the conversation 308, with the judging agent 310 only analyzing the response by the client agent 304 to save computational resources.
[0072] In an alternative embodiment, one or more conversations 308 may comprise multiple exchanges between the agents 302, 304 which are iteratively generated. For example, each response from the client agent 304 may be ingested by the sales agent 302. In response, the sales agent 302 may modify/improve its sales pitch or address concerns/reservations about the product based on the response, which is then ingested by the client agent 304 to generate an updated response. This can be useful in cases where the client agent 304 indicates that they are not interested in the product within the response. This process may be iteratively repeated until the client agent 304 demonstrates interest in the product within the response or up to a maximum number of preset or user-defined exchanges. The conversations 308 can then comprise the series of exchanges or optionally the final exchange for analysis by the judging agent 310. In some embodiments, the judging agent 310 can evaluate the sales pitch and/or response for each exchange between the agents 302, 304 (e.g., for each iteration) to output a score corresponding thereto. The client agent 304 may be considered to be interested in the product if the score is above a predefined threshold, after which exchanges may be terminated.
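The iterative refinement loop described above can be sketched as follows. The callables passed in (refine_pitch, respond, score_exchange) are assumed stand-ins for LLM invocations of the sales agent 302, client agent 304, and judging agent 310, and their names are illustrative rather than the actual interface.

```python
# Minimal sketch of the iterative pitch-refinement loop. The agent calls
# are stand-ins (plain functions) for LLM invocations; the names are
# assumptions for illustration, not the patent's API.

def run_refinement_loop(refine_pitch, respond, score_exchange,
                        initial_pitch, threshold=8, max_exchanges=5):
    """Refine a pitch until the judge's score meets the threshold or the
    exchange limit is reached. Returns the exchange history and last score."""
    pitch = initial_pitch
    history = []
    score = 0
    for _ in range(max_exchanges):
        response = respond(pitch)                 # client agent ingests the pitch
        history.append((pitch, response))
        score = score_exchange(pitch, response)   # judging agent scores the exchange
        if score >= threshold:                    # client deemed interested; stop
            break
        pitch = refine_pitch(response)            # sales agent addresses concerns
    return history, score
```

With stub functions in place of the agents, the loop terminates either on a passing score or after the preset number of exchanges, matching the two stopping conditions described above.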
[0073] As described above, the judging agent 310 subsequently scores the conversations 308, outputting both the score and reasoning for the score. The judging agent 310 may re-run its scoring analysis any number of times, with the resulting score being the average of the numeric scores generated during the individual runs, and the resulting set of reasons provided with the score being the set of reasons provided in respect of a score for any one of the runs that is closest to the average score. In particular, to ensure consistency in scoring, the judging agent 310 can independently iterate a preset or user-defined number of analyses on the conversations 308. For example, if the judging agent 310 performs three scoring runs resulting in number scores of 5/10, 8/10, and 10/10 with respective sets of reasons for each, the average numeric score is 7.7/10 and the reasons generated for the 8/10 score are provided in connection with the averaged 7.7/10 score. In some embodiments, all generated scores and the corresponding reasonings may be output.
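The averaging-and-reasoning-selection step can be expressed directly in code. The following sketch reproduces the 5/10, 8/10, 10/10 example from the text; the reasoning strings are hypothetical.

```python
# Aggregate multiple judging runs: average the numeric scores and keep the
# reasoning from the run whose score is closest to that average.

def aggregate_scores(runs):
    """runs is a list of (score, reasoning) pairs from individual
    scoring runs. Returns (average score, selected reasoning)."""
    average = sum(score for score, _ in runs) / len(runs)
    # Pick the run whose score has the smallest distance to the average.
    _, closest_reasoning = min(runs, key=lambda run: abs(run[0] - average))
    return round(average, 1), closest_reasoning

# The example from the text: three runs scored 5/10, 8/10, and 10/10.
runs = [
    (5, "lukewarm interest"),                  # hypothetical reasoning text
    (8, "strong fit, minor cost concerns"),
    (10, "clear need for the product"),
]
average_score, reasoning = aggregate_scores(runs)  # 7.7 with the 8/10 reasoning
```

Averaging over independent runs smooths the run-to-run variance of the LLM judge, while attaching the nearest run's reasoning keeps the output explainable.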
[0074] An LLM server 402 hosts the LLMs used for the sales agents 302a-c (referred to in
[0075] More particularly, in
[0076] A ClientAgent class 604, SalesAgent class 606, AnalystAgent class 608, and ResearcherAgent 610 class, respectively corresponding to and used in initializing the client agent 304, sales agents 302a-c, judging agent 310, and researcher agent 306, inherit the attributes of the Agent class 602 (e.g., each a child of the agent class 602). As described below in respect of the constructors for the client agent 304 and sales agent 302, the ClientAgent and SalesAgent classes 604, 606 additionally comprise client_data and sales_data variables, respectively. The client_data and sales_data parameters are used for data entry and respectively populate the system prompt of the client agent 304 and sales agent 302 with relevant client/sales data that will help the agent set up its persona (e.g., directly by the system or entered by the user). In particular, the client_data can correspond to client summary 502 as generated by the researcher agent 306. The sales_data can correspond to and comprise information about the product as well as the policies 406. As described above, the context materials may be provided via the sales_data parameter when calling the sales agent's 302 constructor.
[0077] The ResearcherAgent class 610 similarly comprises the additional client_data variable, and also a get_company_brief function that receives as input the number of searches or the maximum number of searches to be done using a World Wide Web search engine, such as the Google search engine, and that returns the factual client summary 502 as a string. Note that the client_data received by the ResearcherAgent class 610 may, but need not, be equivalent to the client_data of the ClientAgent class 604. In particular, the client_data input to the ResearcherAgent class 610 may be limited to the name of the client (e.g., for a prospective client) or internal information such as that available in internal database(s)/documents for the client (e.g., for an existing client). The researcher agent 306 initialized from the ResearcherAgent class 610 subsequently refines and supplements the client information to generate the client summary 502, which may be used for the client_data parameter of the ClientAgent class 604 constructor.
[0078] To initialize the agents 304, 302, 306, and 310 for the system 300, the agent classes 604, 606, 608, and 610 may be called. The constructors/variables of the agent classes 604, 606, 608, and 610 are utilized to provide the information that forms the basis for the corresponding agents 304, 302, 306, and 310. Once initialized, the agents 302, 304, 306, and 310 can be directly invoked and interacted with, for example using the generate_response method, where the history of the responses generated by the agent can be accessed using the conversation_history variable. In some embodiments, the constructors can comprise the system prompts used to initialize the agents.
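The class hierarchy described above can be sketched in Python as follows. The prompt wording and the body of generate_response are illustrative assumptions; a real implementation would forward the system prompt and conversation history to an LLM server rather than return a canned string.

```python
# Hedged sketch of the agent class hierarchy (Agent class 602 and its
# children). Prompt text and method bodies are placeholders, not the
# actual system.

class Agent:
    """Base class holding the system prompt and conversation history."""

    def __init__(self, system_prompt):
        self.system_prompt = system_prompt
        self.conversation_history = []

    def generate_response(self, message):
        # Stand-in for an LLM call; a real implementation would send
        # self.system_prompt plus the history to the LLM server 402.
        reply = f"[{self.__class__.__name__}] reply to: {message}"
        self.conversation_history.append((message, reply))
        return reply


class ClientAgent(Agent):
    def __init__(self, client_data):
        # client_data corresponds to the client summary 502.
        super().__init__(f"You are the client. Background: {client_data}")


class SalesAgent(Agent):
    def __init__(self, sales_data):
        # sales_data carries product information and the policies 406.
        super().__init__(f"You are a sales agent. Product/policies: {sales_data}")


class AnalystAgent(Agent):
    def __init__(self):
        super().__init__("You are a judge scoring the conversation.")
```

Placing the prompt and history on the shared base class lets every agent type inherit the same invocation interface while differing only in how its system prompt is populated.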
[0079] As discussed in respect of the
[0080] The researcher agent 306 may use retrieval augmented generation, in which the researcher agent 306 references an authoritative knowledge base outside of its training data sources before generating a response, to obtain the most relevant results from its queries to other knowledge bases. Once research completes in this way, the researcher agent 306 provides a summary in the form of a brief that encapsulates the results of its research; these comprise the factual client summary 502, which is output as a string.
[0081] The researcher agent 306 in this embodiment is tasked with finding the mission statement, investment philosophy, product desires, and other relevant aspects of the client that may be useful for an actual salesperson to know and that are useful for mimicking the actual performance of a managing employee of the client.
[0082] The researcher agent 306 uses these materials and outputs the facts portion of the system prompt 312 as a string, for example comprising the client summary 502. The system prompt, including this facts portion, may be provided via a client data parameter when calling the client agent's 304 constructor.
[0084] The query string can be sent to a query engine that splits the string into chunks of text. Embeddings are formed from those chunks at block 702, for example using a BAAI BGE model; and vectors are generated from those embeddings at block 704, for example using the FAISS library. Those vectors are used for a query at block 706, for example by using QA Retriever to perform Google searches based on the vectors, the results of which are returned to the LLM server 402 that is fulfilling the functionality of the researcher agent 306. In some embodiments, the search results can be processed, for example by the query engine and text splitter. In particular, the query engine may use bs4 (e.g., HTML and XML) and OCR (e.g., image and PDF) to process the scraped data and transform it into structured natural language results. The text splitter can split the converted natural language results into chunks for processing by the researcher agent 306, which in some embodiments includes converting the chunks into embeddings and then vectors using blocks 702 and 704, in the same manner as described above.
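A minimal sketch of the chunking-and-embedding front end of this pipeline (blocks 702 and 704) is shown below. The toy embed() function stands in for a real encoder such as a BAAI BGE model, and is an assumption for illustration only; it simply buckets character codes so the example is self-contained and runnable.

```python
# Illustrative sketch of the chunk -> embedding -> vector stages of the
# query pipeline. embed() below is a toy stand-in for a real embedding
# model (e.g., a BAAI BGE encoder); do not use it for actual retrieval.

def split_into_chunks(text, size=40):
    """Split a query string into fixed-size text chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(chunk, dims=8):
    """Toy embedding: bucket character codes into a fixed-length vector."""
    vector = [0.0] * dims
    for position, char in enumerate(chunk):
        vector[position % dims] += ord(char)
    return vector

def build_query_vectors(query):
    """Chunk the query and embed each chunk, yielding the vectors that
    would be handed to the search stage (block 706)."""
    return [embed(chunk) for chunk in split_into_chunks(query)]

vectors = build_query_vectors("What is the client's investment philosophy?")
```

In the described system the resulting vectors would then be indexed (for example with the FAISS library) and used to drive the search at block 706; that stage is omitted here because it depends on external services.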
[0085] By analyzing the returned search results, the researcher agent 306 can generate a further query string, if the researcher agent 306 determines it necessary and the limit on the number of searches has not been reached. For example, the researcher agent 306 can generate a new query to explore a different aspect of the client or to further explore a particular aspect. If further queries are required, the process repeats by returning to block 708 where a new query is generated. At each iteration, the researcher agent 306 may summarize the search results for the current query. Alternatively, if the researcher agent 306 determines that the scope of search results is satisfactory and/or the limit on the number of searches has been reached, the query results over the iterations are summarized to generate the client summary 502.
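The iterative research loop can be sketched as follows. The search, next_query, and summarize callables are assumed stand-ins for the search engine and the researcher agent's LLM calls; both the names and the control flow details are illustrative.

```python
# Sketch of the iterative research loop: issue up to num_search_limit
# queries, derive each follow-up query from the previous results, then
# summarize everything into the client summary. The callables are
# stand-ins for the search engine and LLM, assumed for illustration.

def research_client(initial_query, search, next_query, summarize,
                    num_search_limit=3):
    """Run up to num_search_limit searches and return a client summary."""
    query = initial_query
    all_results = []
    for _ in range(num_search_limit):
        results = search(query)
        all_results.append(results)
        query = next_query(results)   # LLM drafts a follow-up query,
        if query is None:             # or signals coverage is satisfactory
            break
    return summarize(all_results)     # condensed into the client summary
```

The two exit conditions in the loop mirror the text: either the agent decides the results are satisfactory (next_query returns None here) or the search limit is exhausted.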
[0086] In some embodiments, the queries may be a series of pre-determined, pre-generated, or pre-defined questions/queries, for example equal in number to the number of searches. Accordingly, the researcher agent 306 may perform the queries in the sequence of the questions. The questions can be given to the researcher agent 306 as a prompt in a template comprising placeholders for the client_data parameter (e.g., the client name).
[0088] The multi-agent simulation system 300 may be used to initiate a conversation in respect of a particular product between the sales and client agents 302, 304 by running the following logic, where each element to the left of = is assigned the result of the function to its right: [0089] researcher_agent=ResearcherAgent(Company Name, [onboarded products]) [0090] speaking_points, company_brief=researcher_agent.construct_company_brief(num_search_limit) [0091] client_agent=ClientAgent(company_brief) [0092] sales_agent=SalesAgent(Sold Product is XYZ, speaking_points, company_brief) [0093] analyst_agent=AnalystAgent( ) [0094] run_simulated_conversation(client_agent, sales_agent)
[0095] The above code is run for each client, and for each product that may be sold to that client, for which a score is to be generated.
[0096] As shown above, the researcher agent 306 (researcher_agent) is initialized with the name of the client (Company Name) and the current client holdings ([onboarded products]) corresponding to the available client information in the research materials 404 in the client_data parameter for its system prompt. In return, the researcher agent 306 outputs the client summary 502 (company_brief) as well as a list of potential talking points for key aspects of the client (speaking_points) given a maximum number of queries (num_search_limit) using its constructor. The client agent 304 (client_agent) is initialized with a system prompt comprising the client summary 502 (company_brief) given using the client_data parameter.
[0097] The sales agent 302 (sales_agent) is initialized with a system prompt comprising the information about the product it is selling (Sold Product is XYZ), given using the sales_data parameter. In this embodiment, the system prompt for initializing the sales agent 302 also comprises the client summary 502 (company_brief) as well as the potential talking points (speaking_points), which can be provided using additional parameters.
[0098] The judging agent 310 (analyst_agent) can be initialized without any input. The conversations between the agents 302 and 304 can be simulated using a run simulation function (run_simulated_conversation), the results of which are analyzed by the judging agent 310.
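A runnable paraphrase of this orchestration logic, with stub agents in place of LLM-backed classes, is sketched below. The class bodies and returned strings are illustrative assumptions; real agents would issue LLM calls rather than return canned values.

```python
# Runnable paraphrase of the orchestration at paragraphs [0089]-[0094],
# using simplified stub classes. Method bodies are stand-ins for LLM and
# search-engine calls and are assumptions for illustration only.

class ResearcherAgent:
    def __init__(self, company_name, onboarded_products):
        self.company_name = company_name
        self.onboarded_products = onboarded_products

    def construct_company_brief(self, num_search_limit):
        # Stand-in for the iterative web research loop.
        speaking_points = [f"{self.company_name} growth plans"]
        company_brief = f"Brief on {self.company_name} ({num_search_limit} searches max)"
        return speaking_points, company_brief


class ClientAgent:
    def __init__(self, company_brief):
        self.company_brief = company_brief


class SalesAgent:
    def __init__(self, product, speaking_points, company_brief):
        self.product = product


class AnalystAgent:
    pass


def run_simulated_conversation(client_agent, sales_agent):
    # Stand-in for the simulated exchange between the two agents.
    return f"pitch for {sales_agent.product} -> client response"


researcher_agent = ResearcherAgent("Acme Corp", ["FX forwards"])
speaking_points, company_brief = researcher_agent.construct_company_brief(3)
client_agent = ClientAgent(company_brief)
sales_agent = SalesAgent("Product XYZ", speaking_points, company_brief)
analyst_agent = AnalystAgent()
conversation = run_simulated_conversation(client_agent, sales_agent)
```

As the text notes, this sequence would be repeated for each client and for each product for which a score is to be generated, with the resulting conversation handed to the analyst agent for scoring.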
[0099] Note that the agents 302, 304, 306, and 310 can also be generally initialized for their role and task prior to receiving product and client specific prompts, as these roles and tasks can be consistent between clients and products. For example, the role and prompt parameters of a sales agent 302 may be pre-initialized to specify that they are a financial professional that is trying to sell a product.
[0100] Example prompts for the sales agent 302, client agent 304, researcher agent 306, and judging agent 310 are provided below. In the given example, each prompt includes a system prompt identifying the background, context, and role for the tasks (e.g., a financial sales agent interacting with a client). The system prompt can also identify the high-level description of the task (e.g., selling a product). The system prompt can also outline requirements and rules for the agents (e.g., consider the company background and highlight product benefits) as well as their outputs including a data or file type of the agent output (e.g., output in JSON). The system prompt can also specify the personality of the agent (e.g., charismatic and persuasive), if applicable. Each system prompt may be generated according to a respective prompt template, for example a prompt template may be applicable for each agent type. The prompt template can comprise placeholders for parameter modification and data insertion. For example, the prompt for the client agent 304 can comprise a placeholder for the client summary 502 and the prompt for the sales agent 302 can comprise a placeholder for the product information.
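The placeholder mechanism can be sketched with an ordinary format string. The template wording below is illustrative, not the actual system prompt; only the placeholder structure mirrors the description above.

```python
# Sketch of a prompt template with placeholders for parameter
# modification and data insertion. The wording is an illustrative
# assumption, not the system's actual prompt text.

CLIENT_PROMPT_TEMPLATE = (
    "You are a fund manager at a client of the financial institution.\n"
    "Personality: {personality}\n"
    "Company brief: {client_summary}\n"
    "Rules: respond to the sales pitch with your level of interest."
)

def build_client_prompt(personality, client_summary):
    """Fill the placeholders to produce the client agent's system prompt."""
    return CLIENT_PROMPT_TEMPLATE.format(
        personality=personality, client_summary=client_summary)

prompt = build_client_prompt("pragmatic and budget-conscious",
                             "Acme Corp, a mid-size pension fund")
```

An analogous template for the sales agent would swap the client_summary placeholder for product information and policies, keeping one template per agent type as described above.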
Example Sales Agent 302 Prompt
[0101] You are Dave, a sales representative at ABC Financial Institution, the largest investment bank in Canada and a leader in the global space, and your role today is in interacting with a potential client. You are making an outreach call to a potential client to sell the product described below.
[0102] A summary of your role and responsibilities is: {sales_mandate_summary}
[0103] Remain professional and begin your response naturally, explaining each step of your reasoning based on the following guidelines:
[0104] 1. Background: Consider your company's background, industry, mission statement, and any other relevant information.
[0105] 2. Personality: You are charismatic and persuasive. You care about closing the deal but you also care about providing the best service possible.
[0106] In your sales pitch to the client, highlight the benefits of the product, address any concerns the client may have, and provide a compelling argument for why the product is the best choice for the client.
[0107] In the above example, the sales_data parameter is given in {sales_mandate_summary}, which includes information about the product as well as policies 406.
Example Client Agent 304 Prompt
[0108] You are a fund manager in charge of an investment portfolio, and are responsible for managing the accounts your company has with financial institutions.
[0109] Remain professional and begin your response naturally, explaining each step of your reasoning based on the following guidelines:
[0110] 1. Background: Consider your company's background, industry, mission statement, and any other relevant information.
[0111] 2. Personality: You are pragmatic and somewhat budget-conscious. You care about efficiency and value for money but you also care about receiving top tier service.
[0112] Weigh the costs and the benefits to come to your conclusion and vary the strength of your response depending on how much a product is useful in your view, and any potential drawbacks.
[0113] It is okay to reject a product if it does not meet your self-assessed criteria.
[0114] Conclude your statement with a summary of your decision and any additional comments. Do not ask any follow-up questions and go straight to your decision.
[0115] You must remain true to the following company brief, and are to embody the values of your firm and make a decision on whether to purchase a product from a sales representative.
[0116] Company Brief: {company brief}
[0117] In the above example, the client_data parameter is given in {company brief}, which comprises the client summary 502.
Example Judging Agent 310 Prompt
[0118] You are an analyst reviewing the sales conversation between a sales representative and a potential client. You are tasked with determining a score for how receptive the client is to the sales pitch.
[0119] 1. Background: Consider the context of the conversation, the product being sold, and the client's needs. You care more about the long-term relationship between the client and the corporation. Even if the sale might not benefit the corporation in the short term, it might be beneficial in the long term.
[0120] 2. Personality: You are analytical and detail-oriented. You care about accuracy. Do not worry about hurting the client, the corporation, or the sales analyst. Your score should be unbiased based on your position and should highlight the needs of the client.
[0121] Evaluate the sales conversation and provide a score between 0 and 10, along with a brief reasoning for your score. A score of 0 indicates that the client is not receptive to the sales pitch, while a score of 10 indicates that the client is highly receptive.
[0122] You must provide a response in the following JSON format:
TABLE-US-00001
{ "score": <score>, "reasoning": <reasoning> }
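The judging agent's JSON output in the format above can be parsed and range-checked, for example as sketched below. Here, judge_reply is an illustrative example of the raw LLM text, not actual system output; real responses may need more defensive parsing.

```python
# Sketch of parsing and validating the judging agent's JSON output in the
# format above. judge_reply is an illustrative example of the raw LLM text,
# not actual system output.
import json

def parse_judge_output(raw: str):
    """Extract the score and reasoning, enforcing the 0-10 score range."""
    data = json.loads(raw)
    score = float(data["score"])
    if not 0 <= score <= 10:
        raise ValueError(f"score {score} is outside the expected 0-10 range")
    return score, data["reasoning"]

judge_reply = (
    '{"score": 7, "reasoning": '
    '"The client showed strong interest but raised cost concerns."}'
)
score, reasoning = parse_judge_output(judge_reply)
```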
Example Researcher Agent 306 Prompts
[0123] The following prompt is input to the researcher agent 306 to cause it to perform research:
[0124] You are a professional researcher with access to the Google search engine. Your task is to gather information about the company {client_name} to provide a comprehensive overview of their background, industry, mission statement, and any other relevant information useful to a representative of the company in charge of managing the company's investment decisions, goals, and strategy.
[0125] Go into detail about the company's history, financials, products, services, and any other relevant information that will help the representative understand the company's position in the market, and their interests.
[0126] Begin this chat with a search query, with no other additional comments or annotations, that will help you learn more about the company, which will be answered by results from the search engine.
[0127] Your search query should be formatted as a question, and one an expert Googler would make to learn more about the company. Do not add any other elements like Let's ask a question or Search Query: as they will break the search engine query. You must answer with only the search query, verbatim.
[0128] Keep track of the learnings from the search results, identify what further information is required, and keep responding with a single follow-up query to the search engine to gather more information.
[0129] Continue asking follow-up questions to responses.
[0130] Once you receive the instruction: Summarize your findings, you must produce a summary of the results of your web searches. You should not mention that you are unsure about a certain point in the summary; instead omit it entirely.
[0131] Start your summary immediately; do not preface it with any comments as it is not useful. Get to the point and be concise, but include all relevant information. The summary should be formatted without any extra comments at any point.
[0132] Currently the suite of products that the company trades with us are: [0133] {existing_products_list}
[0134] In the above example, the researcher agent 306 is given the client name in {client_name} and the client portfolio in {existing_products_list} for the client_data parameter.
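The iterative query-and-summarize behavior that the above prompt elicits can be sketched as follows. Here, query_llm and search_web are hypothetical stand-ins defined only so the sketch runs; a real system would call the LLM and the search engine.

```python
# Sketch of the researcher agent 306's iterative search loop: emit a query,
# receive results, ask follow-up queries up to a fixed budget, then summarize.
# query_llm and search_web are hypothetical stand-ins so the sketch runs.
def query_llm(messages: list) -> str:
    # Stand-in for one researcher-agent LLM call.
    return "What is Example Corp's investment strategy?"

def search_web(query: str) -> str:
    # Stand-in for a search-engine query.
    return "Example Corp focuses on diversified long-term holdings."

def research_client(client_name: str, max_queries: int = 5) -> str:
    """Iteratively query the web about a client, then summarize the findings."""
    history = [f"Gather information about the company {client_name}."]
    for _ in range(max_queries):
        # Each follow-up query is conditioned on all prior queries and results.
        query = query_llm(history + ["Respond with only your next search query."])
        results = search_web(query)
        history += [f"Query: {query}", f"Results: {results}"]
    # The final instruction triggers the summary used as the client summary 502.
    return query_llm(history + ["Summarize your findings"])

summary = research_client("Example Corp")
```

The fixed query budget (max_queries) corresponds to the predetermined number of queries described at step 806 below.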
[0135] The following is an example conversation involving the researcher agent 306 querying the Google search engine in response to the above prompt, beginning with an initial query about the client. The full conversation is omitted here; the returned search results are organized under headings such as Key Facts (e.g., Eligibility), How it Works (e.g., Contributions), Key Features, a Conclusion, and a summary of the most relevant information (e.g., Investment Objective).
[0176] The following is generated by the researcher agent 306 as a result of its research, and may be used as the client summary 502:
[0177] Company Brief: Here is a summary of my findings. (Body omitted; the brief comprises sections on History and Mission Statement; Financials and Investment Strategy; Key Investment Strategies, e.g., Diversification; Other Facts, e.g., Investment Team; and a Conclusion.)
[0190] At 802, client information such as the name of a potential or existing client is received, for example from a user of a financial institution interested in recommending one or more products to the client. Further client information available from internal data sources of the financial institution, such as the client portfolio, can also be retrieved, if present. The one or more products may also be identified. At 804, one or more sales agents 302, a client agent 304, a researcher agent 306, and a judging agent 310 are initialized, for example using the agent classes 604, 606, 608, 610, as described above.
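The per-role initialization at step 804 can be sketched as follows. The Agent dataclass and role prompts here are illustrative assumptions and do not reproduce the actual agent classes 604, 606, 608, 610.

```python
# Hedged sketch of step 804: initializing one agent per role from a common
# class. The Agent dataclass and role prompts are illustrative assumptions,
# not the actual agent classes 604, 606, 608, 610.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    system_prompt: str
    history: list = field(default_factory=list)  # conversation turns so far

def initialize_agents(product_info: str, client_name: str) -> dict:
    """Create the sales, client, researcher, and judging agents."""
    return {
        "sales": Agent("sales", f"You are a sales agent selling: {product_info}"),
        "client": Agent("client", f"You represent the client {client_name}."),
        "researcher": Agent("researcher", f"Research the company {client_name}."),
        "judge": Agent("judge", "Score the conversation between 0 and 10."),
    }

agents = initialize_agents("a group retirement savings product", "Example Corp")
```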
[0191] At 806, the researcher agent 306 is provided with the client information including the client name. The researcher agent 306 can query data or knowledge sources internal and external to the financial institution to gather information pertaining to the client, such as mission statement, investment philosophy, and product desires. In particular, the researcher agent 306 may iteratively perform a series of queries on the world wide web to gather client information, up to a predetermined number of queries. Each subsequent query may be based on the preceding queries and returned search results. At 808, the researcher agent 306 generates the client summary 502 from the client information collected using the queries.
[0192] At 810, the one or more sales agents 302 are prompted to generate their respective sales pitches. Each sales agent 302 and its respective sales pitch correspond to a respective one of the products, with each pitch generated using the corresponding product information provided to the sales agent 302. At 812, the client agent 304 is provided with the client summary 502. The client agent 304 generates a response to each sales pitch based on the information available in the client summary 502 and the sales pitch itself. In each response, the client agent 304 outlines its level of interest in the respective product as well as the corresponding reasons.
[0193] At 814, the judging agent 310 evaluates each of the responses and/or sales pitches and outputs a score for each product. Each score is generated based on the reception of the client agent 304 to the respective sales pitch and can indicate a likelihood of the client to purchase the product. A higher score can indicate that the user could be more successful in recommending the corresponding product to the client. The judging agent 310 can iteratively generate a plurality of scores for each response/sales pitch and output an average of the scores as the final score. Additionally, the judging agent 310 can also provide reasoning for the generated score(s).
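The iterative scoring and averaging at step 814 can be sketched as below. Here, score_once is a hypothetical stand-in for a single judging-agent evaluation.

```python
# Sketch of the iterative scoring in step 814: the judging agent is invoked
# several times per response and the scores are averaged into the final
# score. score_once is a hypothetical stand-in for one judging-agent call.
from statistics import mean

def score_once(response: str) -> float:
    # Stand-in for one judging-agent evaluation returning a 0-10 score.
    return 7.0

def final_score(response: str, n_samples: int = 5) -> float:
    """Average several judge scores to reduce single-sample variance."""
    scores = [score_once(response) for _ in range(n_samples)]
    return mean(scores)

avg = final_score("The client expressed strong interest but raised cost concerns.")
```

Averaging over repeated samples is one way to smooth out the run-to-run variability of an LLM judge before reporting the final score.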
[0194] The processor used in the foregoing embodiments may comprise, for example, a processing unit (such as a processor, microprocessor, or programmable logic controller) or a microcontroller (which comprises both a processing unit and a non-transitory computer readable medium). Examples of computer readable media that are non-transitory include disc-based media such as CD-ROMs and DVDs, magnetic media such as hard drives and other forms of magnetic disk storage, semiconductor based media such as flash media, random access memory (including DRAM and SRAM), and read only memory. As an alternative to an implementation that relies on processor-executed computer program code, a hardware-based implementation may be used. For example, an application-specific integrated circuit (ASIC), field programmable gate array (FPGA), system-on-a-chip (SoC), artificial intelligence accelerator, or other suitable type of hardware implementation may be used as an alternative to, or to supplement, an implementation that relies primarily on a processor executing computer program code stored on a computer medium.
[0195] The embodiments have been described above with reference to flow, sequence, and block diagrams of methods, apparatuses, systems, and computer program products. In this regard, the depicted flow, sequence, and block diagrams illustrate the architecture, functionality, and operation of implementations of various embodiments. For instance, each block of the flow and block diagrams and operation in the sequence diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified action(s). In some alternative embodiments, the action(s) noted in that block or operation may occur out of the order noted in those figures. For example, two blocks or operations shown in succession may, in some embodiments, be executed substantially concurrently, or the blocks or operations may sometimes be executed in the reverse order, depending upon the functionality involved. Some specific examples of the foregoing have been noted above but those noted examples are not necessarily the only examples. Each block of the flow and block diagrams and operation of the sequence diagrams, and combinations of those blocks and operations, may be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
[0196] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting. Accordingly, as used herein, the singular forms a, an, and the are intended to include the plural forms as well, unless the context clearly indicates otherwise (e.g., a reference in the claims to a processor or the processor does not exclude embodiments in which multiple processors are used). It will be further understood that the terms comprises and comprising, when used in this specification, specify the presence of one or more stated features, integers, steps, operations, elements, and components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and groups. Directional terms such as top, bottom, upwards, downwards, vertically, and laterally are used in the following description for the purpose of providing relative reference only, and are not intended to suggest any limitations on how any article is to be positioned during use, or to be mounted in an assembly or relative to an environment. Additionally, the term connect and variants of it such as connected, connects, and connecting as used in this description are intended to include indirect and direct connections unless otherwise indicated. For example, if a first device is connected to a second device, that coupling may be through a direct connection or through an indirect connection via other devices and connections. Similarly, if the first device is communicatively connected to the second device, communication may be through a direct connection or through an indirect connection via other devices and connections.
[0197] Use of language such as at least one of X, Y, and Z, at least one of X, Y, or Z, at least one or more of X, Y, and Z, at least one or more of X, Y, and/or Z, or at least one of X, Y, and/or Z, is intended to be inclusive of both a single item (e.g., just X, or just Y, or just Z) and multiple items (e.g., {X and Y}, {X and Z}, {Y and Z}, or {X, Y, and Z}). The phrase at least one of and similar phrases are not intended to convey a requirement that each possible item must be present, although each possible item may be present.
[0198] It is contemplated that any part of any aspect or embodiment discussed in this specification can be implemented or combined with any part of any other aspect or embodiment discussed in this specification, so long as those parts are not mutually exclusive with each other.
[0199] The scope of the claims should not be limited by the embodiments set forth in the above examples, but should be given the broadest interpretation consistent with the description as a whole.
[0200] It should be recognized that features and aspects of the various examples provided above can be combined into further examples that also fall within the scope of the present disclosure. In addition, the figures are not to scale and may have size and shape exaggerated for illustrative purposes.