COMPUTER-IMPLEMENTED SYSTEM AND METHOD FOR COLLECTING FEEDBACK
20210295186 · 2021-09-23
Abstract
A computer-implemented system in which information is obtained from a user using an automated (i.e. non-human, computerised) interface that is arranged to dynamically adapt questioning based on received feedback from the user. The automated interface may comprise: (i) an agent for extracting information (e.g. meaning, sentiment, etc.) from feedback information from the user and using it to determine a direction for further questioning, and (ii) an agent for controlling the manner in which the further questioning is expressed to the user. The interface may be embodied as a feedback collection manager communicatively connected over a network with a client device to receive the feedback information, wherein the feedback collection manager comprises: an AI-based topic generator module configured to generate a query topic; and an automated interactive question generator for directing an interactive information exchange with the client device using the generated query topic.
Claims
1. A computer-implemented system for collecting user feedback, the system comprising: a client device configured to collect and communicate feedback information from a user; and a feedback collection manager communicatively connected over a network with the client device to receive the feedback information, wherein the feedback collection manager comprises: an AI-based topic generator configured to generate, using the feedback information received from the client device, a query topic; and an automated interactive question generator for directing an interactive information exchange with the client device using the generated query topic.
2. The system of claim 1, wherein the feedback information collected by the client device comprises answer data received in response to a question from the automated interactive question generator, and behavioural data collected from the user when providing the answer data.
3. The system of claim 2, wherein the answer data comprises a spoken response from the user.
4. The system of claim 2, wherein the behavioural data comprises emotional state data for the user.
5. The system of claim 4, wherein the feedback information collected by the client device comprises facial images of the user, and the emotional state data is extracted from the facial images.
6. The system of claim 2, wherein the feedback collection manager is arranged to detect, based on the behavioural data collected from the user when providing the answer data, a sentiment associated with the answer data, and the automated interactive question generator is arranged to use both the detected sentiment and the generated query topic in the interactive information exchange with the client device.
7. The system of claim 1, wherein the automated interactive question generator comprises a natural conversation module arranged to engage in interactive dialogue with the user.
8. The system of claim 2, wherein the behavioural data comprises physiological data for the user.
9. The system of claim 2, wherein the feedback information is in relation to the user's exposure to a stimulus, and the client device is further configured to collect the behavioural data from the user during the user's exposure to the stimulus.
10. The system of claim 9, wherein the feedback information communicated by the client device includes the behavioural data.
11. The system of claim 1, wherein the feedback collection manager is arranged to receive the feedback information relating to different users from a plurality of client devices.
12. The system of claim 1, wherein the AI-based topic generator comprises a topic repository and a topic determination agent coupled to the topic repository, wherein the AI-based topic generator is configured to: extract input data from the feedback information; and process the input data by the topic determination agent to generate output data that is indicative of the query topic.
13. The system of claim 12, wherein the topic repository comprises a world model from which the topic determination agent is arranged to output a new query topic based on the input data.
14. The system of claim 12, wherein the topic repository comprises a database of previously received feedback information from which the topic determination agent is arranged to generate output data that is indicative of a related query topic based on the input data.
15. The system of claim 12, wherein the topic determination agent is arranged to compare the output data with an objective, and select the query topic based on the result of the comparison.
16. The system of claim 12, wherein the topic determination agent comprises a machine learning algorithm that uses a model selected from the set of models consisting of a supervised learning model, an adversarial network model, and a model obtained from reinforcement training.
17. The system of claim 1, wherein the feedback collection manager comprises a starter topic repository and the automated interactive question generator is arranged to commence the interactive information exchange by selecting a starter topic for use from the starter topic repository.
18. The system of claim 17, wherein the starter topic is selected based on a stimulus to which the user is exposed.
19. The system of claim 18, wherein the starter topic is selected based on behavioural response data from the user that is collected during the user's exposure to the stimulus.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0028] Embodiments of the invention are discussed in detail below with reference to the accompanying drawings.
DETAILED DESCRIPTION; FURTHER OPTIONS AND PREFERENCES
[0033] Embodiments of the invention relate to a system and method of collecting feedback data from a user in response to a certain stimulus, e.g. consumption of a piece of media content, such as a video, advertisement, song, or the like. The stimulus may be any kind of human-computer interaction. The invention may be applicable in any scenario where information is sought in relation to a stimulus or the performance of an action.
[0035] The system 100 in this example provides three main elements: (i) a content provider for supplying media content to a user environment, (ii) a user environment equipped with means for collecting and communicating user input (such as behavioural data, feedback answer data, etc.), and (iii) a computer-implemented feedback collection manager capable of communicating over the network to obtain feedback from the user environment.
[0036] The system 100 comprises one or more client devices 102 associated with a user environment 101, i.e. belonging to or associated with a given user. The client devices 102 are configured to enable the user to communicate over the network, and may further be configured to playback media content, e.g. via speakers or headphones and/or a display 104. The client devices 102 may also comprise or be connected to behavioural data capture apparatus, such as webcams 106, microphones, etc. Example client devices 102 include smartphones, tablet computers, laptop computers, desktop computers, etc.
[0037] The system 100 may also comprise one or more client sensor units, such as a wearable device 105 for collecting physiological information. Examples of physiological parameters that can be measured include voice characteristics, heart rate, heart rate variability, electrodermal activity (which may be indicative of arousal), breathing, body temperature, electrocardiogram (ECG) signals, and electroencephalogram (EEG) signals.
[0038] The client devices 102 are communicably connected over a network 108, such that they may receive media content 112 to be consumed, e.g. from a content provider server 110. This is one example of a stimulus presented to a user on which feedback can subsequently be collected by the system. However, the invention need not be limited to this type of stimulus. In particular, the stimulus may be supplied outside the networked environment shown here.
[0039] The system 100 is configured to enable a two-way feedback collection interaction 114 between the user environment 101 and a feedback collection manager 120 across the network 108. According to the invention, the feedback collection manager 120 is implemented wholly on a computer, and therefore may not require any human intervention during normal operation to collect feedback. The feedback collection manager 120 is discussed in more detail below.
[0040] During the feedback collection process, the feedback collection manager 120 drives an interactive information exchange in which question and answer data 122 are exchanged between the feedback collection manager 120 and the user environment 101. The interactive information exchange may be a dynamic questionnaire that is delivered over a suitable real-time web-based forum. Preferably, the interactive information exchange is a spoken exchange, in which a user hears a spoken question and replies using speech. However, other types of exchange are possible, e.g. text-based exchanges such as a web chat or the like. The feedback collection manager 120 may comprise an information seeking agent 124, which may be a computer-based entity for engaging in dialogue with a user. The information seeking agent 124 may be an AI-based natural language agent, such as Google's Duplex system, or the like. In this example, the information seeking agent 124 is configured to lead the interactive session using information received from the user environment.
[0041] The feedback collection manager 120 may utilise machine learning techniques to direct the information seeking agent 124 towards questions on a certain topic or area of interest. For example, the information seeking agent 124 or feedback manager may be provided with or configured with one or more high level objectives. A high level objective may be set for an interaction with a user, and may be aimed at seeking certain information in relation to a known event or stimulus. As described below, the feedback collection manager 120 may input information obtained from the user environment to a topic generator module to yield an output that can be used by the information seeking agent 124 to drive the interactive information exchange with the user, i.e. to provide one or more topics that will form the basis for the questions in the question and answer data 122.
[0042] The information seeking agent 124 may be arranged to direct the interactive information exchange in a manner that strikes a balance between the fulfilment of the objective(s) (e.g. based on an amount or quality of information in the received answer data) and the duration of the interactive information exchange. This can be a balance between cost (in terms of user time and system resources) and information maximisation and/or verification. However, other factors may be taken into account. In particular, the user's attentiveness or other measurement of engagement with the interactive information exchange may be used to determine when the exchange can be brought to a close. For example, if a user is detected to have a low level of attentiveness, the information seeking agent 124 may be configured to terminate the interactive information exchange even if it is otherwise desirable to obtain further answer data to fulfil the objective(s).
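By way of illustration only (this sketch is not part of the patent disclosure, and all thresholds and parameter names are assumptions), the continuation decision described above might be expressed as a simple rule combining objective fulfilment, session duration and user attentiveness:

```python
# Hypothetical sketch: decide whether to continue the interactive exchange,
# balancing objective fulfilment against session cost and attentiveness.
# All thresholds are invented for illustration.

def should_continue(fulfilment: float, elapsed_s: float,
                    attentiveness: float,
                    max_duration_s: float = 600.0,
                    min_attentiveness: float = 0.3,
                    target_fulfilment: float = 0.9) -> bool:
    """Return True if the interactive information exchange should continue."""
    if attentiveness < min_attentiveness:
        return False  # disengaged user: terminate early despite open objectives
    if fulfilment >= target_fulfilment:
        return False  # objectives sufficiently met: nothing more to ask
    if elapsed_s >= max_duration_s:
        return False  # session too long: cost outweighs further information gain
    return True
```

A real system would derive the attentiveness measure from the collected behavioural data rather than take it as a ready-made score.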
[0043] In one example, the topic generator module may include a topic determination agent that is based on a machine learning world model, i.e. an algorithm trained on a broad range of external data (e.g. news reports, wiki pages, or other data sources that provide information that can be used to derive relations between words and phrases). In combination, the high level objective(s) and user input can be used in conjunction with the world model and/or information relating to the known stimulus, to drive the interactive information exchange with the user.
[0044] The topic determination agent may be a supervised learning algorithm that is obtained from training data that includes real human-based feedback interactions (e.g. traditional questionnaires), in which other user data (e.g. behavioural data) is collected during exposure to the stimulus and/or during feedback collection. Emotional state data may be obtained from the behavioural data, as is known. Training data that includes behavioural data of this kind may be included in the world model discussed above.
[0045] In another example, the topic determination agent may be based on an adversarial network, in which candidate topics are generated by one network and evaluated by another. In a further example, the topic determination agent may be based on reinforcement learning, e.g. with an objective to learn something specific from the feedback.
[0046] The topic determination agent may have access to a list of prescribed topics, whereby the avenues of questioning available in the dynamic questionnaire are bounded by the prescribed topics. The list of prescribed topics may be arranged as a local model or topic repository, i.e. containing interrelated threads of information that relate to the prescribed topics, whereby the local model can return threads of information that correlate with or match input data in order to assist in determining a query topic. Such a questionnaire may be considered to operate under a global template, although the specific order of topics and line of questioning may differ from user to user. In other examples, however, the topic determination agent may enable the generation and exploration of new topics based on the collected feedback. The generation of new topics may be driven by the underlying objective of the topic determination agent and the world model discussed above. As mentioned above, the local model or topic repository may also include feedback information (i.e. query topics and answer data, including emotional state information) from multiple users who have engaged in an interactive information exchange.
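A minimal sketch of the bounded topic repository described above (the topics, keywords and function names are invented for illustration) could match terms extracted from answer data against keyword threads associated with each prescribed topic:

```python
# Hypothetical prescribed-topic repository: each topic is associated with a
# thread of related keywords. Lookup returns topics whose keywords overlap
# with terms extracted from the user's answer data, ranked by overlap.

TOPIC_REPOSITORY = {
    "picture quality": {"blurry", "sharp", "colour", "resolution"},
    "sound": {"loud", "quiet", "music", "audio"},
    "story": {"plot", "character", "ending", "confusing"},
}

def match_topics(answer_terms: set[str]) -> list[str]:
    """Return prescribed topics ranked by keyword overlap with the answer."""
    scored = [(len(keywords & answer_terms), topic)
              for topic, keywords in TOPIC_REPOSITORY.items()]
    # Highest overlap first; topics with no overlap are excluded.
    return [topic for score, topic in sorted(scored, reverse=True) if score > 0]
```

In this bounded mode the agent can only return topics from the repository; the new-topic generation described above would instead draw on the world model.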
[0047] The objectives may be of different types. For example, one objective may be to seek verification of information in previously received feedback. In another example, an objective may be to seek information in relation to a certain target (e.g. an item such as a brand name, commercial product, film character, etc.). Where there is a plurality of objectives, the feedback collection manager 120 may include an objective manager that operates to prioritise which objective is to be used for a subsequent query topic. The prioritisation may be done by applying a weighting that favours query topics that correlate to or are otherwise associated with a given objective. The weighting may be dynamically updated during the course of the interactive information exchange. For example, as answer data is obtained that increases the fulfilment of a certain objective, the weighting towards that objective may be reduced so that subsequent query topics are more likely to explore other objectives. The prioritisation of objectives can be achieved in other ways, e.g. through the use of an adversarial network such as a generative adversarial network in which the different objectives are used for the generator and discriminator.
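The dynamic weighting scheme described above can be sketched as follows (a simplified illustration, not the disclosed implementation; the objective names and update rule are assumptions):

```python
# Hypothetical sketch of objective prioritisation: each objective carries a
# weight, the highest-weighted objective drives the next query topic, and
# weights are reduced as answer data fulfils an objective so that later
# topics favour under-explored objectives.

def select_objective(objectives: dict[str, float]) -> str:
    """Pick the objective with the highest current weight."""
    return max(objectives, key=objectives.get)

def update_weight(objectives: dict[str, float], objective: str,
                  fulfilment_gain: float) -> None:
    """Down-weight an objective in proportion to newly obtained answer data."""
    objectives[objective] = max(0.0, objectives[objective] - fulfilment_gain)
```

After a useful answer on the "verify" objective, for example, its weight drops and a competing objective becomes the likelier driver of the next query topic.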
[0048] In this example, the client devices 102 are arranged to send behavioural information and/or physiological data over the network for use by the feedback collection manager 120. The behavioural information and/or physiological data that is transmitted may be collected during exposure to the stimulus, whereby the feedback collection manager 120 is provided with some initial information regarding the user's response to that stimulus. Additionally or alternatively, the behavioural information and/or physiological data that is transmitted may be collected in real time during the interactive information exchange with the information seeking agent 124. In this case, the behavioural information and/or physiological data can be used to inform the line of questioning, i.e. the generation of topics by the topic determination agent.
[0049] References to “behavioural data” or “behavioural information” herein may refer to visual aspects of a user's response. For example, behavioural information may include facial response, head and body gestures or pose, and gaze tracking.
[0050] In this example, the information sent to the feedback collection manager 120 may include a user's facial response 116, e.g. in the form of a video or set of images captured of the user during exposure to the stimulus. Where the image frames depict facial features, e.g. mouth, eyes, eyebrows etc. of a user, and each facial feature comprises a plurality of facial landmarks, the behavioural data may include information indicative of position, shape, orientation, shading etc. of the facial landmarks for each image frame.
[0051] The image data may be processed on respective client devices 102, or may be streamed to the feedback collection manager 120 over the network 108 for processing.
[0052] The facial features may provide descriptor data points indicative of position, shape, orientation, shading, etc., of a selected plurality of the facial landmarks. Each facial feature descriptor data point may encode information that is indicative of a plurality of facial landmarks. Each facial feature descriptor data point may be associated with a respective frame, e.g. a respective image frame from the time series of image frames. Each facial feature descriptor data point may be a multi-dimensional data point, each component of the multi-dimensional data point being indicative of a respective facial landmark.
[0053] The emotional state information may be obtained directly from the raw data input, from the extracted descriptor data points or from a combination of the two. For example, the plurality of facial landmarks may be selected to include information capable of characterizing user emotion. In one example, the emotional state data may be determined by applying a classifier to one or more facial feature descriptor data points in one image or across a series of images. In some examples, deep learning techniques can be utilised to yield emotional state data from the raw data input.
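As a toy illustration of applying a classifier to facial feature descriptor data points (the two-dimensional descriptors and centroid values below are invented; a production system would use a trained model such as the deep learning approaches mentioned above), a nearest-centroid rule assigns each frame's descriptor to an emotional state:

```python
# Hypothetical minimal classifier over facial-feature descriptor data points:
# each frame yields a multi-dimensional descriptor, and the emotional state is
# the one whose (invented) centroid lies nearest in descriptor space.
import math

CENTROIDS = {
    "happiness": (0.9, 0.1),
    "sadness": (0.1, 0.9),
    "surprise": (0.9, 0.9),
}

def classify_emotion(descriptor: tuple[float, float]) -> str:
    """Assign the emotional state whose centroid is closest to the descriptor."""
    return min(CENTROIDS, key=lambda e: math.dist(CENTROIDS[e], descriptor))
```

Per-frame labels produced this way could then be aggregated over the image series to give the emotional state data used by the feedback collection manager.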
[0054] The user emotional state may include one or more emotional states selected from anger, disgust, fear, happiness, sadness, and surprise.
[0055] The information may also include the associated media content 112 or a link or other identifier that enables the feedback collection manager 120 to access the media content 112 that was consumed by the user.
[0056] The information sent to the feedback collection manager 120 may also include physiological data 118, e.g. transmitted directly by the wearable device 105, or by one of the client devices 102 if the wearable device 105 is paired therewith. The client devices 102 may be arranged to process raw data from the wearable device, whereby the physiological data 118 transmitted to the feedback collection manager 120 may comprise data already processed by the client device 102.
[0057] The feedback information collected by the feedback collection manager is stored in data storage 126. The feedback data may be associated with the behavioural data that was obtained during the collection process. Combined feedback and behavioural data of this kind from multiple users can be stored and made available as another type of input for the topic determination agent, as discussed below.
[0058] The feedback collection manager 120 may receive a number of different data types as inputs for directing the interactive information exchange.
[0059] A first data type is emotional state information extracted from the behavioural data received from the user environment.
[0060] A second data type is answer data 204, which is information received from the user during the interactive information exchange, e.g. in reply to questions issued by the information seeking agent. The feedback collection manager 120 may be arranged to perform syntactical and semantic analysis on the answer data to extract relevant information therefrom.
[0061] A third data type is user data 206, which may be profile information about the user, e.g. demographic and/or geographic information, and/or information concerning the user's preferences, etc.
[0062] A fourth data type is aggregated data 214 from other users. This may be useful, for example, in enabling a comparison of a current user's answer and emotional state in replying to a certain question with similar information obtained across a plurality of users. This may enable identification of unusual replies, which in some circumstances may indicate new avenues or topics for questioning.
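One way the aggregated data 214 might be used to identify unusual replies (a sketch under assumed names; the z-score threshold is arbitrary) is to compare a numeric feature of the current answer, such as a sentiment score, against the population of prior answers to the same question:

```python
# Hypothetical outlier test against aggregated data from other users: a reply
# whose feature value deviates from the population mean by more than a chosen
# number of standard deviations is flagged as unusual, suggesting a new
# avenue or topic for questioning.
import statistics

def is_unusual(value: float, population: list[float],
               threshold: float = 2.0) -> bool:
    """True if value deviates from the population mean by > threshold sigmas."""
    mean = statistics.fmean(population)
    stdev = statistics.pstdev(population)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold
```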
[0063] The various data types may be input to a topic generator module 207, which executes the topic determination agent discussed above to determine one or more topics (e.g. subject or intention) for subsequent questioning. This information is fed to the information seeking agent to formulate a suitable interaction. In one example, the feedback collection manager 120 may use the answer data 204 and extracted emotional state information to perform sentiment analysis. The results of the sentiment analysis can be supplied to the information seeking agent to facilitate formulation of a suitable interaction.
[0064] In a further development, the topic generator module may also take account of a predicted answer in the determination of a topic for questioning. For example, an output from the topic generator module may be a plurality of candidate avenues for pursuing the interactive information exchange. The candidate avenues may generally comprise any of: (i) pursuing questioning on the current topic (e.g. exploring a previous answer, to verify it or obtain more information), (ii) opening questioning on a new topic, or (iii) finishing the exchange. More specifically, each candidate avenue may comprise a particular topic to be explored. For example, under (i), the continued questioning might try to understand the cause of an observed emotional state, or try to resolve conflicting data.
[0065] The plurality of candidate avenues may all be supplied to the interactive question generator 208, which can generate corresponding candidate interactions, which in turn are supplied to an answer predictor 212. The answer predictor 212 may comprise a machine learning model that is trained to predict user replies to questions. The output from the answer predictor 212 may be used to select one of the candidate interactions to send to the user. Additionally or alternatively, the output from the answer predictor 212 may be compared (e.g. by the topic determination agent) with the actually obtained answer. This comparison may enable unexpected replies to be identified and explored.
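The two uses of the answer predictor 212 described above can be sketched as follows (an illustrative stub only: the predictor is represented by a pre-computed score per candidate, whereas the disclosure contemplates a trained machine learning model):

```python
# Hypothetical sketch of the answer-predictor workflow: (a) choose among
# candidate interactions using predicted information gain, and (b) compare a
# predicted reply with the actual reply to flag unexpected answers for
# further exploration.

def select_candidate(candidates: list[str],
                     predicted_gain: dict[str, float]) -> str:
    """Choose the candidate question whose predicted reply is most informative."""
    return max(candidates, key=lambda q: predicted_gain.get(q, 0.0))

def reply_unexpected(predicted: str, actual: str) -> bool:
    """Flag replies that diverge from the prediction, for follow-up questioning."""
    return predicted.strip().lower() != actual.strip().lower()
```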
[0066] The topic generator module 207 may comprise a topic determination agent 216, an objectives manager 218 for maintaining the one or more objectives, and a topic repository/world model 220.
[0067] As discussed above, in one example each objective may have an associated weighting, whereby the judgement performed by the topic determination agent can prioritise output data for one objective over others. The objectives manager 218 may be configured to monitor the received answer data to assess an extent to which each objective is fulfilled. The objectives manager 218 may adjust weighting of the objectives based on the received answer data, e.g. to prioritise objectives for which insufficient answer data is received. The weighting for objectives may be determined by other factors, e.g. external input.
[0068] In one example, the topic repository/world model 220 may comprise a database of available avenues for the interactive information exchange. The topic repository 220 may be static, i.e. a fixed set of topics, or may be updatable to include new avenues based on the answer data. The topic determination agent 216 is arranged to generate one or more relevant topics based on the input data. The result from the model is supplied to the interactive question generator.
[0069] In another example, the topic repository/world model 220 represents a world model vector space in which the content from the training set is positioned by relevance. The topic determination agent 216 may operate to assemble one or more query vectors using the input data (i.e. user response data and data relating to the stimulus). The topic determination agent may generate an output topic that is based on similar vectors in the world model vector space. The world model vector space may be configurable depending on the context. For example, if the feedback required concerned a certain type of product, the world model could be limited to information related to such products. In this way the world model may be tailored to the subject matter or context of the feedback collection process.
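The vector-space lookup described above might be sketched as follows (the topic names, vectors and dimensionality are invented; real world-model embeddings would be learned from the training corpus):

```python
# Hypothetical sketch of the world-model vector space: input data is assembled
# into a query vector, and the output topic is the entry whose world-model
# vector is most similar by cosine similarity.
import math

WORLD_MODEL = {
    "pricing": [0.9, 0.1, 0.0],
    "usability": [0.1, 0.9, 0.2],
    "reliability": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def nearest_topic(query: list[float]) -> str:
    """Return the world-model topic most similar to the query vector."""
    return max(WORLD_MODEL, key=lambda t: cosine(WORLD_MODEL[t], query))
```

Restricting `WORLD_MODEL` to entries relevant to a given product type corresponds to the context-dependent tailoring described above.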
[0070] The topic repository/world model 220 also includes a repository 222 that stores one or more predetermined starter topics that can be used to initiate the interactive information exchange. The starter topic may be the same for all users, or may be selectable, e.g. based on the mental state data collected for the user while exposed to the stimulus.
[0071] The method begins with a step 302 of exposing the user to a stimulus, e.g. playback of media content on a client device.
[0072] The method continues with a step 304 of collecting behavioural and/or physiological data from the user while exposed to the stimulus, e.g. during playback of the media content. The behavioural data may comprise image data (e.g. of the user's face) collected by a webcam or the like, and audio data collected from a microphone. The physiological data may be obtained using a wearable sensor.
[0073] The method continues with a step 306 of initiating an automated interactive feedback session between the user and the feedback collection manager discussed above. The initiation step may be triggered by sending an electronic message to the user over the network (e.g. via email, or through a messaging application). The automated interactive feedback session may comprise an exchange of messages between the user and the feedback collection manager. The messages may comprise any one or more of text, audio data (e.g. speech) and video data. The interactive feedback session may take the form of a dynamic questionnaire in which the feedback collection manager requests information from (e.g. poses questions to) the user.
[0074] The method continues with a step 308 of collecting answer data and behavioural and/or physiological data from the user during participation in the feedback session. The collected behavioural and/or physiological data may thus supplement the textual, audio and/or video data that constitutes the user's reply (i.e. the answer data). The method may include a step 312 of storing the collected answer data and behavioural and/or physiological data (collectively referred to as “feedback data”) in a suitable repository.
[0075] As explained above, the feedback collection manager is fully automated, and operates on the basis of an AI-based interaction control algorithm that determines the direction and topics for the interactive information exchange. The method continues with a step 310 of dynamically selecting, using the interaction control algorithm (also referred to above as a topic determination agent), an aim or topic for a subsequent question in the interactive information exchange. The method continues with a step 314 of determining and sending a next message in the interactive feedback session based on the topics determined at the previous step. This step may be performed by a chatbot or the like that is configured to assemble message content (e.g. language, syntax and sentiment) based on an input aim or topic.
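Step 314 can be illustrated with a minimal template-based message assembler (the templates and sentiment labels are invented; the disclosure contemplates a chatbot that also adapts language and syntax, e.g. via a natural language generation model):

```python
# Hypothetical sketch of assembling the next message from an input topic and
# detected sentiment, in the manner of a simple template-driven chatbot.

TEMPLATES = {
    "positive": "Great! What did you like most about the {topic}?",
    "negative": "Sorry to hear that. What bothered you about the {topic}?",
    "neutral": "Could you tell me more about the {topic}?",
}

def next_message(topic: str, sentiment: str = "neutral") -> str:
    """Render the next question for the given topic, falling back to neutral."""
    template = TEMPLATES.get(sentiment, TEMPLATES["neutral"])
    return template.format(topic=topic)
```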
[0076] Upon determining that all relevant topics are exhausted, or based on some other termination criterion, the method may end by terminating the interactive feedback session. The termination criteria may comprise the session duration exceeding a limit, or a user engagement level falling below a threshold. The user engagement level may be determined based on collected behavioural data.
[0077] In use, the feedback collection process of the invention provides a mechanism by which pertinent feedback information can be obtained in an efficient manner from users. The process enables dynamic targeting of questioning in an informed (and repeatable) manner by utilising a machine learning algorithm that is responsive to different types of answer data. This enables the direction of questioning to be influenced by the user replies in a consistent way. This is an improvement compared with inflexible fixed questionnaire scripts, which do not permit exploration of unusual or ambiguous replies. It is also an improvement compared with a freeform feedback process, where the lack of structure makes aggregation and summarising difficult and time-consuming.
[0078] Moreover, by being sensitive to behavioural data during the collection of feedback, the process may enable “non-useful” feedback, e.g. from a disengaged user, to be filtered efficiently. In one example, this manifests as early termination of the interactive feedback session, which may save network resources and prevent the topic determination agent from becoming distorted.