METHOD AND SYSTEM FOR INFLUENCING USER INTERACTIONS

20240129284 · 2024-04-18

Abstract

The invention relates to a computer-implemented method of pre-processing a user access request to an online communications session. The method comprises the steps of transferring (62) the user to a pre-communication session, communicating (64) an honesty primer (70, 80) to the user, the honesty primer (70, 80) including at least a message and/or action and/or exercise, and transferring (69) the user to the communication session. The method further comprises the steps of determining (61) a first integrity indicator (?₁) of a user sending an access request to a communication server before transferring the user to the pre-communication session, especially based on metadata of the access request, and in particular selecting (63) and/or adapting an honesty primer (70, 80) based on the first integrity indicator (?₁), and in particular determining a second integrity indicator (?₂) based on adjustment parameters (?(?)) of the honesty primer (70, 80). In particular, the first integrity indicator (?₁) is determined based on reliability parameters of the metadata, which include at least one of geolocation, browser language, time, interaction with previous pre-communication and/or communication sessions, and integrity indicators (?) of previous pre-communication sessions and/or communication sessions. Additionally or alternatively to the step of determining a first integrity indicator, the method comprises the steps of analyzing (66) user interaction with the pre-communication session, especially with the honesty primer (70, 80), in particular analyzing response behavior parameters of the user in the pre-communication session and/or adjustment parameters (?(?)) of the honesty primers (70, 80) and/or user response content, and determining (67) a third integrity indicator (?₃) based on the analysis.

Claims

1. A computer-implemented method of pre-processing a user access request to an online communications session, the method comprising the steps of: transferring the user to a pre-communication session, communicating an honesty primer to the user, the honesty primer including at least a message and/or action and/or exercise, transferring the user to the communication session, wherein the method comprises, before transferring the user to the pre-communication session, the step of determining a first integrity indicator of a user sending an access request to a communication server, especially based on metadata of the access request, in particular wherein the first integrity indicator is determined based on reliability parameters of the metadata, which include at least one of geolocation, browser language, time, interaction with previous pre-communication and/or communication sessions, and integrity indicators of previous pre-communication sessions and/or communication sessions, and/or the steps of analyzing user interaction with the pre-communication session and determining a third integrity indicator based on the analysis.

2. The computer-implemented method according to claim 1, comprising the steps of: determining an integrity score based on the first integrity indicator and/or second integrity indicator and/or third integrity indicator, transferring the user to the communication session if the integrity score meets a predetermined condition and/or adapting the pre-communication session.

3. The method according to claim 1, comprising the steps of analyzing the interaction of the user with the adapted honesty primer, determining a fourth integrity indicator, and/or fifth integrity indicator based on the analysis, analyzing the first integrity indicator and/or the second integrity indicator and/or the third integrity indicator and/or a fourth integrity indicator and/or fifth integrity indicator, determining an integrity score based on the analysis and only in case of the integrity score meeting a predetermined condition (?), transferring the user to the communication session.

4. The method according to claim 1, comprising the steps of: collecting sensor data, especially camera and/or microphone data, analyzing the sensor data and determining reliability parameters based on the analysis.

5. The method according to claim 1, comprising the steps of: providing a database of access prompts, each access prompt comprising an honesty primer, an indication of an expected user response and an adjustment parameter for adjusting an integrity indicator, selecting one or more access prompts of the database based on the first integrity indicator.

6. The method according to claim 1, comprising the steps of: providing a self-learning computer program structure for analyzing reliability parameters and/or adjustment parameters and/or response behavior parameters and/or user response content and/or security parameters, analyzing reliability parameters and/or adjustment parameters and/or response behavior parameters and/or user response content and/or security parameters, determining the first integrity indicator of the user by the self-learning computer program structure based on the analysis, selecting and/or adapting the one or more honesty primers based on the analysis by the self-learning computer program structure.

7. The method according to claim 1, comprising the steps of: analyzing user interaction with the pre-communication session, in particular with the honesty primer, by the self-learning computer program structure, especially analyzing reliability parameters and/or adjustment parameters and/or response behavior parameters and/or user response content, determining the second and/or third integrity indicator of the user by the self-learning computer program structure based on the analysis, adapting the pre-communication session according to the second and/or third integrity indicator, by the self-learning computer program structure.

8. The method according to claim 6, the method comprising the steps of analyzing the interactions of multiple users with pre-communication sessions and/or communication sessions, especially for negative interaction patterns, by the self-learning computer program structure, based on this analysis determining the first integrity indicator of a single user, selecting an access prompt based on the first integrity indicator.

9. The method according to claim 8, comprising the following steps: receiving a content of a user response to the access prompt, determining a second integrity indicator based on the adjustment parameter of the access prompt and at least one of the first integrity indicator, a previous integrity indicator, and the content of the user response, determining a response behavior parameter of the user response, the response behavior parameter including a temporal characteristic of the user response, determining a third integrity indicator based on the determined response behavior parameter, determining an integrity score based on at least one of the third integrity indicator, a difference between the second and the third integrity indicator, an identifier of the access prompt presented to the user, and the content or behavior parameter of the user response, if the integrity score meets a predetermined condition, transferring the user access to the communications session.

10. The method according to claim 6, wherein the self-learning computer program structure is trained using predictor variables, the predictor variables being at least one of the metadata, historical values of the network locations, the geographical locations, the times of day, identifiers of access prompts presented, the access integrity parameter, and/or user responses, each response comprising a response content and one or more response behavior parameters.

11. The method according to claim 6, wherein the self-learning computer program structure is continuously trained.

12. The method according to claim 6, comprising using the machine learning model to determine said predetermined condition.

13. A method for training a self-learning computer program structure using the following steps: collecting data, preparing data, training a supervised machine-learning algorithm, testing a model, implementing the model.

14. A method according to claim 13, the method comprising the step of using an unsupervised learning algorithm to create features from inputs as a discovery function.

15. A method according to claim 13, the method comprising the step of analyzing principal components.

16. A data processing system comprising means for carrying out the method of claim 1.

17. A computer program product comprising computer-executable instructions for performing a method according to claim 1.

18. A computer-readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 1.

19. A computer program product comprising computer-executable instructions for performing a method according to claim 13.

20. A data processing system comprising means for carrying out the method of claim 13.

Description

[0185] The invention will be described in detail with reference to the attached drawings, in which:

[0186] FIG. 1 shows a schematic view of a prior art access control system.

[0187] FIG. 2 shows a schematic view of a first example of a system according to the invention.

[0188] FIG. 3 shows a schematic view of a second example of a system according to the invention.

[0189] FIG. 4 shows a simplified flow diagram of a first example of a method according to the invention.

[0190] FIG. 5 shows a simplified flow diagram of a second example of a method according to the invention.

[0191] FIG. 6a shows a representation of an honesty primer.

[0192] FIG. 6b shows a representation of an adapted honesty primer.

[0193] FIG. 7 shows a diagram of a method of pre-processing a user access request to an online communications session.

[0194] FIG. 8 shows a diagram of data points used to calculate an integrity score.

[0195] It should be noted that the figures are provided merely as an aid to understanding the principles underlying the invention, and should not be taken as limiting the scope of protection sought. Where the same reference numbers are used in different figures, these are intended to indicate similar or equivalent features. It should not be assumed, however, that the use of different reference numbers is intended to indicate any particular degree of difference between the features to which they refer.

[0196] FIG. 1 shows in schematic view a system in which a user's device 1 (e.g. computer, tablet, smartphone etc.) requests access 2 to a communications session 11 running on a communications server 10. As discussed above, measures are deployed to ensure the integrity of the user's interaction with the communications session 11. The communications session may be any type of online service or interaction which is accessible over an unsecured network such as the internet, for example, and which requires a certain level of integrity from the user.

[0197] Firstly, the user's access request 2 is subjected to an authentication analysis, by reference to authentication information 13, to determine whether or not the user has the necessary privileges or credentials for accessing the communications session. Depending on the result of the authentication, access to the communications session 11 is either granted, or not granted, or granted on the basis of restricted privileges.

[0198] Secondly, the user's interaction history with the communication session is analyzed by a pattern recognition module 14, which searches the interaction history data of all users for patterns of unwanted user interactions based on rules stored in knowledge base 15. If unwanted user interaction patterns are detected, a suitable sanction (not discussed here) can be imposed on the particular user(s). Unwanted interaction patterns may include actual or attempted fraud, such as providing implausible or inconsistent information, or SQL injection or other attempts to breach security.

[0199] FIG. 2 shows the schematic of a first example of a system according to the invention. Here, the system of FIG. 1 is adapted with the addition of a pre-processing system or device. The pre-processing system or device initiates a pre-processing protocol with the user's device. The pre-processing protocol determines and enhances the integrity of the user's interaction with the communication session 11 in advance, before the user's access request to the communications session 11 is established. In addition, the pre-processing protocol includes presenting selected messages (so-called nudges) and/or tasks, which are determined using machine learning to enhance the integrity of users' subsequent interactions with the communications session 11, or to deter users 1 from making fraudulent access requests 2 to the communications session 11. The pre-processing protocol may be initiated instead of (or in addition to) the integrity enforcement measures 12, 13, 14, 15.

[0200] In contrast to the communication server 10 of FIG. 1, the communication server 10 of FIG. 2 comprises an access integrity module (AID module) 21. This AID module 21 comprises means for initiating an adaptive integrity enhancement sequence (AIE sequence) with the user's device 1.

[0201] Further, the system as shown in FIG. 2 comprises a dedicated external access integrity parameter server (AIP server) 20. The AIP server 20 comprises an AIE sequence prompts database 26, an AIE sequence server module 25 for selecting prompts from the database 26 and an API server 23 for communicating with user device 1.

[0202] Upon receiving an access request 2 by a user device 1, the AID module 21 initiates the AIE sequence with the user's device 1. The AIE sequence is performed by the AIP server 20 accessed for example over the internet.

[0203] The AIE sequence is routed through the AID module 21 of the communications server 10. In this case, it is implemented using an existing protocol such as TCP/IP, with the AID module 21 acting as an API client to the API server module 23 of the AIP server 20. Alternatively, it may be implemented as a dedicated protocol.

[0204] Alternatively to the routing through the AID module 21, the AIE sequence exchange may be established directly between the AIP server 20 and the user's device 1, as indicated by reference sign 24.

[0205] As will be described below, the AIE sequence server module 25 of the AIP server 20 selects AIE sequence prompts from database 26 in dependence on the progress of the pre-processing exchange with the user's device 1. The AIE sequence prompts are also called access prompts here.

[0206] Each AIE sequence prompt in database 26 comprises content for communication or presentation to the user device 1; the content for communication is also called an honesty primer in this invention. Further, each AIE sequence prompt comprises one or more expected user responses, and an adjustment parameter for adjusting an integrity indicator. The adjustment parameter is also called an access integrity parameter adjustment (AIPA) parameter. The adjustment parameter is associated with the particular AIE sequence prompt. The adjustment parameter is used to determine an access integrity (AI) parameter value, here also called an integrity indicator.
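By way of illustration, the structure of such an access prompt record may be sketched in Python as follows. The record names, the example prompt and the adjustment value are illustrative assumptions, not part of the claimed method:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessPrompt:
    """One entry of the AIE sequence prompts database 26 (illustrative)."""
    prompt_id: str
    primer_content: str        # honesty primer: message/action/exercise
    expected_responses: tuple  # responses the primer is meant to provoke
    adjustment: float          # AIPA parameter for the integrity indicator

# A minimal in-memory stand-in for the prompts database 26.
PROMPT_DB = {
    "pledge-slider": AccessPrompt(
        prompt_id="pledge-slider",
        primer_content="My honesty pledge: Most people report in good faith.",
        expected_responses=("slider_swiped_slowly",),
        adjustment=0.1,
    ),
}
```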

[0207] In one embodiment of the invention, when the integrity indicator matches or exceeds a predefined condition, the access request 2 to the communications session 11 is established.

[0208] In a second embodiment of the invention, the access request 2 is established and the first integrity indicator is passed to the communications session 11, such that the communications session 11 can be performed using a level of integrity surveillance commensurate with the AI parameter value. The calculation of the AIP value will be described in more detail below.

[0209] Through use of a system as disclosed in FIG. 2, the occurrence of unwanted user interaction patterns, which would otherwise only be detected a posteriori, can be reduced a priori, thereby reducing the amount of post-processing 14, 15 required.

[0210] Further, the determination of the integrity of the access requests 2 allows for adjusting the allocation of resources applied to each request:

[0211] Access requests 2 which are determined to be of higher integrity are routed to the communications session 11 within a shorter time (e.g. 5 seconds), thereby reducing processing requirements of the system as a whole.

[0212] Access requests 2 which are determined to be of lower integrity may also be routed to the communications session 11 within a relatively short time (e.g. 10 to 15 seconds), but in this case the access integrity parameter may be passed to the communications session 11 such that the communications server 10 adapts a security level or a stringency of integrity tests carried out on the user's interactions with the communications session 11. In this way, a communications server 10 handling many thousands of access requests 2 per hour or per minute can automatically optimize the allocation of its processing resources (e.g. pattern recognition, which is highly processor intensive) to access requests 2 which have a lower integrity indicator.
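The resource-allocation logic described in the two paragraphs above may be sketched as follows; the threshold value and the returned fields are assumptions for the sketch, not the claimed implementation:

```python
# Illustrative routing of access requests by integrity indicator:
# high-integrity requests get minimal surveillance, lower-integrity
# requests carry their indicator into the session so the communications
# server can apply stricter, more processor-intensive integrity tests.
def route_access_request(integrity_indicator, high_threshold=0.8):
    if integrity_indicator >= high_threshold:
        return {"route": "session", "surveillance": "minimal"}
    return {
        "route": "session",
        "surveillance": "strict",
        "passed_indicator": integrity_indicator,  # tunes downstream checks
    }
```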

[0213] FIG. 3 shows a schematic view of a second example of an AIP server 20. This AIP server 20 also comprises an AIE sequence prompts database 26, an AIE sequence server module 25 for selecting access prompts from the database 26 and an API server 23 for communicating with user device 1 as mentioned in relation to FIG. 2. Additionally, this AIP server 20 comprises a machine learning model 30 for optimizing the AIE sequence and a data storage 27.

[0214] As in the example illustrated in FIG. 2, the AIE sequence server module 25 selects access prompts from the prompt database 26, for communication to the user device 1 via an API server 23. Access prompts are selected according to an adaptive sequence, determined by the machine learning model 30.

[0215] Metadata 28 is provided from the user's device 1 to the AIE sequence server module 25 as predictor input to the machine learning model 30. Such metadata 28 includes, for example, an originating IP address of the user's device 1, or a geographical region or location associated with the IP address, and a local time of day at the geographical region or location.

[0216] Further, predictor parameters 29₁ and 29₂ may also be used as predictor variables for the machine learning model 30. Their use depends on what is required by the particular type of communications session 11 for which the access request 2 is being pre-processed, and which parameters have been used to train the model.

[0217] Predictor parameter 29₁ indicates verification data fed back from post-processing verification module 14 of communications server 10. Parameter 29₁ is used for ongoing training of the model 30: with each newly available training dataset 29₁, the model 30 is automatically updated. For this, the API client 21 and the API server 23 include the necessary instructions for coordinating the feedback of outcome data verified by module 14 with the corresponding predictor data.

[0218] Using the model 30 in this way, an estimated updated value 31 of the AI parameter, also called a second integrity indicator 31, is generated for each successive access prompt communicated to the user device 1. The second integrity indicator 31 is based on at least one of:

[0219] the metadata,

[0220] a prompt identifier of the prompt,

[0221] a user response content,

[0222] user response behavior parameters (e.g. temporal characteristics).

[0223] The second integrity indicator 31 is used by the AIE sequence server module 25 to decide whether the access request 2 should be established or not. If it is to be established, the current integrity indicator is provided to the communications session 11 as described above. Alternatively, or in addition, the model 30 may be used to generate a selection of the next access prompt to transmit to the user device 1, thereby generating an optimized sequence of access prompts for optimizing the integrity indicator under the particular conditions and response of the access request 2 being pre-processed.

[0224] Further, historical data of the access request are stored in data storage 27. Historical data include, for example, previously-presented access prompts, previous user-responses, and/or previous user response patterns. User response patterns include temporal information such as the speed or duration of a user response, or an input stroke (e.g. keystroke) pattern.

[0225] According to one variant, a time delay between presentation time of an access prompt and a start of a user response may be taken as a time taken by the user to read a message of the access prompt, and this delay parameter may be combined with information about the length and/or complexity of the message to calculate a baseline reading speed of the user presenting the access request 2. The calculated baseline reading speed may be used to weight subsequent temporal user response parameters.
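The baseline-reading-speed variant above may be sketched as follows, under the assumption that reading speed is measured in words per second and that subsequent temporal parameters are scaled against a reference pace; both the function names and the reference value are illustrative:

```python
# Illustrative sketch of the reading-speed weighting variant.
def baseline_reading_speed(message, delay_seconds):
    """Words of the access prompt message read per second before the
    user response starts (delay between presentation and response)."""
    return len(message.split()) / delay_seconds

def weight_response_time(response_seconds, reading_speed, reference_speed=3.0):
    """Scale a subsequent temporal response parameter by the user's own
    baseline pace, so naturally fast readers are not penalized."""
    return response_seconds * (reading_speed / reference_speed)
```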

[0226] FIG. 4 shows a first example of a method according to the invention.

[0227] The process is started by multiple users 1₁, 1₂, 1ₙ making multiple access requests 2₁, 2₂, 2ₙ to communications session 11.

[0228] In a first step 42, a first integrity indicator ?₁, in particular a first value of an access integrity parameter, is determined. For example, access request 2₁ is received from user device 1₁. At step 42, the first integrity indicator ?₁ is assigned to the access request 2₁. In this example, the first integrity indicator ?₁ is assigned an initial predetermined value of 0.5, although other initial values could be used. The first integrity indicator is assigned on the basis of metadata 28 from the user device 1₁, using rules in a knowledge base (not shown).

[0229] In step 43 a succession of access prompts 41 are selected from access prompt database 26. Each access prompt comprises a content, one or more expected user responses, and an adjustment parameter ?(?). The content and nature of the access prompts are varied so that some access prompts effect greater enhancement of the integrity indicator of the access request 2 than others, and are therefore associated with larger values of the adjustment parameter ?(?).

[0230] In step 44, the selected access prompt 41 is communicated to the user device 1₁, and the user response 45 is received in step 46. In step 47, the current integrity indicator ? is updated by an amount determined by the adjustment parameter ?(?) as a function of the content of the user response 45, with reference to rules in knowledge base 52. In step 48, the current integrity indicator ? is updated by an amount determined by the adjustment parameter ?(?) as a function of the user response pattern information of the user response 45, with reference to rules in knowledge base 52. Step 49 provides an option for pausing the process, which may optionally entail an additional adjustment (not shown) of the integrity indicator ?. If the process is paused, it will be reset to step 43 with a new integrity indicator ?, selected to take into account the user's decision to pause. At step 50, if the integrity indicator ? of the AIP is greater than a predetermined value K, the access request 2 is routed to the communications session 11. If not, then the process iterates with a new access prompt 41 selected from database 26. The condition K for progress to the communications session 11 may be a numerical threshold, or it may be a function of the various available parameters (metadata 28, access prompt selection, historical data, user response content, user response pattern etc.) and rules in the knowledge base 52.
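The iterative loop of steps 43 to 50 can be condensed into the following sketch. The prompt representation, the response model and the values of K and the initial indicator are assumptions for illustration, not taken from the patent:

```python
# Illustrative sketch of the AIE loop of FIG. 4 (steps 43-50).
def run_aie_sequence(prompts, respond, k=0.8, initial=0.5, max_rounds=10):
    indicator = initial
    for i in range(max_rounds):
        prompt = prompts[i % len(prompts)]      # step 43: select access prompt
        response = respond(prompt)              # steps 44/46: send, receive
        indicator += prompt["delta"](response)  # steps 47/48: adjust indicator
        if indicator > k:                       # step 50: condition K met
            return indicator, True              # route to communications session
    return indicator, False                     # access not established
```

For example, with a single prompt whose adjustment rewards the expected response with +0.2, an indicator starting at 0.5 exceeds K = 0.8 after two rounds.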

[0231] According to one variant of the invention, the adjustment parameter ?(?) is defined not as a numerical value but as a function of a different parameter such as one or more of the temporal characteristics of the user response.

[0232] Optionally, the access request 2 may be routed 51 to the communications session 11 with the integrity indicator ? being passed as a parameter for use by the communications session 11, as described above.

[0233] FIG. 5 shows a variant of the method of FIG. 4, in which the pre-processing of the access requests 2 is carried out under control of a machine-learning model 30. In this case the model is trained using conventional supervised machine learning techniques, using metadata 28, historical access request parameters (metadata 28, prompt selection, historical data, user response content, user response patterns, pause events etc.) and records of the corresponding integrity outcomes. Model 30 takes as its predictors one or more of the metadata 28, the current access prompt selection, content and pattern of the current user response 45, historical data 27 of the access request 2 including previous access prompt selections, previous AIP values, previous user responses, previous user response patterns, and/or bespoke parameters 29. Output response variables of the model 30 are the adjustment parameter, a selection of the next access prompt to be communicated to the user device 1, and/or a predetermined value K of the condition for transferring the access request 2 onward to the communications session 11 of communications server 10.

[0234] FIG. 6a shows a representation of an honesty primer 70. The honesty primer 70 comprises a message 71 with the title "My honesty pledge" and the text body "Most people report in good faith." Further, the honesty primer 70 comprises an icon 72 and an exercise, in this case a slider 73.

[0235] FIG. 6b shows a representation of an adapted honesty primer 80 for the case in which the integrity score condition is met and the user is transferred to the communication session. The adapted honesty primer 80 also comprises a message 81 and an icon 82. Both the message 81 and the icon 82 are adapted compared to the message 71 and the icon 72. The message 81 shares the same title "My honesty pledge" as the message 71, but comprises an adapted text body: "Thank you. We value your honesty! Dr. Cain & his audit team will review your report." Further, the exercise 73 is exchanged for an icon 83.

[0236] FIG. 7 shows a diagram of a method of pre-processing a user access request to an online communications session. The user wants to join a communication session and tries to access a webpage. This starts the process, as in the first step 60 the user device sends the access request for a communication session to the server. In the next step 61, the self-learning computer program determines a first integrity indicator (?₁) (see FIGS. 4 and 5) to determine the level of expected honesty of the user in the communication session. For this, the self-learning computer program uses the metadata of the access request. Specifically, the self-learning computer program uses reliability parameters of the metadata to determine the first integrity indicator (?₁) (see FIGS. 4 and 5). These reliability parameters are at least one of:

[0237] geolocation,

[0238] browser language,

[0239] time,

[0240] interaction with previous pre-communication and/or communication sessions,

[0241] integrity indicators of previous pre-communication sessions and/or communication sessions.
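An illustrative scoring of these reliability parameters is sketched below. The weights, the 0-to-1 scale and the field names are assumptions for the sketch, not values from the patent; the afternoon penalty mirrors the time-of-day observation discussed with FIG. 8:

```python
# Illustrative first-integrity-indicator estimate from access-request metadata.
def first_integrity_indicator(metadata, base=0.5):
    indicator = base
    if metadata.get("hour", 12) >= 12:       # afternoon requests: riskier
        indicator -= 0.1
    if metadata.get("geo_risk", 0.0) > 0.5:  # e.g. high-risk geolocation
        indicator -= 0.1
    history = metadata.get("previous_indicators", [])
    if history:                              # blend in prior-session indicators
        indicator = (indicator + sum(history) / len(history)) / 2
    return max(0.0, min(1.0, indicator))
```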

[0242] In the next step 62, the user is transferred to a pre-communication session. The pre-communication session is a session accessed before the actual communication session. The predetermined condition is set by the self-learning program structure to allow users determined to be more honest easy access while leading the users determined to be less honest into the pre-communication session to influence their behaviors in the communication session.

[0243] Such an implementation of the influence using a pre-communication session allows the method to be adapted to various processes and thus is easily integrated into different applications.

[0244] In the following step 63 the self-learning program structure selects an honesty primer (70, 80, see FIG. 6a, 6b) out of a data base and determines a second integrity indicator (?.sub.2, see FIGS. 4 and 5). The honesty primer is adapted to the first integrity indicator and may be adapted to further parameters, especially according to the reliability parameter, such as different languages according to browser language, time of day etc.

[0245] For example, an algorithm as represented in the following table may be used to adapt the honesty primer:

TABLE-US-00001

CPI Index | Time Stamp | Hesitation  | Risk   | Smiley      | Eye | Message
1-50      | 6am-12pm   | 1-3 secs    | 90-100 | Happy       | No  | No
          | 12pm-6am   | 1-3 secs    | 80-90  | Happy       | No  | No
100+      | 6am-12pm   | 0.5-3 secs  | 60-80  | Indifferent | Yes | No
          | 12pm-6am   | 0-0.5 secs  | 0-60   | Sad         | No  | Yes

CPI = Transparency International: Corruption Perception Index

[0246] The honesty primer is a message and/or an action and/or an exercise. For example, the user may be presented with an honesty primer as described in FIG. 6.

[0247] The honesty primer is stored, together with an adjustment parameter and an expected user response, as an access prompt in the database. The adjustment parameter indicates how much the integrity indicator is expected to change by showing the honesty primer. This allows for an easy calculation of the second integrity indicator.

[0248] The expected user response relates to the user reaction which is supposed to be provoked by the honesty primer. In the example of FIG. 6, the honesty primer of a slider is supposed to provoke the user to swipe the slider at a certain speed. Thus, if the slider is swiped too quickly, the provoked user reaction and the expected user response are not the same. This allows for an easy way of describing whether the honesty primer worked as intended.
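The comparison of the provoked reaction against the expected response for the slider primer can be sketched as below; the 0.5-second minimum duration is an assumed illustration, not a value from the patent:

```python
# Illustrative check of whether the slider primer worked as intended:
# a swipe faster than some minimum duration differs from the expected
# user response, i.e. the provoked reaction does not match.
def response_matches_expected(swipe_duration_seconds, min_duration=0.5):
    return swipe_duration_seconds >= min_duration
```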

[0249] In the following step 64, the honesty primer is sent to the user device. The honesty primers are communicated based on JavaScript events and/or callbacks to the parent site.

[0250] The user response and response content are sent to the server in step 65. This allows the self-learning computer program, in step 66, to analyze the user response and the response content as response behavior parameters and to determine a third integrity indicator.

[0251] In step 67, the first, second and third integrity indicators, as well as the user response pattern and information of the user response (45, see FIGS. 4 and 5), are used to determine an integrity score of the user. The integrity score is compared to a predetermined condition to decide whether access to the communication server is allowed or denied for this user. The self-learning computer program adapts and determines the condition under which access is granted.

[0252] In the following table, different scenarios for the integrity indicator are presented:

TABLE-US-00002

Scenarios | Time of day | Speed before pledge start | Speed to pledge | Speed after pledge
(all combinations) | 6AM-12PM, 12PM-4PM, 4PM-8PM, 8PM-6AM | FAST/MEDIUM/SLOW | FAST/MEDIUM/SLOW | FAST/MEDIUM/SLOW
Doesn't realise purpose of pledge or doesn't care | 6AM-12PM | FAST | FAST | FAST
 | various combinations | FAST | FAST | SLOW
 | various combinations | FAST | SLOW | SLOW
 | various combinations | FAST | SLOW | FAST
 | various combinations | SLOW | SLOW | SLOW
 | various combinations | SLOW | FAST | FAST
 | various combinations | SLOW | SLOW | FAST
 | various combinations | | | SLOW

Total number of combinations with current intervals (n = 108): daily time intervals (4) x speed before pledge start (3) x speed to pledge (3) x speed after pledge (3)

[0253] Shorter times show a quicker pace of the user going through the pre-communication session and thus indicate a higher risk of dishonesty.

[0254] The integrity indicator is used to indicate whether a user is estimated to be more honest. Thus, the integrity indicator is lower for users considered to be less honest. For example, a first user later in the day may show a lower integrity indicator, as experiments have shown that users are statistically more dishonest later in the day. The different measured response parameters are used to calculate integrity indicators, which are used to calculate the overall integrity score.

[0255] Examples for the calculation of the integrity score of two users are shown in the following two-part table:

TABLE-US-00003

Scenarios | Time of day (0-25 points) | Speed before pledge start (0-25 points) | Speed to pledge (0-25 points)
User 1    | 5pm (15 points)           | 0.24 secs (10 points)                   | 0.31 secs (10 points)
User 2    | 9am (25 points)           | 1.02 secs (25 points)                   | 0.73 secs (20 points)

Scenarios | User 1 | User 2
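Reading the table above as a point system, the per-parameter points are summed into the integrity score. The sketch below reproduces the two worked rows; mapping each measured value to its point band is assumed to happen upstream:

```python
# Illustrative summation of per-parameter points into an integrity score,
# using the point assignments from the table rows above.
def integrity_score(points):
    return sum(points.values())

user_1 = {"time_of_day": 15, "speed_before_pledge": 10, "speed_to_pledge": 10}
user_2 = {"time_of_day": 25, "speed_before_pledge": 25, "speed_to_pledge": 20}
```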

[0256] In step 68, as the integrity score does not meet the predetermined condition, the honesty primer is adapted.

[0257] The honesty primers (70, 80, FIGS. 6a, 6b) are customizable either by an algorithm using machine learning to adapt the honesty primer to enhance its effectiveness and/or by the customer. In particular, the text content, colors, sequence, interaction methods (e.g. swipe, puzzle, fingerprint), continue button and duration of display are customizable.

[0258] The adapted honesty primer is then sent to the user device, and the process starts anew at step 64.

[0259] In step 69, the integrity score meets the predetermined condition (?) (see FIGS. 4 and 5) and the user is transferred to the communication session.

[0260] For executing the method, a system, for example as shown in FIG. 5, is used. The software implementation comprises a JavaScript widget, which can be used for mobile and browser versions and is thus easily integrated into existing processes.

[0261] FIG. 8 shows an example diagram of the data points used for the assessment of the integrity score 95. The data point of the metadata used in this example is the time of day 91. The data points of the response behavior parameters used are the time until starting the interaction 92 with the honesty primer, the interaction speed 93 with the honesty primer and the time to continue 94 to the communication session. The data points are used to determine integrity indicators, which are used to select and adapt the honesty primer. Thus, for example, a user sending an access request later 99 in the day will get a different honesty primer than another user sending an access request early 98 in the morning.

[0262] It has been shown that communication sessions in the afternoon, fast pre-communication sessions and above-average hesitation to continue to the communication session are each predictive of dishonesty. Thus, these data points are used to initially assess the integrity score of a user. For example, a user 96 in the afternoon, with faster 101 than usual interaction with the honesty primer, has a lower integrity score than a user 97 early in the morning with slower 102 interaction speed.

[0263] Further, this assessment is adjusted based on users found to be dishonest after a communication session. The model will confirm, reject, adapt or suggest further data points or combinations of the above data points that predict dishonesty.

[0264] With feedback about users who have been found to be dishonest, the self-learning computer program structure will analyze and learn the metadata and behavioral data from those users, adapt the algorithm for predicting dishonesty and apply these findings to others.