COMPUTERIZED METHOD AND SYSTEM FOR PERSONALIZED STORYTELLING

20180012261 · 2018-01-11

    Abstract

    A method and system are proposed for presenting information relating to a product to a potential customer, upon the customer scanning a 2D barcode using a mobile device. The information is presented to the customer as a story generated by an agent-based storytelling system, personalized to the customer using online multimedia. The method and system can be used to conduct mobile branding and advertisement, and are thereby able to augment offline shopping with an online shopping experience.

    Claims

    1. A computer system for presenting to a user personalised content relating to a product which is available for purchase, the computer system comprising: an interface for receiving a message from a mobile device of the user, the message encoding (a) information about the user, and (b) a captured image of a barcode associated with the product; a server, comprising: (i) a processor and (ii) a data storage device, storing: (a) further information relating to the user; (b) program instructions, the program instructions being operative to cause the server to implement one or more agents, to: extract from the message the information about the user, and use it to obtain the further information relating to the user; use the barcode to obtain product information relating to the product, including multimedia data; use the further information and the product information, to construct a story incorporating the multimedia data; and use the interface to transmit data to the mobile device, to present the story to the user.

    2. A system according to claim 1 in which the program instructions are further operative to cause the processor to obtain, using the internet, additional data relating to the user from at least one social media site, the additional data also being used to construct the story.

    3. A method for performance by a computer system, for presenting to a user personalised content relating to a product which is available for purchase, the method comprising the computer system: receiving a message from a mobile device of the user, the message encoding (a) information about the user, and (b) a captured image of a barcode associated with the product; extracting from the message the information about the user, and using it to obtain further information relating to the user; using the barcode to obtain product information relating to the product, including multimedia data; using the further information and the product information, to construct a story incorporating the multimedia data; and transmitting data to the mobile device, to present the story to the user.

    4. A method according to claim 3 in which the barcode is a 2D barcode.

    5. A method according to claim 3 further including the computer system obtaining, using the internet, additional data relating to the user from at least one social media site, the additional data being used to construct the story.

    6. A method according to claim 3, further including, upon a command by the user, transmitting the story to a social media site for viewing by other individuals associated with the user.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0019] A non-limiting embodiment of the invention will now be described for the sake of example only with reference to the following figures, in which:

    [0020] FIG. 1 shows schematically an embodiment of the invention;

    [0021] FIG. 2 is composed of FIG. 2(a), which shows an interface used by a designer function in the embodiment of FIG. 1, and FIGS. 2(b)-(c) which show a drag and drop operation performed using the interface; and

    [0022] FIG. 3 is composed of FIGS. 3(a) and 3(b) which show interfaces presented by the designer function to define respectively concepts and causal relationships in a story.

    DETAILED DESCRIPTION OF THE EMBODIMENTS

    [0023] Referring firstly to FIG. 1, an embodiment 1 of the invention is shown. The embodiment 1 is a system for using personalized storytelling to promote mobile branding and advertisement. The embodiment 1 uses the DIRACT (i.e. direct and act) storytelling architecture described in detail in [2], and a plurality of DIRACT agents. Each agent is an intelligent software entity that can sense user information and context information, process that information using prior knowledge, and provide feedback to the user. Each agent is goal-oriented and can automate its actions by itself.

    [0024] The embodiment communicates (over any wireless communication network) with the smartphones 2, 3 of a plurality of respective customers. For simplicity, the respective smartphones 2, 3 of two customers “A” and “B” are shown, but the number of customers may be much higher.

    [0025] The customers A and B, and their smartphones, are located in a retail location (e.g. a store, or shopping mall). Each of their smartphones 2, 3 includes an application which the respective customer uses to scan a 2D barcode 4 located in proximity to a certain product. For example, if the product is goods, the barcode may be printed on packaging of the goods. If the product is a service (such as a travel booking service) the 2D barcode may be displayed at a location where the service is offered. The application then sends a message including the 2D barcode, and data about the respective customer (e.g. any one or more of a user id, his or her current location, and/or the current time, etc.) to the embodiment 1.
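The message described above can be sketched as a simple serializable payload. This is an illustrative sketch only: the patent does not specify a wire format, so the JSON encoding, the field names, and the base64 treatment of the captured image are all assumptions.

```python
import base64
import json
from dataclasses import dataclass, asdict

@dataclass
class ScanMessage:
    """Illustrative payload sent by the smartphone application.
    All field names are assumptions; the patent only lists the content."""
    user_id: str            # information about the user
    location: str           # current location (e.g. store id or coordinates)
    timestamp: str          # current time, ISO-8601
    barcode_image_b64: str  # captured 2D-barcode image, base64-encoded

def encode_scan(user_id, location, timestamp, image_bytes):
    """Build the JSON message the embodiment's interface would receive."""
    msg = ScanMessage(user_id, location, timestamp,
                      base64.b64encode(image_bytes).decode("ascii"))
    return json.dumps(asdict(msg))

def decode_scan(payload):
    """Recover the customer fields and the raw barcode image."""
    data = json.loads(payload)
    return data, base64.b64decode(data["barcode_image_b64"])
```

A JSON payload keeps the example self-describing; any encoding that carries the same four items would serve the claimed method equally well.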

    [0026] The embodiment 1 uses the barcode to obtain information about the product, and uses this in combination with stored information about each customer, to generate a respective story of branding and advertisement. The story will be presented on the respective customer's smartphone 2, 3 in the form of a media infusion of text, audio, and video. Customer A and customer B will thus receive different respective stories upon scanning the same 2D barcode, due to their different preferences and different related online multimedia.

    [0027] The embodiment 1 is a multi-agent system, which contains the following running agents:

    [0028] a) Extract Agent: After a customer scans the 2D barcode of the product, the 2D barcode is sent to the Extract Agent. The Extract Agent uses the 2D barcode to extract product information about the product (e.g. its name, its producer, a description, etc.) from a database inside the embodiment 1 or outside it (such as a database to which the embodiment is connected using the internet). The Extract Agent further obtains from the message information about the customer (customer id, current location, current time, etc.). The two forms of information extracted by the Extract Agent are sent to the Process Agent for further processing.

    [0029] b) Storage Agent: The Storage Agent stores further information ("recorded customer information") about each of the customers (e.g. their gender, preferences, education, profession, and/or previous shopping activities, etc.) and information about the product. The Storage Agent pre-processes the data to extract the most relevant elements for the Process Agent to customize the branding story.

    [0030] c) Process Agent: The Process Agent is the kernel of the embodiment 1. The Process Agent receives the data extracted by the Extract Agent, and uses the information about the customer to generate a request to the Storage Agent to send the Process Agent the recorded customer information. The Process Agent then analyses the various items of information gathered from the Extract Agent and Storage Agent, and generates a story. The story may include further information gathered over the internet from one or more online data sources, such as social media (e.g. Facebook® and Twitter®). The story includes branded content which will interest the customer (e.g. an advertising commercial, or friends' comments on the product). The Process Agent also updates the Storage Agent with the user's feedback about the generated story after the processing, in order to provide more engaging stories in the future. The Process Agent ranks the most important information about the product and the customer, and the story events are generated from the information in rank order.

    [0031] d) Storyteller Agent: The Storyteller Agent presents the story received from the Process Agent to the customer.

    [0032] It generates, for example by interacting with the application on the respective smartphone 2, 3, a graphical user interface on the smartphone 2, 3 which is suitable for the corresponding customer. Unlike conventional personalized video generation (such as in [8]), the story is presented as a virtual storybook, with narratives from the storyteller. The story incorporates data from the social media. In this way it resembles the personalized content in [9], but in [9] the content is not advertising content, and is not extracted using a barcode.
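The four-agent pipeline above can be sketched as plain classes that hand data along the described path: Extract → Storage/Process → Storyteller. This is a minimal sketch under assumed interfaces; the class and method names, the in-memory databases, and the toy ranking are all illustrative, not the patented implementation.

```python
class ExtractAgent:
    """Decodes the barcode into product information and pulls the
    customer fields out of the incoming message."""
    def __init__(self, product_db):
        self.product_db = product_db  # barcode -> product info (assumed mapping)

    def extract(self, message):
        product = self.product_db[message["barcode"]]
        customer = {k: message[k] for k in ("user_id", "location", "time")}
        return product, customer

class StorageAgent:
    """Holds the recorded customer information and story feedback."""
    def __init__(self, profiles):
        self.profiles = profiles

    def recall(self, user_id):
        return self.profiles.get(user_id, {})

    def record_feedback(self, user_id, feedback):
        self.profiles.setdefault(user_id, {}).setdefault("feedback", []).append(feedback)

class ProcessAgent:
    """Combines extracted data with the recorded profile, ranks the
    items, and emits story events in rank order (toy scoring)."""
    def __init__(self, storage):
        self.storage = storage

    def build_story(self, product, customer):
        profile = self.storage.recall(customer["user_id"])
        items = [
            (1.0, f"About {product['name']}"),
            (0.8, f"Recommended for fans of {profile.get('preference', 'new products')}"),
        ]
        return [text for _, text in sorted(items, reverse=True)]

class StorytellerAgent:
    """Presents the story events to the customer (here, as plain text)."""
    def present(self, story):
        return "\n".join(story)
```

In the embodiment the Process Agent's ranking logic is authored with the E-FCM designer described below, rather than hard-coded as it is in this sketch.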

    [0033] Importantly, the Process Agent is able to derive the personalized storytelling based on the user preferences and real-time context changes. It does this using an Evolutionary Fuzzy Cognitive Map (E-FCM), a computational model previously proposed by us to model the causal relationships among a number of concepts, and to simulate concept state changes [7]. An E-FCM designer is developed to author a story with a collection of story scenes. The software has an intuitive user interface with simple drag-and-drop operations to generate and simulate stories easily. The main user interface of the designer is shown in FIG. 2(a), and FIGS. 2(b) and 2(c) illustrate a drag and drop operation performed using the interface.
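The core update rule of a fuzzy cognitive map can be illustrated in a few lines: each concept's next activation is a squashed weighted sum of the activations of the concepts that causally influence it. This sketch shows only the classic synchronous FCM update; the evolutionary extensions of the E-FCM in [7] (e.g. asynchronous, probabilistic state evolution) are not reproduced here.

```python
import math

def fcm_step(state, weights):
    """One synchronous fuzzy-cognitive-map update:
    x_i(t+1) = sigmoid( sum_j w[j][i] * x_j(t) ),
    where w[j][i] is the causal weight from concept j to concept i."""
    n = len(state)
    nxt = []
    for i in range(n):
        s = sum(weights[j][i] * state[j] for j in range(n))
        nxt.append(1.0 / (1.0 + math.exp(-s)))  # squash into (0, 1)
    return nxt
```

Iterating `fcm_step` simulates how an activated concept (e.g. a user preference or a context change) propagates along the authored causal links to activate story-scene concepts.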

    [0034] The interface is a tool to generate the logic of the “Process Agent” to select the most relevant story event based on user interactions as well as external stimuli.

    [0035] In FIGS. 2(b) and (c), unit 3 and unit 4 are two story objects. The user is able to construct the logic easily, using the intuitive graphical user interface and a simple drag and drop operation. In other words, with the designer it is easy to construct a story incorporating user interactions and context changes, using intuitive drag-and-drop operations.

    [0036] The designer further produces interfaces as shown in FIGS. 3(a) and 3(b). The interface of FIG. 3(a) is used to set the concepts of the story (user preferences, contexts, story scenes). The interface of FIG. 3(b) is used to set the causal relationships. By changing the parameters in FIGS. 3(a) and (b), the user can set the value for each story event, and the weights which activate different story events. For example, a girl may prefer advertisements for harmless dolls, while a man may prefer power and speed. By setting different weights to activate different story events, personalized stories can be created for different users. Using the interfaces of FIGS. 3(a) and (b), it is easy to set the concepts as well as the causal relationships.
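The weighted activation of story events described above can be sketched as a simple scoring step: each candidate event carries the concepts it involves, and the event whose concepts best match the user's preference weights is activated. This is an illustrative reduction of the designer's authored logic; the concept names and the additive scoring are assumptions, not the E-FCM mechanism itself.

```python
def select_story_event(preferences, events):
    """Score each candidate story event by summing the user's preference
    weights over the concepts the event involves, and return the
    highest-scoring event. `preferences` maps concept -> weight in [0, 1];
    each event is {"name": ..., "concepts": [...]}."""
    def score(event):
        return sum(preferences.get(c, 0.0) for c in event["concepts"])
    return max(events, key=score)
```

With different preference weights, the same candidate set yields different activated events, which is how two customers scanning the same barcode receive different stories.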

    [0037] Once a customer has viewed the story using the corresponding smartphone 2, 3, he or she may be enabled to post it to a social media website for viewing by other individuals (e.g. potential customers) associated with the customer (e.g. part of the customer's social network on that site). For example, the application on the smartphone 2, 3 may be operative to receive a command from the customer, to post the story to the social media website (e.g. using the system 1, or directly). For example, the story may be posted to a Facebook® page of the customer, for viewing by the customer's Facebook® friends. This enables a remote shopping-together experience with friends.
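The sharing step can be sketched as a thin command handler. This is a hypothetical sketch: `site_client.post` stands in for whatever social-media API the application would actually call (no real API is named in the patent), and the parameter names are invented for illustration.

```python
def share_story(story_content, site_client, user_token):
    """Upon a user command, post the generated story to a social-media
    site for viewing by the user's contacts. `site_client` is an assumed
    client object exposing a `post(token, content, audience)` method;
    it is NOT a real social-media API."""
    return site_client.post(token=user_token,
                            content=story_content,
                            audience="friends")
```

In the embodiment this call could be issued either by the system 1 on the customer's behalf or directly by the smartphone application.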

    [0038] Although only a single embodiment of the invention has been described in detail, many variations are possible as will be clear to a skilled reader, within the scope of the appended claims.

    References

    [0039] The disclosure in the following references is incorporated by reference in its entirety.

    [0040] 1. Jonathan Gottschall, “Why storytelling is the ultimate weapon?” http://www.fastcocreate.com/1680581/why-storytelling-is-the-ultimate-weapon.

    [0041] 2. Yundong Cai, Zhiqi Shen, Chunyan Miao, Ah-Hwee Tan: DIRACT: Agent-Based Interactive Storytelling. International Conference on Agent Technology 2010: 273-276.

    [0042] 3. F. Charles, S. Mead, and M. Cavazza. Character-driven story generation in interactive storytelling. In Virtual Systems and Multimedia, 2001. Proceedings. Seventh International Conference on, pages 609-615, Berkeley, Calif., 2001.

    [0043] 4. B. Magerko and J. Laird. Building an interactive drama architecture. In First International Conference on Technologies for Interactive Digital Storytelling and Entertainment, pages 226-237, Darmstadt, Germany, 2003.

    [0044] 5. M. Mateas and A. Stern. Façade: An experiment in building a fully-realized interactive drama. In Game Developers Conference, Game Design track, San Jose, USA, 2003.

    [0045] 6. R. M. Young, M. O. Riedl, M. Branly, A. Jhala, R. J. Martin, and C. J. Saretto. An architecture for integrating plan-based behavior generation with interactive game environments. Journal of Game Development, 1(1):51-70, 2004.

    [0046] 7. Y. Cai et al., "Context Modeling with Evolutionary Fuzzy Cognitive Map in Interactive Storytelling," IEEE Int'l Conf. Fuzzy Systems (WCCI 08), IEEE CS Press, 2008, pp. 2320-2325.

    [0048] 8. WO 2007/053898, "Personalised video generation".

    [0049] 9. WO 2008/043143, "Personalised content generation".