Generation of animation using icons in text
09953450 · 2018-04-24
Assignee
Inventors
CPC classification
G09B5/065
PHYSICS
G09B5/062
PHYSICS
G10L2021/105
PHYSICS
International classification
Abstract
There is described a method for creating an animation, comprising: inserting at least one icon within a text related to the animation, the at least one icon being associated with an action to be performed by one of an entity and a part of an entity, at a point in time corresponding to a position of the at least one icon in the text, and a given feature of an appearance of the at least one icon being associated with one of the entity and the part of the entity; and executing the text and the at least one icon in order to generate the animation.
Claims
1. A method for creating an animation using text, the method comprising: receiving from a text input, a text comprising one or more entities and one or more events, the events comprising one or more of: actions to be performed by an entity or by a part of the entity, and words to be transformed into speech; receiving a user input inserting at least one icon within the text, said at least one icon for embedding a non-textual content into the text; generating the animation for display on a display unit, the generating comprising executing the text and said at least one icon; wherein executing the text comprises constructing a visual depiction of the text and performing one or more of: generating a visual depiction of the actions performed by the entity or by the part of the entity, and converting the words into a speech using a text-to-speech module; and wherein executing said at least one icon includes executing the non-textual content of the icon wherein said execution occurs at a point in time corresponding to a position of said at least one icon in said text, said point in time being equivalent to the time at which an order of execution reaches the at least one icon in the text.
2. The method as claimed in claim 1, wherein said inserting at least one icon comprises inserting a plurality of icons at a same location within said text and said executing said text and said at least one icon comprises executing said plurality of icons consecutively.
3. The method as claimed in claim 2, wherein said inserting a plurality of icons at a same location within said text comprises inserting at least one separator symbol between said plurality of icons.
4. The method as claimed in claim 1, wherein said inserting at least one icon comprises inserting a plurality of icons at a same location and said executing said text and said at least one icon comprises executing said plurality of icons simultaneously.
5. The method as claimed in claim 1, wherein said given feature of an appearance is one of a color of said at least one icon, a shape of said at least one icon, and a depiction of said at least one icon.
6. The method as claimed in claim 1, wherein said given feature is associated with said entity and an additional feature of said appearance of said at least one icon is associated with said part of said entity.
7. The method as claimed in claim 6, wherein said position of said at least one icon in said text is further dependent on said part of said entity.
8. The method as claimed in claim 1, wherein said inserting at least one icon comprises inserting a plurality of icons to be regrouped together to form a continuous sequence of icons and executed in sequence according to a set of predefined synchronization rules.
9. The method as claimed in claim 1, wherein said inserting at least one icon comprises dragging and dropping said icon directly into said text from an icon box.
10. The method as claimed in claim 1, wherein said inserting at least one icon comprises dragging and dropping said icon directly into said text from a world view.
11. The method as claimed in claim 1, wherein said inserting at least one icon comprises inputting a textual command that will generate said icon.
12. The method of claim 1 wherein the order of execution of an event within the text is language specific and depends on a direction in which the text is written/read.
13. The method as claimed in claim 1, wherein the non-textual content includes one or more of: video, audio, image, action, time markers, and spatial markers.
14. A system for creating an animation from text, comprising: at least one processor in one of at least one computer and at least one server; and at least one application coupled to the at least one processor, said at least one application being configured for: receiving, from a text input, a text comprising one or more entities and one or more events, the events comprising one or more of: actions to be performed by an entity or by a part of the entity, and words to be transformed into speech; receiving a user input inserting at least one icon and embedding the at least one icon at a given position in the text, said at least one icon for embedding a non-textual content into the text; generating the animation for display on a display unit, the generating comprising executing the text and said at least one icon; wherein executing the text comprises constructing a visual depiction of the text and performing one or more of: generating a visual depiction of the actions performed by the entity or by the part of the entity, and converting the words to a speech using a text-to-speech module; and wherein executing the at least one icon includes executing the non-textual content of the icon wherein said execution occurs at a point in time corresponding to the given position of said at least one icon in said text, said point in time being equivalent to the time at which an order of execution reaches the icon in the text.
15. The system as claimed in claim 14, wherein said at least one processor is located in said at least one computer and said at least one application is located in said at least one computer.
16. The system as claimed in claim 14, wherein said at least one processor comprises a first processor located in said computer and a second processor located in said server, and said at least one application comprises a first application coupled to the first processor and a second application coupled to the second processor, said first processor configured for receiving said text, sending said text to said server, and receiving said animation, and said second processor configured for receiving said text from said computer, and executing said text and said at least one icon, and sending said animation to said computer.
17. The system as claimed in claim 14, wherein said at least one application is further configured for determining animation information using said text and said at least one icon, and generating said animation using said animation information.
18. The system as claimed in claim 17, wherein said at least one processor comprises a processor located in said at least one computer and said at least one application comprises an application coupled to said processor, said application configured for determining said animation information and generating said animation.
19. The system as claimed in claim 17, wherein said at least one processor comprises a first processor located in said at least one computer and a second processor located in said at least one server and said at least one application comprises a first application coupled to the first processor and a second application coupled to the second processor, said first processor configured for determining said animation information, sending said animation information to said server, and receiving said animation, and said second processor configured for receiving said animation information, generating said animation, and sending said animation to said first processor.
20. The system as claimed in claim 17, wherein said at least one processor comprises a first processor located in said at least one computer and a second processor located in said at least one server and said at least one application comprises a first application coupled to the first processor and a second application coupled to the second processor, said first processor configured for sending said text to said server and receiving said animation, and said second processor configured for receiving said text, determining said animation information, generating said animation, and sending said animation to said first processor.
21. The system of claim 14, wherein the order of execution of an event within the text is language specific and depends on a direction in which the text is written/read.
22. The system as claimed in claim 14, wherein the non-textual content includes one or more of: video, audio, image, action, time markers, and spatial markers.
23. A method for creating an animation using text, the method comprising: receiving from a text input, a text comprising one or more entities, one or more events comprising one or more of: actions to be performed by an entity or by a part of the entity, and words to be transformed into speech, and at least one icon for embedding a non-textual content into the text; generating the animation for display on a display unit, the generating comprising executing the text and said at least one icon; wherein executing the text comprises constructing a visual depiction of the text and performing one or more of: generating a visual depiction of the actions performed by the entity or by the part of the entity, and converting the words into a speech using a text-to-speech module; and wherein executing said at least one icon includes executing the non-textual content of the icon wherein said execution occurs at a point in time corresponding to a position of said at least one icon in said text, said point in time being equivalent to the time at which an order of execution reaches the at least one icon in the text.
24. The method as claimed in claim 23, wherein the non-textual content includes one or more of: video, audio, image, action, time markers, and spatial markers.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further features and advantages of the present invention will become apparent from the following detailed description, taken in combination with the appended drawings, in which:
DETAILED DESCRIPTION
(40) A text box, also called a block, may be used to enter text. The input text can also come from a word processor or any other type of text input, but a text box will be used throughout the description to simplify understanding and should not be construed as limiting. A traditional timeline located outside the text box is used for the timing and synchronization of events. The execution of the content of the text box is made in accordance with the timeline and the resulting scene is displayed in a window.
(42) The general arrangement of content within a block can be either homogeneous or heterogeneous. A homogeneous block is one that contains only one type of content: for example, only text, only video, or only audio. A heterogeneous block contains two or more types of content.
(43) The positioning order of the content and/or icons representative of content within a block dictates the temporal order of execution of the content. When block 104 is activated, text 105 is executed. During the execution of text 105, the content associated with icon 106 is executed when this icon 106 is reached. For example, if icon 106 represents a video, this video is executed. When block 107 is activated, as audio icon 107b is located after video icon 107a, the video associated with video icon 107a is first executed and the execution of the audio track associated with audio icon 107b occurs once the execution of the video is completed.
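The position-dictates-timing behavior described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the `TextItem` and `IconItem` names, and the idea of logging execution instead of rendering, are assumptions made for the example.

```python
# Minimal sketch: executing a heterogeneous block's content in positional order.
# Names (TextItem, IconItem, execute_block) are illustrative, not from the patent.

class TextItem:
    def __init__(self, words):
        self.words = words

    def execute(self, log):
        log.append(f"text: {self.words}")


class IconItem:
    def __init__(self, content_type, ref):
        self.content_type = content_type  # e.g. "video" or "audio"
        self.ref = ref

    def execute(self, log):
        log.append(f"play {self.content_type}: {self.ref}")


def execute_block(items):
    """The position of each item within the block dictates its temporal order."""
    log = []
    for item in items:  # left-to-right position == temporal order of execution
        item.execute(log)
    return log


# Block 107 analogue: a video icon placed before an audio icon, so the
# audio plays only once the video has completed.
trace = execute_block([IconItem("video", "intro.mp4"),
                       IconItem("audio", "theme.mp3")])
```

Executing the block yields the video event before the audio event, mirroring the left-to-right ordering of the icons.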
(44) In one embodiment, text holds a prominent position with respect to the other types of content since it is usually the primary tool for storytelling. All types of content other than text can be embedded within text in the form of commands or icons. For example, if a block containing text describes a scene, then other content can be inserted within the text to enrich the scene. At a certain point within the scene, background music may be added. This is done by adding a reference to an audio file directly within the text. The reference may be in the form of an icon, or by a textual command. It should be understood that any reference that can be inserted into a block can be used.
(45) In all subsequent figures, a generic icon symbol of the type of icon 106 is used to represent any type of content other than text that can be embedded within text and the location of the icon within the text dictates the time at which the content associated with the icon is executed.
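Embedding content references directly within text, as described above, amounts to splitting a block into text runs and inline markers while preserving their positions. The `[[type:ref]]` marker syntax below is a hypothetical stand-in for the icons or textual commands the description mentions.

```python
import re

# Hypothetical inline-marker syntax: [[audio:file.mp3]] embedded in the text.
MARKER = re.compile(r"\[\[(\w+):([^\]]+)\]\]")


def split_block(text):
    """Split a block into text runs and embedded content references,
    preserving their positions (and therefore their execution order)."""
    items, pos = [], 0
    for m in MARKER.finditer(text):
        if m.start() > pos:
            items.append(("text", text[pos:m.start()]))
        items.append((m.group(1), m.group(2)))  # (content type, reference)
        pos = m.end()
    if pos < len(text):
        items.append(("text", text[pos:]))
    return items


# Background music added at a precise point within the narration.
items = split_block("The forest grows dark [[audio:owls.mp3]] and Bob shivers.")
```

The audio reference lands between the two text runs, so its execution time falls exactly where the author placed it in the sentence.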
(48) In one embodiment, icons are associated with an entity.
(49) In one embodiment, a color code is used to discriminate the entities. Each entity is associated with a unique color, and icons having this color cause an action to be performed on or by the entity associated with that color; all icons associated with a given entity therefore share the same color. Block 310 presents an example of a color code. For example, color 1 (white) is associated with an entity Bob and color 2, represented by horizontal lines, is associated with a second and different entity such as Mary. In block 307, all icons have the same shape and the color of icons 308 and 309 is used to discriminate the entities. For example, icon 308 has color 1 and is therefore associated with the entity Bob while icon 309 has color 2 and is associated with the entity Mary. Icon 308 is set to Make a scary face and icon 309 is set to Jump. When block 307 is executed, the entity Bob makes a scary face, which causes the entity Mary to jump. This provides a visual indicator of the association between actions and entities.
(50) In another embodiment, icons 308 and 309 are associated with different parts of a same entity. For example, icon 308 having the first color is associated with the upper body of an entity and icon 309 having the second color is associated with the lower body of the same entity. An entity can be broken down in more than two parts and each part is associated with a corresponding icon having a unique color.
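The color-code association above can be sketched as a simple lookup: an icon's color identifies who performs its action. The color names, entity names, and `Icon` structure are illustrative choices for the example, not defined by the patent.

```python
# Sketch of the color code of block 310: each color maps to a unique entity.
COLOR_TO_ENTITY = {"color1": "Bob", "color2": "Mary"}


class Icon:
    def __init__(self, color, action):
        self.color = color
        self.action = action


def resolve(icon):
    """An icon's color determines the entity that performs its action."""
    return (COLOR_TO_ENTITY[icon.color], icon.action)


# Block 307 analogue: Bob makes a scary face, which makes Mary jump.
events = [resolve(Icon("color1", "make a scary face")),
          resolve(Icon("color2", "jump"))]
```

The same table could equally map colors to parts of a single entity (upper body, lower body), as in the alternative embodiment above.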
(51) It should be understood that any feature of the appearance of an icon can be used to create an association with an entity or a part of an entity. The shape, the size and the color of an icon are examples of features that can be used. In another example, all icons have the same shape and color but a drawing is inserted into the icons to discriminate the entities or the parts of a same entity, as illustrated in
(52) In one embodiment, the appearance features used to discriminate entities or parts of an entity are automatically assigned by the software. For example, if a color code is used to discriminate the entities, colors are automatically assigned by the software for each new entity created. However, the user has the ability to change the assigned colors. Each entity within a given scene has a specific and unique color; no two entities can have the same color within the same scene. Alternatively, colors can be set manually by the user.
(53) In one embodiment, a first feature of the appearance of icons is used to discriminate entities and a second feature of the appearance of the same icons is used to discriminate parts of the entities. For example, the color of an icon is used to represent the entity with which the icon is associated while the shape of the icon is used to discriminate the parts of the entities, as illustrated in
(54) The concept of channels can be applied to any entity of the animation. The entity can be a character, with the channels associated with parts or members of a body, such as a right foot, eyes, etc. The entity can also be an object (which is traditionally inanimate) such as a chair. A channel can be associated with the legs of the chair (to walk) and another channel can be associated with the backrest, which might incorporate a face, for example.
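Combining the two appearance features gives a two-level lookup: color selects the entity, shape selects the channel (body part). The particular color, shape, and entity names below are assumptions made for illustration.

```python
# Sketch: color discriminates the entity, shape discriminates the channel.
COLOR_TO_ENTITY = {"yellow": "Jim", "green": "chair"}
SHAPE_TO_CHANNEL = {"circle": "head", "square": "torso", "star": "legs"}


def resolve_channel_icon(color, shape, action):
    """Two independent appearance features identify entity and channel."""
    return (COLOR_TO_ENTITY[color], SHAPE_TO_CHANNEL[shape], action)


# A green star icon: the legs channel of the chair entity performs a walk.
event = resolve_channel_icon("green", "star", "walk")
```

Because the two mappings are independent, any entity can reuse the same set of channel shapes, and a new entity only needs a new color.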
(55) While
(56) While
(58) In one embodiment, each channel can have its own track. Taking the example of icons 413a-c in block 411, the space between the two text parts can be divided into three locations. The first location can be reserved for channel A (icon 413a), the second location for channel B (icon 413b) and the third location for channel E (icon 413c).
(59) In one embodiment, the relative position of each track is logical: the head icon (circle) 414a is above the torso icon (square) 414b, which should be above the leg icon (star) 414c.
(61) In one embodiment, icons representative of causal events can be grouped together to form a group icon, which facilitates the manipulation of these causal events. Grouping causal events makes it possible to keep track of all the causal events and to move them all together.
(65) In one embodiment, the words that are typed to convert to a given icon can be predetermined as in walk or thumbs up to specify a walk icon or a thumbs up icon, or they can be text interpreted by natural language processing (NLP). An NLP based system would take a description and attempt to understand the user's intentions and insert the appropriate icon within the block. For example, the user writes walk and the system recognizes that the walk action is related to the legs and converts the text command into the appropriate channel (leg) icon. In another example, if the user wrote point the thumbs up then turn it down, the NLP system would interpret this to mean the insertion of a thumbs up icon followed immediately by a thumbs down icon.
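The predetermined-keyword variant of the text-to-icon conversion above can be sketched with a lookup table. This deliberately sidesteps real natural language processing; the keyword table and the `(channel, action)` icon representation are assumptions for the example.

```python
# Keyword-based stand-in for the text-command-to-icon conversion; a real
# NLP system would infer intent, but a table shows the mapping idea.
KEYWORD_TO_ICON = {
    "walk": ("legs", "walk"),
    "thumbs up": ("hand", "thumbs up"),
    "thumbs down": ("hand", "thumbs down"),
}


def commands_to_icons(command):
    """Return (channel, action) icons for every known keyword, ordered by
    where each keyword appears in the command."""
    hits = []
    for keyword, icon in KEYWORD_TO_ICON.items():
        pos = command.find(keyword)
        if pos >= 0:
            hits.append((pos, icon))
    return [icon for _, icon in sorted(hits)]


# "walk" maps to the legs channel, "thumbs up" to the hand channel.
icons = commands_to_icons("walk forward with thumbs up")
```

The system recognizes that "walk" relates to the legs and inserts the appropriate channel icon at the command's position in the text.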
(67) In one embodiment, the order of the blocks determines their order of execution. The direction, horizontal or vertical, is language specific, depending on how sentences within the language are written, as illustrated in
(68) In the same or a further embodiment, timing and synchronization within the blocks is controlled mainly via text or dialog, as illustrated in
(69) In yet another option, to be used separately or in combination with the above techniques, the order and type of action icons determine the timing of their execution, i.e. the execution of action icons follows specific rules when several action icons are placed one after the other in the text. Many rules are possible, as illustrated by the following two rules for action icons:
(70) Rule 1, the rule of interfering action icons: sequential action icons will either be executed consecutively or simultaneously depending on whether they interfere or not, as illustrated in
(71) Rule 2, the rule of mutable priority via channel type hierarchy: this rule dictates the execution timing of action icons based on a hierarchy of action icon types. The exact hierarchy of action icon types is itself variable both in number and precedence. An example of a hierarchy from higher to lower precedence is as follows: Walks, Postures, Gestures, Looks, Facial Expressions. Any action icon with a higher precedence will create an implicit timing marker for all action icons with a lower precedence even though they don't interfere. For example, referring to
(72) Yet another way of controlling timing and synchronization is a separator symbol that visually forces the execution of action icons to be consecutive. In one embodiment, the symbol | (748, 749) is chosen as the separator symbol, as illustrated in
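Rule 1 and the separator symbol can be sketched together as a small scheduler: adjacent icons share a time slot unless their channels interfere or a separator forces a break. Representing icons by their channel names, and treating "same channel" as the interference test, are simplifying assumptions for the example.

```python
def schedule(icons):
    """icons: a list of channel names, optionally interleaved with the
    separator "|". Returns groups of icons that start at the same time."""
    groups, current = [], []
    for icon in icons:
        if icon == "|" or icon in current:  # forced break or interference
            if current:
                groups.append(current)
            current = [] if icon == "|" else [icon]
        else:
            current.append(icon)  # non-interfering icons run simultaneously
    if current:
        groups.append(current)
    return groups


# "head" and "legs" do not interfere, so they share a slot; the second
# "legs" icon interferes with the first and starts a new slot.
slots = schedule(["head", "legs", "legs"])

# The separator forces consecutive execution even without interference.
sep_slots = schedule(["head", "|", "legs"])
```

Rule 2's precedence hierarchy could be layered on top by also breaking a slot whenever a higher-precedence icon type appears.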
(76) In one embodiment, during the paste procedure, each generic entity within the saved sequence (E1, E2, etc.) is reassigned to a particular entity 824 within the new scene (Mike, John, etc.). Since each generic entity within the saved sequence is colored, the assignment of new entities (from the new scene) depends on the colors already in use in the new scene. The color assignments might be made automatically by software that makes all color choices based on default rules, manually, where the user has complete control, or by some combination of both, where the software suggests color assignments and the user validates the suggestions. The paste process allows the sequence to be pasted in either a compact form 829 or an expanded form 828. The expanded form has all the icons visible and accessible while the compact form has the action sequence contained in a single group icon, Group dance 1 for example.
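The paste-time reassignment can be sketched as building a mapping from the saved sequence's generic entities to the new scene's entities. Assigning entities in order of first appearance is an invented default rule; as the description notes, the user could override such choices.

```python
def reassign(sequence, scene_entities):
    """Remap generic entities in a saved sequence onto scene entities.

    sequence: list of (generic_entity, action) pairs, e.g. ("E1", "spin").
    scene_entities: entity names available in the new scene, in the order
    they should be assigned (an assumed default rule)."""
    generics = []
    for generic, _ in sequence:
        if generic not in generics:  # first-appearance order
            generics.append(generic)
    mapping = dict(zip(generics, scene_entities))
    return [(mapping[g], action) for g, action in sequence]


# E1 becomes Mike and E2 becomes John throughout the pasted sequence.
pasted = reassign([("E1", "spin"), ("E2", "clap"), ("E1", "bow")],
                  ["Mike", "John"])
```

Color-collision handling would refine the mapping step: a generic entity would only take a color not already in use in the new scene.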
(80) The modifications of blocks illustrated in
(81) It should be understood that icons associated with entities can be used to create 1D, 2D or 3D animations.
(82) In one embodiment, a machine comprising a processor and a memory is adapted for creating an animation using icons. An application coupled to the processor is configured for inserting at least one icon within a text related to the animation in accordance with the method described above. The text is displayed on a display unit and can be input by a user via a user interface comprising an input device such as a keyboard, for example. Alternatively, the text may be present in the memory or received via communication means. The user positions the icons in the text using the user interface. Each icon is associated with a corresponding action to be performed by an entity. The user inserts each icon at a position in the text corresponding to the point in time at which the corresponding action is to be performed in the animation. Each icon has a feature that is associated with the entity which will perform the corresponding action in the animation. The application is further configured for executing the text and the icons in order to generate the animation. The application executes the text and each time an icon is reached, the application determines the action information related to the icon. The action information comprises all needed information for creating the action in the animation. For example, the action information comprises the type of the action associated with the icon, the start time of the action which is determined using the position of the icon in the text, the entity performing the action which is determined using the feature of the icon, and the like.
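The execution pass described above, in which the application walks the text and derives action information each time it reaches an icon, can be sketched as follows. The timing model (one time unit per word of text) and the tuple representation of block items are simplifying assumptions.

```python
# Sketch of deriving action information during execution of the text.
COLOR_TO_ENTITY = {"yellow": "Jim"}


def build_action_info(items):
    """items: ("text", words) or ("icon", color, action_type) tuples.

    Returns one action-information record per icon: the action type, the
    performing entity (from the icon's feature), and the start time
    (from the icon's position in the text)."""
    actions, t = [], 0
    for item in items:
        if item[0] == "text":
            t += len(item[1].split())  # assumed: text advances the clock
        else:
            _, color, action_type = item
            actions.append({"entity": COLOR_TO_ENTITY[color],
                            "type": action_type,
                            "start": t})  # start time == icon position
    return actions


# An icon placed after five words of text starts its action at time 5.
info = build_action_info([("text", "Jim looks at the camera"),
                          ("icon", "yellow", "smile")])
```

A real renderer would derive start times from speech or animation durations rather than word counts, but the position-to-time principle is the same.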
(84) The processor 1206 is coupled to an application configured for inserting at least one icon within a text related to the animation in accordance with the method described above. The user of the machine 1202 inputs the text via the user interface 1214. Alternatively, the text may be uploaded from the memory 1208 or received from the communication means 1210. The text is displayed on the display unit 1212 such that the user can insert icons in the text. Each icon is associated with a corresponding action to be performed by a corresponding entity. The position of each icon in the text corresponds to the point in time at which the corresponding action is to occur in the animation. Furthermore, a given feature of the appearance of each icon is associated with the corresponding entity. The application coupled to the processor 1206 is further configured for sending animation information to the server 1204 and receiving the animation from the server 1204 via communication means 1210. The animation information comprises all of the information required for creating the animation. For example, the animation information comprises the text related to the animation and a description of the action to be performed such as the type of actions, the entities performing the actions, the start time of the actions, the duration of the action, etc.
(85) The processor 1216 is coupled to an application configured for receiving the animation information from the computer 1202 via the communication means 1220, generating the animation using the animation information, and sending the animation to the computer 1202 via the communication means 1220. The resulting animation comprises the actions associated with the icons. The start time of each action in the animation corresponds to the position of the corresponding icon within the text.
(86) Once the animation is created on the server 1204, the animation is sent to the computer 1202 and stored in memory 1208. The user can display the animation on the display unit 1212.
(87) In one embodiment, the server 1204 receives the text related to the animation and the icons inserted in the text from the computer 1202. In this case, the application coupled to the processor 1206 is configured for sending the text and the icons to the server 1204. The application coupled to the processor 1216 is configured for determining the animation information using the received text and icons in order to generate the animation. The application executes the text and each time an icon is reached, the application determines the action information related to the icon. The application is configured for determining the type of the action associated with the icon, the start time of the action using the position of the icon in the text, and the entity performing the action using the feature of the icon. For example, a circular icon may be associated with the head of an entity and yellow may be associated with the entity Jim. In this example, the head can only perform a single action, namely smiling. When it encounters the circular and yellow icon, the application coupled to the processor 1216 determines the action Jim smiling. The start time of the Jim smiling action is defined by the position of the yellow and circular icon in the text. A predetermined duration may be associated with the smiling action.
(88) In one embodiment, icon parameters are associated with an icon. For example, the icon parameters can comprise the type of the action associated with the icon, the duration of the action, and the like. Referring back to the yellow and circular icon, more than one action may be associated with a circular icon. For example, the actions smiling and grimacing can be associated with the icon. In this case, the chosen action to be associated with the icon is defined in the icon parameters. The duration of the action can also be defined in the icon parameters. For example, the icon parameters may be set to the smiling action and to a 2-second duration. In this embodiment, the user defines the icon parameters, and the application coupled to the processor 1206 is further configured for sending the icon parameters of each icon to the server 1204. The application coupled to the processor 1216 is further configured for generating the animation using the icon parameters associated with each icon.
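The icon parameters described above can be sketched as a small record carried with each icon. The field names and the float representation of duration are illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class IconParameters:
    """Per-icon parameters, as described above: which of the icon's
    possible actions to use, and how long that action lasts."""
    action: str        # e.g. "smiling" vs "grimacing" for a head icon
    duration_s: float  # duration of the action, in seconds


# A circular (head) icon that could mean smiling or grimacing, pinned
# down to a 2-second smile by its parameters.
params = IconParameters(action="smiling", duration_s=2.0)
```

In the client-server arrangement above, records like this would be serialized and sent alongside the text so the server can generate the animation.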
(89) In one embodiment, the application coupled to the processor 1206 is configured for determining the animation information using the text and the icons inserted in the text. If icon parameters are associated with the icons, the application also uses the icon parameters to generate the animation information. In this case, the animation information comprises the text related to the animation and the action information which defines all of the actions associated with the icons. The application coupled to the processor 1206 is further configured for sending the text and the action information to the server 1204. The application coupled to the processor 1216 is configured for receiving the transmitted text and the action information, and for generating the animation using the text and the action information.
(90) It should be understood that the application coupled to the processor 1216 of the server 1204 can use any method known to a person skilled in the art for generating the animation using animation information.
(91) It should be noted that the embodiments of the invention described above are intended to be exemplary only. The scope of the invention is therefore intended to be limited solely by the scope of the appended claims.