
Using dialog and contextual data of a virtual reality environment to create metadata to drive avatar animation

One or more services may generate audio data and animations of an avatar based on input text. A speech input ingestion (SII) service may identify tags of objects in a virtual environment and associate tags of those objects with words in the input text, which may be stored as metadata in speech markup data. This association may enable an animation service to generate gestures toward objects while animating an avatar, or may be used to create animations or effects of the object. The SII service may analyze input text to identify dialog including multiple speakers associated with the text. The SII service may create metadata to associate certain words with respective speakers (avatars) of those words, which may be processed by the animation service to animate multiple avatars speaking the dialog.
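The tagging described above can be sketched as a small ingestion function that scans speaker-attributed dialog for words matching known scene objects and emits markup metadata. This is a minimal sketch: the `Speaker: line` dialog format, the `SCENE_OBJECTS` tag names, and the markup shape are all assumptions for illustration, not the patent's actual format.

```python
import re

# Illustrative object tags present in the virtual environment (assumed names).
SCENE_OBJECTS = {"door": "obj_door_01", "lamp": "obj_lamp_02"}

def ingest(text):
    """Split 'Speaker: line' dialog turns and annotate words that match
    tagged scene objects, producing speech markup metadata."""
    markup = []
    for turn in text.strip().splitlines():
        speaker, _, line = turn.partition(":")
        words = []
        for word in line.split():
            clean = re.sub(r"\W", "", word).lower()
            entry = {"word": word}
            if clean in SCENE_OBJECTS:
                # Metadata the animation service could use to gesture
                # toward the object while the avatar speaks.
                entry["object_tag"] = SCENE_OBJECTS[clean]
            words.append(entry)
        markup.append({"speaker": speaker.strip(), "words": words})
    return markup

result = ingest("Alice: Look at the lamp!\nBob: Close the door.")
```

Associating each turn with a speaker is what would let a downstream animation service drive multiple avatars through the same dialog.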

Systems and methods for generating content for a screenplay

Systems and methods are disclosed herein for generating content based on format-specific screenplay parsing techniques. The techniques generate and present content by creating new dynamic content structures that yield content segments for output on electronic devices. In one disclosed technique, a first instance of a first character name is identified in a screenplay document. A first set of character data following the first instance of the first character name and preceding an instance of a second character name in the screenplay document is then identified. Upon identification of the first set of character data, a content structure including an object is generated. The object includes attribute table entries based on the first set of character data. A content segment is generated for output based on the content structure (e.g., a 3D animation of the first character interacting within a scene).
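The parsing step above can be sketched as collecting the lines between two character-name cues and packing them into an object with attribute table entries. This assumes the screenplay convention of ALL-CAPS character names on their own line; the attribute-table field names are hypothetical.

```python
def parse_screenplay(text, first_name, second_name):
    """Collect the block of character data between two character-name cues
    and build a content structure with attribute table entries."""
    lines = text.splitlines()
    start = lines.index(first_name)                 # first character cue
    end = lines.index(second_name, start + 1)       # next character cue
    character_data = [l.strip() for l in lines[start + 1:end] if l.strip()]
    # Content structure: an object holding attribute-table entries
    # derived from the character data (shape is an assumption).
    return {
        "object": first_name,
        "attribute_table": [{"attr": f"line_{i}", "value": v}
                            for i, v in enumerate(character_data)],
    }

script = "INT. LAB - NIGHT\nALICE\n(whispering)\nIt works.\nBOB\nShow me."
structure = parse_screenplay(script, "ALICE", "BOB")
```

A renderer could then consume `structure` to generate the content segment (e.g., animate ALICE delivering the parenthetical and dialog).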

Techniques for ontology driven animation
10438392 · 2019-10-08 ·

A stylesheet data structure includes a plurality of stylesheet records, each comprising an ontology concept field, a presentation instruction field, and a presentation identifier field. Techniques for ontology driven animation include receiving a request to render an instance of a first concept in an annotation with an associated ontology. It is determined whether a stylesheet file includes a first stylesheet record that indicates the first concept, wherein the first stylesheet record also indicates a first presentation identifier. If so, an instance of a first component of the first concept is rendered according to a presentation instruction indicated in a second stylesheet record that also indicates the first presentation identifier. In some embodiments, the instance of the first component of the first concept is an instance of the first concept and the second stylesheet record is the first stylesheet record.
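The record lookup described above can be sketched as: find the record for the requested concept, take its presentation identifier, then render every record sharing that identifier. The field names and instruction strings here are illustrative assumptions.

```python
# Each stylesheet record carries an ontology concept field, a presentation
# instruction field, and a presentation identifier field (names assumed).
STYLESHEET = [
    {"concept": "Gene", "instruction": "render_as_bar", "presentation_id": "p1"},
    {"concept": "GeneLabel", "instruction": "render_as_text", "presentation_id": "p1"},
]

def render(concept):
    """Look up the first record indicating the concept, then collect the
    presentation instructions of all records sharing its identifier."""
    first = next((r for r in STYLESHEET if r["concept"] == concept), None)
    if first is None:
        return []
    pid = first["presentation_id"]
    return [r["instruction"] for r in STYLESHEET if r["presentation_id"] == pid]

instructions = render("Gene")
```

Note how the shared `presentation_id` is what pulls in the second record's instruction; in the degenerate case described in the abstract, the second record is simply the first one.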

System, method and apparatus for generating hand gesture animation determined on dialogue length and emotion

System, method and apparatuses directed to a paradigm of manuscript generation and manipulation combined with contemporaneous or simultaneous visualization of the text or other media being entered by the creator, with the emotion and mood of the characters conveyed graphically through rendering. Through real-time calculations, the respective characters are graphically depicted speaking and interacting physically with other characters, pursuant to directives found in the manuscript text.
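The title's core idea (selecting a hand gesture from dialogue length and emotion) can be sketched as a lookup keyed on both signals. The word-count threshold, emotion labels, and gesture names are all hypothetical.

```python
# Hypothetical gesture table keyed on (emotion, dialogue length class).
GESTURES = {
    ("angry", "long"): "sweeping_arm",
    ("angry", "short"): "fist_clench",
    ("calm", "long"): "open_palms",
    ("calm", "short"): "small_nod",
}

def select_gesture(dialogue, emotion):
    """Classify the dialogue as long or short (threshold assumed at 8 words)
    and pick a gesture matching the character's emotion."""
    length = "long" if len(dialogue.split()) > 8 else "short"
    return GESTURES.get((emotion, length), "idle")

gesture = select_gesture("Get out of my lab right now and never come back!", "angry")
```

A real system would compute these inputs from the manuscript directives rather than take them as arguments.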

Method for sharing emotions through the creation of three-dimensional avatars and their interaction

A two-dimensional image including at least one portion of a human or animal body is transformed into a three-dimensional model. An image is acquired that includes the at least one portion of the human or animal body. The at least one portion is identified within the image. Features indicative of the at least one portion of the human or animal body are searched for within it, and a set of landmarks corresponding to those features is identified. A deformable mask including the set of landmarks is aligned; the deformable mask includes a number of meshes corresponding to the at least one portion of the human or animal body. The 3D model is animated by dividing it into concentric rings and quasi-rings and applying a different degree of rotation to each ring.
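The final animation step (per-ring rotation) can be sketched as assigning each concentric ring its own angle. The linear inner-to-outer falloff used here is an assumption; the patent only states that different degrees of rotation are applied to each ring.

```python
def ring_rotations(num_rings, max_degrees):
    """Assign a different rotation to each concentric ring of the 3D model,
    decreasing linearly from the innermost ring (index 0) outward."""
    return [max_degrees * (num_rings - i) / num_rings for i in range(num_rings)]

angles = ring_rotations(4, 20.0)
```

Varying the angle per ring is what produces a smooth bend rather than a rigid rotation of the whole body part.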

Method and Apparatus for Implementing Animation in Client Application and Animation Script Framework

A method for implementing animation in a client application includes: receiving an animation code written in a script language from a server, the animation code including a logic script and an animation description script; parsing the logic script in the animation code and obtaining a view identifier, an animation identifier, and a pre-obtained correspondence relationship between the view identifier and the animation identifier included therein; determining a view component to be driven that corresponds to the view identifier in the client application based on a correspondence relationship between view identifiers and view components, and reading an animation description to be implemented that corresponds to the animation identifier in the animation description script, according to the animation identifier corresponding to the view identifier; and loading the animation description to be implemented in the view component to be driven, according to a condition provided by the logic script.
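The resolution chain above (view identifier → animation identifier → animation description, gated by a condition from the logic script) can be sketched as follows. JSON stands in for the script language, and every field name is an assumption for illustration.

```python
import json

def run_animation_code(animation_code, view_components):
    """Parse the logic script, resolve the view's animation identifier,
    read the matching description, and load it if the condition holds."""
    code = json.loads(animation_code)
    logic = code["logic_script"]
    view_id = logic["view_id"]
    anim_id = logic["bindings"][view_id]        # view-id -> animation-id map
    component = view_components[view_id]        # view component to drive
    description = code["animation_descriptions"][anim_id]
    if logic.get("condition", True):            # condition from the logic script
        return component, description
    return component, None

code = json.dumps({
    "logic_script": {"view_id": "banner", "bindings": {"banner": "fade_in"},
                     "condition": True},
    "animation_descriptions": {"fade_in": {"opacity": [0, 1], "ms": 300}},
})
component, desc = run_animation_code(code, {"banner": "BannerView"})
```

Shipping both scripts from the server lets the animation behavior change without updating the client binary, which is the point of the framework.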

METHODS AND SYSTEMS FOR MEDIATING MULTIMODULE ANIMATION EVENTS
20190102929 · 2019-04-04 ·

Systems and methods for mediating multimodule animation events may be provided. A mediating module may generate animation content in real time by combining visual data of an animation subject received from a subject interface module, animation assets received from an asset module, and animation instructions, and transmit the animation content to a viewer interface module, whereby a viewer can view the animation content.
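The mediating module's combination step can be sketched as merging subject visual data, assets, and per-step instructions into a frame payload for the viewer interface. The payload shape and field names are assumptions.

```python
def mediate(visual_data, assets, instructions):
    """Combine visual data of the animation subject, animation assets,
    and animation instructions into viewer-ready animation content."""
    frames = []
    for step in instructions:
        frames.append({
            "subject": visual_data,            # from the subject interface module
            "asset": assets[step["asset"]],    # from the asset module
            "action": step["action"],          # from the animation instructions
        })
    return {"frames": frames}

content = mediate(
    visual_data={"pose": "standing"},
    assets={"hat": "hat_mesh"},
    instructions=[{"asset": "hat", "action": "attach"}],
)
```

In the real-time case described, `mediate` would run per frame and stream each payload to the viewer interface module.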

SYSTEM FOR GENERATION OF CUSTOM ANIMATED CHARACTERS

At least one general aspect can include a method of receiving an animation from an animation memory and selecting a character from a plurality of displayed characters. The character can include at least one of a head, a body, or a limb. The method can include applying the animation to the character to generate a customized character having the animation, and triggering posting of the customized character having the animation using an application.

System for parametric generation of custom scalable animated characters on the web

A graphic character object temporary storage stores parameters of a character and associated default values in a hierarchical data structure, along with one or more animation object data represented in a hierarchical data structure, the one or more animation object data having an associated animation; the graphic character object temporary storage and the animation object data are part of a local memory of a computer system. A method includes receiving a vector graphic object having character part objects represented as geometric shapes, displaying a two-dimensional character, changing the scale of a part of the displayed two-dimensional character, storing an adjusted parameter in the graphic character object temporary storage as a percentage change from the default value, displaying a customized two-dimensional character, applying keyframe data in an associated animation object data to the character part objects, and displaying an animation according to the keyframe data.
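The parametric trick described above (storing each adjustment as a percentage change from the default, then re-applying it when keyframes play) can be sketched as follows. The part names, default values, and storage layout are assumptions.

```python
# Default parameter values for character parts (illustrative).
DEFAULTS = {"arm_length": 100.0, "head_scale": 1.0}

store = {}  # graphic character object temporary storage: part -> % change

def scale_part(part, new_value):
    """Store an adjusted parameter as a percentage change from its default,
    so the customization survives any later keyframe values."""
    store[part] = 100.0 * (new_value - DEFAULTS[part]) / DEFAULTS[part]

def apply_keyframe(part, keyframe_value):
    """Apply keyframe data to a character part, re-applying the stored
    percentage adjustment on top of the keyframed value."""
    return keyframe_value * (1.0 + store.get(part, 0.0) / 100.0)

scale_part("arm_length", 120.0)              # user stretches the arm by 20%
frame_value = apply_keyframe("arm_length", 50.0)
```

Storing a percentage rather than an absolute value is what makes the customization scale-independent: the same 20% stretch applies whether a keyframe sets the arm to 50 or 500 units.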