
GENERATION OF STORY VIDEOS CORRESPONDING TO USER INPUT USING GENERATIVE MODELS
20230118966 · 2023-04-20 ·

The present disclosure provides systems and methods for video generation corresponding to a user input. Given a user input, a story video with content relevant to the user input can be generated. One aspect includes a computing system comprising a processor and memory. The processor can be configured to execute a program using portions of the memory to receive the user input, generate a story text based on the user input, generate a plurality of story images based on the story text, and output a story including the story text and a story video having content corresponding to the story text, wherein the story video includes the plurality of story images. Additionally or alternatively, the story video can include audio data and a plurality of generated animated videos, each animated video corresponding to a story image in the plurality of story images.
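
The claimed pipeline (user input → story text → story images → story video) can be sketched as below. All the generator functions are placeholders invented for illustration; in the described system they would be backed by real generative models (a text model and a text-to-image model).

```python
from dataclasses import dataclass, field

@dataclass
class Story:
    text: str
    images: list = field(default_factory=list)
    video_frames: list = field(default_factory=list)

def generate_story_text(user_input: str) -> str:
    # placeholder for a text-generation model
    return f"Once upon a time, {user_input}."

def generate_story_images(story_text: str, n: int = 3) -> list:
    # placeholder for a text-to-image model; one image per story segment
    return [f"image_{i}_for:{story_text[:20]}" for i in range(n)]

def generate_story(user_input: str) -> Story:
    text = generate_story_text(user_input)
    images = generate_story_images(text)
    # per the claim, each story image can be animated into a short clip,
    # and the clips (plus audio) are assembled into the story video
    frames = [f"clip_of_{img}" for img in images]
    return Story(text=text, images=images, video_frames=frames)

story = generate_story("a robot learned to paint")
print(story.text)
print(len(story.images), len(story.video_frames))
```

The key structural point is the one-to-one correspondence between story images and animated clips, matching the "each animated video corresponding to a story image" limitation.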

SYSTEM, APPARATUS AND METHOD FOR FORMATTING A MANUSCRIPT AUTOMATICALLY
20170309309 · 2017-10-26 ·

The system, method, and apparatuses of the present invention are directed to a paradigm of manuscript generation and manipulation: converting a source textual document in a first format into another document in a second format. A converter converts scenes, dialogue, milieus, movements, actions, and other instructions input or stored in a first format into a second, different format, and vice versa.
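
A minimal sketch of the two-way conversion idea, using an invented line-oriented "first format" and a dict-based "second format" (the patent covers richer element types than shown here):

```python
def to_structured(text):
    """Parse 'TYPE: content' lines into a list of typed elements."""
    elements = []
    for line in text.strip().splitlines():
        kind, _, content = line.partition(": ")
        elements.append({"type": kind.lower(), "content": content})
    return elements

def to_text(elements):
    """Inverse conversion back to the line-oriented format."""
    return "\n".join(f"{e['type'].upper()}: {e['content']}" for e in elements)

doc = "SCENE: A dark alley\nACTION: Rain falls\nDIALOGUE: Who's there?"
structured = to_structured(doc)
assert to_text(structured) == doc  # the conversion round-trips
print(structured[0])
```

The "vice versa" in the abstract corresponds to the round-trip property checked above: every element must survive conversion to the second format and back.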

Systems and methods for generating content for a screenplay

Systems and methods are disclosed herein for generating content based on format-specific screenplay parsing techniques. The techniques generate and present content by building new dynamic content structures from which content segments are generated for output on electronic devices. In one disclosed technique, a first instance of a first character name is identified from the screenplay document. A first set of character data following the first instance of the first character name from the screenplay document and preceding an instance of a second character name from the screenplay document is then identified. Upon identification of the first set of character data, a content structure including an object is generated. The object includes attribute table entries based on the first set of character data. A content segment is generated for output based on the content structure (e.g., a 3D animation of the first character interacting within a scene).
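
The parsing step described above can be sketched as follows, assuming the common screenplay convention (not stated in the abstract) that a character name appears alone on a line in capitals, and that the lines between one character cue and the next are that character's data:

```python
import re

# a character cue: a line consisting only of capital letters and spaces
CHARACTER = re.compile(r"^[A-Z][A-Z ]+$", re.MULTILINE)

def character_blocks(screenplay: str) -> dict:
    """Map each character name to the text between its cue and the next."""
    names = [(m.group(0).strip(), m.start(), m.end())
             for m in CHARACTER.finditer(screenplay)]
    blocks = {}
    for i, (name, _, end) in enumerate(names):
        nxt = names[i + 1][1] if i + 1 < len(names) else len(screenplay)
        blocks.setdefault(name, []).append(screenplay[end:nxt].strip())
    return blocks

script = """ALICE
(whispering)
We should go.
BOB
Not yet.
"""
blocks = character_blocks(script)
print(blocks["ALICE"])
```

Each extracted block would then populate the attribute table entries of the object in the generated content structure.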

Eye animated expression display method and robot using the same

The present disclosure provides an eye animated expression display method. The method includes: receiving an instruction for displaying an eye animated expression; parsing a JSON file storing the eye animated expression to obtain a parsing result; and displaying the eye animated expression on an eye display screen based on the parsing result. The present disclosure further provides a robot using the method. In this manner, the present disclosure can improve the interactive performance of the robot's eyes while reducing the space needed for storing eye animated expressions.
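
A sketch of the parse-then-display step, using a hypothetical JSON layout (the abstract does not specify the schema). Storing expressions as compact frame descriptions rather than image sequences is what saves storage space:

```python
import json

expression_json = """
{
  "name": "blink",
  "fps": 30,
  "frames": [
    {"left": "open", "right": "open"},
    {"left": "half", "right": "half"},
    {"left": "closed", "right": "closed"}
  ]
}
"""

def parse_expression(raw: str) -> dict:
    data = json.loads(raw)
    # validate the fields the display loop depends on
    assert "frames" in data and data["frames"], "expression needs frames"
    return data

expr = parse_expression(expression_json)
print(expr["name"], len(expr["frames"]))  # blink 3
```

The display step would then iterate over `expr["frames"]` at `expr["fps"]`, rendering each eye state on the eye display screen.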

Method and apparatus for implementing animation in client application and animation script framework

A method for implementing animation in a client application includes: receiving an animation code written in a script language from a server, the animation code including a logic script and an animation description script; parsing the logic script in the animation code to obtain a view identifier, an animation identifier, and a pre-obtained correspondence relationship between the view identifier and the animation identifier; determining the view component to be driven that corresponds to the view identifier in the client application, based on a correspondence relationship between view identifiers and view components, and reading the animation description to be implemented that corresponds to the animation identifier from the animation description script; and loading the animation description to be implemented in the view component to be driven, according to a condition provided by the logic script.
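
The lookup chain above can be sketched as follows; all identifiers and the dict-based representation are invented for illustration. The logic script binds a view id to an animation id, the client maps view ids to view components, and the animation description script holds animation bodies keyed by animation id:

```python
logic_script = {"view_id": "header", "animation_id": "fade_in"}
view_components = {"header": "<HeaderView>", "footer": "<FooterView>"}
animation_descriptions = {
    "fade_in": {"property": "opacity", "from": 0.0, "to": 1.0, "ms": 300},
}

def resolve_animation(logic, views, animations):
    # view id -> view component to be driven
    component = views[logic["view_id"]]
    # animation id -> animation description to be implemented
    description = animations[logic["animation_id"]]
    return component, description

component, description = resolve_animation(
    logic_script, view_components, animation_descriptions)
print(component, description["property"])
```

Keeping the logic script separate from the animation descriptions means the server can retarget an existing animation to a different view by changing only the correspondence, not the description.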

Co-registration—simultaneous alignment and modeling of articulated 3D shapes

The present application relates to a method, a model generation unit, and a computer program (product) for generating trained models (M) of moving persons, based on physically measured person scan data (S). The approach is based on a common template (T) for the respective person and on the measured person scan data (S) in different shapes and different poses. Scan data are measured with a 3D laser scanner. A generic person model is used for co-registering a set of person scans (S): the template (T) is aligned to the set of person scans (S) while the generic person model is simultaneously trained to become a trained person model (M), by constraining the generic person model to be scan-specific, person-specific, and pose-specific. The trained model (M) is provided based on the co-registering of the measured person scan data (S).
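
The "simultaneous alignment and modeling" idea is an alternating scheme, illustrated here by a deliberately tiny 1-D toy: align each scan to the template (here by a per-scan translation), then refit the template as the mean of the aligned scans. Real co-registration aligns articulated 3-D meshes with pose parameters, but the alternation is the same:

```python
def mean(xs):
    return sum(xs) / len(xs)

def coregister(scans, template, iterations=10):
    for _ in range(iterations):
        # alignment step: translate each scan to best match the template
        aligned = []
        for scan in scans:
            shift = mean(template) - mean(scan)
            aligned.append([x + shift for x in scan])
        # modeling step: refit the template to the aligned scans
        template = [mean(col) for col in zip(*aligned)]
    return template

scans = [[0.0, 1.0, 2.0], [10.1, 11.0, 11.9], [-5.0, -3.9, -3.1]]
template = coregister(scans, template=[0.0, 1.0, 2.0])
print([round(x, 2) for x in template])
```

Because alignment uses the current template and the template is refit from the current alignments, the two estimates improve together rather than one being fixed while the other is solved.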

Method for Sharing Emotions Through the Creation of Three-Dimensional Avatars and Their Interaction

A system includes at least one camera capable of acquiring at least one first image comprising a representation of at least a portion of a face, and at least one processor. The processor is configured to: detect a plurality of landmarks in the at least one first image, the plurality of landmarks corresponding to respective features of the face; detect an emotion expressed in the face; generate a three-dimensional model of the face based, at least in part, on the plurality of detected landmarks; and, in response to a request to incorporate at least one object, generate at least one second image comprising a representation of at least a portion of the at least one object atop the representation of at least a portion of the face generated from the three-dimensional model, wherein the emotion detected in the face is expressed in the at least one object.
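
The claimed flow can be outlined as below, with placeholder detectors standing in for real landmark and emotion models: landmarks and an emotion label come from the first image, drive the 3-D face model, and the composited object inherits the detected emotion.

```python
def detect_landmarks(image):
    # placeholder: a real detector returns (x, y) points per facial feature
    return {"left_eye": (30, 40), "right_eye": (70, 40), "mouth": (50, 80)}

def detect_emotion(image):
    return "happy"  # placeholder emotion classifier

def render_with_object(landmarks, emotion, obj):
    # the 3-D face model is built from the landmarks; the added object
    # is styled so that it expresses the same emotion as the face
    return {"model": f"face({len(landmarks)} landmarks)",
            "overlay": f"{obj}[{emotion}]"}

frame = "camera_frame"
result = render_with_object(detect_landmarks(frame),
                            detect_emotion(frame), "sunglasses")
print(result["overlay"])  # sunglasses[happy]
```

The distinctive limitation is the last step: the overlay object is not emotion-neutral but carries the emotion detected in the underlying face.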
