COMMUNICATION DEVICE, COMMUNICATION METHOD, AND COMMUNICATION PROGRAM
20230075863 · 2023-03-09

When cheering for a distributor avatar by viewer avatars has been detected and the distributor avatar reacts to the cheering, a reaction motion is performed for the viewer avatars that cheered, and a normal motion that is not a reaction motion is performed for the viewer avatars that did not cheer. The motion performed for the viewer avatars that cheered differs from the motion performed for the viewer avatars that did not cheer based on timing indicated by the distributor. Either the motion performed for the viewer avatars that cheered or the motion performed for the viewer avatars that did not cheer is a predetermined motion.

SYSTEM FOR NEUROBEHAVIOURAL ANIMATION

The present invention relates to a computer-implemented system for animating a virtual object or digital entity. It has particular relevance to animation using biologically based models or behavioural models, particularly neurobehavioural models. There is provided a plurality of modules, each having a computational element and a graphical element. The modules are arranged in a required structure, have at least one variable, and are associated with at least one connector. The connectors link variables between modules across the structure, and the modules together provide a neurobehavioural model. There is also provided a method of controlling a digital entity in response to an external stimulus.
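The module-and-connector arrangement described above can be pictured as a small data-flow graph: modules hold named variables and a computational step, and connectors copy variable values between modules on each tick. The sketch below is a hypothetical illustration under that reading; the class names, variables ("arousal", "smile_weight"), and update rules are invented and not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """A module with named variables; step() is its computational element."""
    name: str
    variables: dict = field(default_factory=dict)

    def step(self):
        pass  # overridden per module

@dataclass
class Connector:
    """Links one variable of a source module to one variable of a destination."""
    src: Module
    src_var: str
    dst: Module
    dst_var: str

    def propagate(self):
        self.dst.variables[self.dst_var] = self.src.variables[self.src_var]

class StimulusModule(Module):
    def step(self):
        # an external stimulus drives an "arousal" variable upward, capped at 1
        self.variables["arousal"] = min(1.0, self.variables.get("arousal", 0.0) + 0.5)

class ExpressionModule(Module):
    def step(self):
        # graphical element: map received arousal to a blend weight for a smile shape
        self.variables["smile_weight"] = self.variables.get("arousal", 0.0)

def tick(modules, connectors):
    for m in modules:
        m.step()
    for c in connectors:
        c.propagate()

stim = StimulusModule("stimulus")
face = ExpressionModule("face")
link = Connector(stim, "arousal", face, "arousal")
for _ in range(2):
    tick([stim, face], [link])
print(face.variables["smile_weight"])  # arousal from tick 1 reaches the face on tick 2
```

Because connectors propagate after the computational step, a stimulus raised in one tick influences downstream modules' graphics on the next tick, which is one simple way such a structure can respond to external stimuli.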

REAL-TIME GOAL SPACE STEERING FOR DATA-DRIVEN CHARACTER ANIMATION
20170365091 · 2017-12-21

A method for generating real-time goal space steering for data-driven character animation is disclosed. A goal space table of sparse samplings of possible future locations is computed, indexed by the starting blend value and frame. A steer space is computed as a function of the current blend value and frame, interpolated from the nearest indices of the table lookup in the goal space. The steer space is then transformed to local coordinates of a character's position at the current frame. The steer space samples closest to a line connecting the character's position with the goal location may be selected. The blending values of the two selected steer space samples are interpolated to compute the new blending value to render subsequent frames of an animation sequence.
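The selection-and-interpolation step at the end of the abstract can be sketched concretely: given steer-space samples (each a blend value paired with a predicted future offset in character-local coordinates), pick the two samples closest to the ray toward the goal and blend their values by relative closeness. This is an illustrative simplification, not the patented method; the sample data and the single-frame table are invented for the example.

```python
import math

def new_blend(goal_local, samples):
    """samples: list of (blend_value, (x, y)) future offsets in character-local
    space. Select the two samples whose offsets lie closest to the line from
    the character (origin) toward the goal, then interpolate their blend
    values weighted by relative closeness."""
    gx, gy = goal_local
    norm = math.hypot(gx, gy)

    def dist_to_goal_line(p):
        # perpendicular distance from sample point to the origin->goal line:
        # |cross(goal, p)| / |goal|
        return abs(gx * p[1] - gy * p[0]) / norm

    ranked = sorted(samples, key=lambda s: dist_to_goal_line(s[1]))
    (b0, p0), (b1, p1) = ranked[0], ranked[1]
    d0, d1 = dist_to_goal_line(p0), dist_to_goal_line(p1)
    t = d0 / (d0 + d1) if (d0 + d1) > 0 else 0.0
    return b0 + t * (b1 - b0)

# hypothetical sparse steer space for one frame: blend -> future offset
samples = [(-1.0, (-3.0, 5.0)),   # hard left
           (0.0, (0.0, 6.0)),     # straight ahead
           (1.0, (3.0, 5.0))]     # hard right
print(round(new_blend((1.0, 2.0), samples), 3))  # goal off to the right -> 0.857
```

A real system would first interpolate these samples out of the precomputed goal-space table using the current blend value and frame, and transform them into the character's local frame before this selection step.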

SYSTEM FOR CUSTOMIZING IN-GAME CHARACTER ANIMATIONS BY PLAYERS
20230186541 · 2023-06-15

Systems and methods for using a deep learning framework to customize animation of an in-game character of a video game. The system can be preconfigured with animation rule sets corresponding to various animations. Each animation can comprise a series of distinct poses that collectively form the particular animation. The system can provide an animation-editing interface that enables a user of the video game to modify at least one pose or frame of the animation. The system can realistically extrapolate these modifications across some or all portions of the animation. Additionally or alternatively, the system can realistically extrapolate the modifications across other types of animations.

Dynamic media collection generation
11676320 · 2023-06-13

A computer system receives user selection of an avatar story template. User-specific parameters relating to the user are determined and real-time data, based at least in part on the user-specific parameters, is retrieved. Specific media or digital assets are obtained based on at least one of the real-time data and the user-specific parameters. An avatar story is then generated by combining the avatar story template and the specific media or digital assets. The avatar story is then displayed on a display of a computing device.

Communication device, communication method, and communication program
11500456 · 2022-11-15

When cheering for a distributor avatar by viewer avatars has been detected and the distributor avatar reacts to the cheering, a reaction motion is performed for the viewer avatars that cheered, and a normal motion that is not a reaction motion is performed for the viewer avatars that did not cheer. The motion performed for the viewer avatars that cheered differs from the motion performed for the viewer avatars that did not cheer based on timing indicated by the distributor. Either the motion performed for the viewer avatars that cheered or the motion performed for the viewer avatars that did not cheer is a predetermined motion.

SYSTEM, APPARATUS AND METHOD FOR FORMATTING A MANUSCRIPT AUTOMATICALLY
20170309309 · 2017-10-26

Systems, methods and apparatuses of the present invention are directed to a paradigm of manuscript generation and manipulation, converting a source textual document in a first format into another document in a second format. A converter converts scenes, dialogue, milieus, movements, actions and other instructions, input or stored in a first format, into a second, different format, and vice versa.

Method of animating messages

The present invention relates to rendering texts in a natural language, namely to manipulating a text in a natural language to generate an image or animation corresponding to that text. The invention selects a sequence of animations that semantically corresponds to a given text: given a set of animations and a text, the invention makes it possible to match a sequence of these animations to the text. Text templates are used, and an optimum sequence of these templates is determined. The idea of template-based text rendering is that the text is manipulated to generate an image or animation by searching for correspondences to a limited number of predefined templates. An animation in a certain style is selected to match each template, and the animations are sequentially combined into a single sequence of video images.
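The template-matching idea above reduces, in its simplest form, to pairing text patterns with animation clips and concatenating the clips that match. The sketch below illustrates only that core step, under invented patterns and clip identifiers; a real system would also score alternative template sequences to find the optimum one.

```python
import re

# Hypothetical templates: each pairs a text pattern with an animation clip id.
TEMPLATES = [
    (re.compile(r"\b(hello|hi)\b", re.I), "wave_clip"),
    (re.compile(r"\bbirthday\b", re.I), "cake_clip"),
    (re.compile(r"\blove\b", re.I), "heart_clip"),
]

def text_to_animation_sequence(text):
    """Return the list of clip ids whose templates match the text.
    Order follows template priority in this simplified version, not word
    order, and no optimization over competing sequences is performed."""
    sequence = []
    for pattern, clip in TEMPLATES:
        if pattern.search(text):
            sequence.append(clip)
    return sequence

print(text_to_animation_sequence("Hello! Happy birthday, with love."))
# -> ['wave_clip', 'cake_clip', 'heart_clip']
```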

ANIMATED DELIVERY OF ELECTRONIC MESSAGES
20170230321 · 2017-08-10

An electronic message is transformed into moving images uttering the content of the message. Methods of the present invention may be implemented on devices such as smart phones to enable users to compose text and select an animation character, which may be a cartoon, person, animal, or avatar. The recipient is presented with an animation or video of the animation character with a voice that speaks the words of the text. The user may further select and include a catch-phrase associated with the character. The user may also select a background music identifier, and the background music associated with that identifier is played back while the animated text is being presented. The user may further select a type of animation, and the animation character is animated according to the selected type.

ANIMATING A VIRTUAL OBJECT IN A VIRTUAL WORLD
20170221249 · 2017-08-03

A computer-implemented method for use in animating parts of a virtual object in a virtual world. The method comprises accessing joint data for each joint of a chain of joints associated with parts of the virtual object, the joint data including length data defining a vector length for a vector from the joint to the next joint, corresponding to the length of a part in the virtual world; and accessing data for a target curve used to define possible target locations for the joints. The joint data is processed to set the location of a first joint at a first end of the chain to the location of a first end of the target curve; to define an end target location on the curve for an end joint at a second end of the chain; and to define intermediate locations on the curve for the joints between the ends of the chain, based on the lengths of the vectors along the chain. Then, for a number of iterations, the method repeatedly: identifies the joint whose location has the largest error relative to its intermediate location on the curve; rotates the vector for the preceding joint in the chain to minimize the distance between the end joint and the intermediate location on the curve; rotates the vector for the identified joint to minimize the distance between the end joint and the end target location on the curve; again identifies the joint with the largest location error relative to its intermediate location; and determines and applies rotations to the vector for the first joint and the vector for the identified joint to fit the end joint to the end target location.
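The first phase of the method, placing each joint's target on the curve according to accumulated bone lengths, can be sketched as an arc-length walk along a sampled curve. The code below illustrates only that target-placement step on a 2-D polyline; the iterative rotation phase of the abstract is omitted, and the curve and bone lengths are invented for the example.

```python
import math

def point_at_arc_length(curve, s):
    """curve: list of (x, y) samples forming a polyline.
    Return the point at arc length s along it, clamped to the last point."""
    acc = 0.0
    for (x0, y0), (x1, y1) in zip(curve, curve[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if acc + seg >= s:
            t = (s - acc) / seg
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        acc += seg
    return curve[-1]  # target beyond the curve end clamps to the end

def joint_targets(curve, bone_lengths):
    """The first joint sits at the curve start; each subsequent joint's
    intermediate target lies at the cumulative bone length along the curve."""
    targets = [curve[0]]
    s = 0.0
    for length in bone_lengths:
        s += length
        targets.append(point_at_arc_length(curve, s))
    return targets

# straight curve along x for an easy check: bones of length 1 and 2
curve = [(0.0, 0.0), (4.0, 0.0)]
print(joint_targets(curve, [1.0, 2.0]))  # -> [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
```

On a curved target these arc-length positions generally cannot be reached exactly with rigid bone lengths, which is why the method then iterates, rotating bone vectors to shrink the worst per-joint error while pulling the end joint onto its end target.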