A63F2300/6607

MOTIVATIONAL KINESTHETIC VIRTUAL TRAINING PROGRAM FOR MARTIAL ARTS AND FITNESS
20170291086 · 2017-10-12 ·

Apparatus and associated methods relate to a computer system executing a predetermined motivational kinesthetic martial arts training program (MKMATP), the system including a preparation phase, a participation phase, and a simulation phase, with a score generated during the participation phase indicative of the user's move performance, and an avatar displayed during the simulation phase performing moves at the user's score level. The system may include sensors monitoring the user's physical space. The preparation phase may be mandatory based on an enforcement policy. The participation phase may depict various physical moves and may generate scores indicative of the user's performance of those moves. The simulation phase may produce a computer-simulated scenario of an avatar performing the moves against an opponent, based on the user's score and randomized variables. In an illustrative example, users may learn martial arts skills and stay motivated by viewing the simulation action, while becoming physically fit.
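
The simulation phase described above drives an avatar's performance from the user's participation score plus randomized variables. A minimal sketch of that idea, with all names and the scoring scale (0-100) being illustrative assumptions rather than details from the patent:

```python
import random

def simulate_bout(user_score, num_exchanges=5, seed=None):
    """Sketch of the simulation phase: the avatar's success in each
    exchange against the opponent is driven by the user's
    participation-phase score (assumed 0-100) plus a random draw."""
    rng = random.Random(seed)
    landed = 0
    for _ in range(num_exchanges):
        # A higher participation score makes each simulated move more
        # likely to land; the random variable keeps replays varied.
        if rng.uniform(0, 100) < user_score:
            landed += 1
    return landed
```

A perfect participation score lands every simulated move, while a zero score lands none, which is the motivational link the abstract describes.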

Enhanced animation generation based on video with local phase

Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip with a character in motion to a first neural network to receive rough motion data, such as pose information, for each frame of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict a next frame. The dynamic animation generation system can then move the sliding window to the next frame while including the predicted next frame, and apply the new sliding window to the second neural network to predict the frame that follows.
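
The sliding-window rollout above can be sketched as a simple autoregressive loop. This is an illustrative simplification (a trailing window of numeric "frames" and a stand-in prediction function in place of the second neural network), not the patented implementation:

```python
def rollout(frames, window_size, predict_next, n_new):
    """Sliding-window autoregression: take the most recent
    `window_size` frames, predict the next frame, append it, and
    slide the window forward to predict the frame after that.
    `predict_next` stands in for the second neural network."""
    frames = list(frames)
    for _ in range(n_new):
        window = frames[-window_size:]   # window slides as frames grow
        frames.append(predict_next(window))
    return frames

# Stand-in "network": linear extrapolation from the last two frames.
extrapolate = lambda w: 2 * w[-1] - w[-2]
```

With the stand-in predictor, each predicted frame re-enters the window, exactly as the abstract describes for the predicted next frame.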

COMMUNICATION WITH IN-GAME CHARACTERS
20220040581 · 2022-02-10 ·

A system for coordinating reactions of a virtual character with a script spoken by a player in a video game or presentation, comprising an internet-connected server executing software and streaming video games or presentations to a player's computerized device. The system senses the start of a dialogue between the player and the virtual character, displays a script for the player on a display of the computerized device, and prompts the player to speak the script. A timer then starts, or the system tracks an audio stream of the spoken script, determines where the player is in the script by the timer or the audio stream, and causes specific actions and responses of the virtual character according to a pre-programmed association of the character's actions and responses with points in time or specific variations in the audio stream.
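
The timer path of the pre-programmed association can be sketched as a lookup from elapsed time into cue points. The cue names and timings below are hypothetical examples, not taken from the patent:

```python
def character_action(cue_points, elapsed):
    """Sketch of the pre-programmed association: `cue_points` maps a
    time offset (seconds into the spoken script) to a character
    response; the latest cue at or before `elapsed` wins."""
    action = None
    for t, response in sorted(cue_points.items()):
        if t <= elapsed:
            action = response     # keep overwriting until we pass `elapsed`
    return action

# Hypothetical cues for one dialogue line.
cues = {0.0: "listen", 2.5: "nod", 5.0: "reply"}
```

An audio-stream variant would replace `elapsed` with a position estimated from the tracked audio, but the association table works the same way.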

Method and system for determining identifiers for tagging video frames

A method of determining identifiers for tagging frames of animation is provided. The method comprises obtaining data indicating motion of an animated object in a plurality of frames and detecting the object as performing a pre-determined motion in at least some of the plurality of frames. For a given frame, it is determined, based on the detected pre-determined motion, whether to associate an identifier with the pre-determined motion, the identifier indicating an event that is to be triggered in response to the pre-determined motion. The frames of the animation comprising the detected pre-determined motion are tagged in response to a determination of an identifier. The pre-determined motion and corresponding identifier are determined by inputting the obtained data to a machine learning model. A corresponding system is also provided.
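
Once motions have been detected per frame, the tagging step reduces to associating each detected motion with its event identifier. A minimal sketch, where the per-frame detections and the motion-to-identifier table stand in for the machine learning model's outputs:

```python
def tag_frames(motions, identifier_for):
    """`motions` is a per-frame list of detected motion labels (None
    when no pre-determined motion is detected); `identifier_for` maps
    a motion to the event identifier to trigger."""
    tags = {}
    for frame_idx, motion in enumerate(motions):
        if motion is not None and motion in identifier_for:
            tags[frame_idx] = identifier_for[motion]  # tag this frame
    return tags

# Hypothetical identifiers for two pre-determined motions.
events = {"footstep": "play_footstep_sfx", "sword_swing": "play_whoosh_sfx"}
```

The returned mapping is what a game engine would consult at playback time to trigger the indicated events.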

System and method to modify avatar characteristics based on inferred conditions

A system and method to modify avatar characteristics and, in particular, to modify avatar characteristics based on inferred conditions. The system comprises a collection engine configured to collect one or more inputs and at least one rule set. The system also comprises an emotion engine configured to accept the one or more inputs and operate on the at least one rule set by comparing the one or more inputs to the at least one rule set, the emotion engine configured to modify at least one characteristic of an avatar of a user participating in a virtual universe when the comparing produces a match.
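
The emotion engine's compare-and-modify loop can be sketched as rule matching over collected inputs. All rule names, input keys, and characteristic names here are illustrative assumptions:

```python
def apply_rules(inputs, rule_set, avatar):
    """Sketch of the emotion engine: each rule pairs a condition over
    the collected inputs with a characteristic change; when the
    comparison produces a match, the avatar characteristic is modified."""
    avatar = dict(avatar)                 # don't mutate the caller's copy
    for condition, (trait, value) in rule_set:
        if all(inputs.get(k) == v for k, v in condition.items()):
            avatar[trait] = value
    return avatar

# Hypothetical rule set: inferred condition -> characteristic change.
rules = [({"heart_rate": "high"}, ("expression", "stressed")),
         ({"chat_tone": "positive"}, ("expression", "smiling"))]
```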

ANIMATION CONTROL METHOD FOR MULTIPLE PARTICIPANTS
20220309726 · 2022-09-29 ·

A computer system is used to host a virtual reality universe process in which multiple avatars are independently controlled in response to client input. The host provides coordinated motion information defining coordinated movement between designated portions of multiple avatars, and an application responsive to client input detects conditions triggering a coordinated movement sequence between two or more avatars. During coordinated movement, user commands for controlling avatar movement may be used normally in part, and in part ignored or otherwise processed, to cause the involved avatars to respond in part to respective client input and in part to predefined coordinated movement information. Thus, users may be assisted in executing coordinated movement between multiple avatars.
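
The part-client, part-predefined blending during a coordinated sequence can be sketched as a weighted mix of the two movement sources. The weighting scheme is an illustrative assumption; the patent only requires that avatars respond partly to each source:

```python
def blend_command(client_delta, choreography_delta, weight):
    """During a coordinated sequence, an avatar's movement is part
    client input and part predefined coordinated motion; `weight` in
    [0, 1] is the share given to the choreography (1.0 means the
    client input is fully ignored)."""
    return tuple(weight * c + (1 - weight) * u
                 for u, c in zip(client_delta, choreography_delta))
```

Outside a coordinated sequence the host would use `weight = 0.0`, i.e. the client input passes through unchanged.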

Kinetic energy smoother
09741146 · 2017-08-22 ·

Embodiments disclose an animation system designed to generate animation that appears realistic to a user without using a physics engine. The animation system can use a measure of kinetic energy and reference information to determine whether the animation appears realistic or satisfies the laws of physics. Based, at least in part, on the kinetic energy, the animation system can determine whether to adjust a sampling rate of animation data to reflect more realistic motion compared to a default sampling rate.
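
The kinetic-energy test can be sketched as the standard 0.5·m·v² sum over animated parts, compared against a threshold to decide whether the default sampling rate suffices. The thresholding policy and parameter names are illustrative assumptions:

```python
def choose_sampling_rate(masses, velocities, default_hz, fast_hz, threshold):
    """Sketch of the smoother's decision: estimate kinetic energy
    (0.5 * m * v**2 summed over animated parts) and raise the
    sampling rate above the default when motion is energetic enough
    that the default would look unrealistic."""
    energy = sum(0.5 * m * v * v for m, v in zip(masses, velocities))
    return fast_hz if energy > threshold else default_hz
```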

System and Method of Implementing Behavior Trees When Modifying Attribute Values of Game Entities Based On Physical Token Detection
20170225076 · 2017-08-10 ·

Systems and methods configured for implementing behavior trees when modifying attribute values of game entities based on physical token detection are presented herein. Behavior and/or action of game entities may be implemented using behavior trees. Individual behavior trees may be implemented for individual game entities and/or groups of game entities defined, at least in part, by individual sets of attribute values. Token detection may cause attribute values of one or more game entity attributes to change. In response to the change in attribute values, the behavior tree being implemented for the game entity may be changed to a different behavior tree. In this manner, behavioral changes for a game entity may be implemented "on-the-fly" as attribute values are modified based on token detection.
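
Since each behavior tree is tied to a set of attribute values, switching trees on-the-fly reduces to reselecting the tree whenever an attribute changes. A minimal sketch, with tree names and attributes as hypothetical examples:

```python
def select_tree(attributes, tree_table, default_tree):
    """Sketch of on-the-fly tree selection: `tree_table` pairs
    attribute requirements with a behavior tree; the first entry
    whose requirements the entity's attributes satisfy wins."""
    for requirements, tree in tree_table:
        if all(attributes.get(k) == v for k, v in requirements.items()):
            return tree
    return default_tree

# Hypothetical table: attribute values -> behavior tree.
table = [({"mood": "hostile"}, "attack_tree"),
         ({"mood": "calm"}, "idle_tree")]
```

When token detection changes `mood`, the next call to `select_tree` returns a different tree, which is the "on-the-fly" behavioral change the abstract describes.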

ANIMATION PRODUCTION SYSTEM
20220032190 · 2022-02-03 ·

To enable the shooting of animations in a virtual space, an animation production method is provided, comprising: a step of placing a character in a virtual space; a step of placing a virtual camera for shooting the character in the virtual space; a step of acquiring action data defining an action of the character from an external source; a step of operating the character based on the action data; and a step of shooting the action of the character with the camera.
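
The claimed steps can be sketched in order as a small pipeline. Every structure below (the scene layout, positions, and frame format) is an illustrative stand-in, not from the patent:

```python
def produce_animation(action_data):
    """The claimed steps in order: place the character, place the
    virtual camera, drive the character with externally acquired
    action data, and record the result through the camera."""
    scene = {"character": {"position": (0, 0, 0), "frames": []},
             "camera": {"position": (0.0, 1.6, -3.0)}}
    for pose in action_data:                      # operate the character
        scene["character"]["frames"].append(pose)
    footage = list(scene["character"]["frames"])  # shoot with the camera
    return footage
```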

Controlling objects in a virtual environment

Methods, systems, and computer-storage media having computer-usable instructions embodied thereon, for controlling objects in a virtual environment are provided. Real-world objects may be received into a virtual environment. The real-world objects may be any non-human objects. An object skeleton may be identified and mapped to the object. A user skeleton of the real-world user may also be identified and mapped to the object skeleton. By mapping the user skeleton to the object skeleton, movements of the user control the movements of the object in the virtual environment.