
METHOD AND APPARATUS FOR CONTROLLING VIRTUAL CHARACTER, COMPUTER DEVICE, AND STORAGE MEDIUM
20230045852 · 2023-02-16

This application relates to a method for controlling a virtual character, performed by a computer device, the method including: displaying at least a portion of a target virtual character in a virtual scene, the target virtual character being bound with basic bones and deformed bones; triggering a character action of the target virtual character in the virtual scene; when the character action comprises a character movement, controlling the target virtual character to perform the character movement in the virtual scene through a movement of a basic bone associated with the character movement; and when the character action comprises a local character deformation, controlling the target virtual character to perform the local character deformation in the virtual scene through a deformation of a deformed bone associated with the local character deformation.
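The split described above, with basic bones driving whole-character movement and deformed bones driving local shape changes, can be sketched as a simple dispatcher. This is an illustrative reading of the abstract, not the patent's implementation; the `Bone`, `Character`, and `apply_action` names and the action-dictionary format are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    scale: float = 1.0

@dataclass
class Character:
    basic_bones: dict = field(default_factory=dict)     # drive movement
    deformed_bones: dict = field(default_factory=dict)  # drive deformation

    def apply_action(self, action: dict):
        # Character movement goes through the associated basic bone.
        if action["type"] == "movement":
            bone = self.basic_bones[action["bone"]]
            dx, dy, dz = action["delta"]
            x, y, z = bone.position
            bone.position = (x + dx, y + dy, z + dz)
        # Local deformation goes through the associated deformed bone.
        elif action["type"] == "local_deformation":
            bone = self.deformed_bones[action["bone"]]
            bone.scale *= action["factor"]
```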

VIRTUAL PROP CONTROL METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
20230046750 · 2023-02-16 ·

This application discloses a virtual prop control method performed by a computer device, and a storage medium. In this application, a controlled virtual object is sheltered by equipping it with a movable virtual shelter prop, preventing the controlled virtual object from being directly exposed in a virtual scene and attacked by other virtual objects. Moreover, the virtual shelter prop is capable of moving with the controlled virtual object. When the controlled virtual object runs, jumps, and so on, the virtual shelter prop can still defend against an attack, thereby effectively improving the security of the controlled virtual object in competitive battles and improving the user experience. In addition, the controlled virtual object is capable of erecting a target virtual prop at a reference position of the virtual shelter prop it holds to launch an attack against other virtual objects.

ROLL TURNING AND TAP TURNING FOR VIRTUAL REALITY ENVIRONMENTS

Technologies are described for providing turning in virtual reality environments. For example, some implementations use roll turning that involves rotating around an outer edge of a control input, some implementations use tap turning to move directly to a location indicated by a control movement, and some implementations involve combinations of roll turning and tap turning.
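The two turning modes can be contrasted in a few lines: roll turning accumulates the angular delta traced along the control's outer edge, while tap turning snaps directly to the tapped direction. This is a sketch of the idea only; the function names and angle conventions are assumptions, not the described implementations.

```python
import math

def roll_turn(current_yaw, prev_angle, new_angle):
    """Rotate the view by the angle swept along the control's outer edge."""
    # Wrap to the shortest signed arc so crossing 0/360 behaves correctly.
    delta = (new_angle - prev_angle + 180) % 360 - 180
    return (current_yaw + delta) % 360

def tap_turn(tap_x, tap_y):
    """Face directly toward the location indicated by the tap."""
    return math.degrees(math.atan2(tap_x, tap_y)) % 360
```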

SOFTWARE WITH MOTION RECORDING FEATURE TO SIMPLIFY ANIMATION
20230237726 · 2023-07-27

Features of a software program designed to facilitate animation by users of handheld or portable electronic devices are described. The software program may be in the form of instructions suitable to be carried out by the microprocessor of such a device in response to inputs from the user. The software provides a motion recording feature in which a user input in the form of a pointer, touch point, or other position-related input is monitored over the course of a recording session, converted to a data string of attribute values, and stored in memory. The software displays an animation of a virtual object over an animation period by retrieving the data string of attribute values from the memory and causing the processor to generate the animation using the retrieved data string of attribute values.
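The record-then-replay loop described above can be sketched as follows: pointer positions are sampled during the session, serialized into a data string of attribute values, and later parsed back to drive the animation. The JSON encoding, field names, and class names here are assumptions for illustration; the patent does not specify a serialization format.

```python
import json

class MotionRecorder:
    def __init__(self):
        self.samples = []

    def record(self, t, x, y):
        # Each monitored input sample becomes a set of attribute values.
        self.samples.append({"t": t, "x": x, "y": y})

    def to_data_string(self):
        # Convert the session's samples into a single stored data string.
        return json.dumps(self.samples)

def play_animation(data_string):
    """Yield (x, y) positions for each stored sample, in recorded order."""
    for s in json.loads(data_string):
        yield s["x"], s["y"]
```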

Communication with in-game characters
11691076 · 2023-07-04

A system for coordinating reactions of a virtual character with a script spoken by a player in a video game or presentation, comprising an internet-connected server executing software and streaming video games or presentations to a player's computerized device. The system senses the start of a dialogue between the player and the virtual character, displays a script for the player on a display of the computerized device, and prompts the player to speak the script. A timer then starts, or the system tracks an audio stream of the spoken script, determines where the player is in the script by the timer or the audio stream, and causes specific actions and responses of the virtual character according to a pre-programmed association of the character's actions and responses with points in time or specific variations in the audio stream.
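The timer branch of the mechanism can be sketched as a cue table: each character reaction is pre-programmed against a point in time, and reactions fire as the elapsed time passes their trigger. The cue format and function name are illustrative assumptions.

```python
def reactions_due(cues, elapsed, already_fired):
    """Return reactions whose trigger time has passed and not yet fired.

    cues: list of (trigger_time_seconds, reaction) pairs, the
          pre-programmed association of reactions to points in time.
    """
    due = []
    for trigger_time, reaction in cues:
        if elapsed >= trigger_time and reaction not in already_fired:
            due.append(reaction)
    return due
```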

ENHANCED ANIMATION GENERATION BASED ON VIDEO WITH LOCAL PHASE

Embodiments of the systems and methods described herein provide a dynamic animation generation system that can apply a real-life video clip with a character in motion to a first neural network to receive rough motion data, such as pose information, for each of the frames of the video clip, and overlay the pose information on top of the video clip to generate a modified video clip. The system can identify a sliding window that includes a current frame, past frames, and future frames of the modified video clip, and apply the modified video clip to a second neural network to predict a next frame. The dynamic animation generation system can then move the sliding window to the next frame while including the predicted next frame, and apply the new sliding window to the second neural network to predict the following frame to the next frame.
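The sliding-window loop described above can be sketched in a few lines: the window holds the recent frames, the model predicts the next frame, the prediction is appended, and the window advances to include it. Here `model` is a stand-in for the second neural network; the function name and signature are assumptions.

```python
def predict_frames(frames, window_size, num_predictions, model):
    """Autoregressively extend a frame sequence with a sliding window."""
    frames = list(frames)
    for _ in range(num_predictions):
        # The window covers the most recent frames, including any
        # previously predicted ones.
        window = frames[-window_size:]
        frames.append(model(window))
    return frames
```

With a toy "model" that just increments the last frame, the loop shows how each prediction feeds the next window.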

SKELETON MODEL UPDATING APPARATUS, SKELETON MODEL UPDATING METHOD, AND PROGRAM
20220410000 · 2022-12-29

Provided are a skeleton model updating apparatus, a skeleton model updating method, and a program by which the time and effort needed to change the pose of a skeleton model to a known standard pose can be reduced. A target node identifying section (80) identifies a plurality of target nodes from among a plurality of nodes included in a skeleton model that is in a pose other than a known standard pose. A reference node identifying section (82) identifies a reference node positioned closest to the plurality of target nodes, from among nodes that are connected to all of the target nodes via one or more bones. A position deciding section (84) decides positions of the plurality of target nodes such that their relative positions with respect to the position of the reference node are adjusted to predetermined positions. A pose updating section (56) updates the pose of the skeleton model to the known standard pose on the basis of the decided positions of the target nodes.
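The position-deciding step can be sketched as placing each target node at the reference node's position plus a predetermined standard-pose offset. This is a 2D toy reading of the abstract; the node names, offset table, and function name are assumptions.

```python
def decide_positions(reference_pos, standard_offsets):
    """Place each target node so its position relative to the reference
    node matches the predetermined standard-pose offset."""
    rx, ry = reference_pos
    return {
        node: (rx + ox, ry + oy)
        for node, (ox, oy) in standard_offsets.items()
    }
```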

Skeletal tracking using previous frames

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for detecting a pose of a user. The program and method include operations comprising receiving a monocular image that includes a depiction of a body of a user; detecting a plurality of skeletal joints of the body based on the monocular image; accessing a video feed comprising a plurality of monocular images received prior to the monocular image; filtering, using the video feed, the plurality of skeletal joints of the body detected based on the monocular image; and determining a pose represented by the body depicted in the monocular image based on the filtered plurality of skeletal joints of the body.
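The filtering step, using the prior video feed to steady per-frame joint detections, can be sketched with a simple exponential moving average. The patent does not name a particular filter, so the EMA, the smoothing factor, and the function name are all assumptions.

```python
def smooth_joints(history, current, alpha=0.5):
    """Blend current joint detections toward their recent history to
    suppress single-frame detection noise."""
    smoothed = {}
    for joint, (cx, cy) in current.items():
        if joint in history:
            hx, hy = history[joint]
            smoothed[joint] = (alpha * cx + (1 - alpha) * hx,
                               alpha * cy + (1 - alpha) * hy)
        else:
            # No history for this joint yet: keep the raw detection.
            smoothed[joint] = (cx, cy)
    return smoothed
```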

Animation production system
11586278 · 2023-02-21

The invention is an animation production method that provides a virtual space in which a given object is placed, the method comprising: detecting an operation of a user equipped with a head mounted display; controlling a movement of an object based on the detected operation of the user; shooting the movement of the object; storing action data relating to the movement of the shot object in a first track; and storing audio from the user in a second track.
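The two-track storage at the end of the method can be sketched as a container with parallel, timestamped tracks, so the motion and audio can later be replayed together. The `Recording` class and its method names are illustrative assumptions.

```python
class Recording:
    def __init__(self):
        self.action_track = []  # first track: action data for the shot object
        self.audio_track = []   # second track: audio chunks from the user

    def store_action(self, t, pose):
        self.action_track.append((t, pose))

    def store_audio(self, t, chunk):
        self.audio_track.append((t, chunk))
```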

Method and system for determining blending coefficients

A method of determining blending coefficients for respective animations includes: obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to the animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the at least simultaneously applied part of the two animations using the or each determined blending coefficient, the contribution from each of the at least two animations being in accordance with the or each determined blending coefficient.
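The final blending step can be sketched as a weighted per-joint sum: once a coefficient exists for each animation, each joint's blended value is the coefficient-weighted combination of that joint across the simultaneously applied animations. In the described method the coefficients come from the trained model; here they are passed in directly, and the pose representation (one scalar per joint) is a simplifying assumption.

```python
def blend_poses(poses, coefficients):
    """Blend simultaneously applied animation poses per joint.

    poses: one dict of {joint: value} per animation.
    coefficients: relative weighting for each animation.
    """
    total = sum(coefficients)
    blended = {}
    for joint in poses[0]:
        # Each animation contributes in accordance with its coefficient.
        blended[joint] = sum(
            c * pose[joint] for pose, c in zip(poses, coefficients)
        ) / total
    return blended
```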