Augmented reality (AR) device support

A managed device associated with an error code is identified. An animation associated with physically manipulating a component of the managed device, and parts of that component, is generated. The animation represents a workflow for resolving the error code. A live video feed of the managed device and its physical surroundings is presented within an Augmented Reality (AR) interface on a mobile device operated by a support person. The animation is rendered over a portion of the live video feed for the support person to view while following the workflow and physically manipulating the component and its parts to resolve the error code.
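
A minimal sketch of the described flow, assuming hypothetical names (`RepairStep`, `build_repair_animation`, `render_overlay`) and a toy error-code-to-workflow mapping in place of the real managed-device lookup:

```python
from dataclasses import dataclass

@dataclass
class RepairStep:
    """One step of the repair workflow, tied to a component part."""
    part_id: str
    instruction: str

def build_repair_animation(error_code: str) -> list[RepairStep]:
    # Hypothetical mapping from error codes to manipulation workflows.
    workflows = {
        "E42": [
            RepairStep("tray_2", "Open paper tray 2"),
            RepairStep("roller_a", "Rotate roller A to clear the jam"),
        ],
    }
    return workflows.get(error_code, [])

def render_overlay(live_frame: int, step: RepairStep) -> None:
    """Composite the animation for `step` over a portion of the frame."""
    # A real AR interface would anchor the animation to the detected
    # component in the live video feed; printing stands in for rendering.
    print(f"[overlay on frame {live_frame}] {step.part_id}: {step.instruction}")

# Walk the support person through the workflow step by step.
for frame, step in enumerate(build_repair_animation("E42")):
    render_overlay(frame, step)
```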

System and method for virtual character locomotion

A system and method for controlling the animation and movement of in-game objects. In some embodiments, the system includes one or more data-driven animation building blocks that can be used to define any character movement. In some embodiments, the data-driven animation blocks are conditioned by their data, which is described separately from any explicit code in the core game engine. These building blocks can accept certain inputs from the core code system (e.g., movement direction, desired velocity of movement, and so on), but the game itself is agnostic as to why particular building blocks are used and what animation data (e.g., single animation, parametric blend, defined by user, and so on) the blocks may be associated with.
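
A rough illustration of the building-block idea, under the assumption that blocks expose a common `evaluate` interface; `SingleAnimation`, `ParametricBlend`, and the inputs shown are illustrative, not the patent's actual data format:

```python
from typing import Protocol

class AnimationBlock(Protocol):
    """A data-driven building block: the engine only knows this interface."""
    def evaluate(self, move_dir: tuple[float, float], desired_velocity: float) -> str: ...

class SingleAnimation:
    def __init__(self, clip: str):
        self.clip = clip
    def evaluate(self, move_dir, desired_velocity):
        return self.clip  # always the same clip, regardless of input

class ParametricBlend:
    def __init__(self, slow_clip: str, fast_clip: str, threshold: float):
        self.slow, self.fast, self.threshold = slow_clip, fast_clip, threshold
    def evaluate(self, move_dir, desired_velocity):
        # The blend choice is driven by data (the threshold), not engine code.
        return self.fast if desired_velocity > self.threshold else self.slow

# The "core engine" just feeds inputs; block behavior lives in the data.
blocks: dict[str, AnimationBlock] = {
    "walk": SingleAnimation("walk_loop"),
    "locomotion": ParametricBlend("walk_loop", "run_loop", threshold=3.0),
}
print(blocks["locomotion"].evaluate((1.0, 0.0), desired_velocity=4.5))  # run_loop
```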

Animation Processing Method and Related Apparatus
20230351665 · 2023-11-02

This application discloses an animation processing method and a related apparatus. The method includes: an electronic device runs a first application and invokes an animation configuration file to display a first animation of the first application, where the animation configuration file includes N feature attributes of the first animation and values corresponding to the N feature attributes, and N is a positive integer; the electronic device then runs a second application and invokes the same animation configuration file to display a second animation of the second application, where the animation configuration file includes M feature attributes of the second animation and values corresponding to the M feature attributes, and M is a positive integer.
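
A toy sketch of the shared configuration-file idea, with a hypothetical JSON layout and a `play_animation` helper standing in for the electronic device's invocation logic:

```python
import json

# A hypothetical shared animation configuration file: each animation is
# described by a set of feature attributes and their values.
ANIMATION_CONFIG = json.loads("""
{
  "window_open": {"duration_ms": 300, "curve": "ease_out", "scale_from": 0.8},
  "list_scroll": {"duration_ms": 150, "curve": "linear", "friction": 0.92}
}
""")

def play_animation(app: str, name: str) -> None:
    """Both applications invoke the same configuration file."""
    attrs = ANIMATION_CONFIG[name]
    print(f"{app}: playing '{name}' with {len(attrs)} attributes: {attrs}")

play_animation("first_app", "window_open")   # N = 3 feature attributes
play_animation("second_app", "list_scroll")  # M = 3 feature attributes
```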

Method for establishing complex motion controller

A method for establishing a complex motion controller includes the following steps: obtaining a source controller and a destination controller, wherein the source controller is configured to generate a source motion and the destination controller is configured to generate a destination motion; determining a transition tensor between the source controller and the destination controller, wherein the transition tensor has a plurality of indices, one of which corresponds to a plurality of phases of the source motion; calculating a plurality of transition outcomes of the transition tensor and recording the plurality of transition outcomes according to the plurality of indices; calculating a plurality of transition qualities according to the plurality of transition outcomes; and searching for an optimal transition quality among the plurality of transition qualities to establish a complex motion controller for generating a complex motion corresponding to one of the plurality of phases.
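
A schematic of the tensor-and-search procedure, where `simulate_transition` is a stand-in for actually running the two controllers and the quality metric is invented purely for illustration:

```python
import itertools

# Hypothetical discretization: phases of the source motion and candidate
# target frames of the destination motion index the transition tensor.
SOURCE_PHASES = [0.0, 0.25, 0.5, 0.75]   # one tensor index
TARGET_FRAMES = [0, 10, 20]              # another tensor index

def simulate_transition(phase: float, target_frame: int) -> dict:
    """Stand-in for simulating a transition between the two controllers."""
    # A real implementation would run the physics simulation; here we
    # fabricate an outcome from a simple alignment heuristic.
    alignment = 1.0 - abs(phase - target_frame / 30.0)
    return {"alignment": alignment, "fell_over": alignment < 0.3}

# Record an outcome for every index combination, then score each one.
outcomes = {
    idx: simulate_transition(*idx)
    for idx in itertools.product(SOURCE_PHASES, TARGET_FRAMES)
}
qualities = {
    idx: (0.0 if out["fell_over"] else out["alignment"])
    for idx, out in outcomes.items()
}
best_idx = max(qualities, key=qualities.get)
print(f"best transition at phase={best_idx[0]}, frame={best_idx[1]}, "
      f"quality={qualities[best_idx]:.2f}")
```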

System for customizing in-game character animations by players

Systems and methods for using a deep learning framework to customize the animation of an in-game character of a video game. The system can be preconfigured with animation rule sets corresponding to various animations. Each animation can comprise a series of distinct poses that collectively form the particular animation. The system can provide an animation-editing interface that enables a user of the video game to modify at least one pose or frame of the animation. The system can realistically extrapolate these modifications across some or all portions of the animation. Additionally or alternatively, the system can realistically extrapolate the modifications across other types of animations.
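
The deep-learning extrapolation itself isn't specified here, so this toy version propagates a single-pose edit with a simple distance falloff, purely to illustrate the edit-then-extrapolate shape of the workflow:

```python
# Toy stand-in for the learned extrapolation: spread a user's edit to one
# pose across neighboring frames with a falloff so the animation stays
# coherent. (The patent describes a deep learning framework; the linear
# falloff here is purely illustrative.)
def extrapolate_edit(poses: list[float], edited_frame: int,
                     delta: float, radius: int = 3) -> list[float]:
    result = list(poses)
    for f in range(len(poses)):
        dist = abs(f - edited_frame)
        if dist <= radius:
            weight = 1.0 - dist / (radius + 1)  # weaken with distance
            result[f] += delta * weight
    return result

# Joint angles for an 8-frame animation; the user raises frame 4 by 10 degrees.
angles = [0.0, 5.0, 10.0, 15.0, 20.0, 15.0, 10.0, 5.0]
print(extrapolate_edit(angles, edited_frame=4, delta=10.0))
```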

AUTONOMOUS ANIMATION IN EMBODIED AGENTS

Embodiments described herein relate to the autonomous animation of Gestures by the automatic application of animations to Input Text, or the automatic application of animation Mark-up, wherein the Mark-up triggers nonverbal communication expressions or Gestures. In order for an Embodied Agent's movements to come across as natural and human-like as possible, a Text-To-Gesture Algorithm (TTG Algorithm) analyses the Input Text of a Communicative Utterance before it is uttered by an Embodied Agent and marks it up with appropriate and meaningful Gestures, given the meaning, context, and emotional content of the Input Text and the gesturing style or personality of the Embodied Agent.
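
A toy text-to-gesture markup pass, with an invented gesture lexicon and tag syntax; a real TTG Algorithm would analyse meaning, context, and emotion rather than keyword patterns:

```python
import re

# Hypothetical gesture lexicon keyed by textual cues.
GESTURE_CUES = {
    r"\b(big|huge|enormous)\b": "wide_arms",
    r"\b(you|your)\b": "point_listener",
    r"\b(maybe|perhaps)\b": "shrug",
}

def text_to_gesture_markup(utterance: str) -> str:
    """Mark up input text with gesture tags before it is uttered."""
    marked = utterance
    for pattern, gesture in GESTURE_CUES.items():
        marked = re.sub(pattern, lambda m: f"<g:{gesture}>{m.group(0)}</g>",
                        marked, flags=re.IGNORECASE)
    return marked

print(text_to_gesture_markup("Maybe you saw the huge crowd outside."))
# -> <g:shrug>Maybe</g> <g:point_listener>you</g> saw the <g:wide_arms>huge</g> ...
```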

Generating animation based on first scene and second scene

A method can include receiving a starting scene for display and an ending scene for display, the starting scene including at least a first graphical element in a first location and a second graphical element in a second location, and the ending scene including at least the first graphical element in a third location and the second graphical element in a fourth location; generating multiple individual candidate animations based on the starting scene and the ending scene, each of the candidate animations including display of the first graphical element transitioning from the first location to the third location and display of the second graphical element transitioning from the second location to the fourth location; for each of the individual candidate animations, determining a score; selecting one of the individual candidate animations based on the determined scores; and presenting the selected individual candidate animation.
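
A compact sketch of the generate-score-select loop, with invented candidate paths (varying curvature) and an assumed path-length score:

```python
import math
import random

# Scenes map element -> (x, y). The first element moves from its starting
# location to its ending location; likewise the second.
start = {"icon": (0.0, 0.0), "label": (1.0, 0.0)}
end   = {"icon": (4.0, 3.0), "label": (1.0, 2.0)}

def make_candidate(seed: int, steps: int = 10) -> dict:
    """One candidate animation: a per-element path from start to end."""
    rng = random.Random(seed)
    bend = rng.uniform(-1.0, 1.0)  # candidates differ in path curvature
    paths = {}
    for name in start:
        (x0, y0), (x1, y1) = start[name], end[name]
        paths[name] = [
            (x0 + (x1 - x0) * t, y0 + (y1 - y0) * t + bend * math.sin(math.pi * t))
            for t in (i / steps for i in range(steps + 1))
        ]
    return paths

def score(candidate: dict) -> float:
    """Hypothetical score: shorter total path length is better."""
    total = sum(
        math.dist(p, q)
        for path in candidate.values()
        for p, q in zip(path, path[1:])
    )
    return -total

candidates = [make_candidate(seed) for seed in range(5)]
best = max(candidates, key=score)
print(f"selected candidate with score {score(best):.3f}")
```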

INFORMATION PROCESSING DEVICE ESTIMATING A PARAMETER BASED ON ACQUIRED INDEXES REPRESENTING AN EXERCISE STATE OF A SUBJECT, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY RECORDING MEDIUM
20230019283 · 2023-01-19

An information processing device includes a memory that stores a program and a processor that executes the program. The processor is configured to acquire, from exercise data representing an exercise state of a subject, exercise parameter information including a plurality of parameters that represent the exercise state of the subject and are correlated with each other. When an animation representing a motion of the subject based on the acquired exercise parameter information is displayed and an operation for changing the value of a first parameter of the plurality of parameters is then received, the processor generates an animation reflecting at least the first parameter whose value was changed and a second parameter of the plurality of parameters, the value of the second parameter being changed in conjunction with the value of the first parameter.
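
A minimal sketch of the coupled-parameter update, assuming hypothetical gait parameters (cadence and stride length) and an invented coupling coefficient:

```python
# Hypothetical coupled gait parameters: cadence (steps/min) and stride
# length (m) are correlated, so editing one adjusts the other.
COUPLING = -0.004  # assumed: stride shortens slightly as cadence rises

def change_parameter(params: dict, name: str, new_value: float) -> dict:
    updated = dict(params)
    if name == "cadence":
        delta = new_value - params["cadence"]
        updated["cadence"] = new_value
        # The second parameter changes in conjunction with the first.
        updated["stride_m"] = params["stride_m"] + COUPLING * delta
    return updated

params = {"cadence": 110.0, "stride_m": 1.30}
params = change_parameter(params, "cadence", 125.0)
print(f"regenerating animation with cadence={params['cadence']}, "
      f"stride={params['stride_m']:.2f} m")
```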

Interactive Avatars in Artificial Reality

Aspects of the present disclosure are directed to creating interactive avatars that can be pinned as world-locked artificial reality content. Once pinned, an avatar can interact with its environment according to contextual cues and rules, without active control by the avatar owner. An interactive avatar system can configure the avatar with action rules, visual elements, and settings based on user selections. Once an avatar is configured and pinned to a location by its owner, a central system can provide the avatar (with its configurations) to other XR devices at that location. This allows users of those XR devices to discover and interact with the avatar according to the configurations established by the avatar owner.
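
A rough sketch of the pin-and-discover flow, with a hypothetical in-memory registry standing in for the central system:

```python
from dataclasses import dataclass, field

@dataclass
class AvatarConfig:
    """World-locked avatar: action rules, visuals, and a pinned location."""
    owner: str
    location: tuple[float, float]  # pinned world coordinates
    action_rules: dict = field(default_factory=dict)
    visual_elements: list = field(default_factory=list)

# Hypothetical central registry keyed by a coarse location cell.
REGISTRY: dict[tuple[int, int], list[AvatarConfig]] = {}

def pin_avatar(cfg: AvatarConfig) -> None:
    cell = (int(cfg.location[0]), int(cfg.location[1]))
    REGISTRY.setdefault(cell, []).append(cfg)

def avatars_near(device_location: tuple[float, float]) -> list[AvatarConfig]:
    """What a visiting XR device receives from the central system."""
    cell = (int(device_location[0]), int(device_location[1]))
    return REGISTRY.get(cell, [])

pin_avatar(AvatarConfig(
    owner="alice",
    location=(12.5, 7.2),
    action_rules={"on_wave": "wave_back"},  # reacts without owner control
    visual_elements=["hat", "name_tag"],
))
for avatar in avatars_near((12.9, 7.0)):
    print(f"discovered {avatar.owner}'s avatar; rules: {avatar.action_rules}")
```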