Patent classifications
G06T13/40
Animation preparing device, animation preparing method and recording medium
The animation preparing device includes at least one processor. The processor acquires exercise data concerning an exercise that the user has done or is doing. In a case where a plurality of causes exists for a characteristic point of the user detected based on the acquired exercise data, the processor prepares, from the exercise data, a first user animation represented in a first direction of line of sight, as an animation of the user's exercise, in order to indicate a first cause for the characteristic point, and prepares a second user animation represented in a second direction of line of sight, separate from the first direction, in order to indicate a second cause for the characteristic point that is separate from the first cause.
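As an illustrative sketch only, the per-cause, per-viewpoint preparation described in the abstract could be modeled as follows. All names, the cause-to-direction mapping, and the data shapes are hypothetical; nothing below comes from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class UserAnimation:
    cause: str           # the cause of the characteristic point this animation explains
    view_direction: str  # the line-of-sight direction used to render it
    frames: list         # exercise-data frames rendered from that direction

def prepare_animations(exercise_data, causes):
    """Prepare one animation per detected cause, each rendered in a distinct
    direction of line of sight (hypothetical mapping: cause index -> view)."""
    directions = ["front", "side", "top", "rear"]
    return [UserAnimation(cause=c,
                          view_direction=directions[i % len(directions)],
                          frames=list(exercise_data))
            for i, c in enumerate(causes)]
```

The point of the claim is that each cause gets its own viewing direction, which the round-robin assignment above illustrates in the simplest possible way.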
Using text for avatar animation
Systems and processes for animating an avatar are provided. An example process of animating an avatar includes at an electronic device having one or more processors and memory, receiving text, determining an emotional state, and generating, using a neural network, a speech data set representing the received text and a set of parameters representing one or more movements of an avatar based on the received text and the determined emotional state.
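A toy sketch of the claimed pipeline. The neural network of the abstract is replaced here by a hand-written placeholder mapping, and all names and parameter choices are hypothetical assumptions, not the patent's method.

```python
def animate_avatar(text, emotional_state):
    """Return (speech data set, avatar movement parameters) for received text,
    conditioned on a determined emotional state."""
    # Placeholder "speech data set": one sample per character of the text.
    speech = [ord(c) for c in text]
    # Movement parameters modulated by the emotional state (toy values).
    intensity = {"happy": 1.5, "sad": 0.5}.get(emotional_state, 1.0)
    movement = {"mouth_openness": intensity,
                "brow_raise": 1.0 if emotional_state == "happy" else 0.0}
    return speech, movement
```

In the actual disclosure a single neural network jointly produces both outputs from the text and emotional state; the stub above only mirrors that interface.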
Immersive virtual entertainment system
Aspects of the subject disclosure may include, for example, a method that includes generating a virtual venue for a virtual reality space, wherein generating the virtual venue includes replicating an architecture of a venue associated with an event and generating a plurality of virtual stores for the virtual venue, wherein each virtual store is associated with a respective participant of a plurality of participants; accessing a plurality of cameras and a plurality of microphones associated with the event; generating the virtual reality space based on the plurality of participants, the virtual venue, the plurality of microphones, and the plurality of cameras; generating a plurality of images for each participant according to that participant's profile for participating in the event; and presenting the virtual reality space to user equipment in a virtual reality format. Other embodiments are disclosed.
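The venue-then-space assembly described above can be sketched with plain dictionaries. The function names, store naming scheme, and data shapes below are illustrative assumptions only.

```python
def generate_virtual_venue(venue_architecture, participants):
    """Replicate the venue architecture and add one virtual store per participant."""
    return {
        "architecture": dict(venue_architecture),              # replicated layout
        "stores": {p: f"store_of_{p}" for p in participants},  # one store each
    }

def generate_vr_space(venue, cameras, microphones, participant_profiles):
    """Combine the venue, camera/microphone feeds, and per-profile participant
    images into a single virtual reality space."""
    images = {p: f"image_from_{profile}"
              for p, profile in participant_profiles.items()}
    return {"venue": venue, "cameras": list(cameras),
            "microphones": list(microphones), "images": images}
```

This separates the two generation steps the abstract enumerates: venue replication with per-participant stores first, then assembly of the space from participants, venue, and feeds.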
DISPLAY APPARATUS, DISPLAY CONTROL METHOD, AND DISPLAY SYSTEM
A display apparatus includes an image acquisition unit, an image extraction unit, a registration unit, a display control unit, a coordinate generation unit, and a motion detection unit. The coordinate generation unit generates, based on a detection result of a detection unit configured to detect the position of an object in a three-dimensional space, coordinates of the object in a screen. The motion detection unit detects a motion of the object based on the coordinates in the screen generated by the coordinate generation unit. The display control unit displays a first image on the screen. When a motion is detected by the motion detection unit, the display control unit further displays a second image on the screen based on coordinates corresponding to the detected motion, and changes the display of the first image.
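A minimal sketch of the detect-then-display behavior, under assumed names and a toy motion threshold; none of the specifics below are from the patent.

```python
def detect_motion(coords, threshold=5.0):
    """Report motion when successive screen coordinates of the tracked object
    differ by at least `threshold` (Manhattan distance); return the new
    coordinates on motion, else None."""
    if len(coords) < 2:
        return None
    (x0, y0), (x1, y1) = coords[-2], coords[-1]
    moved = abs(x1 - x0) + abs(y1 - y0) >= threshold
    return (x1, y1) if moved else None

def update_screen(screen, motion_coords):
    """Display control: on detected motion, add a second image at the motion
    coordinates and change how the first image is displayed."""
    screen["first_image"] = "highlighted" if motion_coords else "normal"
    if motion_coords:
        screen["second_image_at"] = motion_coords
    return screen
```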
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
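The key step of deriving the second user's perspective from real-world pose data relative to the first HMD can be sketched in 2D with yaw only. This simplification (and every name in it) is an assumption for illustration, not the disclosed method.

```python
def second_perspective(first_pose, relative_pose):
    """Derive the second HMD's VR perspective from the first HMD's pose and
    the second HMD's real-world pose relative to it.
    Poses are (x, y, yaw_degrees) tuples; 2D translation plus yaw only."""
    fx, fy, f_yaw = first_pose
    rx, ry, r_yaw = relative_pose
    return (fx + rx, fy + ry, (f_yaw + r_yaw) % 360)
```

A full implementation would compose 3D transforms (position plus orientation quaternions), but the composition idea is the same: the second perspective is the first perspective offset by the measured relative pose.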
SYSTEM AND METHOD FOR PLACING A CHARACTER ANIMATION AT A LOCATION IN A GAME ENVIRONMENT
A method for execution by a processor of a computer system for computer gaming. The method comprises maintaining a game environment; receiving a request to execute an animation routine during gameplay; attempting to identify a location in the game environment having a surrounding area that is free to host the requested animation routine; and, if the attempt is successful, carrying out the animation routine at the identified location in the game environment.
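The "attempt to identify a free location" step can be sketched as a scan of a 2D occupancy grid for an area large enough to host the animation's footprint. The grid representation and footprint shape are assumptions for illustration.

```python
def find_free_location(grid, footprint):
    """Scan a 2D occupancy grid (0 = free, 1 = occupied) for the first
    top-left corner of a fully free area of size footprint = (height, width).
    Return (row, col) on success, or None if the attempt fails."""
    rows, cols = len(grid), len(grid[0])
    fh, fw = footprint
    for r in range(rows - fh + 1):
        for c in range(cols - fw + 1):
            if all(grid[r + dr][c + dc] == 0
                   for dr in range(fh) for dc in range(fw)):
                return (r, c)
    return None
```

Returning `None` mirrors the claim's conditional: the animation routine is carried out only "in case the attempting is successful".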
Inverse reinforcement learning for user-specific behaviors
In one implementation, a method is provided for inverse reinforcement learning that tailors virtual-agent behaviors to a specific user. The method includes: obtaining an initial behavior model for a virtual agent and an initial state for a virtual environment associated with the virtual agent, wherein the initial behavior model includes one or more tunable parameters; generating, based on the initial behavior model and the initial state for the virtual environment, a first set of behavioral trajectories for the virtual agent; obtaining a second set of behavioral trajectories from a source different from the initial behavior model; and generating an updated behavior model by adjusting at least one of the tunable parameters of the initial behavior model as a function of the first and second sets of behavioral trajectories, wherein the first and second sets of behavioral trajectories are assigned different weights.
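One common way to realize such an update, sketched here under assumed names, is a feature-expectation-matching step in the style of classic inverse RL: the tunable parameters move along the weighted gap between the agent's trajectory statistics and those from the other source, with the two sets assigned different weights as in the claim.

```python
def update_tunable_parameters(params, agent_feats, other_feats,
                              w_agent=0.5, w_other=1.0, lr=0.1):
    """One gradient-style update of the behavior model's tunable parameters.
    agent_feats / other_feats are per-feature expectations computed from the
    first and second sets of behavioral trajectories; w_agent and w_other
    are the different weights assigned to the two sets."""
    grad = [w_other * o - w_agent * a for a, o in zip(agent_feats, other_feats)]
    return [p + lr * g for p, g in zip(params, grad)]
```

This is only one plausible instantiation; the patent does not commit to feature-expectation matching or to any particular update rule.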