Patent classifications
A63F2300/6607
VIRTUAL OBJECT CONTROL METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
A virtual object control method includes obtaining a panel presentation instruction generated through triggering on a user interface (UI) and presenting an interaction control panel on the UI in response to the panel presentation instruction. When an interaction button or a movement control on the interaction control panel is triggered, and the interaction button is associated with an object selection panel, the interaction control panel is replaced with the object selection panel in the UI. A to-be-controlled virtual object is selected through the object selection panel, and the selected virtual object is controlled to perform an interaction corresponding to the interaction button.
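The panel flow described above can be sketched as a small state holder; the class and method names (`UIController`, `trigger_button`, and so on) are illustrative assumptions, not the patent's actual implementation:

```python
class UIController:
    """Minimal sketch of the panel-replacement flow in the abstract."""

    def __init__(self):
        self.active_panel = None
        self.selected_object = None

    def present_interaction_panel(self):
        # Respond to a panel presentation instruction from the UI.
        self.active_panel = "interaction_control"

    def trigger_button(self, button, associated_panel=None):
        # A button associated with an object selection panel replaces
        # the interaction control panel with that panel in the UI.
        if self.active_panel == "interaction_control" and associated_panel:
            self.active_panel = associated_panel

    def select_object(self, virtual_object):
        # Choose the to-be-controlled virtual object on the selection panel.
        if self.active_panel == "object_selection":
            self.selected_object = virtual_object

    def perform_interaction(self, interaction):
        # Control the selected object per the button's interaction.
        return f"{self.selected_object} performs {interaction}"
```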
CONTENT GENERATION SYSTEM AND METHOD
A content generation system operable to generate one or more actions to be performed by an agent, the system comprising an input receiving unit operable to receive information defining an input action, the input action being an action associated with the agent, a constraint identifying unit operable to identify one or more constraints associated with the input action, and an action generation unit operable to generate, using a machine learning model, one or more actions in dependence upon the information defining the input action and the identified constraints, wherein the one or more actions are variations of the defined action.
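A rough sketch of constrained action variation follows; here random perturbation stands in for the machine learning model, and all parameter names are assumptions made for illustration:

```python
import random

def generate_variations(action, constraints, n=3, noise=0.1, seed=0):
    """Generate variations of an input action that respect its constraints.

    `action` maps parameter names to values; `constraints` maps parameter
    names to (low, high) bounds. A trained generative model would replace
    the random perturbation used here.
    """
    rng = random.Random(seed)
    variations = []
    for _ in range(n):
        varied = {}
        for param, value in action.items():
            lo, hi = constraints.get(param, (float("-inf"), float("inf")))
            perturbed = value + rng.uniform(-noise, noise)
            varied[param] = min(max(perturbed, lo), hi)  # clamp to constraint
        variations.append(varied)
    return variations
```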
CONTENT GENERATION SYSTEM AND METHOD
A content generation system operable to generate one or more actions to be performed by an agent, the system comprising an input receiving unit operable to receive two or more actions for the agent, a model generation unit operable to input the actions to a machine learning model so as to generate a trained machine learning model, and an action generation unit operable to generate an action to be performed by the agent, wherein the generation comprises the selection of a latent space interpolation state associated with the trained machine learning model.
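The latent-space selection step can be illustrated with plain linear interpolation between two encoded actions; in the patented system the interpolated state would be decoded through the trained model, which this sketch omits:

```python
def interpolate_actions(action_a, action_b, t):
    """Linearly interpolate between two action encodings.

    `t` selects a state between the two actions (0 yields action_a,
    1 yields action_b); the encodings here are simple numeric vectors
    standing in for latent representations.
    """
    return [(1 - t) * a + t * b for a, b in zip(action_a, action_b)]
```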
ANIMATION PRODUCTION SYSTEM
To enable animations to be shot in a virtual space, an animation production method comprises: a step of placing a character in a virtual space; a step of placing a virtual camera for shooting the character in the virtual space; a step of acquiring action data defining an action of the character from an external source; a step of operating the character based on the action data; and a step of shooting the action of the character with the camera.
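The steps above can be sketched as a single loop that applies externally acquired action data to the character and records each frame; the function and field names are assumptions for illustration:

```python
def shoot_animation(action_data, camera_pos):
    """Operate a character from action data and 'shoot' it with a camera.

    Each entry of `action_data` is one pose frame; the returned frames
    pair the camera position with the character state, standing in for
    rendered output.
    """
    frames = []
    for pose in action_data:
        character_state = {"pose": pose}  # operate the character
        frames.append({"camera": camera_pos, **character_state})  # shoot
    return frames
```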
Animation control method for multiple participants
A computer system is used to host a virtual reality universe process in which multiple avatars are independently controlled in response to client input. The host provides coordinated motion information for defining coordinated movement between designated portions of multiple avatars, and an application detects conditions triggering a coordinated movement sequence between two or more avatars. During coordinated movement, user commands for controlling avatar movement may be in part used normally and in part ignored or otherwise processed, causing the involved avatars to respond in part to respective client input and in part to predefined coordinated movement information. Thus, users may be assisted in executing coordinated movement between multiple avatars.
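The partial use of client input during a coordinated sequence can be sketched as a weighted blend; the `weight` parameter and vector representation are assumptions, not the patent's actual scheme:

```python
def blend_motion(client_cmd, coordinated_cmd, weight):
    """Blend client input with predefined coordinated movement.

    `weight` is the fraction of the result taken from the coordinated
    movement information; the remainder follows the client's command.
    Commands are simple numeric vectors (e.g. velocity components).
    """
    return [(1 - weight) * c + weight * k
            for c, k in zip(client_cmd, coordinated_cmd)]
```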
ANIMATION PRODUCTION SYSTEM
The principal invention for solving the above-described problem is an animation production method that provides a virtual space in which a given object is placed, the method comprising: detecting an operation of a user equipped with a head-mounted display; controlling a movement of the object based on the detected operation of the user; shooting the movement of the object; storing action data relating to the movement of the shot object in a first track; and storing audio from the user in a second track.
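The two-track store can be sketched minimally; the class and track numbering are illustrative assumptions:

```python
class Recording:
    """Two-track recording: track 1 holds action data for the shot
    object, track 2 holds audio captured from the user."""

    def __init__(self):
        self.tracks = {1: [], 2: []}

    def store_action(self, action_sample):
        # Action data relating to the shot object's movement -> first track.
        self.tracks[1].append(action_sample)

    def store_audio(self, audio_sample):
        # Audio from the user -> second track.
        self.tracks[2].append(audio_sample)
```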
Method to use recognition of nearby physical surfaces to generate NPC reactions to events
A method to generate an appropriate reaction, by an NPC in an XR game, to a significant event in the game includes: compiling a record, from a previously generated SMM, of surfaces in the XR space, categorizing the surfaces; and, after the game begins, tracking in real time the physical surroundings of the NPC, allowing 3D positions of the NPC relative to nearby physical surfaces to be continuously determined, and tracking events occurring in the game, allowing detection of any event deemed to be significant. For each detected event deemed significant, occurring at a corresponding event time, an appropriate action is determined for the NPC to carry out in response, based in part on whether the NPC is positioned close to a physical surface at the event time, and the NPC is directed to carry out that action.
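The proximity-based choice of NPC reaction can be sketched as follows; the surface categories, threshold, and action names are assumptions for illustration, not the patent's taxonomy:

```python
def choose_npc_action(npc_pos, surfaces, threshold=1.0):
    """Pick an NPC reaction based on nearby categorized physical surfaces.

    `surfaces` is a list of dicts with "position" (3D tuple) and
    "category"; an NPC within `threshold` of a surface reacts by using
    it, otherwise it moves to open space.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearby = [s for s in surfaces if dist(npc_pos, s["position"]) < threshold]
    if any(s["category"] == "wall" for s in nearby):
        return "take_cover"
    if nearby:
        return "duck_behind_surface"
    return "run_to_open_space"
```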
Enhanced pose generation based on generative modeling
Systems and methods are provided for enhanced pose generation based on generative modeling. An example method includes accessing an autoencoder trained based on poses of real-world persons, each pose being defined based on location information associated with joints, with the autoencoder being trained to map an input pose to a feature encoding associated with a latent feature space. Information identifying, at least, a first pose and a second pose associated with a character configured for inclusion in an in-game world is obtained via user input, with each of the poses being defined based on location information associated with the joints and with the joints being included on a skeleton associated with the character. Feature encodings associated with the first pose and the second pose are generated based on the autoencoder. Output poses are generated based on transition information associated with the first pose and the second pose.
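The transition step can be sketched as interpolation between the feature encodings of the two poses followed by decoding; `encode` and `decode` stand in for the trained autoencoder's mappings and are assumptions of this sketch:

```python
def transition_poses(encode, decode, pose_a, pose_b, steps):
    """Generate output poses between two input poses.

    Encodes each pose into the (stand-in) latent feature space,
    linearly interpolates the encodings, and decodes each interpolated
    encoding back into a pose.
    """
    za, zb = encode(pose_a), encode(pose_b)
    out = []
    for i in range(steps):
        t = i / (steps - 1)
        z = [(1 - t) * a + t * b for a, b in zip(za, zb)]
        out.append(decode(z))
    return out
```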
Systems, methods, and devices for creating a spline-based video animation sequence
A spline-based animation process creates an animation sequence. The process receives a plurality of frames that illustrate a figure based on a design template (e.g., which includes a skeleton having segments). The process further identifies a spine segment, generates hip, shoulder, and head segments at respective positions relative to the spine segment, identifies limb and facial feature segments, and converts the segments into respective splines bound between endpoints. The process further determines changes between frames for respective splines and animates movement of the figure over a sequence of frames based on the changes.
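The change-determination step can be sketched as a per-spline delta between consecutive frames; representing each spline as a list of endpoint tuples is an assumption of this sketch:

```python
def spline_deltas(frame_a, frame_b):
    """Compute per-spline endpoint changes between two frames.

    Each frame maps a spline name (e.g. "spine", "limb") to a list of
    endpoint coordinate tuples; the returned deltas drive the animated
    movement of the figure across the frame sequence.
    """
    deltas = {}
    for name, pts_a in frame_a.items():
        pts_b = frame_b[name]
        deltas[name] = [tuple(b - a for a, b in zip(pa, pb))
                        for pa, pb in zip(pts_a, pts_b)]
    return deltas
```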
INFORMATION PROCESSING SYSTEM, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM HAVING STORED THEREIN PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING METHOD
Provided is an information processing system in which a moving object (ball) is moved at a first speed in first to third virtual spaces of first to third apparatuses. In the second virtual space of the second apparatus, the moving speed of the moving object is changed from the first speed to a second speed lower than the first speed.
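The per-apparatus speed rule can be sketched in one function; the apparatus numbering follows the abstract, while the function name is an assumption:

```python
def speed_in_space(apparatus_id, first_speed, second_speed):
    """Return the moving object's speed in a given apparatus's space.

    The object moves at the first speed in the first and third virtual
    spaces, and at the lower second speed in the second apparatus's
    virtual space.
    """
    return second_speed if apparatus_id == 2 else first_speed
```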