Patent classifications
A63F13/42
Interactive environment with virtual environment space scanning
An interactive environment image may be displayed in a virtual environment space, and interaction with the interactive environment image may be detected within a three-dimensional space that corresponds to the virtual environment space. The interactive environment image may be a three-dimensional image, or it may be two-dimensional. An image is displayed to provide a visual representation of an interactive environment image including one or more virtual objects, which may be spatially positioned. User interaction with the visualized representation in the virtual environment space may be detected and, in response to user interaction, the interactive environment image may be changed.
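The detection step described above can be sketched as a hit test: a detected three-dimensional touch point is compared against spatially positioned virtual objects in the virtual environment space. This is a minimal illustration, not the patented method; the object names, coordinates, and interaction radii are assumptions.

```python
from dataclasses import dataclass
import math

@dataclass
class VirtualObject:
    """A spatially positioned virtual object in the virtual environment space."""
    name: str
    position: tuple  # (x, y, z) in virtual-space coordinates
    radius: float    # interaction radius around the object

def find_interacted_object(touch_point, objects):
    """Return the first virtual object whose interaction radius contains
    the detected three-dimensional touch point, or None if no object
    was interacted with."""
    for obj in objects:
        if math.dist(touch_point, obj.position) <= obj.radius:
            return obj
    return None

scene = [VirtualObject("button", (0.0, 1.0, 2.0), 0.2),
         VirtualObject("lever", (1.0, 0.5, 2.0), 0.3)]
hit = find_interacted_object((0.05, 1.1, 2.05), scene)
```

In response to a hit, the interactive environment image would then be changed (for example, the matched object redrawn in a new state).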
Passing control of cloud gameplay
A method includes: executing a video game by a cloud gaming machine; streaming, by a video server over a network, video generated from the executing video game to a plurality of client devices; enabling gameplay of the video game that includes control of a virtual object of the video game by one of the client devices, and passing the control of the virtual object to each of the client devices in turn, wherein passing the control is responsive to detecting a predefined condition during the gameplay of the video game.
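The turn-passing logic can be sketched as a round-robin rotation over client devices, advanced whenever a predefined condition is detected. This is an illustrative reading of the abstract, not the claimed implementation; the `"turn_over"` event name is a hypothetical stand-in for that predefined condition.

```python
from itertools import cycle

class ControlPasser:
    """Passes control of a virtual object among client devices in turn."""

    def __init__(self, client_ids):
        self._order = cycle(client_ids)
        self.controller = next(self._order)  # first client starts in control

    def on_game_event(self, event):
        """Advance control to the next client when the predefined
        condition (here, a hypothetical 'turn_over' event) is detected;
        other events leave the current controller unchanged."""
        if event == "turn_over":
            self.controller = next(self._order)
        return self.controller

passer = ControlPasser(["client_a", "client_b", "client_c"])
passer.on_game_event("turn_over")  # control passes to client_b
passer.on_game_event("move")       # unrelated event: client_b keeps control
```

The `cycle` iterator makes the rotation wrap around, so after the last client the first one receives control again.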
GAME CONTENT CHOREOGRAPHY BASED ON GAME CONTEXT USING SEMANTIC NATURAL LANGUAGE PROCESSING AND MACHINE LEARNING
A module that implements a state machine generates a first environmental condition experienced by a player in a video game. The first environmental condition is produced by the module operating in a first state of the state machine and the first state is associated with a first natural language tag. The state machine transitions from the first state to a second state based on a semantic similarity of an input phrase and a second natural language tag associated with the second state. The module, while operating in the second state, generates a second environmental condition experienced by the player in the video game. In some cases, the module selects the second natural language tag based on a ranking of tags for a plurality of states on their semantic similarity to an input phrase. The ranking is generated by a semantic natural language processing (NLP) machine learning (ML) algorithm.
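The transition mechanism can be sketched as ranking candidate states by the similarity of their natural language tags to an input phrase. A simple string-similarity ratio stands in below for the semantic NLP/ML model the abstract describes; the state names, tags, and environmental conditions are illustrative.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Stand-in scorer for the semantic NLP/ML similarity described in
    the abstract; a character-level ratio replaces the learned model."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

class ChoreographyModule:
    """State machine whose states carry natural language tags; a
    transition selects the state whose tag ranks highest in similarity
    to the input phrase, then emits that state's environmental condition."""

    def __init__(self, states):
        self.states = states          # state -> (tag, environmental condition)
        self.current = next(iter(states))

    def transition(self, phrase):
        ranked = sorted(self.states,
                        key=lambda s: similarity(phrase, self.states[s][0]),
                        reverse=True)
        self.current = ranked[0]
        return self.states[self.current][1]

module = ChoreographyModule({
    "calm":  ("peaceful sunny meadow", "ambient birdsong"),
    "storm": ("dark thunderstorm approaching", "rain and lightning"),
})
condition = module.transition("a thunderstorm is coming")
```

With a real embedding model, `similarity` would compare phrase and tag vectors rather than raw strings, but the ranking-and-select structure is the same.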
Method and apparatus for selecting accessory in virtual environment, device, and readable storage medium
This application discloses a method and apparatus for selecting an accessory in a virtual environment. The method includes: displaying a first virtual environment interface; receiving a trigger operation on an accessory switching control; displaying a candidate accessory zone in a local peripheral region of the accessory switching control according to the trigger operation; receiving a selection operation on a target gun accessory in n gun accessories; and displaying a second virtual environment interface according to the selection operation. The accessory switching control is displayed superimposed on a picture. When the trigger operation on the accessory switching control is received, the candidate accessory zone is displayed in the peripheral region of the accessory switching control, and the n gun accessories of the same accessory type are displayed in the candidate accessory zone.
Systems and methods for a shared interactive environment
Systems and methods for maintaining a shared interactive environment include receiving, by a server, requests to register a first input device of a first user and a second input device of a second user with a shared interactive environment. The first input device may be for a first modality involving user input for an augmented reality (AR) environment, and the second input device may be for a second modality involving user input for a personal computer (PC) based virtual environment or a virtual reality (VR) environment. The server may register the first and second input device with the shared interactive environment. The server may receive inputs from a first adapter for the first modality and from a second adapter for the second modality. The inputs may be for the first and second user to use the shared interactive environment.
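The registration-and-routing flow above can be sketched as a server that records each input device under its modality and funnels inputs from any modality into one shared event stream. The modality names and device identifiers are assumptions; the per-modality adapters are reduced to a comment.

```python
class SharedEnvironmentServer:
    """Registers input devices under per-modality adapters and routes
    their inputs into a single shared interactive environment."""

    def __init__(self):
        self.registrations = {}  # device_id -> modality ("AR", "VR", "PC")
        self.events = []         # inputs applied to the shared environment

    def register(self, device_id, modality):
        """Register an input device with the shared interactive environment."""
        self.registrations[device_id] = modality

    def receive_input(self, device_id, user_input):
        """Accept an input from a registered device; an adapter for the
        device's modality would normalize the input at this point."""
        if device_id not in self.registrations:
            raise KeyError(f"{device_id} is not registered")
        modality = self.registrations[device_id]
        self.events.append((modality, device_id, user_input))

server = SharedEnvironmentServer()
server.register("headset-1", "AR")
server.register("desktop-1", "PC")
server.receive_input("headset-1", "grab")
```

Because both modalities feed the same event list, an AR user and a PC or VR user act on the same shared state.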
Interactive augmented reality experiences using positional tracking
Interactive augmented reality experiences with an eyewear device including a position detection system and a display system. The eyewear device registers a first marker position for a user-controlled virtual game piece and a second marker position for an interaction virtual game piece. The eyewear device monitors its position (e.g., location and orientation) and updates the position of the user-controlled virtual game piece accordingly. The eyewear device additionally monitors the position of the user-controlled virtual game piece with respect to the interaction virtual game piece for use in generating a score. Augmented reality examples include a “spheroidal balancing” augmented reality experience.
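The score-generation step can be sketched as accumulating points while the user-controlled game piece stays within a threshold distance of the interaction game piece. The threshold and the one-point-per-update rule are illustrative assumptions, not the patented scoring method.

```python
import math

def update_score(score, user_piece_pos, interaction_piece_pos, threshold=0.5):
    """Add a point while the user-controlled virtual game piece is within
    the threshold distance of the interaction virtual game piece."""
    if math.dist(user_piece_pos, interaction_piece_pos) <= threshold:
        return score + 1
    return score

# Positions of the user-controlled piece over three tracking updates,
# as the eyewear device's pose moves it through the scene.
score = 0
for user_pos in [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (1.0, 1.0, 0.0)]:
    score = update_score(score, user_pos, (0.2, 0.0, 0.0))
```

In the described system, `user_pos` would be derived from the eyewear device's tracked location and orientation rather than supplied directly.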
SYSTEMS AND METHODS FOR EMULATION OF USER INPUT DURING A PLAY OF A LEGACY GAME
A method for emulation of user input during a play of a legacy game is described. The method includes receiving a user input from an updated hand-held controller and parsing the user input to identify an updated input device of the updated hand-held controller. The method further includes determining, based on the identity of the updated input device, an identity of a legacy input device of a legacy hand-held controller. The method includes determining whether one or more blocks of code for servicing a functionality of the legacy input device of the legacy hand-held controller are cached, and accessing one or more instructions of a legacy game code of the legacy game upon determining that the one or more blocks of code are not cached. The method includes compiling the one or more blocks of code from the one or more instructions of the legacy game code.
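The cache-or-compile flow described above can be sketched as follows. The mapping from updated inputs to legacy devices and the "compile" step are hypothetical stand-ins; a real emulator would translate actual legacy game instructions.

```python
class EmulationCache:
    """Caches blocks of code compiled from legacy game instructions for
    servicing a legacy input device of a legacy hand-held controller."""

    # Hypothetical mapping: updated input device -> legacy input device.
    DEVICE_MAP = {"touchpad": "select_button", "share_button": "start_button"}

    def __init__(self):
        self._cache = {}        # legacy device -> compiled handler
        self.compile_count = 0  # how many times compilation was needed

    def _compile_blocks(self, legacy_device):
        """Stand-in for compiling blocks of code from the one or more
        instructions of the legacy game code."""
        self.compile_count += 1
        return f"compiled-handler-for-{legacy_device}"

    def handle_input(self, updated_device):
        """Map the updated input device to its legacy counterpart, then
        serve its handler from the cache, compiling only on a miss."""
        legacy_device = self.DEVICE_MAP[updated_device]
        if legacy_device not in self._cache:
            self._cache[legacy_device] = self._compile_blocks(legacy_device)
        return self._cache[legacy_device]

emu = EmulationCache()
emu.handle_input("touchpad")  # cache miss: compiles the handler
emu.handle_input("touchpad")  # cache hit: no recompilation
```

Caching the compiled blocks means repeated presses of the same control avoid re-reading and recompiling the legacy game code.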