Patent classifications
A63F2300/6018
Cloud-Based Game Slice Generation and Frictionless Social Sharing with Instant Play
Methods enable creation of a game slice from a game. A plurality of games is provided for presentation on a display device, each game identified by an image. Selection activity is detected at the image of one of the games. In response to the selection, game code of the selected game is executed to enable game play of an unlocked game, and the selected game is streamed to the display device. User interaction related to the game play is received. A recording of the game play is examined to identify portions of the game for generating a game slice, and the identified portions are returned in a suggested list for selection. A game slice is generated for a portion selected from the list, and a recording of the game play for the game slice is associated as a primary video segment. The game slice and the primary video segment are provided for sharing over a network.
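The abstract above describes examining a gameplay recording to suggest portions for a game slice. A minimal sketch of that suggestion step, assuming a hypothetical event log of `(timestamp, points)` pairs and a fixed clip window (all names and thresholds here are illustrative, not from the patent):

```python
# Hypothetical sketch: scan a gameplay recording's event log and suggest
# clip-worthy segments (here, windows around high-scoring moments) to offer
# in the "suggested list" for game-slice generation.
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the recording
    end: float

def suggest_slices(events, window=10.0, min_score=100):
    """Return suggested segments centered on events worth at least min_score."""
    suggestions = []
    for timestamp, score in events:
        if score >= min_score:
            suggestions.append(Segment(max(0.0, timestamp - window),
                                       timestamp + window))
    return suggestions

events = [(12.0, 50), (45.0, 150), (90.0, 300)]
print(suggest_slices(events))
```

A real implementation would score many signal types (kills, level completions, crowd reactions); the thresholding shown is only the simplest stand-in.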
Systems and methods for challenges between unique digital articles based on real-world information
Systems and methods to effectuate challenges between unique digital articles, the challenges being evaluated based on particular real-world information, are disclosed. Exemplary implementations may execute instances of a game; manage player accounts associated with the players, including a first and a second player account associated with a first and a second player; present a first user interface to the first player that enables the first player to define an objective for a challenge between a first and a second unique digital article, define one or more stakes for the challenge, and invite the second player to partake in the challenge; record executable code on a permanent registry to evaluate the challenge based on real-world information; and, responsive to the first or second unique digital article winning the challenge, distribute the one or more stakes to the first or second player, respectively.
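The settlement step described above (evaluate the challenge on real-world information, then pay stakes to the winning article's player) can be sketched as plain Python. The article schema, metric callback, and payout tuple are hypothetical; on-chain "executable code on a permanent registry" would be a smart contract, which this toy function only imitates:

```python
# Hypothetical sketch: settle a challenge between two unique digital articles
# using a real-world metric (e.g., a sports-stat feed); stakes go to the
# owner of the winning article.
def settle_challenge(article_a, article_b, metric, stakes):
    """Compare the two articles on a real-world metric and pay out the stakes."""
    winner = article_a if metric(article_a) >= metric(article_b) else article_b
    return winner["owner"], stakes

a = {"id": "nft-1", "owner": "player1", "goals": 3}
b = {"id": "nft-2", "owner": "player2", "goals": 1}
print(settle_challenge(a, b, lambda art: art["goals"], stakes=10))
```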
Method and system by which computer advances game on basis of user position information
A method includes acquiring position information of a user. The method further includes receiving an input operation designating a position range, within map data usable in a game, for which a game parameter can be configured. The method further includes advancing the game based on the game parameter when the acquired position information is within the designated position range.
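The core of the method above is a position-range check gating game advancement. A minimal sketch, assuming a flat-plane coordinate system and a hypothetical `progress` field (a real implementation would use geodesic distance on latitude/longitude):

```python
import math

def within_range(user_pos, center, radius):
    """True when the user's (x, y) position lies inside the circular range."""
    return math.hypot(user_pos[0] - center[0], user_pos[1] - center[1]) <= radius

def advance_game(state, user_pos, position_range):
    """Advance the game by the configured parameter only inside the range."""
    center, radius, parameter = position_range
    if within_range(user_pos, center, radius):
        state = dict(state, progress=state["progress"] + parameter)
    return state

state = {"progress": 0}
state = advance_game(state, (3, 4), ((0, 0), 10, 5))    # inside: distance 5 <= 10
state = advance_game(state, (30, 40), ((0, 0), 10, 5))  # outside: no advancement
print(state)  # {'progress': 5}
```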
SYSTEM AND METHOD FOR GENERATING AND DISPLAYING AVATARS
Among other things, embodiments of the present disclosure provide systems and methods for modifying avatar components of avatar datasets for multiple users, generating avatars based on the datasets, and displaying multiple avatars on a display screen of a graphical user interface.
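The three operations named in this abstract (modify avatar components in per-user datasets, generate avatars from the datasets, display multiple avatars together) can be sketched as follows. The dataset schema and string rendering are purely hypothetical stand-ins for real image composition:

```python
# Hypothetical sketch of the avatar pipeline: component edit -> generation -> display.
def update_component(avatar_dataset, component, value):
    """Return a copy of the avatar dataset with one component modified."""
    updated = dict(avatar_dataset)
    updated[component] = value
    return updated

def generate_avatar(avatar_dataset):
    """Render an avatar as a compact string from its components."""
    return "+".join(f"{k}={avatar_dataset[k]}" for k in sorted(avatar_dataset))

def display_avatars(datasets):
    """Generate every user's avatar for display on one screen."""
    return [generate_avatar(d) for d in datasets]

users = [{"hair": "red", "hat": "cap"}, {"hair": "blue", "hat": "none"}]
users[0] = update_component(users[0], "hair", "green")
print(display_avatars(users))
```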
Information processing device and content editing method
An acquisition section 104 acquires a plurality of sets of content data and stores the plurality of sets of content data in a content storage section 132. An editing processing section 110 generates a stream of continuous data obtained by temporally concatenating a plurality of sets of content data. An opening image generation section 114 generates, for each content, a set of opening image data. A clipping processing section 116 clips at least a portion of a set of content image data. A concatenating section 118 generates an edited set of image data obtained by temporally concatenating the set of opening image data and a clipped set of content image data.
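The editing flow above (generate an opening image per content, clip each content, concatenate temporally) reduces to a simple pipeline. A minimal sketch, modeling each content as a named list of frames; the `OPENING:` marker stands in for real opening-image generation:

```python
# Hypothetical sketch of the content-editing method: for each content,
# prepend generated opening image data to a clipped portion, then
# temporally concatenate everything into one edited stream.
def edit_content(contents, clip_len=3):
    stream = []
    for name, frames in contents:
        stream.append(f"OPENING:{name}")  # opening image data for this content
        stream.extend(frames[:clip_len])  # clipped portion of the content images
    return stream

clips = [("intro", ["i1", "i2", "i3", "i4"]), ("boss", ["b1", "b2"])]
print(edit_content(clips, clip_len=2))
```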
Content, orchestration, management and programming system
Procedurally generating live experiences for a virtualized music-themed world, including: providing a headless content management system supporting management of a back end; coupling a virtual world client to the back end of the content management system through an interface that provides a content packaging framework, enabling customized event scheduling and destination management; and procedurally generating a plurality of content elements in a real-time game engine for live experiences in the virtualized music-themed world.
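The procedural-generation and event-scheduling steps can be illustrated with a seeded generator. Venue names and the event schema are hypothetical; a real system would emit these content elements into the game engine rather than return dictionaries:

```python
import random

def generate_schedule(seed, venues, n_events=3):
    """Procedurally generate content elements (venue + showtime) for live shows."""
    rng = random.Random(seed)  # seeded so a given schedule is reproducible
    return [{"venue": rng.choice(venues), "hour": rng.randrange(24)}
            for _ in range(n_events)]

events = generate_schedule(42, ["main_stage", "club", "arena"])
print(len(events))  # 3
```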
VIDEO GAME TESTING AND AUTOMATION FRAMEWORK
An automated video game testing framework and method includes communicatively coupling an application programming interface (API) to an agent in a video game, where the video game includes a plurality of in-game objects that are native to the video game. The agent is managed as an in-game object of the video game. A test script is executed to control the agent, via the API, to induce gameplay and interrogate one or more target objects selected from the plurality of in-game objects native to the video game. Video game data indicating a behavior of the one or more target objects during the gameplay is received. Based on the received video game data, performance of the video game is evaluated.
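The framework above manages an agent as an in-game object and drives it from a test script through an API. A minimal sketch, with a hypothetical game state and object schema (the patent's actual API surface is not specified here):

```python
class Agent:
    """Hypothetical in-game agent driven through a test API."""
    def __init__(self, game):
        self.game = game
    def move_to(self, obj_id):
        # Induce gameplay: move the player to a native in-game object.
        self.game["player_pos"] = self.game["objects"][obj_id]["pos"]
    def interrogate(self, obj_id):
        # Return a snapshot of a target object's state for evaluation.
        return dict(self.game["objects"][obj_id])

def run_test_script(game):
    """Control the agent via the API and evaluate a target object's behavior."""
    agent = Agent(game)
    agent.move_to("door")
    state = agent.interrogate("door")
    return "pass" if state["pos"] == game["player_pos"] else "fail"

game = {"player_pos": (0, 0), "objects": {"door": {"pos": (5, 5)}}}
print(run_test_script(game))  # pass
```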
Macro-based electronic map editing
Embodiments relate to macro-based customization of electronic maps. A computing device stores a macro representing a map feature. The macro includes a set of textures, and the set of textures includes a height map. The computing device places an instance of the macro in an electronic map. The instance of the macro is visually represented in the electronic map based on a set of textures of the instance of the macro that corresponds to the set of textures of the macro. The computing device edits a texture in the set of textures of the instance of the macro. The computing device updates the visual representation of the instance of the macro based on the edit to the texture.
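The key idea above is that a placed macro instance carries its own copy of the macro's texture set (including the height map), so editing an instance's texture updates that instance's visual representation without touching the macro. A minimal sketch, with a hypothetical "render" that reduces the height map to its peak value:

```python
class Macro:
    """A reusable map feature: a named set of textures including a height map."""
    def __init__(self, textures):
        self.textures = textures

class MacroInstance:
    """A placed copy of a macro; edits affect only this instance's textures."""
    def __init__(self, macro, position):
        self.textures = {name: list(tex) for name, tex in macro.textures.items()}
        self.position = position
    def edit_texture(self, name, index, value):
        self.textures[name][index] = value
    def render_height(self):
        # Visual representation derived from this instance's current height map.
        return max(self.textures["height"])

hill = Macro({"height": [0, 2, 1], "color": ["g", "g", "g"]})
inst = MacroInstance(hill, position=(10, 20))
inst.edit_texture("height", 0, 9)
print(inst.render_height())       # 9
print(hill.textures["height"])    # [0, 2, 1] -- the source macro is unchanged
```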