Memory system, computer system, and information protection method
A memory system connected to a host computer that generates input information includes: a storage configured to store an application program executed by the host computer, a contents database that relates various pieces of contents candidate information used by the host computer to respective pieces of a plurality of adjustment candidate identification information, and input information input from the host computer; circuitry configured to infer, by executing inference with an artificial intelligence algorithm, specific adjustment candidate identification information as adjustment identification information from the plurality of adjustment candidate identification information according to the input information, and to select specific contents candidate information as adjustment contents information from the contents database using the adjustment identification information; and an interface configured to output the adjustment contents information to the host computer.
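The selection flow in this abstract can be pictured as a two-step lookup: an inference step maps the input information to one of several adjustment candidate IDs, and that ID keys the contents database. A minimal sketch, with the AI inference replaced by a stand-in threshold rule and all names illustrative:

```python
# Illustrative contents database: adjustment candidate ID -> contents candidate info.
CONTENTS_DB = {
    "adj_low":  {"brightness": 0.3, "volume": 0.2},
    "adj_mid":  {"brightness": 0.6, "volume": 0.5},
    "adj_high": {"brightness": 0.9, "volume": 0.8},
}

def infer_adjustment_id(input_level: float) -> str:
    """Stand-in for the AI inference step: pick one candidate ID from input info."""
    if input_level < 0.33:
        return "adj_low"
    if input_level < 0.66:
        return "adj_mid"
    return "adj_high"

def select_adjustment_contents(input_level: float) -> dict:
    """Select specific contents candidate information using the inferred ID."""
    return CONTENTS_DB[infer_adjustment_id(input_level)]
```

The real system would replace `infer_adjustment_id` with the trained model; the database lookup that follows is unchanged.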
Cross-pollination of in-game events, assets and persistent communications using signs and likes across virtual environments in gaming sessions of a video game
A method for connecting game plays between different players playing a video game. The method includes determining that a first character has accomplished a mission within a region of a virtual environment of the video game, wherein the first character is controlled by a first player playing the video game in a first game play. The method includes opening access by the first character to a regional inter-game communication medium in response to accomplishing the mission. The method includes generating first data in the first game play. The method includes cross-pollinating the first data using the regional inter-game communication medium across a plurality of virtual environments of a plurality of asynchronous game plays of a plurality of players.
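One way to picture the regional inter-game communication medium (a hedged sketch, not the patented implementation) is a per-region message board: a player gains posting access by completing the region's mission, and posted data then becomes visible in other players' asynchronous game plays in the same region.

```python
from collections import defaultdict

class RegionalMedium:
    """Illustrative model of a regional inter-game communication medium."""

    def __init__(self):
        self.boards = defaultdict(list)   # region -> list of (player, data)
        self.access = set()               # (player, region) pairs with access

    def grant_access(self, player: str, region: str) -> None:
        """Called when the player's character accomplishes the region's mission."""
        self.access.add((player, region))

    def post(self, player: str, region: str, data: str) -> bool:
        """Generate first data in one game play; rejected without mission access."""
        if (player, region) not in self.access:
            return False
        self.boards[region].append((player, data))
        return True

    def cross_pollinate(self, region: str) -> list:
        """Data surfaced into every other game play set in this region."""
        return list(self.boards[region])
```

The asynchronous aspect falls out naturally: other game plays read the board whenever they load the region, not at post time.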
Audio Generation Methods and Systems
A method of generating audio assets, comprising the steps of: receiving a plurality of input audio assets, converting each input audio asset into an input graphical representation, generating an input multi-channel image by stacking each input graphical representation in a separate channel of the image, feeding the input multi-channel image into a generative model to train the generative model and generate one or more output multi-channel images, each output multi-channel image comprising an output graphical representation, extracting the output graphical representations from each output multi-channel image and converting each output graphical representation into an output audio asset.
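The stacking step above can be sketched concretely. Assuming each input audio asset has already been converted to a 2-D graphical representation (e.g. a spectrogram) of a common shape, each representation occupies one channel of the multi-channel image, and extraction is the inverse slice:

```python
import numpy as np

def stack_into_multichannel(representations: list) -> np.ndarray:
    """Stack N same-shape 2-D arrays into an (H, W, N) multi-channel image."""
    reps = [np.asarray(r, dtype=np.float32) for r in representations]
    if len({r.shape for r in reps}) != 1:
        raise ValueError("all graphical representations must share one shape")
    return np.stack(reps, axis=-1)

def unstack_channels(image: np.ndarray) -> list:
    """Extract one graphical representation per channel of an output image."""
    return [image[..., c] for c in range(image.shape[-1])]
```

The generative model in the abstract would sit between these two helpers, consuming stacked inputs and emitting images that `unstack_channels` splits back into per-asset representations.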
Audio Generation Methods and System
A method of generating audio assets, comprising the steps of: receiving an input multi-layered audio asset comprising a plurality of audio layers, generating an input multi-channel image, wherein each channel of the input multi-channel image comprises an input image representative of one of the audio layers, training a generative model on the input multi-channel image and implementing the trained generative model to generate an output multi-channel image, wherein each channel of the output multi-channel image comprises an output image representative of an output audio layer, and generating an output multi-layered audio asset based on a combination of output audio layers derived from the output images.
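The final step here, combining the output audio layers into one multi-layered asset, can be sketched as a simple mix. This assumes the image-to-audio conversion has already produced one sample array per layer; the clipping bound is an illustrative choice, not from the abstract:

```python
import numpy as np

def combine_layers(layers: list) -> np.ndarray:
    """Mix same-length audio layers into a single asset, clipped to [-1, 1]."""
    mixed = np.sum([np.asarray(l, dtype=np.float32) for l in layers], axis=0)
    return np.clip(mixed, -1.0, 1.0)
```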
Information processing apparatus and information processing system
Methods and apparatus provide for downloading application software from a server, including: downloading the application software from the server, where a first application software file contains only a portion of the application software and a second application software file contains more than that portion; executing the application software and generating application images based thereon, where the first application software file contains enough of the application software to execute a limited amount of it; and displaying the application images on a display screen based on the execution of the application software, where downloading of the second application software file begins in a background process after the first application software file has been downloaded and at least partially during its execution.
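The download/execute overlap described above can be sketched with a background thread: the small first file is fetched in the foreground, play starts immediately, and the second file downloads concurrently. The `download` and file names are stand-ins, not the patent's components:

```python
import threading
import time

def download(name: str, log: list) -> None:
    """Stand-in for a network transfer that takes noticeable time."""
    time.sleep(0.05)
    log.append(f"downloaded {name}")

def launch(log: list) -> None:
    download("first file", log)                # foreground: small partial file
    bg = threading.Thread(target=download, args=("second file", log))
    bg.start()                                 # background: rest of the software
    log.append("executing first file")         # limited play starts immediately
    bg.join()                                  # full software available afterwards
```

The key property is visible in the log order: execution of the first file begins before the second file finishes downloading.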
INFRASTRUCTURE TO INTEGRATE AN INTEGRATED DEVELOPMENT ENVIRONMENT (IDE) WITH GAME ENGINES
Techniques are described herein that are capable of integrating an IDE with game engines. States of the game engines are identified. Each state indicates whether the IDE enables a game developer to interact with the respective game engine and/or game(s) created by the respective game engine. A subset of the game engines is caused to be displayed to the game developer based at least in part on the IDE enabling the game developer to interact with each game engine in the subset and/or game(s) created by the respective game engine. A selection indicator, which indicates that a game engine is selected from the game engines in the subset, is received. An integration infrastructure, including a game engine-agnostic messaging protocol and game engine-agnostic messages, is provided. At least a portion of game code and/or test unit(s) are run and/or debugged using the IDE in a context of the selected game engine.
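One hypothetical way to picture the game engine-agnostic messages mentioned above (not the actual protocol from this patent): a flat envelope that any engine adapter can serialize and parse, so the IDE core never depends on a specific engine's API.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class EngineMessage:
    """Illustrative engine-agnostic message envelope."""
    verb: str          # e.g. "run", "debug", "state-query"
    engine: str        # engine identifier, opaque to the IDE core
    payload: dict      # verb-specific data

    def to_wire(self) -> str:
        return json.dumps(asdict(self))

    @staticmethod
    def from_wire(raw: str) -> "EngineMessage":
        return EngineMessage(**json.loads(raw))
```

Because the envelope round-trips through plain JSON, per-engine adapters only need to translate `verb`/`payload` pairs into engine-specific calls.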
Game Development Method And Apparatus, Game Running Method And Apparatus, And Electronic Device
A game development method and apparatus, a game running method and apparatus, and an electronic device. The game development method comprises: receiving a development instruction for a target function of a game (S101); obtaining, on the basis of the development instruction, a target prefabricated part corresponding to the target function from a preset database in which a plurality of prefabricated parts are pre-stored, each prefabricated part corresponding to one function setting and comprising a control configured with preset logic, an application interface, and a backend cloud function invocation (S102); and developing the target function of the game according to the target prefabricated part (S103). The method effectively reduces the development workload and lowers development labor costs and later server maintenance costs, thereby effectively improving development efficiency.
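Steps S101-S103 reduce to a database lookup once the prefabricated parts exist. A hedged sketch, with all function names, routes, and prefab fields illustrative rather than from the patent:

```python
# Preset database: one prefabricated part per function setting (illustrative).
PREFAB_DB = {
    "login":       {"control": "LoginPanel", "api": "/auth", "cloud_fn": "doLogin"},
    "leaderboard": {"control": "RankList",   "api": "/rank", "cloud_fn": "getRanks"},
}

def develop_function(target: str) -> dict:
    """S102-S103: fetch the prefab for the target function and build with it."""
    prefab = PREFAB_DB.get(target)
    if prefab is None:
        raise KeyError(f"no prefabricated part for function: {target}")
    return {"function": target, **prefab}
```

The claimed cost saving comes from this shape: each prefab bundles its UI control, API surface, and cloud function together, so adding a function never means writing those three pieces from scratch.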
AUTOMATIC PRESENTATION OF SUITABLE CONTENT
Implementations described herein relate to methods, systems, and computer-readable media to automatically present suitable content for a particular locale. In some implementations, a computer-implemented method includes receiving gaming content for a game associated with a first client locale, the received gaming content including content that is restricted at a second client locale, receiving at least one content alternative, the at least one content alternative being an alternative to replace the received gaming content, generating a first localized rating for the received gaming content and a second localized rating for the at least one content alternative, and automatically providing the received gaming content or the at least one content alternative to a user device associated with the second client locale based on the first localized rating and the second localized rating.
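The final selection step can be sketched minimally. This assumes localized ratings are already computed and models locale restrictions as a numeric rating threshold; the thresholds and field names are illustrative assumptions:

```python
# Illustrative per-locale rating ceilings (e.g. age-rating style numbers).
LOCALE_THRESHOLD = {"locale_a": 16, "locale_b": 12}

def pick_content(original: dict, alternative: dict, locale: str) -> dict:
    """Serve the original if its localized rating passes the locale; else the alternative."""
    limit = LOCALE_THRESHOLD[locale]
    if original["rating"] <= limit:
        return original
    return alternative
```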
Method and system for determining blending coefficients
A method of determining blending coefficients for respective animations includes: obtaining animation data, the animation data defining at least two different animations that are at least in part to be simultaneously applied to the animated object, each animation comprising a plurality of frames; obtaining corresponding video game data, the video game data comprising an in-game state of the object; inputting the animation data and video game data into a machine learning model, the machine learning model being trained to determine, based on the animation data and corresponding video game data, a blending coefficient for each of the animations in the animation data; determining, based on the output of the machine learning model, one or more blending coefficients for at least one of the animations, the or each blending coefficient defining a relative weighting with which each animation is to be applied to the animated object; and blending the simultaneously applied parts of the at least two animations using the or each determined blending coefficient, the contribution from each animation being in accordance with its determined blending coefficient.
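The pipeline above can be sketched with the trained model replaced by a stand-in that emits one coefficient per animation; blending is then a per-frame weighted sum, with each animation's contribution proportional to its coefficient. Everything here is an illustrative assumption except that overall shape:

```python
import math

def fake_model(game_state: float) -> list:
    """Stand-in for the ML model: softmax over two scores derived from game state."""
    scores = [game_state, 1.0 - game_state]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def blend(anim_a: list, anim_b: list, coeffs: list) -> list:
    """Blend two equal-length animations frame-by-frame by the coefficients."""
    wa, wb = coeffs
    return [wa * a + wb * b for a, b in zip(anim_a, anim_b)]
```

Here each animation is reduced to one scalar per frame; a real system would apply the same weighted sum per joint or per bone transform.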
CONTENT PLAYBACK PROGRAM AND CONTENT PLAYBACK DEVICE
Provided are a content playback program and a content playback device in which a character displayed on a display unit performs a predetermined reaction in response to a screenshot acquisition operation. When execution of a screenshot triggered by a user's operation is detected while a character is displayed on the display unit, a content playback processing unit performs an effect in which the character's facial expression changes and/or the character speaks, immediately after the screenshot is completed.
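The ordering constraint in this abstract (the reaction fires immediately *after* capture completes, so the reaction itself never appears in the saved image) can be sketched as an event handler. Names and the sample line of dialogue are hypothetical:

```python
def on_screenshot(character_visible: bool, capture) -> list:
    """React to a screenshot: capture first, then play the character's reaction."""
    events = []
    image = capture()                          # screenshot completes first
    events.append(("saved", image))
    if character_visible:                      # reaction plays only afterwards
        events.append(("expression", "surprised"))
        events.append(("speech", "Did you just take my picture?"))
    return events
```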