A63F13/424

DETECTION AND CLASSIFICATION OF AUDIO EVENTS IN GAMING SYSTEMS

A system that incorporates the subject disclosure may include, for example, a gaming system that cooperates with a graphical user interface to enable user modification and enhancement of one or more audio streams associated with the gaming system. In embodiments, the audio streams may include a game audio stream, a chat audio stream of conversation among players of a video game, and a microphone audio stream of a player of the video game. Additional embodiments are disclosed.
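The per-stream adjustment this abstract describes can be sketched as a simple mixing backend behind the graphical user interface. A minimal sketch, assuming the GUI exposes one gain slider per stream; the stream names, gain values, and function name are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: apply user-set per-stream gains to the game, chat,
# and microphone streams, then mix them into a single output signal.

def mix_streams(streams, gains):
    """Mix equal-length sample lists after applying a per-stream gain."""
    length = len(next(iter(streams.values())))
    mixed = [0.0] * length
    for name, samples in streams.items():
        g = gains.get(name, 1.0)
        for i, s in enumerate(samples):
            mixed[i] += g * s
    # Clamp to [-1.0, 1.0] to avoid clipping in the mixed output.
    return [max(-1.0, min(1.0, x)) for x in mixed]

streams = {
    "game": [0.5, -0.5, 0.25],
    "chat": [0.2, 0.2, 0.2],
    "mic":  [0.1, 0.0, -0.1],
}
gains = {"game": 0.5, "chat": 1.0, "mic": 2.0}  # user's GUI settings
print(mix_streams(streams, gains))
```

Lowering the game gain while boosting the microphone, as above, is the kind of enhancement a player might apply through such an interface.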

Method of generating a tactile signal using a haptic device

A haptic device according to one embodiment can comprise: a database unit for storing acoustic information or receiving the acoustic information from an external device; a control unit for converting the acoustic information into an electrical signal according to a predetermined pattern; a driving unit for generating a motion signal on the basis of the electrical signal; and a transfer unit for transferring a patterned tactile signal to a user by means of the motion signal.
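The control-unit step, converting acoustic information into a patterned signal, can be sketched by thresholding the amplitude envelope into drive pulses. This is an assumed design for illustration only, not the patented implementation; the frame size, threshold, and function name are made up:

```python
# Minimal sketch: turn stored acoustic samples into a patterned drive
# signal for a haptic actuator, one drive level per fixed-size frame,
# gated to zero below a loudness threshold.

def acoustic_to_pattern(samples, frame=4, threshold=0.3):
    """Return one drive level per frame: mean absolute amplitude,
    or 0.0 when the frame falls below the threshold."""
    pattern = []
    for i in range(0, len(samples), frame):
        env = sum(abs(s) for s in samples[i:i + frame]) / frame
        pattern.append(round(env, 3) if env >= threshold else 0.0)
    return pattern

audio = [0.9, 0.8, 0.7, 0.6, 0.05, 0.0, 0.1, 0.05, 0.5, 0.6, 0.4, 0.5]
print(acoustic_to_pattern(audio))  # [0.75, 0.0, 0.5]
```

The resulting pattern would feed the driving unit, which generates the motion signal the transfer unit delivers to the user.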


MULTI-DEVICE AUDIO INTERFACE

An electronic device may be configured to present a user interface via which a user can select from a plurality of commands associated with a particular video game. In response to a selection of one of the plurality of commands, the electronic device may transmit the selected one of the plurality of commands to a user interface device. The selected command may cause the user interface device to transmit a corresponding one or more simulated user inputs to a game console. The selection of the command may occur automatically in response to detection, by audio processing circuitry, of an occurrence of a particular audio clip in an audio signal output by the game console.
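The automatic selection path, detecting a known audio clip in the console's output and mapping it to a command, can be sketched with a sliding-window correlation. The detection method, threshold, clip name, and command name below are all illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: scan an audio signal for a known clip and, on a
# match, look up the command that should be sent to the user interface
# device (which would then emit the simulated user inputs).

def detect_clip(signal, clip, threshold=0.95):
    """Return the first offset where the clip matches, else None."""
    n = len(clip)
    clip_energy = sum(c * c for c in clip) or 1.0
    for i in range(len(signal) - n + 1):
        window = signal[i:i + n]
        score = sum(w * c for w, c in zip(window, clip)) / clip_energy
        if score >= threshold:
            return i
    return None

COMMANDS = {"low_health_jingle": "press_heal_button"}  # clip -> command

signal = [0.0, 0.0, 0.3, -0.4, 0.5, 0.0]   # audio output by the console
clip = [0.3, -0.4, 0.5]                    # the clip to watch for
offset = detect_clip(signal, clip)
command = COMMANDS["low_health_jingle"] if offset is not None else None
print(offset, command)  # 2 press_heal_button
```

A production detector would more likely use spectral features than raw-sample correlation; the point here is only the detect-then-dispatch structure.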

Game mediation component for enriching multiplayer gaming sessions
11465056 · 2022-10-11

Users play electronic games on their client devices, such as smartphones, laptop computers, game consoles, or the like. A game mediation component supplements the electronic games by providing additional functionality for group gameplay, such as communication among the players participating in the gameplay group, emphasizing key moments in the gameplay, saving and sharing of portions of the group gameplay, etc. Events related to gameplay, such as environmental events or in-game events, cause the game mediation component to take an action to supplement the gameplay, such as modifying an overlay user interface to provide additional gameplay data while efficiently using screen real estate, sending messages to other players in the group gameplay, and saving and sharing portions of the gameplay session.
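The event-to-action behavior described above can be sketched as a small dispatcher. The event names, actions, and class name are illustrative assumptions, not drawn from the patent:

```python
# Hypothetical sketch: a mediation component that reacts to gameplay
# events by taking supplemental actions (saving clips, updating the
# overlay, messaging the group).

class GameMediator:
    def __init__(self):
        self.actions = []  # record of supplemental actions taken

    def on_event(self, event):
        if event == "key_moment":
            self.actions.append("save_clip")        # save/share a portion
            self.actions.append("highlight_overlay")  # emphasize the moment
        elif event == "player_joined":
            self.actions.append("send_group_message")
        else:
            self.actions.append("update_overlay")   # refresh gameplay data

mediator = GameMediator()
for e in ["player_joined", "key_moment", "score_changed"]:
    mediator.on_event(e)
print(mediator.actions)
```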

Speaker conversion for video games

This specification describes a computer-implemented method of generating speech audio for use in a video game, wherein the speech audio is generated using a voice convertor that has been trained to convert audio data for a source speaker into audio data for a target speaker. The method comprises receiving: (i) source speech audio, and (ii) a target speaker identifier. The source speech audio comprises speech content in the voice of a source speaker. Source acoustic features are determined for the source speech audio. A target speaker embedding associated with the target speaker identifier is generated as output of a speaker encoder of the voice convertor. The target speaker embedding and the source acoustic features are inputted into an acoustic feature encoder of the voice convertor. One or more acoustic feature encodings are generated as output of the acoustic feature encoder. The one or more acoustic feature encodings are derived from the target speaker embedding and the source acoustic features. Target speech audio is generated for the target speaker. The target speech audio comprises the speech content in the voice of the target speaker. The generating comprises decoding the one or more acoustic feature encodings using an acoustic feature decoder of the voice convertor.
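The pipeline this abstract walks through can be sketched structurally. The functions below are toy placeholders: in the described method the speaker encoder and the acoustic feature encoder/decoder are trained neural components, and every name, identifier, and value here is an illustrative assumption:

```python
# Structural sketch of the voice-conversion flow: target speaker id ->
# speaker embedding; source audio -> acoustic features; embedding +
# features -> encodings; encodings -> target speech audio.

def speaker_encoder(speaker_id):
    """Toy stand-in: a fixed embedding per target speaker identifier."""
    return {"npc_guard": [0.1, 0.9], "npc_mage": [0.8, 0.2]}[speaker_id]

def acoustic_features(audio):
    """Toy feature extractor: group samples into two-sample frames."""
    return [audio[i:i + 2] for i in range(0, len(audio), 2)]

def feature_encoder(features, embedding):
    """Combine source acoustic features with the speaker embedding."""
    return [[f + e for f, e in zip(frame, embedding)] for frame in features]

def feature_decoder(encodings):
    """Toy decoder: flatten encodings back into a waveform-like list."""
    return [x for frame in encodings for x in frame]

source_audio = [0.0, 0.5, 1.0, 0.5]        # speech content, source voice
embedding = speaker_encoder("npc_guard")    # target speaker embedding
encodings = feature_encoder(acoustic_features(source_audio), embedding)
target_audio = feature_decoder(encodings)   # same content, target voice
print(target_audio)  # [0.1, 1.4, 1.1, 1.4]
```

The sketch keeps the claimed data flow (speaker identifier and source features enter the encoder; the decoder produces the target speech) while replacing each learned model with a trivial function.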

METHOD OF GENERATING VIRTUAL CHARACTER, ELECTRONIC DEVICE, AND STORAGE MEDIUM

A method of generating a virtual character, an electronic device, and a storage medium. A specific implementation solution includes: determining, in response to a first speech command for adjusting an initial virtual character, a target adjustment object corresponding to the first speech command; determining a plurality of character materials related to the target adjustment object; determining a target character material from the plurality of character materials in response to a second speech command for determining the target character material; and adjusting the initial virtual character by using the target character material, so as to generate a target virtual character.
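The two-command flow in this solution can be sketched as a small state machine: the first speech command selects the adjustment target and yields candidate materials, and the second selects one material to apply. Object names, materials, and the keyword matching below are illustrative assumptions:

```python
# Hypothetical sketch of the described two-step adjustment: first command
# -> target adjustment object + candidate character materials; second
# command -> target character material, applied to the initial character.

MATERIALS = {"hairstyle": ["short", "long", "curly"],
             "outfit": ["armor", "robe"]}

def handle_first_command(command):
    """Map a speech command to an adjustment target and its materials."""
    target = "hairstyle" if "hair" in command else "outfit"
    return target, MATERIALS[target]

def handle_second_command(command, candidates):
    """Pick the candidate material mentioned in the second command."""
    return next(m for m in candidates if m in command)

character = {"hairstyle": "default", "outfit": "default"}
target, candidates = handle_first_command("change the hair")
material = handle_second_command("use the curly one", candidates)
character[target] = material  # adjust the initial virtual character
print(character)  # {'hairstyle': 'curly', 'outfit': 'default'}
```

A real implementation would use speech recognition and intent matching rather than substring checks; the sketch shows only the select-then-apply structure.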

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

A terminal device (10), corresponding to an example of an information processing apparatus, includes an acquisition unit (13d) that acquires a feature value related to a display element targeted by a voice command uttered by a user, and a call determination unit (13e) (corresponding to an example of a "determination unit") that determines a call (name) for the display element on the basis of the feature value acquired by the acquisition unit (13d), such that the display element is uniquely distinguished from display elements other than that display element.
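One way such call determination could work is to name each element by its most salient feature and append further feature values only where calls collide. This is an assumed illustration, not the patented method; the feature lists and function name are made up:

```python
# Hypothetical sketch: derive a spoken "call" per display element from its
# feature values, expanding colliding calls with extra features until each
# call uniquely identifies its element among those displayed.

def determine_calls(elements):
    """elements: list of feature-value lists, most salient feature first."""
    calls = [feats[0] for feats in elements]
    for depth in range(1, max(map(len, elements))):
        if len(set(calls)) == len(calls):
            break  # every call is already unique
        duplicated = {c for c in calls if calls.count(c) > 1}
        calls = [" ".join(f[:depth + 1]) if calls[i] in duplicated else calls[i]
                 for i, f in enumerate(elements)]
    return calls

elements = [["button", "red"], ["button", "blue"], ["slider", "volume"]]
print(determine_calls(elements))  # ['button red', 'button blue', 'slider']
```

Here the two buttons collide on their first feature, so their calls gain the color feature, while the slider keeps its short call.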