Patent classifications
H04S2420/01
AUDIO PROVIDING APPARATUS AND AUDIO PROVIDING METHOD
An audio providing apparatus and method are provided. The audio providing apparatus includes: an object renderer configured to render an object audio signal based on geometric information regarding the object audio signal; a channel renderer configured to render an audio signal having a first channel number into an audio signal having a second channel number; and a mixer configured to mix the rendered object audio signal with the audio signal having the second channel number.
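The object-renderer / channel-renderer / mixer pipeline above can be sketched roughly as follows. This is an illustrative sketch only: the function names, the matrix downmix, and the constant-power panning law are assumptions, not taken from the patent.

```python
import numpy as np

def render_channels(bed, downmix):
    """Render a signal having a first channel number into one having a
    second channel number via a downmix matrix (assumed approach)."""
    # bed: (in_channels, samples); downmix: (out_channels, in_channels)
    return downmix @ bed

def render_object(mono, azimuth_deg):
    """Render a mono object signal to stereo from its geometric information
    (here just an azimuth), using constant-power panning."""
    # Map azimuth in [-90, 90] degrees (0 = center) to a pan angle in [0, 90].
    theta = np.deg2rad((azimuth_deg + 90.0) / 2.0)
    gains = np.array([np.cos(theta), np.sin(theta)])  # [left, right]
    return gains[:, None] * mono[None, :]

def mix(channel_signal, object_signal):
    """Mix the rendered object signal with the channel-rendered signal."""
    return channel_signal + object_signal
```

For example, a 5-channel bed downmixed to stereo and mixed with a hard-left object yields a 2-channel output of the same length.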
Audio Volume Handling
Apparatus is configured to associate each of one or more spatially-distributed audio sources in a virtual space with a respective fade-in profile, each audio source providing one or more audio signals representing audio for playback through a user device, the fade-in profile defining how the audio volume for the audio source is gradually increased from a minimum level to a target volume level as a function of time. The apparatus is also configured to identify, based on user position, a current field-of-view within the virtual space and, in response to detecting that one or more new audio sources have a predetermined relationship with the current field-of-view, to fade in the audio from each new audio source according to its respective fade-in profile, so that its volume increases gradually towards the target volume level defined by that profile.
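A minimal sketch of the two ingredients described above, a fade-in profile evaluated as a function of time and a field-of-view test, might look like this. The linear profile and the azimuth-only field-of-view check are assumptions for illustration; the patent does not specify either.

```python
def fade_in_gain(t, duration, target):
    """Linear fade-in profile: gain rises from 0 to `target` over
    `duration` seconds (assumed linear shape)."""
    if duration <= 0 or t >= duration:
        return target
    if t <= 0:
        return 0.0
    return target * (t / duration)

def in_field_of_view(source_azimuth, view_azimuth, fov_width):
    """True when the source direction (degrees) falls inside the current
    field of view, with wraparound handled modulo 360."""
    delta = (source_azimuth - view_azimuth + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_width / 2.0
```

A renderer would start the fade clock for a source when `in_field_of_view` first becomes true, then scale that source's signal by `fade_in_gain` each frame.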
ACOUSTIC SIGNAL PROCESSING APPARATUS, ACOUSTIC SIGNAL PROCESSING METHOD, AND PROGRAM
The present technology relates to an acoustic signal processing apparatus, an acoustic signal processing method, and a program for expanding the range of listening positions in which the effect of a transaural reproduction system can be obtained. First and second output signals for localizing a sound image in front of or behind, and on the left of, a first position located on the left of a listening position are output from first and second speakers, respectively. Third and fourth output signals for localizing a sound image in front of or behind, and on the right of, a second position located on the right of the listening position are output from third and fourth speakers, respectively. The first speaker is disposed in a first direction, in front of or behind the listening position, and on the left of the listening position. The second speaker is disposed in the first direction and on the right of the listening position. The third speaker is disposed in the first direction, on the left of the listening position, and on the right of the first speaker. The fourth speaker is disposed in the first direction and on the right of the second speaker. The present technology can be applied, for example, to an acoustic processing system.
HEARING ASSISTANCE SYSTEM
There is provided a hearing assistance system, comprising an audio streaming device, a first hearing device for stimulating a first ear of a user, and a second hearing device for stimulating a second ear of the user, the audio streaming device comprising an audio input interface for receiving an input stereo audio signal, a unit for analyzing the input stereo audio signal in order to determine at least one azimuthal localization cue by comparing the two channels of the stereo signal, a unit for processing the input stereo audio signal in order to produce an output stereo audio signal, and a unit for supplying one channel of the output stereo audio signal to the first hearing device and for supplying the other channel of the output stereo audio signal to the second hearing device.
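The azimuthal localization cue extracted by comparing the two channels could, for instance, be an interaural level difference. The sketch below computes such a cue from RMS power; this specific formulation is an assumption, since the abstract does not say which cue is used.

```python
import numpy as np

def ild_db(left, right, eps=1e-12):
    """Interaural level difference cue in dB, obtained by comparing the two
    channels of a stereo signal; positive means the right channel is louder."""
    p_left = np.mean(np.square(left)) + eps    # mean power, left channel
    p_right = np.mean(np.square(right)) + eps  # mean power, right channel
    return 10.0 * np.log10(p_right / p_left)
```

A right channel at twice the amplitude of the left gives a cue of about +6 dB, which the streaming device could use when producing the output stereo signal for the two hearing devices.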
Headset sound leakage mitigation
An audio system for a headset includes a plurality of speakers and an audio controller. The plurality of speakers may be in a dipole configuration that cancels sound leakage into a local area of the headset. The controller filters audio content presented by the plurality of speakers to further mitigate leakage of audio content into the local area. The audio controller determines sound filters based on environmental conditions, such as ambient noise levels, as well as on the audio content being presented.
METHOD FOR PROCESSING SOUND ON BASIS OF IMAGE INFORMATION, AND CORRESPONDING DEVICE
A method of processing an audio signal including at least one audio object based on image information includes: obtaining the audio signal and a current image that corresponds to the audio signal; dividing the current image into at least one block; obtaining motion information of the at least one block; generating index information including information for giving a three-dimensional (3D) effect in at least one direction to the at least one audio object, based on the motion information of the at least one block; and processing the audio object, in order to give the 3D effect in the at least one direction to the audio object, based on the index information.
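The step of generating index information from per-block motion might be sketched as below. The mapping from a mean motion vector to a coarse direction index is a made-up illustration; the patent defines neither the index values nor the thresholds.

```python
import numpy as np

def motion_index(block_motions, threshold=0.5):
    """Map per-block motion vectors (dx, dy) to a coarse direction index used
    to steer a 3D audio effect (assumed mapping: 0=none, 1=left, 2=right,
    3=up, 4=down; image coordinates, so y grows downward)."""
    dx, dy = np.mean(block_motions, axis=0)
    if abs(dx) < threshold and abs(dy) < threshold:
        return 0  # no dominant motion, no directional effect
    if abs(dx) >= abs(dy):
        return 1 if dx < 0 else 2
    return 3 if dy < 0 else 4
```

The audio processor would then apply the 3D effect in the direction named by the index to the corresponding audio object.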
Graphical user interface and parametric equalizer in gaming systems
A system that incorporates the subject disclosure may include, for example, a gaming system that cooperates with a graphical user interface to enable user modification and enhancement of one or more audio streams associated with the gaming system. In embodiments, the audio streams may include a game audio stream, a chat audio stream of conversation among players of a video game, and a microphone audio stream of a player of the video game. Additional embodiments are disclosed.
Moving an emoji to move a location of binaural sound
During an electronic communication between a first user and a second user, an electronic device of the second user displays a graphical representation at a location selected by the first user. The graphical representation provides an indication to the second user of where binaural sound associated with the graphical representation will externally localize to the second user. Subsequent movement of the graphical representation changes the location where the binaural sound externally localizes to the second user.
Binaural Sound in Visual Entertainment Media
A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.
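Convolving sound with the listener's HRTFs, as described above, amounts in the time domain to filtering the mono source with a left-ear and a right-ear head related impulse response (HRIR). A minimal sketch, with illustrative function names:

```python
import numpy as np

def binaural_render(mono, hrir_left, hrir_right):
    """Convolve a mono source with per-ear head related impulse responses
    to produce the binaural (left, right) pair delivered to the listener."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])
```

With the HRIR pair measured for the character's on-screen location, the convolved sound played over the wearable device localizes toward that location.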
NON-TRANSITORY COMPUTER-READABLE MEDIUM HAVING COMPUTER-READABLE INSTRUCTIONS AND SYSTEM
A sound controlling system including a user terminal having a sound source, a wireless communication device, a digital-to-analog converter (DAC), and first processing electronics. The first processing electronics are configured to: provide data of a backing sound to the sound source; control the sound source to generate a sound signal based on the data; receive a first input instruction including a first instruction to transmit the sound signal and a second instruction to play back the backing sound; provide the sound signal to the wireless communication device when the first input instruction is the first instruction, and provide the sound signal to the DAC when the first input instruction is the second instruction; control the wireless communication device to convert the sound signal to a wireless signal and transmit the wireless signal; and convert the sound signal from a digital signal to an analog signal for playback of the backing sound.