Audio Volume Handling

Apparatus is configured to associate each of one or more spatially-distributed audio sources in a virtual space, each audio source providing one or more audio signals representing audio for playback through a user device, with a respective fade-in profile which defines how audio volume for the audio source is gradually increased from a minimum level to a target volume level as a function of time. The apparatus is also configured to identify, based on user position, a current field-of-view within the virtual space and, in response to detecting that one or more new audio sources have a predetermined relationship with the current field-of-view, to fade in the audio from each new audio source according to its respective fade-in profile, so that its volume increases gradually towards the target volume level defined by that profile.
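A fade-in profile of this kind can be sketched as a function of elapsed time. The function name, the two-second duration, and the curve shapes below are illustrative assumptions, not taken from the patent:

```python
import math

def fade_in_gain(elapsed_s, duration_s, target_gain, curve="linear"):
    """Hypothetical fade-in profile: gain ramps from 0 to target_gain
    over duration_s seconds; curve selects the ramp shape."""
    t = min(max(elapsed_s / duration_s, 0.0), 1.0)  # normalised time in [0, 1]
    if curve == "linear":
        shape = t
    elif curve == "cosine":  # smooth ease-in/ease-out
        shape = 0.5 - 0.5 * math.cos(math.pi * t)
    else:
        raise ValueError(f"unknown curve: {curve}")
    return target_gain * shape

# A source entering the field of view with a 2 s linear profile reaches
# half its target level after 1 s, and stays clamped at target thereafter.
```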

METHOD FOR PROCESSING SOUND ON BASIS OF IMAGE INFORMATION, AND CORRESPONDING DEVICE

A method of processing an audio signal including at least one audio object based on image information includes: obtaining the audio signal and a current image that corresponds to the audio signal; dividing the current image into at least one block; obtaining motion information of the at least one block; generating index information including information for giving a three-dimensional (3D) effect in at least one direction to the at least one audio object, based on the motion information of the at least one block; and processing the audio object, in order to give the 3D effect in the at least one direction to the audio object, based on the index information.
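The step from per-block motion information to directional index information could look like the following aggregation. The interface and the normalisation rule are assumptions for illustration; the patent does not specify them:

```python
def direction_index(block_motions):
    """Hypothetical aggregation: given (dx, dy) motion vectors for each
    image block, produce an index in [-1, 1] indicating whether the
    dominant horizontal motion is leftward (-1) or rightward (+1),
    which a renderer could map to a left/right 3D audio effect."""
    if not block_motions:
        return 0.0
    mean_dx = sum(dx for dx, _ in block_motions) / len(block_motions)
    max_mag = max(abs(dx) for dx, _ in block_motions) or 1.0  # avoid /0
    return max(-1.0, min(1.0, mean_dx / max_mag))
```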

Spatial audio navigation

Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
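One core piece of such a system is placing the navigation cue at the bearing of the next path waypoint relative to the user's facing direction. The sketch below assumes a flat 2-D map with north along +y; the function name and coordinate convention are illustrative, not from the patent:

```python
import math

def cue_azimuth(user_pos, user_heading_deg, waypoint):
    """Hypothetical cue placement: azimuth of the next waypoint relative
    to the user's heading, in degrees in (-180, 180]. A binaural
    renderer would place the navigation sound at this angle."""
    dx = waypoint[0] - user_pos[0]
    dy = waypoint[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # 0 deg = north (+y)
    # Wrap the relative angle into (-180, 180].
    return (bearing - user_heading_deg + 180.0) % 360.0 - 180.0
```

For example, a user facing north with the next waypoint to the north-east hears the cue 45 degrees to the right.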

Manipulation of Playback Device Response Using Signal Processing
20180014137 · 2018-01-11 ·

An example playback device receives left and right channels of audio content and generates a center channel of the audio content by combining at least a portion of the left and right channels. The playback device generates first and second side channels of the audio content by combining the center channel and a difference of the left channel and the right channel, and by combining the center channel and an inverse of the difference of the left channel and the right channel, respectively. The playback device plays back the center channel of the audio content according to a first radiation pattern having a maximum aligned with a first direction, the first side channel according to a second radiation pattern having a maximum aligned with a second direction, and the second side channel according to a third radiation pattern having a maximum aligned with a third direction.
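The channel derivation described above can be sketched per sample. The 0.5 scaling on the center channel is a common convention assumed here; the patent text leaves the mixing weights open:

```python
def derive_channels(left, right):
    """Sketch of the described matrixing: center combines left and
    right; side channels combine the center with the L-R difference
    and its inverse. Scaling factors are assumptions."""
    center = [0.5 * (l + r) for l, r in zip(left, right)]
    diff = [l - r for l, r in zip(left, right)]
    side1 = [c + d for c, d in zip(center, diff)]   # center + (L - R)
    side2 = [c - d for c, d in zip(center, diff)]   # center - (L - R)
    return center, side1, side2
```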

Reproduction device, reproduction system and reproduction method

A reproduction device includes: an acquisition unit configured to acquire sound source information about a sound source; a determination unit configured to determine an output characteristic of the sound source with a virtual speaker arranged in a virtual space, based on a positional relationship between the virtual speaker and a virtual listener arranged in the virtual space; and a reproduction unit configured to reproduce the sound source with a real speaker arranged in a real space, based on the output characteristic determined by the determination unit.
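One plausible output characteristic derived from the virtual positional relationship is inverse-distance attenuation. The gain law and reference distance below are illustrative assumptions; the patent does not commit to a specific characteristic:

```python
import math

def output_gain(virtual_speaker_pos, virtual_listener_pos, ref_dist=1.0):
    """Hypothetical distance-based output characteristic: inverse-distance
    attenuation of the virtual speaker relative to the virtual listener,
    clamped to unity inside the reference distance."""
    d = math.dist(virtual_speaker_pos, virtual_listener_pos)
    return ref_dist / max(d, ref_dist)
```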

METHODS AND SYSTEMS FOR GENERATING AND RENDERING OBJECT BASED AUDIO WITH CONDITIONAL RENDERING METADATA

Methods and audio processing units for generating an object based audio program including conditional rendering metadata corresponding to at least one object channel of the program, where the conditional rendering metadata is indicative of at least one rendering constraint, based on playback speaker array configuration, which applies to each corresponding object channel, and methods for rendering audio content determined by such a program, including by rendering content of at least one audio channel of the program in a manner compliant with each applicable rendering constraint in response to at least some of the conditional rendering metadata. Rendering of a selected mix of content of the program may provide an immersive experience.
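A conditional rendering constraint keyed on the playback speaker configuration might be applied as below. The metadata fields (`config`, `max_gain`) and the max-gain constraint type are invented for illustration; the actual metadata format is not specified in the abstract:

```python
def applicable_gain(metadata, speaker_config):
    """Hypothetical conditional-rendering check: each metadata entry
    names a speaker-array configuration and a constraint (here, a
    maximum gain); entries matching the playback configuration apply."""
    gain = 1.0
    for entry in metadata:
        if entry["config"] == speaker_config:
            gain = min(gain, entry["max_gain"])
    return gain

md = [{"config": "5.1", "max_gain": 0.7},
      {"config": "stereo", "max_gain": 0.9}]
```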

Binaural Sound in Visual Entertainment Media
20230239649 · 2023-07-27 ·

A method provides binaural sound to a listener while the listener watches a movie so sounds from the movie localize to a location of a character in the movie. Sound is convolved with head related transfer functions (HRTFs) of the listener, and the convolved sound is provided to the listener who wears a wearable electronic device.
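The convolution step itself is standard: the mono source is convolved with an HRTF impulse response per ear. The direct-form loop below is a toy stand-in for a real binaural renderer, which would select HRTFs by source position and run one convolution per ear:

```python
def convolve(signal, hrtf_ir):
    """Direct-form convolution of a mono signal with one ear's HRTF
    impulse response; output length is len(signal) + len(hrtf_ir) - 1."""
    out = [0.0] * (len(signal) + len(hrtf_ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(hrtf_ir):
            out[i + j] += s * h
    return out
```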

DIGITAL AUDIO COMMUNICATION AND CONTROL IN A LIVE PERFORMANCE VENUE
20230007417 · 2023-01-05 ·

In embodiments of the present invention, improved capabilities are described for a stage box that converts analog audio, received from analog media pickup devices in a live performance venue, to digital audio and transmits it to a base unit over off-the-shelf twisted-pair cable, while pre-amplification control signals and power are sent over the same cable to the stage box. Audio for the performance venue is remotely managed from a virtual audio engineering mixing board that wirelessly communicates audio control commands to the stage box from a handheld computing device.

Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Smoothing

Apparatus for processing an audio scene representing a sound field, the audio scene having information on a transport signal and a first set of parameters. The apparatus has a parameter processor for processing the first set of parameters to obtain a second set of parameters, wherein the parameter processor is configured to calculate at least one raw parameter for each output time frame using at least one parameter of the first set of parameters for the input time frame, to calculate smoothing information, such as a smoothing factor, for each raw parameter in accordance with a smoothing rule, and to apply the corresponding smoothing information to the corresponding raw parameter to derive the parameter of the second set of parameters for the output time frame. The apparatus further has an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
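One common smoothing rule fitting this description is first-order recursive smoothing, where the smoothing factor blends each raw parameter with its value from the previous frame. The specific rule and the value of `alpha` are assumptions; the abstract leaves the smoothing rule open:

```python
def smooth_parameters(raw_params, prev_params, alpha=0.3):
    """Hypothetical smoothing rule: first-order recursive smoothing,
    where alpha is the per-frame smoothing factor applied to each raw
    parameter (alpha=1 means no smoothing)."""
    return [alpha * raw + (1.0 - alpha) * prev
            for raw, prev in zip(raw_params, prev_params)]
```

Repeated over successive frames, this suppresses frame-to-frame jumps in the decoded parameters while converging on the raw values.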

SERVICE FOR TARGETED CROWD SOURCED AUDIO FOR VIRTUAL INTERACTION

An audio generation system is provided to enable coordinated control of multiple IoT devices for audio collection and distribution of one or more audio sources according to location and user preference. The audio generation system enables location-sensitive acoustic control of sound, both as a shaped envelope for a particular source and as an individualized experience. The audio generation system also facilitates an interactive visual system for visualization and manipulation of the audio environment, including via the use of augmented reality and/or virtual reality to depict soundscapes. The audio generation system can also help achieve and improve a desired audio environment (a sound influence zone) and provides an intuitive way to understand where sounds will be heard.