H04S7/305

Spatial Audio Reproduction by Positioning at Least Part of a Sound Field
20230143857 · 2023-05-11 ·

An apparatus for positioning at least part of a sound field based on a target direction, the apparatus comprising means configured to: obtain at least one audio signal; obtain speaker setup information; obtain, for at least two processing paths, at least one processing path parameter, the at least one processing path parameter comprising a target direction associated with each of the at least two processing paths; process, for each of the at least two processing paths, the at least one audio signal based on the at least one processing path parameter to generate a multiple-channel audio signal, wherein for each processing path the means is configured to: generate at least two at least partly mutually incoherent audio signals from the at least one audio signal; determine at least two panning gains based on the target direction associated with the processing path and the speaker setup information; apply each of the at least two panning gains to an associated one of the at least partly mutually incoherent audio signals to generate at least two panning gain applied at least partly mutually incoherent audio signals; and combine the at least two panning gain applied at least partly mutually incoherent audio signals to generate the multiple-channel audio signal; and combine the multiple-channel audio signal from each processing path to generate a combined panning gain applied multiple-channel audio signal.
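A minimal sketch of one processing path for a two-speaker setup. The delay-based decorrelator and the sine/cosine panning law are hypothetical stand-ins for whatever the claimed means actually uses; the speaker aperture and delay length are illustrative assumptions.

```python
import numpy as np

def decorrelate_pair(x, delay=17):
    # Two partly mutually incoherent copies: the original and a delayed
    # version (a crude decorrelator; real systems use all-pass networks).
    y = np.zeros_like(x)
    y[delay:] = x[:-delay]
    return x, y

def stereo_gains(target_deg, aperture_deg=30.0):
    # Energy-preserving sine/cosine panning law for a +/-aperture speaker pair.
    p = (target_deg + aperture_deg) / (2 * aperture_deg)  # 0..1 across the pair
    return np.cos(p * np.pi / 2), np.sin(p * np.pi / 2)

def render_path(x, target_deg):
    # One processing path: decorrelate, apply one panning gain per
    # incoherent signal, and stack into a multiple-channel signal.
    s0, s1 = decorrelate_pair(x)
    g_left, g_right = stereo_gains(target_deg)
    return np.stack([g_left * s0, g_right * s1])  # shape (2, N)

# Two processing paths with different target directions, combined at the end.
x = np.random.default_rng(0).standard_normal(1000)
out = render_path(x, -20.0) + render_path(x, 20.0)
```

Because each channel carries a different decorrelated signal, the source is rendered with spatial extent around the target direction rather than as a point.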

AUTOMATIC LEVEL-DEPENDENT PITCH CORRECTION OF DIGITAL AUDIO
20230143062 · 2023-05-11 ·

In various applications, the system provides a method for processing audio signals, including: receiving, by a processor, a digital audio signal from a recorded audio file; analyzing, by the processor, the digital audio signal to identify pitch distortion caused by changes in momentary sound level; determining, by the processor, an amount of compensation of the audio signal to correct the identified pitch distortion; dynamically adjusting, by the processor, the digital audio signal by the compensation amount to correct the identified pitch distortion; and outputting, by the processor, the digital audio signal to an audio transducer device of a listener to improve a listening experience for the listener of the recorded audio file.
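The analysis and compensation steps can be sketched as follows. The mapping from level deviation to a correction in cents (and its reference level) is a hypothetical placeholder; the abstract does not specify the compensation law, and the actual resampling step is omitted.

```python
import numpy as np

def momentary_level_db(x, frame=1024):
    # Momentary RMS level per frame, in dB relative to full scale.
    n = len(x) // frame
    frames = x[:n * frame].reshape(n, frame)
    rms = np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12
    return 20 * np.log10(rms)

def pitch_compensation_cents(levels_db, ref_db=-20.0, cents_per_db=0.3):
    # Hypothetical mapping: perceived pitch drifts with momentary level,
    # so apply an opposing correction proportional to the deviation from
    # an assumed reference level.
    return -cents_per_db * (levels_db - ref_db)

# A quiet 440 Hz tone: levels sit below the reference, so the schedule
# calls for a small upward correction in each frame.
x = 0.1 * np.sin(2 * np.pi * 440 * np.arange(8192) / 48000)
levels = momentary_level_db(x)
cents = pitch_compensation_cents(levels)
```

A real implementation would feed `cents` to a time-varying pitch shifter before outputting to the transducer.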

Sharing locations where binaural sound externally localizes
11641561 · 2023-05-02 ·

A method processes binaural sound to externally localize to a first user at a first location. This location is shared such that an electronic device processes the binaural sound to externally localize to a second user at a second location. The first and second locations occur at a same or similar location such that the first and second users hear the binaural sound as originating from the same or similar location.

Method and Apparatus for Audio Transition Between Acoustic Environments

An apparatus for enabling audio transition between at least two acoustic environments, the apparatus including circuitry configured to: obtain information of at least a first acoustic environment associated with an audio scene, wherein the audio scene includes the first acoustic environment and a second acoustic environment; obtain a first distance threshold that at least partially defines an audio transition region that enables adaptive rendering between the first and second acoustic environments depending on a listening position within the audio scene; determine the listening position to adjust an environment characteristic of at least one of the first and second acoustic environments; and adjust the environment characteristic of at least one of the first and second acoustic environments depending on the listening position, wherein the environment characteristic is adaptively controlled within the audio scene.
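The adaptive rendering inside the transition region can be sketched as a position-dependent blend. The linear fade law and the choice of RT60 as the blended environment characteristic are assumptions; the claim fixes neither.

```python
import numpy as np

def transition_weight(listening_pos, region_center, distance_threshold):
    # Weight of the second environment: 1.0 at the transition region's
    # reference point, fading linearly to 0.0 at the distance threshold
    # (a linear law is an assumption; the claim does not fix the curve).
    d = np.linalg.norm(np.asarray(listening_pos) - np.asarray(region_center))
    return float(np.clip(1.0 - d / distance_threshold, 0.0, 1.0))

def blended_characteristic(value_env1, value_env2, w):
    # Adaptive control of an environment characteristic (e.g. RT60 in
    # seconds) between the first and second acoustic environments.
    return (1.0 - w) * value_env1 + w * value_env2

# Halfway into the transition region: characteristics are mixed 50/50.
w = transition_weight([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], 2.0)
rt60 = blended_characteristic(0.4, 1.2, w)
```

Updating `w` as the listening position moves yields a smooth audio transition instead of an abrupt switch at the environment boundary.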

APPARATUS AND METHOD FOR GENERATING A DIFFUSE REVERBERATION SIGNAL

An audio apparatus for generating a diffuse reverberation signal comprises a receiver (501) receiving audio signals representing sound sources and metadata comprising a diffuse reverberation signal to total signal relationship indicative of the level of diffuse reverberation sound relative to the total emitted sound in the environment. The metadata also comprises, for each audio signal, a signal level indication and directivity data indicative of the directivity of sound radiation from the sound source represented by that audio signal. A circuit (505, 507) determines a total emitted energy indication based on the signal level indication and the directivity data, and a downmix coefficient based on the total emitted energy and the diffuse reverberation signal to total signal relationship. A downmixer (509) generates a downmix signal by combining signal components generated by applying the downmix coefficient for each audio signal to that audio signal. A reverberator (407) generates the diffuse reverberation signal for the environment from the downmix signal.
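A sketch of the coefficient computation, assuming a simple hypothetical model in which the signal level describes on-axis radiation and the directivity is summarized as a single directivity factor (the abstract leaves both representations open):

```python
import numpy as np

def total_emitted_energy(signal_level, directivity_factor):
    # Hypothetical model: dividing the on-axis energy by the directivity
    # factor estimates the total energy emitted over all directions.
    return signal_level ** 2 / directivity_factor

def downmix_coefficient(signal_level, directivity_factor, dsr):
    # dsr: diffuse reverberation energy relative to total emitted energy.
    # The coefficient scales each source so the downmix carries the energy
    # that should feed the diffuse reverberator.
    e_total = total_emitted_energy(signal_level, directivity_factor)
    return np.sqrt(dsr * e_total) / max(signal_level, 1e-12)

# Downmix two sources: an omnidirectional one and a more directive one.
x1 = np.ones(4)
x2 = 2.0 * np.ones(4)
c1 = downmix_coefficient(1.0, 1.0, 0.25)   # omni source
c2 = downmix_coefficient(2.0, 2.0, 0.25)   # directivity factor 2
downmix = c1 * x1 + c2 * x2
```

The more directive source contributes relatively less to the downmix, matching the intuition that focused radiation excites less diffuse reverberation per unit of on-axis level.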

NETWORKED AUDIO AURALIZATION AND FEEDBACK CANCELLATION SYSTEM AND METHOD

The present embodiments generally relate to enabling participants in an online gathering with networked audio to use a cancelling auralizer at their respective locations to create a common acoustic space or set of acoustic spaces shared among subgroups of participants. For example, there are a set of network connected nodes, and the nodes can contain speakers and microphones, as well as participants and node mixing-processing blocks. The node mixing-processing blocks generate and manipulate signals for playback over the node loudspeakers and for distribution to and from the network. This processing can include cancellation of loudspeaker signals from the microphone signals and auralization of signals according to control parameters that are developed locally and from the network. A network block can contain network routing and processing functions, including auralization, synthesis, and cancellation of audio signals, synthesis and processing of control parameters, and audio signal and control parameter routing.
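The cancellation of loudspeaker signals from the microphone signals can be sketched as echo subtraction. Here the speaker-to-microphone impulse response is assumed known; a real node would estimate it adaptively (e.g. with an NLMS filter), which this sketch omits.

```python
import numpy as np

def cancel_playback(mic, playback, room_ir):
    # Remove the loudspeaker contribution from the microphone signal by
    # re-synthesizing the echo from the known playback signal and the
    # (assumed known) speaker-to-mic impulse response, then subtracting.
    echo = np.convolve(playback, room_ir)[: len(mic)]
    return mic - echo

# Simulated node: the mic picks up local speech plus the playback echo.
rng = np.random.default_rng(1)
speech = rng.standard_normal(512)
playback = rng.standard_normal(512)
ir = np.array([0.0, 0.6, 0.3])          # toy speaker-to-mic response
mic = speech + np.convolve(playback, ir)[:512]
clean = cancel_playback(mic, playback, ir)
```

With the echo removed, only the local participant's signal is distributed to the network for auralization at the other nodes.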

Method for generating filter for audio signal, and parameterization device for same

The present invention relates to a method for generating a filter for an audio signal and a parameterization device for the same, and more particularly, to a method for generating a filter for an audio signal that implements filtering of an input audio signal with low computational complexity, and a parameterization device therefor. To this end, provided is a method for generating a filter for an audio signal, including: receiving at least one set of binaural room impulse response (BRIR) filter coefficients for binaural filtering of an input audio signal; converting the BRIR filter coefficients into a plurality of subband filter coefficients; obtaining average reverberation time information of a corresponding subband by using reverberation time information extracted from the subband filter coefficients; obtaining at least one coefficient for curve fitting of the obtained average reverberation time information; obtaining flag information indicating whether the length of the BRIR filter coefficients in the time domain exceeds a predetermined value; obtaining filter order information for determining a truncation length of the subband filter coefficients, the filter order information being obtained by using the average reverberation time information or the at least one coefficient according to the obtained flag information, and the filter order information of at least one subband being different from the filter order information of another subband; and truncating the subband filter coefficients by using the obtained filter order information; and a parameterization device therefor.
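The reverberation-time-driven truncation can be sketched per subband as below. The Schroeder energy-decay curve and the 20 dB decay point used as the truncation rule are illustrative assumptions, not the claimed parameterization.

```python
import numpy as np

def reverberation_time(h, fs, drop_db=20.0):
    # Time for the filter's energy-decay curve (Schroeder backward
    # integration) to fall by drop_db below its initial value.
    edc = np.cumsum(h[::-1] ** 2)[::-1]
    edc_db = 10 * np.log10(edc / edc[0] + 1e-30)
    below = np.nonzero(edc_db < -drop_db)[0]
    return (below[0] if below.size else len(h)) / fs

def truncate_by_rt(h, fs, drop_db=20.0):
    # Filter order from the reverberation time: keep only samples up to
    # the decay point, yielding a shorter per-subband filter
    # (a hypothetical truncation rule).
    order = int(reverberation_time(h, fs, drop_db) * fs)
    return h[:max(order, 1)]

fs = 48000
t = np.arange(4800) / fs
h = np.exp(-60.0 * t)          # exponentially decaying toy subband filter
h_trunc = truncate_by_rt(h, fs)
```

Subbands with short reverberation times get short filters, which is where the low computational complexity comes from.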

Vehicle and operation method thereof
11689853 · 2023-06-27 ·

A device can include a microphone, a memory configured to store a sound source table including correspondence relationships among boarding items, virtual sound items and filter set values, and a processor configured to receive a sound signal through the microphone and determine a first filter set value matching characteristics of the received sound signal using the sound source table.
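The table lookup can be sketched as below. The table entries, the boarding items, and the spectral-centroid classifier are all invented for illustration; the abstract only specifies that a sound source table maps boarding items to virtual sound items and filter set values.

```python
# Hypothetical sound source table: boarding item -> (virtual sound item,
# filter set value). The entries are illustrative, not from the patent.
SOUND_SOURCE_TABLE = {
    "rain": ("rain_masking", {"low_shelf_db": 3.0}),
    "infant": ("lullaby", {"high_cut_hz": 8000}),
}

def classify(signal_centroid_hz):
    # Toy classifier standing in for real signal analysis: a low spectral
    # centroid is treated as rain-like noise, otherwise as an infant crying.
    return "rain" if signal_centroid_hz < 1000 else "infant"

def select_filter_set(signal_centroid_hz):
    # Match the characteristics of the received sound signal against the
    # sound source table to pick the first filter set value.
    item = classify(signal_centroid_hz)
    virtual_sound, filter_set = SOUND_SOURCE_TABLE[item]
    return item, virtual_sound, filter_set
```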

Encoding reverberator parameters from virtual or physical scene geometry and desired reverberation characteristics and rendering using these

A method and apparatus for simulating reverberation in rendering systems. A decoder/renderer may determine how to simulate reverberation with reference to audio scene description format file information or other bitstream information regarding the virtual scene geometry, and/or a lookup table. Late reverberation may be generated based on reverberator parameters derived from the virtual scene geometry, including the lengths of delay lines, attenuation filter coefficients, and diffuse-to-direct ratio characteristics, using a feedback delay network. Early reverberation may be generated based on determining what potential reflecting elements in a virtual scene geometry are reflecting planes; this may involve the simulation of ray reflection, potentially through beam tracing.
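The late-reverberation path can be sketched as a small feedback delay network. The 4x4 Hadamard feedback matrix, the broadband attenuation, and the delay-line lengths are illustrative choices; the described method derives them (including per-band attenuation filters) from the virtual scene geometry.

```python
import numpy as np

def fdn_reverb(x, delays, g=0.7):
    # Minimal 4-line feedback delay network: delay-line outputs are mixed
    # through an orthogonal (scaled Hadamard) feedback matrix and
    # attenuated by a broadband gain g < 1 for stability.
    n = len(delays)                      # assumed n == 4 for this matrix
    A = np.array([[1, 1, 1, 1], [1, -1, 1, -1],
                  [1, 1, -1, -1], [1, -1, -1, 1]]) / 2.0
    lines = [np.zeros(d) for d in delays]
    idx = [0] * n
    out = np.zeros_like(x)
    for t in range(len(x)):
        reads = np.array([lines[i][idx[i]] for i in range(n)])
        out[t] = reads.sum()
        feedback = g * (A @ reads)
        for i in range(n):
            lines[i][idx[i]] = x[t] + feedback[i]
            idx[i] = (idx[i] + 1) % delays[i]
    return out

# Impulse response of the network: a dense, decaying late-reverb tail.
impulse = np.zeros(2000)
impulse[0] = 1.0
tail = fdn_reverb(impulse, delays=[149, 211, 263, 293])
```

In the described encoder, the delay lengths and attenuation-filter coefficients would come from the scene geometry and target reverberation characteristics rather than being fixed constants.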

AUDIO RENDERING USING 6-DOF TRACKING
20170366914 · 2017-12-21 ·

The methods and apparatus described herein optimally represent full 3D audio mixes (e.g., azimuth, elevation, and depth) as “sound scenes” in which the decoding process facilitates head tracking. Sound scene rendering can be performed for the listener's orientation (e.g., yaw, pitch, roll) and 3D position (e.g., x, y, z), and can be modified for a change in the listener's orientation or 3D position. The ability to render an audio object in both the near-field and far-field enables fully rendering the depth of not just objects, but any spatial audio mix decoded with active steering/panning, such as Ambisonics, matrix encoding, etc., thereby enabling full translational head tracking (e.g., user movement) beyond simple rotation in the horizontal plane, i.e., 6-degrees-of-freedom (6-DOF) tracking and rendering.
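The rotational part of such head tracking can be sketched for a first-order Ambisonics frame. This shows only a yaw rotation in ACN channel order with the common X/Y/Z sign conventions as an assumption; full 6-DOF additionally requires translation, i.e. the near-field/far-field distance handling the abstract describes, which is beyond a short sketch.

```python
import numpy as np

def rotate_foa_yaw(b_format, yaw_rad):
    # Head-tracked yaw rotation of one first-order Ambisonics frame in
    # ACN channel order (W, Y, Z, X): W and Z are invariant under yaw,
    # while X and Y rotate together in the horizontal plane.
    w, y, z, x = b_format
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([w, s * x + c * y, z, c * x - s * y])

# A plane wave from straight ahead (+x) rotated 90 degrees ends up at +y.
front = np.array([1.0, 0.0, 0.0, 1.0])   # W, Y, Z, X
side = rotate_foa_yaw(front, np.pi / 2)
```

Because the rotation matrix is orthogonal, the frame's energy is preserved, so head rotation changes direction without changing loudness.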