H04S2420/01

METHOD AND APPARATUS FOR RENDERING OBJECT-BASED AUDIO SIGNAL CONSIDERING OBSTACLE

A method and apparatus for rendering an object-based audio signal considering an obstacle are disclosed. A method for rendering an object-based audio signal according to an example embodiment includes identifying an object-based input signal and metadata for the input signal, generating a binaural filter based on the metadata using a binaural room impulse response (BRIR), determining, based on the metadata, whether an obstacle is present between a listener and an object, modifying the generated binaural filter when the obstacle is determined to be present, and generating a rendered output signal by convolving the modified binaural filter with the input signal.
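The pipeline described above (binaural filter, optional obstacle-driven modification, then convolution) can be sketched in a few lines. The specific occlusion treatment below, a fixed attenuation plus a crude moving-average low-pass, is an illustrative placeholder, not the patent's actual filter modification:

```python
import numpy as np

def render_with_obstacle(x, brir_l, brir_r, obstacle_present,
                         occlusion_gain=0.5, lp_taps=None):
    """Convolve a mono object signal with a binaural (BRIR-derived)
    filter pair; when an obstacle occludes the direct path, first
    attenuate and low-pass the filters (gain and low-pass here are
    illustrative placeholders)."""
    if obstacle_present:
        if lp_taps is None:
            lp_taps = np.ones(8) / 8.0   # crude moving-average low-pass
        brir_l = occlusion_gain * np.convolve(brir_l, lp_taps)
        brir_r = occlusion_gain * np.convolve(brir_r, lp_taps)
    return np.convolve(x, brir_l), np.convolve(x, brir_r)
```

With the obstacle flag set, the rendered output carries less energy and a duller spectrum, approximating the muffling a listener would expect from an occluded source.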

Sound masking system

A sound masking system is disclosed in which one or more sound masking loudspeaker assemblies are coupled to one or more electronic sound masking signal generators. The loudspeaker assemblies have a low directivity index and preferably emit an acoustic sound masking signal with a spectrum specifically designed to provide superior sound masking in an open plan office. Each of the plurality of loudspeaker assemblies is oriented to provide the acoustic sound masking signal along a direct path into the predetermined area in which masking sound is needed. In addition, the sound masking system can include a remote control function by which a user selects from a plurality of stored sets of information, so that a recipient loudspeaker assembly provides an acoustic sound masking signal having a selected sound masking spectrum.
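A stored masking-spectrum preset can be thought of as a gain-versus-frequency curve applied to broadband noise. The sketch below shapes white noise in the frequency domain; the `pink_like` preset is an illustrative example, not a spectrum from the patent:

```python
import numpy as np

def masking_noise(n_samples, fs, spectrum_gain, rng=None):
    """Generate a masking-noise burst whose magnitude spectrum is
    shaped by spectrum_gain(freq_hz) -> linear gain; spectrum_gain
    stands in for one of the stored masking-spectrum presets."""
    rng = np.random.default_rng(0) if rng is None else rng
    white = rng.standard_normal(n_samples)
    spec = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spec *= spectrum_gain(freqs)            # apply the preset curve
    return np.fft.irfft(spec, n=n_samples)

# Example preset: roughly -3 dB/octave rolloff above 100 Hz (pink-like)
pink_like = lambda f: 1.0 / np.sqrt(np.maximum(f, 100.0) / 100.0)
```

Switching presets under remote control then amounts to swapping the `spectrum_gain` curve fed to the generator.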

USER-CONFIGURABLE SPATIAL AUDIO BASED CONFERENCING SYSTEM
20230008964 · 2023-01-12

A client device receives an arrangement of at least a subset of participants of a virtual meeting. The client device additionally receives an audio stream for each participant of the subset of participants of the virtual meeting. For each participant of the subset of participants, the client device determines a location based at least in part on the received arrangement, and modulates the received audio stream of the participant based on the determined location. The client device generates a combined modulated audio stream by combining the modulated audio stream of each of the participants and plays the combined modulated audio stream.
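The per-participant steps, derive a location from the arrangement, modulate each stream accordingly, then combine, can be sketched as below. The linear left-to-right layout and constant-power pan are illustrative stand-ins for whatever spatial modulation the client actually applies:

```python
import numpy as np

def mix_participants(streams, arrangement):
    """Pan each participant's mono stream to a left-right position
    derived from the received arrangement (a simple linear layout and
    constant-power pan are illustrative), then sum into one stereo
    stream for playback."""
    n = len(arrangement)
    out = None
    for i, pid in enumerate(arrangement):
        pos = 0.5 if n == 1 else i / (n - 1)   # 0 = left, 1 = right
        theta = pos * np.pi / 2.0
        gains = np.cos(theta), np.sin(theta)   # constant-power pan
        s = np.asarray(streams[pid], dtype=float)
        st = np.stack([gains[0] * s, gains[1] * s])
        out = st if out is None else out + st
    return out
```

Because the arrangement is user-configurable, reordering the list changes where each participant appears without touching the per-stream processing.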

Systems and methods for providing augmented audio

A system for providing augmented spatialized audio in a vehicle includes a plurality of speakers disposed in a perimeter of a cabin of the vehicle, and a controller configured to receive a first position signal indicative of the position of a first user's head in the vehicle and to output to a first binaural device, according to the first position signal, a first spatial audio signal, such that the first binaural device produces a first spatial acoustic signal perceived by the first user as originating from a first virtual source location within the vehicle cabin. The first spatial audio signal comprises at least an upper range of a first content signal, and the controller is further configured to drive the plurality of speakers with a driving signal such that a first bass content of the first content signal is produced in the vehicle cabin.
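The band split implied here, bass to the cabin speaker array, upper range to the binaural device, can be sketched with a complementary FIR crossover. The cutoff frequency and tap count below are illustrative, not values from the patent:

```python
import numpy as np

def split_bass(content, fs, crossover_hz=120.0, taps=129):
    """Split a content signal into a bass band (for the cabin speaker
    array) and a complementary upper band (for the binaural device)
    using a linear-phase FIR low-pass; cutoff and tap count are
    illustrative."""
    n = np.arange(taps) - (taps - 1) / 2
    fc = crossover_hz / (fs / 2.0)               # normalized cutoff
    lp = np.sinc(fc * n) * np.hamming(taps)
    lp /= lp.sum()                               # unity gain at DC
    bass = np.convolve(content, lp, mode="same")
    upper = content - bass                       # complementary band
    return bass, upper
```

Subtracting the low-passed signal from the original guarantees the two bands sum back to the content signal, so nothing is lost between the two playback paths.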

Audio communication device

An audio communication device includes: a sound position determiner that determines sound localization positions for N audio signals in a virtual space having first and second walls; N sound localizers that each perform sound localization processing to localize sound at the sound localization position determined by the sound position determiner and output localized sound signals; and an adder that sums the N localized sound signals and outputs a summed localized sound signal. Each sound localizer performs the processing using: a first head-related transfer function (HRTF) assuming that a sound wave emitted from the sound localization position determined for that sound localizer directly reaches each ear of a hearer virtually present at the hearer position; and a second HRTF assuming that the sound wave emitted from the sound localization position reaches each ear of the hearer after being reflected by the closer of the first and second walls.
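A single localizer stage (direct-path HRTF plus a delayed, attenuated wall reflection) and the adder stage can be sketched as below. The reflection gain and delay are illustrative; a real implementation would derive them from the source-wall-hearer geometry:

```python
import numpy as np

def localize_with_reflection(x, hrtf_direct, hrtf_reflected,
                             reflect_gain=0.6, reflect_delay=32):
    """Render one source with a direct-path HRTF pair plus a delayed,
    attenuated copy through the near-wall reflection HRTF pair (gain
    and delay here are illustrative). Each HRTF argument is a
    (left, right) pair of impulse responses."""
    outs = []
    for d, r in zip(hrtf_direct, hrtf_reflected):
        direct = np.convolve(x, d)
        refl = reflect_gain * np.convolve(x, r)
        y = np.zeros(max(len(direct), reflect_delay + len(refl)))
        y[:len(direct)] += direct
        y[reflect_delay:reflect_delay + len(refl)] += refl
        outs.append(y)
    return outs  # [left, right]

def sum_localized(rendered):
    """Adder stage: sum N localized (left, right) signal pairs."""
    length = max(len(ch) for pair in rendered for ch in pair)
    out = [np.zeros(length), np.zeros(length)]
    for l, r in rendered:
        out[0][:len(l)] += l
        out[1][:len(r)] += r
    return out
```

Running `localize_with_reflection` once per source and feeding the results to `sum_localized` mirrors the N-localizer-plus-adder structure of the device.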

Audio Processing Apparatus
20230213349 · 2023-07-06

An apparatus configured to: determine, with a position sensor, position information; determine at least one keyword within at least one audio signal, wherein at least the at least one keyword is configured to be spatially processed; obtain at least one spatial processing parameter based at least partially on the position information, wherein the at least one spatial processing parameter is configured to be used to spatially process at least the at least one keyword to be perceived from a direction during rendering, wherein the direction indicates a navigation direction; generate at least one processed audio signal, comprising processing at least the at least one keyword based on the at least one spatial processing parameter; and provide the at least one processed audio signal, comprising the at least one processed keyword, for generation of a virtual audio image.

Headtracking for pre-rendered binaural audio

A system and method of modifying a binaural signal using headtracking information. The system calculates a delay, a first filter response, and a second filter response, and applies these to the left and right components of the binaural signal according to the headtracking information. The system may also apply headtracking to parametric binaural signals. In this manner, headtracking may be applied to pre-rendered binaural audio.
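The core idea, steering an already-rendered left/right pair by a computed delay and filter responses, can be approximated with an interaural time difference alone. The spherical-head ITD model below is an illustrative stand-in for the patent's computed delay and filter responses:

```python
import numpy as np

def headtrack_binaural(left, right, yaw_deg, fs=48000,
                       max_itd_s=6.6e-4):
    """Re-steer a pre-rendered binaural pair for a head yaw by adding
    an interaural delay; the simple ITD model stands in for the
    computed delay and filter responses."""
    itd = int(round(max_itd_s * fs * np.sin(np.radians(yaw_deg))))
    def delayed(sig, n):
        sig = np.asarray(sig, dtype=float)
        return np.concatenate([np.zeros(n), sig]) if n > 0 else sig
    l = delayed(left, max(itd, 0))    # head turned right: delay left ear
    r = delayed(right, max(-itd, 0))
    n = max(len(l), len(r))
    return np.pad(l, (0, n - len(l))), np.pad(r, (0, n - len(r)))
```

At zero yaw the signals pass through unchanged; as the head turns, the delayed ear shifts the perceived scene opposite to the rotation, which is the effect headtracking must produce on pre-rendered material.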

ENHANCED HEADPHONE DESIGN USING DSP AND ARRAY TECHNOLOGY

A headphone arrangement includes two earphones, wherein each earphone comprises a housing encompassing a low-frequency transducer and an array of at least three high-frequency transducers. The low-frequency transducer of each earphone is disposed on or over an ear canal of a user when the earphone is worn by the user, and is configured to broadcast low-frequency sound that corresponds to low-frequency components of an input signal. The array of at least three high-frequency transducers of each earphone is configured to broadcast high-frequency sound that corresponds to high-frequency components of the input signal, and is disposed adjacent to the low-frequency transducer and in a lower rostral quadrant of a full circle around the low-frequency transducer when the earphone is worn by the user.

IN-VEHICLE INDEPENDENT SOUND ZONE CONTROL METHOD, SYSTEM AND RELATED DEVICE
20230217200 · 2023-07-06

The present disclosure provides an in-vehicle independent sound zone control method, a system, and a related device, applied to a vehicle. The method includes the following steps: presetting a control area and a non-control area; arranging a speaker array behind a front seat of the vehicle to generate a first acoustic response, and arranging a headrest speaker at a headrest of a rear seat of the vehicle to generate a second acoustic response; fitting a virtual target speaker, wherein the virtual target speaker is configured to generate a target acoustic response within the control area; and controlling a sound quality of the in-vehicle independent sound zone through audio algorithm processing of the target acoustic response, the first acoustic response, and the second acoustic response.

AUDIO FILTER EFFECTS VIA SPATIAL TRANSFORMATIONS
20230217201 · 2023-07-06

An audio system of a client device applies transformations to audio received over a computer network. The transformations (e.g., HRTFs) effect changes in apparent source positions of the received audio, or of segments thereof. Such transformations may be used to achieve “animation” of audio, in which the source positions of the audio or audio segments appear to change over time (e.g., circling around the listener). Additionally, segmentation of audio into distinct semantic audio segments, and application of separate transformations for each audio segment, can be used to intuitively differentiate the different audio segments by causing them to sound as if they emanated from different positions around the listener.
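The "animation" effect, source positions that change over time, can be sketched with a time-varying pan. A constant-power pan sweep stands in here for the per-frame HRTF transformations the abstract describes:

```python
import numpy as np

def animate_circling(x, fs, period_s=4.0):
    """'Animate' a mono stream so its apparent position sweeps left to
    right and back over period_s seconds; the time-varying
    constant-power pan is an illustrative stand-in for per-frame HRTF
    transformations."""
    t = np.arange(len(x)) / fs
    pan = 0.5 * (1.0 + np.sin(2.0 * np.pi * t / period_s))  # 0..1
    theta = pan * np.pi / 2.0
    return np.cos(theta) * x, np.sin(theta) * x              # (L, R)
```

Applying a different trajectory (or a different HRTF set) to each semantic segment of the audio would then place the segments at distinct, moving positions around the listener, as the abstract describes.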