H04S7/30

Audio conferencing using a distributed array of smartphones

Described is a method of hosting a teleconference among a plurality of client devices arranged in two or more acoustic spaces, each client device having an audio capturing capability and/or an audio rendering capability, the method comprising: grouping the plurality of client devices into two or more groups based on their belonging to respective acoustic spaces, receiving first audio streams from the plurality of client devices, generating second audio streams from the first audio streams for rendering by respective client devices among the plurality of client devices, based on the grouping of the plurality of client devices into the two or more groups, and outputting the generated second audio streams to respective client devices. Further described are corresponding computation devices, computer programs, and computer-readable storage media.
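One plausible reading of the grouping step is that each device's output mix excludes streams captured in its own acoustic space, since those are already audible in the room. A minimal sketch under that assumption (function and variable names are illustrative, not from the patent):

```python
def generate_second_streams(first_streams, groups):
    """Mix, for each device, only the streams captured in *other*
    acoustic spaces, avoiding echo of sound already heard locally.

    first_streams: dict device_id -> list of audio samples
    groups: dict device_id -> acoustic space id
    """
    out = {}
    for dev, space in groups.items():
        # gather streams from devices grouped into a different space
        others = [s for d, s in first_streams.items() if groups[d] != space]
        if not others:
            out[dev] = [0.0] * len(first_streams[dev])
            continue
        n = min(len(s) for s in others)
        # simple averaging mix of the remote streams
        out[dev] = [sum(s[i] for s in others) / len(others) for i in range(n)]
    return out
```

A real implementation would operate on frames and apply per-stream gains, but the group-based exclusion is the core idea.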

Method and device for processing audio signal, using metadata
11540075 · 2022-12-27

Disclosed is a device for processing an audio signal, which renders an audio signal. The device for processing an audio signal includes a processor. The processor receives metadata including an audio signal and first element reference distance information and renders a first element signal on the basis of the first element reference distance information, wherein the first element reference distance information indicates the reference distance of an element signal. The audio signal may include a second element signal which may be simultaneously rendered with the first element signal, and the metadata may include second element distance information indicating the distance of the second element signal. The number of bits required for representing the first element reference distance information is smaller than the number of bits required for representing the second element distance information.

CONTENT AND ENVIRONMENTALLY AWARE ENVIRONMENTAL NOISE COMPENSATION

Some implementations involve receiving, by a control system and via an interface system, a content stream that includes audio data, determining a content type corresponding to the content stream and determining, based at least in part on the content type, a noise compensation method. Some examples involve performing the noise compensation method on the audio data to produce noise-compensated audio data, rendering the noise-compensated audio data for reproduction via a set of audio reproduction transducers of the audio environment, to produce rendered audio signals, and providing the rendered audio signals to at least some audio reproduction transducers of the audio environment.
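The selection step can be pictured as a lookup from detected content type to a compensation strategy. The sketch below is purely illustrative; the content-type labels and method names are assumptions, not terms from the publication:

```python
# Hypothetical mapping from a detected content type to a noise
# compensation method; a fallback applies when the type is unknown.
NOISE_COMP_METHODS = {
    "movie": "multiband_dialog_protect",
    "music": "broadband_gain",
    "podcast": "speech_intelligibility_boost",
}

def choose_noise_compensation(content_type, default="broadband_gain"):
    """Return the noise compensation method for a content type."""
    return NOISE_COMP_METHODS.get(content_type, default)
```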

Loudness adjustment for downmixed audio content

Audio content coded for a reference speaker configuration is downmixed to downmix audio content coded for a specific speaker configuration. One or more gain adjustments are performed on individual portions of the downmix audio content coded for the specific speaker configuration. Loudness measurements are then performed on the individual portions of the downmix audio content. An audio signal that comprises the audio content coded for the reference speaker configuration and downmix loudness metadata is generated. The downmix loudness metadata is created based at least in part on the loudness measurements on the individual portions of the downmix audio content.
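As a rough illustration of the downmix-then-measure pipeline, the sketch below downmixes a 5.0 signal to stereo with conventional -3 dB center/surround gains and measures loudness with a simple RMS proxy. The coefficients and the RMS measure are assumptions for illustration; production systems typically use standardized downmix equations and ITU-R BS.1770-style loudness:

```python
import math

def downmix_5_0_to_stereo(ch):
    """ch: dict with 'L', 'R', 'C', 'Ls', 'Rs' sample lists.
    Center and surrounds are mixed in at -3 dB (1/sqrt(2))."""
    g = 1.0 / math.sqrt(2.0)
    n = len(ch["L"])
    left = [ch["L"][i] + g * ch["C"][i] + g * ch["Ls"][i] for i in range(n)]
    right = [ch["R"][i] + g * ch["C"][i] + g * ch["Rs"][i] for i in range(n)]
    return left, right

def rms_loudness_db(samples):
    """Crude RMS loudness proxy in dBFS for one portion of audio."""
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    return 20.0 * math.log10(max(rms, 1e-12))
```

Measurements like these, taken per portion of the downmix, would feed the downmix loudness metadata carried alongside the reference-configuration audio.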

SYSTEM AND METHOD FOR 3D SOUND PLACEMENT
20220400352 · 2022-12-15

A phone app is disclosed that enables a user to place 3D sound in a room. The user of this app can precisely position where sound is perceived to originate by aiming their phone. This app may be used by audio professionals in place of the controls on a traditional sound mixer.

Apparatus and method for screen related audio object remapping

An apparatus for generating loudspeaker signals includes an object metadata processor configured to receive metadata comprising a first position of an audio object and an indication of whether the audio object is screen-related, to calculate a second position of the audio object depending on the first position of the audio object and on a size of a screen if the audio object is indicated in the metadata as being screen-related, to feed the first position of the audio object as the position information into the object renderer if the audio object is indicated in the metadata as being not screen-related, and to feed the second position of the audio object as the position information into the object renderer if the audio object is indicated in the metadata as being screen-related. The apparatus further includes an object renderer configured to receive an audio object and to generate the loudspeaker signals depending on the audio object and on position information.
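The conditional remapping can be sketched as below, assuming for illustration that positions are normalized coordinates scaled by the actual screen extent; the coordinate convention is an assumption, not the patent's specific mapping:

```python
def remap_position(first_pos, screen, screen_related):
    """Return the position fed to the object renderer.

    first_pos: (azimuth, elevation) in nominal [-1, 1] coordinates
    screen: (width, height) as fractions of the nominal range
    screen_related: flag taken from the object metadata
    """
    if not screen_related:
        # non-screen-related objects pass through unchanged
        return first_pos
    az, el = first_pos
    w, h = screen
    # scale the nominal position onto the actual screen extent
    return (az * w, el * h)
```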

Playback device base
11528570 · 2022-12-13

Example techniques may involve a playback device modifying its playback configuration when the playback device is placed onto a device base. In an example implementation, the playback device may receive, via an 802.15-compatible radio of one or more network interfaces, one or more packets comprising data representing an identifier of the first device base. After receipt of the one or more packets, the playback device queries a database for state information corresponding to the identifier of the first device base and modifies a playback configuration of the playback device from a first configuration to a second configuration based on the queried state information corresponding to the identifier of the first device base.

Wearer identification based on personalized acoustic transfer functions

A wearable device includes an audio system. In one embodiment, the audio system includes a sensor array that includes a plurality of acoustic sensors. When a user wears the wearable device, the audio system determines an acoustic transfer function for the user based upon detected sounds within a local area surrounding the sensor array. Because the acoustic transfer function is based upon the size, shape, and density of the user's body (e.g., the user's head), different acoustic transfer functions will be determined for different users. The determined acoustic transfer functions are compared with stored acoustic transfer functions of known users in order to authenticate the user of the wearable device.
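The comparison step can be thought of as nearest-match scoring between the measured transfer function and each enrolled user's stored one. The sketch below uses cosine similarity with a threshold; both the similarity measure and the threshold value are assumptions chosen for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two transfer-function vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(measured_atf, enrolled_atfs, threshold=0.95):
    """Return the user id whose stored acoustic transfer function best
    matches the measured one, or None if no match clears the threshold."""
    best_user, best_sim = None, threshold
    for user, atf in enrolled_atfs.items():
        sim = cosine_similarity(measured_atf, atf)
        if sim > best_sim:
            best_user, best_sim = user, sim
    return best_user
```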

ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
20220392461 · 2022-12-08

An electronic device comprising circuitry configured to analyze the results of a stereo or multi-channel source separation to determine one or more time-varying parameters, and to create spatially dynamic audio objects based on the one or more time-varying parameters.

SPECTRAL COMPENSATION FILTERS FOR CLOSE PROXIMITY SOUND SOURCES

A method of generating a signal for driving a first linear array of sound sources. The first linear array of sound sources comprises a primary sound source and one or more secondary sound sources. The method comprises the steps of receiving an audio signal for a first channel of an audio system, deriving, from the audio signal, a first signal and a second signal, applying a low-pass filter to the second signal to generate a second drive signal for driving the one or more secondary sound sources, and applying a corresponding high-frequency shelving filter to the first signal to generate a first drive signal for driving the primary sound source. A computer program product and an audio system for generating a levelled sound field are also provided.
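A crude way to picture the complementary filter pair is a crossover split: the secondary sources get a first-order low-pass, and the primary source gets the signal with its low band shelved down. This is a simplification assumed for illustration; the patent's actual filter designs are not specified here:

```python
import math

def one_pole_lowpass(x, fc, fs):
    """First-order low-pass; this band drives the secondary sources.
    fc: cutoff frequency in Hz, fs: sample rate in Hz."""
    a = math.exp(-2.0 * math.pi * fc / fs)
    y, state = [], 0.0
    for s in x:
        state = (1.0 - a) * s + a * state
        y.append(state)
    return y

def shelved(x, lowpassed, low_gain=0.5):
    """Crude low-shelf for the primary source: attenuate the low band
    by low_gain while passing the high band unchanged."""
    return [low_gain * li + (xi - li) for xi, li in zip(x, lowpassed)]
```

For a DC (all-lows) input the shelved output settles at `low_gain` times the input, while high-frequency content passes through at unity, which is the shelving behaviour the abstract describes.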