Patent classifications
H04S2420/13
Device and method for decorrelating loudspeaker signals
A device for generating a multitude of loudspeaker signals based on a virtual source object which has a source signal and meta information determining a position or type of the virtual source object. The device has a modifier configured to time-varyingly modify the meta information. In addition, the device has a renderer configured to render the virtual source object, using the modified meta information, to form a multitude of loudspeaker signals.
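A minimal sketch of the idea, assuming a two-loudspeaker equal-power panning renderer and a small sinusoidal azimuth perturbation as the time-varying modification of the meta information (all function names and parameter values are illustrative, not taken from the patent):

```python
import math

def modify_position(azimuth, t, depth=0.05, rate=1.0):
    """The 'modifier': time-varyingly perturb the source azimuth (radians)."""
    return azimuth + depth * math.sin(2 * math.pi * rate * t)

def render(sample, azimuth):
    """The 'renderer': equal-power pan one source sample to two loudspeakers."""
    theta = max(-math.pi / 4, min(math.pi / 4, azimuth))
    left = sample * math.cos(theta + math.pi / 4)
    right = sample * math.sin(theta + math.pi / 4)
    return left, right

# Rendering the same source signal with a slowly varying position yields
# loudspeaker signals that decorrelate over time.
fs = 48000
source = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs // 10)]
out = [render(s, modify_position(0.0, n / fs)) for n, s in enumerate(source)]
```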
Audio playback device and audio playback method
An audio playback device which plays back an audio object including an audio signal and playback position information indicating a position in a three-dimensional space at which a sound image of the audio signal is localized, includes: at least one speaker array; a converting unit which converts playback position information to corrected playback position information which is information indicating a position of the sound image on a two-dimensional coordinate system based on a position of the at least one speaker array; and a signal processing unit which localizes the sound image of the audio signal included in the audio object according to the corrected playback position information.
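The conversion step can be sketched as a change of frame: express the 3-D playback position in a 2-D coordinate system anchored at the speaker array, discarding the height component (the function and its parameters are illustrative assumptions, not the patent's actual conversion):

```python
import math

def to_array_coords(pos_3d, array_pos, array_yaw):
    """Convert a 3-D playback position to 2-D coordinates whose origin and
    orientation are defined by the speaker array (elevation is dropped)."""
    dx = pos_3d[0] - array_pos[0]
    dy = pos_3d[1] - array_pos[1]
    # Rotate the horizontal offset into the array's frame; z is discarded.
    x = dx * math.cos(-array_yaw) - dy * math.sin(-array_yaw)
    y = dx * math.sin(-array_yaw) + dy * math.cos(-array_yaw)
    return (x, y)
```

A signal processing stage can then localize the sound image from the resulting (x, y) pair alone.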
SOUND SIGNAL PROCESSING DEVICE AND SOUND SIGNAL PROCESSING METHOD
A sound signal processing device includes: a vocal remover which generates a first output signal based on first-channel and second-channel sound signals and a first coefficient indicating a vocal bandwidth to be removed; a surround sound processor which generates a second output signal by adding a surround sound effect to the first output signal; an amplifier which amplifies a signal at an amplification factor that is based on a second coefficient; a synthesizer which synthesizes the second output signal with one of the first-channel and second-channel sound signals, and synthesizes an inverted version of the second output signal with the other of the first-channel and second-channel sound signals; and a coefficient determination unit which sets the second coefficient such that the amplification factor used when the vocal bandwidth to be removed is greater than a first bandwidth is greater than the amplification factor used for the first bandwidth.
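The core of such a vocal remover is the classic karaoke trick: centre-panned vocals are identical in both channels and cancel in the L-R difference. A toy sketch of the signal path, with all names and the gain values being illustrative assumptions (the real device also band-limits the removal to the vocal bandwidth, which is omitted here):

```python
def vocal_remove(left, right):
    """Centre-panned vocals cancel in the L-R difference."""
    return [l - r for l, r in zip(left, right)]

def determine_gain(bandwidth_hz, first_bandwidth_hz=4000.0):
    """Coefficient determination: a wider removed vocal bandwidth gets a
    larger compensating amplification factor (values are made up)."""
    return 1.0 if bandwidth_hz <= first_bandwidth_hz else 1.5

def synthesize(left, right, effect, gain):
    """Mix the amplified effect signal back, inverted on one channel."""
    amped = [gain * e for e in effect]
    out_l = [l + e for l, e in zip(left, amped)]
    out_r = [r - e for r, e in zip(right, amped)]
    return out_l, out_r

# A vocal present identically in both channels disappears from the difference:
vocal = [1.0, 2.0]
left = [v + 0.5 for v in vocal]    # vocal + side music
right = [v - 0.5 for v in vocal]   # vocal - side music
```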
Method, device and system
Method for approximating the synthesis of a target sound field based on contributions of a predefined number of synthesis monopoles placed at respective synthesis positions, the method comprising modelling the target sound field as at least one target monopole placed at a defined target position.
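One standard way to realise such an approximation is a least-squares fit: evaluate the target monopole's field at a set of control points, then solve for the synthesis monopole amplitudes that best reproduce it there. A small self-contained sketch for two synthesis monopoles, using the free-field Green's function (the fitting procedure and all names are assumptions for illustration, not the patent's method):

```python
import cmath
import math

def green(src, pt, k=2 * math.pi * 1000 / 343):
    """Free-field monopole Green's function exp(-jkr) / (4*pi*r)."""
    r = math.dist(src, pt)
    return cmath.exp(-1j * k * r) / (4 * math.pi * r)

def fit_two_monopoles(target_pos, synth_pos, control_pts):
    """Least-squares amplitudes for two synthesis monopoles approximating
    the field of one target monopole at the control points."""
    p = [green(target_pos, x) for x in control_pts]              # target field
    G = [[green(s, x) for s in synth_pos] for x in control_pts]  # synthesis matrix
    # Normal equations A a = b with A = G^H G, b = G^H p; 2x2 via Cramer's rule.
    M = len(control_pts)
    A = [[sum(G[m][i].conjugate() * G[m][j] for m in range(M))
          for j in range(2)] for i in range(2)]
    b = [sum(G[m][i].conjugate() * p[m] for m in range(M)) for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    a0 = (b[0] * A[1][1] - A[0][1] * b[1]) / det
    a1 = (A[0][0] * b[1] - b[0] * A[1][0]) / det
    return a0, a1
```

As a sanity check, placing the target monopole exactly at one synthesis position should yield amplitudes of 1 and 0.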
SOUND SIGNAL GENERATION CIRCUITRY AND SOUND SIGNAL GENERATION METHOD
The present disclosure generally pertains to sound signal generation circuitry, configured to: obtain a position of a virtual user and sound of the virtual user, the virtual user representing a training partner of a real user; and generate, based on the position of the virtual user and the sound of the virtual user, a control signal for at least two loudspeakers positioned in a real space, such that the at least two loudspeakers generate sound representing the virtual user at a predetermined position relative to the real user in the real space.
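Deriving the two-loudspeaker control signal from the virtual user's position can be sketched with the tangent panning law: compute the azimuth of the virtual user relative to the real listener, then distribute gain between the loudspeakers accordingly (the function, speaker geometry, and panning law are illustrative assumptions):

```python
import math

def control_gains(virtual_pos, listener_pos, spk_half_angle=math.radians(30)):
    """Stereo gains (tangent panning law) placing the virtual user's sound
    at its position relative to the real listener."""
    dx = virtual_pos[0] - listener_pos[0]
    dy = virtual_pos[1] - listener_pos[1]
    az = math.atan2(dx, dy)  # 0 = straight ahead of the listener
    az = max(-spk_half_angle, min(spk_half_angle, az))
    t = math.tan(az) / math.tan(spk_half_angle)  # -1 (left) .. +1 (right)
    gl, gr = (1 - t) / 2, (1 + t) / 2
    norm = math.hypot(gl, gr)  # keep total power constant
    return gl / norm, gr / norm
```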
Method and apparatus for compressing and decompressing a Higher Order Ambisonics representation
Higher Order Ambisonics represents three-dimensional sound independent of a specific loudspeaker set-up. However, transmission of an HOA representation results in a very high bit rate. Therefore compression with a fixed number of channels is used, in which directional and ambient signal components are processed differently. The ambient HOA component is represented by a minimum number of HOA coefficient sequences. The remaining channels contain either directional signals or additional coefficient sequences of the ambient HOA component, depending on what will result in optimum perceptual quality. This processing can change on a frame-by-frame basis.
Electronic device, method and computer program
An electronic device for a vehicle comprising circuitry configured to obtain information about a planned maneuver of the vehicle; and circuitry configured to determine the position and/or orientation of a sound field based on the information about the planned maneuver of the vehicle.
SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
The present disclosure relates to a signal processing device, a signal processing method, and a program that allow for generating an input signal suitable for a multi-way speaker. A band dividing unit divides an audio signal into signals in a plurality of bands corresponding to respective bands of a plurality of speaker units of the multi-way speaker. A filter processing unit performs wave front synthesis filter processing on each of the band-divided audio signals. The audio signal in each of the bands after the wave front synthesis filter processing is supplied to the speaker unit of the corresponding band in the multi-way speaker. The present disclosure is applicable, for example, to a signal processing device or other devices.
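The band-dividing step can be sketched with a first-order crossover whose low and high bands sum back to the input, followed by a per-band driving filter (here reduced to a delay and gain, which is only a stand-in for a real wave front synthesis filter; all names and values are illustrative):

```python
import math

def lowpass(x, fc, fs):
    """One-pole low-pass used as a simple crossover (illustrative only)."""
    a = math.exp(-2 * math.pi * fc / fs)
    y, state = [], 0.0
    for s in x:
        state = (1 - a) * s + a * state
        y.append(state)
    return y

def band_divide(x, fc, fs):
    """Split into woofer and tweeter bands; the bands sum back to the input."""
    low = lowpass(x, fc, fs)
    high = [s - l for s, l in zip(x, low)]
    return low, high

def wfs_delay(band, delay_samples, gain):
    """Toy per-unit driving filter: delay and gain for one speaker unit."""
    return [0.0] * delay_samples + [gain * s for s in band]
```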
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
Provided is an information processing apparatus having an audio signal generation unit which generates an audio signal reproduced from a loudspeaker on the basis of position information of each of a plurality of unmanned aerial vehicles, each of the unmanned aerial vehicles having the loudspeaker.
Audio processing apparatus and method therefor
An audio processing apparatus comprises a receiver (705) which receives audio data including audio components and render configuration data including audio transducer position data for a set of audio transducers (703). A renderer (707) generates audio transducer signals for the set of audio transducers from the audio data. The renderer (707) is capable of rendering audio components in accordance with a plurality of rendering modes. A render controller (709) selects the rendering modes for the renderer (707) from the plurality of rendering modes based on the audio transducer position data. The renderer (707) can employ different rendering modes for different subsets of the set of audio transducers, and the render controller (709) can independently select rendering modes for each of the different subsets of the set of audio transducers (703). The render controller (709) can select the rendering mode for a first audio transducer of the set of audio transducers (703) in response to a position of the first audio transducer relative to a predetermined position for the audio transducer. The approach may provide improved adaptation, e.g. to scenarios where most speakers are at desired positions whereas a subset deviates from the desired position(s).
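The per-transducer mode selection can be sketched as a threshold on the distance between each transducer's reported position and its nominal position in the layout (the mode names and thresholds below are invented for illustration, not taken from the patent):

```python
import math

def select_mode(actual, nominal, tol=0.1):
    """Pick a rendering mode for one transducer from its position error (m)."""
    err = math.dist(actual, nominal)
    if err <= tol:
        return "standard"        # speaker is where the layout expects it
    elif err <= 1.0:
        return "vbap"            # moderate deviation: re-pan between neighbours
    return "least_squares"       # large deviation: full array optimisation

def select_modes(actuals, nominals, tol=0.1):
    """Independently select a mode for each transducer subset."""
    return [select_mode(a, n, tol) for a, n in zip(actuals, nominals)]
```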