Patent classifications
G10K15/10
SIGNAL PROCESSING DEVICE, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing device including: a reverberation sound signal generation unit that generates a reverberation sound signal according to a sound source position of a virtual sound source and the distance from that position to a reference point; and a drive signal generation unit that generates a drive signal for a speaker array by a wavefront synthesis filter. The drive signal generation unit generates the drive signal on the basis of a signal obtained by applying wavefront synthesis filtering to the virtual sound source's signal convolved with the reverberation sound signal, and/or a signal obtained by applying wavefront synthesis filtering to the reverberation sound signal itself so as to treat the reverberation sound signal as a virtual sound source.
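The processing chain in this abstract — distance-dependent reverberation, convolution with the virtual source's signal, then wavefront synthesis filtering into per-speaker drive signals — can be sketched as follows. This is a minimal illustration, not the claimed implementation: the delay-and-sum driving function, the 1/r attenuation, and the distance-to-wet-gain mapping are all simplifying assumptions.

```python
import numpy as np

C = 343.0    # speed of sound (m/s), assumed
FS = 48000   # sample rate (Hz), assumed

def wfs_drive_signals(signal, source_pos, speaker_positions):
    """Simple delay-and-sum stand-in for a wavefront synthesis filter:
    each speaker gets the input delayed and attenuated according to its
    distance from the virtual source position."""
    taps = []
    for spk in speaker_positions:
        r = np.linalg.norm(np.asarray(spk, float) - np.asarray(source_pos, float))
        taps.append((int(round(r / C * FS)),   # propagation delay in samples
                     1.0 / max(r, 1e-3)))      # 1/r spherical attenuation
    n = len(signal) + max(d for d, _ in taps)
    out = np.zeros((len(speaker_positions), n))
    for i, (delay, gain) in enumerate(taps):
        out[i, delay:delay + len(signal)] = gain * signal
    return out

def render(dry, reverb_ir, source_pos, ref_point, speakers):
    """Convolve the virtual-source signal with a reverberation response
    whose level grows with the source's distance from the reference
    point, then generate array drive signals through the filter above."""
    dist = np.linalg.norm(np.asarray(source_pos, float) - np.asarray(ref_point, float))
    wet_gain = dist / (1.0 + dist)             # farther source -> more reverberant
    wet = wet_gain * np.convolve(dry, reverb_ir)[:len(dry)]
    return wfs_drive_signals(dry + wet, source_pos, speakers)
```

A real wavefront synthesis filter would use the proper WFS driving function (pre-equalization and tapering included); the delay-and-sum form here only conveys the structure of the claim.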
Spatial audio for interactive audio environments
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. For each sound source of a plurality of sound sources in the virtual environment, a respective first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the respective sound source in the virtual environment, and the respective first intermediate audio signal is associated with a first bus. For each of the sound sources of the plurality of sound sources in the virtual environment, a respective second intermediate audio signal is determined. The respective second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The respective second intermediate audio signal is determined based on a location of the respective sound source, and further based on an acoustic property of the virtual environment. The respective second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first bus and the second bus.
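The two-bus structure described above — a per-source direct signal on a first bus and a per-source reverberation signal, shaped by an acoustic property of the environment, on a second bus — can be sketched roughly as below. Names, the 1/r distance gain, and the use of an absorption coefficient as the "acoustic property" are illustrative assumptions, not the patented method.

```python
import numpy as np

def distance_gain(src, listener):
    """Assumed 1/r gain, clamped at unit distance."""
    return 1.0 / max(np.linalg.norm(np.asarray(src, float) - np.asarray(listener, float)), 1.0)

def render_buses(input_signal, sources, listener, reverb_ir, room_absorption):
    n = len(input_signal)
    direct_bus = np.zeros(n)   # first bus: per-source direct signals
    reverb_bus = np.zeros(n)   # second bus: per-source reverberation
    for src in sources:
        g = distance_gain(src, listener)
        # first intermediate signal: depends on the source's location
        direct_bus += g * input_signal
        # second intermediate signal: reverberation of the input, shaped
        # by an acoustic property of the virtual environment
        wet = np.convolve(input_signal, reverb_ir)[:n]
        reverb_bus += (1.0 - room_absorption) * g * wet
    # the output is presented via both buses
    return direct_bus + reverb_bus
```

Routing all sources' reverberation contributions onto one shared bus lets a single reverberator serve many sources, which is presumably the point of the bus structure.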
USER TRACKING HEADREST AUDIO CONTROL
Implementations of the subject technology provide user tracking headrest audio control. For example, a seat may have a headrest and one or more speakers mounted to the headrest for providing audio output to an occupant of the seat. Because the head of the occupant may be disposed in the near field of the headrest speakers when the occupant is seated in the seat, movements of the occupant's head and/or ears may affect the acoustic experience of the occupant. Aspects of the subject technology provide for modifications to audio output(s) from one or more speaker(s) mounted in a headrest, based on tracking of the location of the occupant's head and/or ears relative to the location(s) of the speaker(s).
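One plausible form of the head-tracked modification described above is per-speaker gain and delay compensation, so that both headrest speakers' wavefronts arrive at the tracked ear position at matched level and time. This sketch is an assumption about what such a modification could look like, not the disclosed implementation; the geometry and constants are invented for illustration.

```python
import numpy as np

C = 343.0    # speed of sound (m/s), assumed
FS = 48000   # sample rate (Hz), assumed

def align_headrest_speakers(audio, ear_pos, speaker_positions, fs=FS):
    """For each headrest speaker, compute the distance to the tracked ear,
    then delay the nearer speaker and attenuate it so that all speakers'
    contributions arrive simultaneously and at equal level at the ear."""
    dists = [np.linalg.norm(np.asarray(ear_pos, float) - np.asarray(p, float))
             for p in speaker_positions]
    far = max(dists)
    outs = []
    for d in dists:
        delay = int(round((far - d) / C * fs))  # nearer speaker waits longer
        gain = d / far                          # nearer speaker plays quieter
        outs.append(np.concatenate([np.zeros(delay), gain * np.asarray(audio, float)]))
    n = max(len(o) for o in outs)
    return [np.pad(o, (0, n - len(o))) for o in outs]
```

As the tracked ear position updates, the gains and delays would be recomputed (and in practice crossfaded to avoid clicks, a detail omitted here).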
Sound signal processing method and sound signal processing device
A sound signal processing method includes: obtaining a plurality of sound signals respectively collected by a plurality of microphones arranged in a space; adjusting respective levels of the plurality of sound signals in accordance with respective positions of the plurality of microphones; mixing the plurality of sound signals having the adjusted respective levels to thereby obtain a mixed signal; and generating a reflected sound by using the obtained mixed signal.
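The four steps of this method — obtain microphone signals, adjust levels by microphone position, mix, generate a reflected sound from the mix — can be sketched as below. The 1/distance weighting and the sparse (delay, gain) tap model for the reflected sound are assumptions; the abstract does not specify how levels are adjusted or how reflections are generated.

```python
import numpy as np

def generate_reflection(mic_signals, mic_positions, ref_pos,
                        reflection_delays, reflection_gains, fs=48000):
    """Level-adjust each microphone signal by its distance from a
    reference position, mix the adjusted signals, then derive a
    reflected sound from the mix using sparse delay/gain taps that
    stand in for early reflections."""
    n = len(mic_signals[0])
    mix = np.zeros(n)
    for sig, pos in zip(mic_signals, mic_positions):
        d = np.linalg.norm(np.asarray(pos, float) - np.asarray(ref_pos, float))
        mix += np.asarray(sig, float) / max(d, 1.0)  # closer mics weigh more
    reflected = np.zeros(n)
    for delay_s, g in zip(reflection_delays, reflection_gains):
        k = int(round(delay_s * fs))                 # delay in samples
        if k < n:
            reflected[k:] += g * mix[:n - k]
    return mix, reflected
```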
SPATIAL AUDIO FOR INTERACTIVE AUDIO ENVIRONMENTS
Systems and methods of presenting an output audio signal to a listener located at a first location in a virtual environment are disclosed. According to embodiments of a method, an input audio signal is received. A first intermediate audio signal corresponding to the input audio signal is determined, based on a location of the sound source in the virtual environment, and the first intermediate audio signal is associated with a first bus. A second intermediate audio signal is determined. The second intermediate audio signal corresponds to a reverberation of the input audio signal in the virtual environment. The second intermediate audio signal is determined based on a location of the sound source, and further based on an acoustic property of the virtual environment. The second intermediate audio signal is associated with a second bus. The output audio signal is presented to the listener via the first and second buses.
Method and Apparatus for Audio Transition Between Acoustic Environments
An apparatus for enabling audio transition between at least two acoustic environments, the apparatus including circuitry configured to: obtain information of at least a first acoustic environment associated with an audio scene, wherein the audio scene includes the first acoustic environment and a second acoustic environment; obtain a first distance threshold that at least partially defines an audio transition region that enables adaptive rendering between the first and second acoustic environments depending on a listening position within the audio scene; determine the listening position to adjust an environment characteristic of at least one of the first and second acoustic environments; and adjust the environment characteristic of at least one of the first and second acoustic environments depending on the listening position, wherein the environment characteristic is adaptively controlled within the audio scene.
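The adaptive adjustment described above — an environment characteristic controlled by the listening position within a transition region bounded by a distance threshold — can be sketched as a simple position-dependent blend. The linear crossfade and the `width` parameter defining the far edge of the transition region are illustrative assumptions; the abstract only specifies that a first distance threshold partially defines the region.

```python
def blended_characteristic(value_a, value_b, listener_dist, threshold, width):
    """Adaptive rendering across an audio transition region: within the
    first acoustic environment (dist <= threshold) use its characteristic
    (e.g. a reverberation time), beyond threshold + width use the second
    environment's, and crossfade linearly in between."""
    if listener_dist <= threshold:
        return value_a
    if listener_dist >= threshold + width:
        return value_b
    t = (listener_dist - threshold) / width  # 0 at near edge, 1 at far edge
    return (1.0 - t) * value_a + t * value_b
```

Evaluating this per frame as the listening position moves yields a characteristic that is "adaptively controlled within the audio scene", e.g. a reverberation time sliding from one room's value to the other's as the listener walks through a doorway.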