Methods and systems for designing and applying numerically optimized binaural room impulse responses
11576004 · 2023-02-07
Assignee
Inventors
- Grant A. Davidson (Burlingame, CA)
- Kuan-Chieh Yen (Foster City, CA)
- Dirk Jeroen Breebaart (Ultimo, AU)
CPC classification
- H04S 2420/07
- H04S 2420/01
- H04S 2400/03
International classification
Abstract
Methods and systems for designing binaural room impulse responses (BRIRs) for use in headphone virtualizers, and methods and systems for generating a binaural signal in response to a set of channels of a multi-channel audio signal, including by applying a BRIR to each channel of the set, thereby generating filtered signals, and combining the filtered signals to generate the binaural signal, where each BRIR has been designed in accordance with an embodiment of the design method. Other aspects are audio processing units configured to perform any embodiment of the inventive method. In accordance with some embodiments, BRIR design is formulated as a numerical optimization problem based on a simulation model (which generates candidate BRIRs) and at least one objective function (which evaluates each candidate BRIR), and includes identification of a best one of the candidate BRIRs as indicated by performance metrics determined for the candidate BRIRs by each objective function.
Claims
1. A method for generating an output binaural signal in response to a set of N audio input signals, the method comprising: receiving the N audio input signals, wherein each of the N audio input signals corresponds to a spatial location, and wherein one or more of the N audio input signals is a channel audio signal associated with a static spatial location; determining N direct response and early reflection binaural room impulse response, BRIR, portions, wherein each direct response and early reflection BRIR portion corresponds to the spatial location of one of the audio input signals; determining a late response BRIR portion, wherein a subset of the late response BRIR portion temporally overlaps with subsets of the direct response and early reflection BRIR portions, and wherein the temporally overlapping subset of the late response BRIR portion models the transition from the direct response and early reflection BRIR portions to the late response BRIR portion; generating, for each audio input signal, a binaural signal, by processing the audio input signal to apply the corresponding direct response and early reflection BRIR portion; generating a first binaural signal by combining the binaural signals for each audio input signal; generating a second binaural signal by processing a downmix of the N audio input signals to apply the late response BRIR portion; generating the output binaural signal by combining the first binaural signal and the second binaural signal.
2. The method of claim 1, wherein one or more of the N audio input signals is an object audio signal associated with a time-varying spatial location.
3. A non-transitory computer readable storage medium comprising a sequence of instructions, wherein, when an audio signal processing device executes the sequence of instructions, the audio signal processing device performs the method of claim 1.
4. An audio signal processing device for generating an output binaural signal in response to a set of N audio input signals, wherein the audio signal processing device comprises one or more processing components configured to: receive the N audio input signals, wherein each of the N audio input signals corresponds to a spatial location, and wherein one or more of the N audio input signals is a channel audio signal associated with a static spatial location; determine N direct response and early reflection binaural room impulse response, BRIR, portions, wherein each direct response and early reflection BRIR portion corresponds to the spatial location of one of the audio input signals; determine a late response BRIR portion, wherein a subset of the late response BRIR portion temporally overlaps with subsets of the direct response and early reflection BRIR portions, and wherein the temporally overlapping subset of the late response BRIR portion models the transition from the direct response and early reflection BRIR portions to the late response BRIR portion; generate, for each audio input signal, a binaural signal, by processing the audio input signal to apply the corresponding direct response and early reflection BRIR portion; generate a first binaural signal by combining the binaural signals for each audio input signal; generate a second binaural signal by processing a downmix of the N audio input signals to apply the late response BRIR portion; generate the output binaural signal by combining the first binaural signal and the second binaural signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
NOTATION AND NOMENCLATURE
(9) Throughout this disclosure, including in the claims, the expression performing an operation “on” a signal or data (e.g., filtering, scaling, transforming, or applying gain to, the signal or data) is used in a broad sense to denote performing the operation directly on the signal or data, or on a processed version of the signal or data (e.g., on a version of the signal that has undergone preliminary filtering or pre-processing prior to performance of the operation thereon).
(10) Throughout this disclosure including in the claims, the expression “system” is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem that implements a virtualizer may be referred to as a virtualizer system, and a system including such a subsystem (e.g., a system that generates X output signals in response to multiple inputs, in which the subsystem generates M of the inputs and the other X−M inputs are received from an external source) may also be referred to as a virtualizer system (or virtualizer).
(11) Throughout this disclosure including in the claims, the term “processor” is used in a broad sense to denote a system or device programmable or otherwise configurable (e.g., with software or firmware) to perform operations on data (e.g., audio, or video or other image data). Examples of processors include a field-programmable gate array (or other configurable integrated circuit or chip set), a digital signal processor programmed and/or otherwise configured to perform pipelined processing on audio or other sound data, a programmable general purpose processor or computer, and a programmable microprocessor chip or chip set.
(12) Throughout this disclosure including in the claims, the expression “analysis filterbank” is used in a broad sense to denote a system (e.g., a subsystem) configured to apply a transform (e.g., a time domain-to-frequency domain transform) on a time-domain signal to generate values (e.g., frequency components) indicative of content of the time-domain signal, in each of a set of frequency bands. Throughout this disclosure including in the claims, the expression “filterbank domain” is used in a broad sense to denote the domain of the frequency components generated by an analysis filterbank (e.g., the domain in which such frequency components are processed). Examples of filterbank domains include (but are not limited to) the frequency domain, the quadrature mirror filter (QMF) domain, and the hybrid complex quadrature mirror filter (HCQMF) domain. Examples of the transform which may be applied by an analysis filterbank include (but are not limited to) a discrete-cosine transform (DCT), modified discrete cosine transform (MDCT), discrete Fourier transform (DFT), and a wavelet transform. Examples of analysis filterbanks include (but are not limited to) quadrature mirror filters (QMF), finite-impulse response filters (FIR filters), infinite-impulse response filters (IIR filters), cross-over filters, and filters having other suitable multi-rate structures.
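As a toy illustration of the analysis-filterbank concept, the sketch below applies a plain DFT to a frame and pools bin energies into equal-width bands. This is only an illustration of the definition above; actual implementations would use QMF, HCQMF, or perceptually spaced bands, and all names here are hypothetical.

```python
import cmath
import math

def band_energies(x, n_bands):
    """Toy analysis filterbank: DFT the frame, then pool bin energies
    into n_bands equal-width bands (a stand-in for QMF/HCQMF or
    perceptually spaced banding)."""
    N = len(x)
    # Real-input DFT: only bins 0..N/2 are needed.
    X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
         for k in range(N // 2 + 1)]
    bins = len(X)
    bands = [0.0] * n_bands
    for k, Xk in enumerate(X):
        b = min(n_bands - 1, k * n_bands // bins)  # bin -> band index
        bands[b] += abs(Xk) ** 2                   # pooled band energy
    return bands
```

For a DC input, all energy lands in the lowest band, as expected of any analysis filterbank.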
(13) Throughout this disclosure including in the claims, the term “metadata” refers to separate and different data from corresponding audio data (audio content of a bitstream which also includes metadata). Metadata is associated with audio data, and indicates at least one feature or characteristic of the audio data (e.g., what type(s) of processing have already been performed, or should be performed, on the audio data, or the trajectory of an object indicated by the audio data). The association of the metadata with the audio data is time-synchronous. Thus, present (most recently received or updated) metadata may indicate that the corresponding audio data contemporaneously has an indicated feature and/or comprises the results of an indicated type of audio data processing.
(14) Throughout this disclosure including in the claims, the term “couples” or “coupled” is used to mean either a direct or indirect connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.
(15) Throughout this disclosure including in the claims, the following expressions have the following definitions:
(16) speaker and loudspeaker are used synonymously to denote any sound-emitting transducer. This definition includes loudspeakers implemented as multiple transducers (e.g., woofer and tweeter);
(17) speaker feed: an audio signal to be applied directly to a loudspeaker, or an audio signal that is to be applied to an amplifier and loudspeaker in series;
(18) channel (or “audio channel”): a monophonic audio signal. Such a signal can typically be rendered in such a way as to be equivalent to application of the signal directly to a loudspeaker at a desired or nominal position. The desired position can be static, as is typically the case with physical loudspeakers, or dynamic;
(19) audio program: a set of one or more audio channels (at least one speaker channel and/or at least one object channel) and optionally also associated metadata (e.g., metadata that describes a desired spatial audio presentation);
(20) speaker channel (or “speaker-feed channel”): an audio channel that is associated with a named loudspeaker (at a desired or nominal position), or with a named speaker zone within a defined speaker configuration. A speaker channel is rendered in such a way as to be equivalent to application of the audio signal directly to the named loudspeaker (at the desired or nominal position) or to a speaker in the named speaker zone;
(21) object channel: an audio channel indicative of sound emitted by an audio source (sometimes referred to as an audio “object”). Typically, an object channel determines a parametric audio source description (e.g., metadata indicative of the parametric audio source description is included in or provided with the object channel). The source description may determine sound emitted by the source (as a function of time), the apparent position (e.g., 3D spatial coordinates) of the source as a function of time, and optionally at least one additional parameter (e.g., apparent source size or width) characterizing the source;
(22) object based audio program: an audio program comprising a set of one or more object channels (and optionally also comprising at least one speaker channel) and optionally also associated metadata (e.g., metadata indicative of a trajectory of an audio object which emits sound indicated by an object channel, or metadata otherwise indicative of a desired spatial audio presentation of sound indicated by an object channel, or metadata indicative of an identification of at least one audio object which is a source of sound indicated by an object channel); and
(23) render: the process of converting an audio program into one or more speaker feeds, or the process of converting an audio program into one or more speaker feeds and converting the speaker feed(s) to sound using one or more loudspeakers (in the latter case, the rendering is sometimes referred to herein as rendering “by” the loudspeaker(s)). An audio channel can be trivially rendered (“at” a desired position) by applying the signal directly to a physical loudspeaker at the desired position, or one or more audio channels can be rendered using one of a variety of virtualization techniques designed to be substantially equivalent (for the listener) to such trivial rendering. In this latter case, each audio channel may be converted to one or more speaker feeds to be applied to loudspeaker(s) in known locations, which are in general different from the desired position, such that sound emitted by the loudspeaker(s) in response to the feed(s) will be perceived as emitting from the desired position. Examples of such virtualization techniques include binaural rendering via headphones (e.g., using Dolby Headphone processing which simulates up to 7.1 channels of surround sound for the headphone wearer) and wave field synthesis.
(24) The notation that a multi-channel audio signal is an “x.y” or “x.y.z” channel signal herein denotes that the signal has “x” full frequency speaker channels (corresponding to speakers nominally positioned in the horizontal plane of the assumed listener's ears), “y” LFE (or subwoofer) channels, and optionally also “z” full frequency overhead speaker channels (corresponding to speakers positioned above the assumed listener's head, e.g., at or near a room's ceiling).
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(25) Many embodiments of the present invention are technologically possible. It will be apparent to those of ordinary skill in the art from the present disclosure how to implement them. Embodiments of the inventive system, method, and medium will be described with reference to the figures.
(26) As noted above, a class of embodiments of the invention comprises audio processing units (APUs) configured to perform any embodiment of the inventive method. In another class of embodiments, the invention is an APU including a memory (e.g., a buffer memory) which stores (e.g., in a non-transitory manner) data indicative of a BRIR determined in accordance with any embodiment of the inventive method.
(27) System 20 of above-described
(28) Other exemplary embodiments of the inventive system are audio processing unit (APU) 30 and the system comprising delivery subsystem 40 and APU 10, described below.
(29) Delivery subsystem 40 is configured to store the signal (or to store BRIR data indicated by the signal) and/or to transmit the signal to APU 10. APU 10 is coupled and configured (e.g., programmed) to receive the signal (or BRIR data indicated by the signal) from subsystem 40 (e.g., by reading or retrieving the BRIR data from storage in subsystem 40, or receiving the signal that has been transmitted by subsystem 40). Buffer 19 of APU 10 stores (e.g., in a non-transitory manner) the BRIR data. BRIR subsystems 12, . . . , and 14, and addition elements 16 and 18 of APU 10 are a headphone virtualizer configured to apply a binaural room impulse response (one of the BRIRs determined by the BRIR data delivered by subsystem 40) to each full frequency range channel (X.sub.1, . . . , X.sub.N) of a multi-channel audio input signal.
(30) To configure the headphone virtualizer, the BRIR data are asserted from buffer 19 to memory 13 of subsystem 12, and to memory 15 of subsystem 14 (and to a memory of each other BRIR subsystem coupled in parallel with subsystems 12 and 14 to filter one of audio input signal channels X.sub.1, . . . , and X.sub.N). Each of BRIR subsystems 12, . . . , and 14 is configured to apply any selected one of a set of BRIRs indicated by BRIR data stored therein, and thus storage of the BRIR data (which has been delivered to buffer 19) in each BRIR subsystem (12, . . . , or 14) configures the BRIR subsystem to apply a selected one of the BRIRs indicated by the BRIR data (a BRIR corresponding to a source direction and distance for audio content of channel X.sub.1, . . . , or X.sub.N) to one of the channels X.sub.1, . . . , and X.sub.N, of the multi-channel audio input signal.
(31) Each of channels X.sub.1, . . . , X.sub.N, (which may be speaker channels or object channels) corresponds to a specific source direction and distance relative to an assumed listener (i.e., the direction of a direct path from, and the distance between, an assumed position of a corresponding speaker to the assumed listener position), and the headphone virtualizer is configured to convolve each such channel with a BRIR for the corresponding source direction and distance. Thus, subsystem 12 is configured to convolve channel X.sub.1 with BRIR.sub.1 (one of the BRIRs, determined by the BRIR data delivered by subsystem 40 and stored in memory 13, which corresponds to the source direction and distance of channel X.sub.1), subsystem 14 is configured to convolve channel X.sub.N with BRIR.sub.N (one of the BRIRs, determined by the BRIR data delivered by subsystem 40 and stored in memory 15, which corresponds to the source direction and distance of channel X.sub.N), and so on for each other input channel. The output of each BRIR subsystem (each of subsystems 12, . . . , 14) is a time-domain binaural signal including a left channel and a right channel (e.g., the output of subsystem 12 is a binaural signal including a left channel, L.sub.1, and a right channel, R.sub.1).
(32) The left channel outputs of the BRIR subsystems are mixed in addition element 16, and the right channel outputs of the BRIR subsystems are mixed in addition element 18. The output of element 16 is the left channel, L, of the binaural audio signal output from the virtualizer, and the output of element 18 is the right channel, R, of the binaural audio signal output from the virtualizer.
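The per-channel convolution and left/right mixing described above can be sketched as follows. This is a minimal time-domain illustration; practical virtualizers typically use fast or filterbank-domain convolution, and all names are illustrative.

```python
def convolve(x, h):
    """Direct-form FIR convolution: len(x)+len(h)-1 output samples."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def virtualize(channels, brirs):
    """channels: N mono sample lists; brirs: N (h_left, h_right) BRIR pairs.
    Convolve each channel with its BRIR pair, then mix into L and R."""
    n_out = max(len(x) + max(len(hl), len(hr)) - 1
                for x, (hl, hr) in zip(channels, brirs))
    L = [0.0] * n_out
    R = [0.0] * n_out
    for x, (hl, hr) in zip(channels, brirs):
        for i, v in enumerate(convolve(x, hl)):  # left-ear filtered signal
            L[i] += v
        for i, v in enumerate(convolve(x, hr)):  # right-ear filtered signal
            R[i] += v
    return L, R
```

With trivial one-tap BRIRs the structure is easy to check: the left mix is the sum of the input channels and the right mix is a scaled copy.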
(33) APU 10 may be a decoder which is coupled to receive an encoded audio program, and which includes a subsystem (not shown) configured to decode the program.
(34) We next describe embodiments of the inventive method for BRIR design and/or generation. In a class of such embodiments, BRIR design is formulated as a numerical optimization problem based on a simulation model (which generates candidate BRIRs, preferably in accordance with perceptual cues and acoustic constraints) and at least one objective function (which evaluates each of the candidate BRIRs, preferably in accordance with perceptual criteria), and includes a step of identifying a best (e.g., optimal) one of the candidate BRIRs (as indicated by performance metrics determined for the candidate BRIRs by each objective function). Typically, each BRIR designed in accordance with the method (i.e., each candidate BRIR determined to be an optimal or “best” one of a number of candidate BRIRs) is useful for virtualization of speaker channels and/or object channels of multi-channel audio signals. Typically, the method includes a step of generating at least one signal indicative of each designed BRIR (e.g., a signal indicative of data indicative of each designed BRIR), and optionally also a step of delivering at least one said signal to a headphone virtualizer (or configuring a headphone virtualizer to apply at least one designed BRIR). In typical embodiments, the numerical optimization problem is solved by applying any one of a number of methods that are well-known in the art (for example, random search (Monte Carlo), Simplex, or Simulated Annealing) to evaluate the candidate BRIRs in accordance with each objective function, and to identify a best (e.g., optimal) one of the candidate BRIRs as a BRIR which has been designed in accordance with the invention.
In one exemplary embodiment, one objective function determines a performance metric (for each candidate BRIR) indicative of perceptual-domain frequency response, another determines a performance metric (for each candidate BRIR) indicative of temporal response, and another determines a performance metric (for each candidate BRIR) indicative of dialog clarity, and all three objective functions are employed to evaluate each candidate BRIR.
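The optimization loop can be sketched using the simplest of the solvers named above, random search (Monte Carlo). Here `generate_candidate` stands in for the simulation model and `objectives` for the set of objective functions (lower scores assumed better); both are hypothetical placeholders for illustration only.

```python
import random

def optimize_brir(generate_candidate, objectives, n_trials=200, seed=0):
    """Random-search (Monte Carlo) BRIR design: draw candidates from the
    simulation model, score each candidate with every objective function,
    and keep the candidate with the best (lowest) combined metric."""
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(n_trials):
        cand = generate_candidate(rng)            # simulation model draw
        score = sum(f(cand) for f in objectives)  # combined performance metric
        if score < best_score:
            best, best_score = cand, score
    return best, best_score
```

For example, with a single objective measuring distance to a scalar target, a few hundred trials reliably land close to the optimum; Simplex or Simulated Annealing would slot into the same loop structure.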
(35) In a class of embodiments, the invention is a method for designing a BRIR (e.g., BRIR.sub.1 or BRIR.sub.N, described above), said method including the steps of:
(36) (a) generating candidate BRIRs in accordance with a simulation model (e.g., the model implemented by subsystem 101, described below);
(37) (b) generating performance metrics (e.g., those generated in subsystem 107, described below) for the candidate BRIRs in accordance with at least one objective function; and
(38) (c) identifying (e.g., in subsystem 107 or 108, described below) a best one of the candidate BRIRs, as indicated by the performance metrics.
(39) Typically, step (a) includes a step of generating the candidate BRIRs in accordance with predetermined perceptual cues such that each of the candidate BRIRs, when convolved with the input audio channel, generates a binaural signal indicative of sound which provides said perceptual cues. Examples of such cues include (but are not limited to) interaural time difference and interaural level difference (e.g., as implemented by subsystems 102 and 113, described below).
(40) In typical embodiments, the simulation model is a stochastic room/head model (e.g., as implemented in BRIR generator 31).
(41) The stochastic model typically uses a combination of deterministic and random (stochastic) elements. Deterministic elements, such as the essential perceptual cues, serve as constraints on the optimization process. Random elements, such as room reflection waveform shape for the early and late responses, generate random variables that appear in the formulation of the BRIR optimization problem itself.
(42) The degree of similarity between each candidate and an ideal BRIR response (“target” or “target BRIR”) is numerically evaluated (e.g., in BRIR generator 31) in accordance with at least one objective function.
(44) Stochastic room model subsystem 101 is configured to generate the candidate BRIRs in accordance with the simulation model.
(45) Target BRIR subsystem 105 is or includes a memory which stores the target BRIR, which has been predetermined and provided to subsystem 105 by the system operator. Transform stage 106 is coupled and configured to transform the target BRIR from the time domain to the perceptual domain. Each perceptual-domain target BRIR output from stage 106 is a sequence of values (e.g., frequency components) indicative of content of a time-domain target BRIR, in each of a set of perceptually determined frequency bands.
(46) Subsystem 107 is configured to implement at least one objective function which determines a perceptual-domain metric of BRIR performance (e.g., suitability) of each of the candidate BRIRs. Subsystem 107 numerically evaluates a degree of similarity between each candidate BRIR and the target BRIR in accordance with each said objective function. Specifically, subsystem 107 applies each objective function (to each candidate BRIR and the target BRIR) to determine a metric of performance for each candidate BRIR.
(47) Subsystem 108 is configured to select, as the optimal BRIR, one of the candidate BRIRs which has a best metric of performance (e.g., a best overall performance metric, of the type mentioned above) as indicated by the output of subsystem 107. For example, the optimal BRIR can be selected to be one of the candidate BRIRs having a largest degree of similarity to the target BRIR (as indicated by the output of subsystem 107). In the ideal case, the objective function(s) represent all aspects of virtualizer subjective performance, including but not limited to: spectral naturalness (timbre relative to the stereo downmix); dialog clarity; and sound source localization, externalization, and width. A standardized method that could serve as an objective function for evaluating dialog clarity is Perceptual Evaluation of Speech Quality (PESQ) (cf. ITU-T Recommendation P.862.2, “Wideband extension to Recommendation P.862 for the assessment of wideband telephone networks and speech codecs”, November 2007).
(48) As a result of simulations, the inventors have found that a gain-optimized log-spectral distortion measure, D (defined below), is a useful perceptual-domain metric. This metric provides (for each candidate BRIR and target BRIR pair) a measure of spectral naturalness of audio signals rendered by the candidate BRIR. Smaller values of D correspond to BRIRs that produce lower timbral distortion and more natural quality of rendered audio signals. This metric, D, is determined from the following objective function (which subsystem 107 implements):
(49) D = sqrt[ (1/(2B)) Σ.sub.n=1.sup.2 Σ.sub.k=1.sup.B w.sub.n (log(C.sub.nk) + g.sub.log − log(T.sub.nk)).sup.2 ]
where D=average log-spectral distortion,
C.sub.nk=Perceptual energy for channel n, frequency band k of the candidate BRIR,
T.sub.nk=Perceptual energy for channel n, frequency band k of the target BRIR,
g.sub.log=log gain offset that minimizes D,
w.sub.n=channel weighting factor for channel n, and
B=the number of perceptual bands.
(50) In some embodiments of the inventive method which generate a performance metric at least substantially equal to the above metric, D, for each candidate BRIR, the method includes a step of comparing a perceptually banded, frequency domain representation of each of the candidate BRIRs with a perceptually banded, frequency domain representation of the target BRIR corresponding to the source direction for said each of the candidate BRIRs. Each such perceptually banded, frequency domain representation (of a candidate BRIR or a corresponding target BRIR) comprises a left channel having B frequency bands and a right channel having B frequency bands. The index, n, in the above expression for the metric, D, is an index indicative of channel, whose value n=1 indicates the left channel, and whose value n=2 indicates the right channel.
(51) A useful attribute of the above-defined metric D is that it is sensitive to spectral combing distortion at low frequencies, a common source of unnatural audio quality in virtualizers. The metric D is also insensitive to broadband gain offsets between the candidate and target BRIRs due to the above term g.sub.log, which is defined as follows in a typical embodiment of the inventive method:
(52) g.sub.log = [ Σ.sub.n=1.sup.2 Σ.sub.k=1.sup.B w.sub.n (log(T.sub.nk) − log(C.sub.nk)) ] / [ Σ.sub.n=1.sup.2 Σ.sub.k=1.sup.B w.sub.n ]
(53) In such an embodiment, the term g.sub.log is computed separately (by subsystem 107) for each candidate BRIR in a manner that minimizes the resulting mean-square distortion D for the candidate BRIR.
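A minimal sketch of the gain-optimized distortion computation follows. It assumes natural logarithms, two channels (indexed 0 for left and 1 for right in this 0-based sketch), and that g.sub.log is the weighted mean log-energy difference, which is the closed-form least-squares minimizer of the mean-square distortion; the patent's exact normalization may differ.

```python
import math

def log_spectral_distortion(C, T, w=(1.0, 1.0)):
    """Gain-optimized log-spectral distortion D between a candidate BRIR's
    perceptual band energies C[n][k] and the target's T[n][k]
    (n = 0 left, 1 right; k = 0..B-1; w = per-channel weights).
    g_log is the weighted mean log-energy difference, so D is blind to
    broadband gain offsets between candidate and target."""
    B = len(C[0])
    num = sum(w[n] * (math.log(T[n][k]) - math.log(C[n][k]))
              for n in range(2) for k in range(B))
    den = sum(w[n] for n in range(2) for k in range(B))
    g_log = num / den                       # least-squares optimal gain offset
    D2 = sum(w[n] * (math.log(C[n][k]) + g_log - math.log(T[n][k])) ** 2
             for n in range(2) for k in range(B)) / (2 * B)
    return math.sqrt(D2), g_log
```

A candidate that is an exact broadband-scaled copy of the target yields D = 0, which demonstrates the gain-invariance property described above.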
(54) Other performance metrics could be implemented by subsystem 107 (in place of, or to supplement, the above-defined metric D) to evaluate different aspects of candidate BRIR performance. Additionally, the above expressions for D and g.sub.log can be modified (to determine another distortion measure, for use in place of metric D, expressed in the specific loudness domain) by replacing the log(C.sub.nk) and log(T.sub.nk) terms in the above expressions for D and g.sub.log, by the specific loudness in critical bands of the candidate and target BRIRs, respectively.
(55) The inventors have also found that in typical embodiments of the invention, the anechoic HRTF response, equalized with a direction-independent equalization filter, is a suitable target BRIR (to be output from subsystem 105).
(56) In accordance with the
(57) Reflection control subsystem 111 identifies (i.e., chooses) a set of early reflection paths (comprising one or more early reflection paths) in response to the same sound source direction and distance which determine the direct response, and asserts control values indicative of each such set of early reflection paths to early reflection generation subsystem (generator) 113. Early reflection generator 113 selects a pair of left and right HRTFs from database 102 which correspond to the direction of arrival (at the listener) of each early reflection (of each set of early reflection paths) determined by subsystem 111 in response to the same sound source direction and distance which determine the direct response. In response to the selected pair(s) of left and right HRTFs for each set of early reflection paths determined by subsystem 111, generator 113 determines an early response portion of one of the candidate BRIRs.
(58) Late response control subsystem 110 asserts control signals to late response generator 114, in response to the same sound source direction and distance which determine the direct response, to cause generator 114 to output a late response portion of one of the candidate BRIRs which corresponds to the sound source direction and distance.
(59) The direct response, early reflections, and late response are summed together (with appropriate time offsets and overlap) in combiner subsystem 115 to generate each candidate BRIR. Control values asserted to subsystem 115 are indicative of a direct-to-reverb ratio (DR Ratio) and an early reflection-to-late response ratio (EL Ratio) which are used by subsystem 115 to set the relative gains of direct, early, and late BRIR portions which it combines.
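The combining step can be sketched as below. The mapping of the DR and EL ratios onto portion gains (direct gain fixed at unity, early and late gains derived in dB) is an assumption for illustration; the patent does not specify the exact mapping or the offsets.

```python
def combine_brir(direct, early, late, early_delay, late_delay,
                 dr_ratio_db=12.0, el_ratio_db=6.0):
    """Sum the direct, early-reflection, and late-response portions of a
    candidate BRIR (single ear), with time offsets in samples and relative
    gains set from the direct-to-reverb (DR) and early-to-late (EL) ratios."""
    early_gain = 10.0 ** (-dr_ratio_db / 20.0)          # early vs. direct
    late_gain = early_gain * 10.0 ** (-el_ratio_db / 20.0)  # late vs. early
    n = max(len(direct), early_delay + len(early), late_delay + len(late))
    out = [0.0] * n
    for i, v in enumerate(direct):
        out[i] += v
    for i, v in enumerate(early):                       # offset, then overlap-add
        out[early_delay + i] += early_gain * v
    for i, v in enumerate(late):
        out[late_delay + i] += late_gain * v
    return out
```

Because the portions are overlap-added rather than concatenated, a late response whose start precedes the end of the early response models the progressive early-to-late transition described in the claims.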
(60) The subsystems of
(61) Typically, reflection control subsystem 111 is implemented to impose the desired delay, gain, shape, duration, and/or direction of the early reflection(s) of the sets of early reflections indicated by its output. Typically, late response control subsystem 110 is implemented to apply the desired interaural coherence, echo density, delay, gain, shape, and/or duration to the raw random sequences in order to generate the late responses indicated by its output.
(62) In variations on the
(63) In typical implementations of subsystem 111, the early reflections are chosen in accordance with perceptual considerations; for example, the inventors have observed that early reflections emanating from the same azimuth and elevation as the sound source can improve source localization and focus, and increase perceived distance.
(64) It is contemplated that subsystem 111 be implemented to determine the sets of early reflections (for each source direction and distance) in accordance with such perceptual considerations.
(65) The inventors have also found that certain reflection direction spreading patterns can improve source localization. As suggested by the observation noted above (that early reflections emanating from the same azimuth and elevation as the sound source can improve source localization and focus, and increase perceived distance), one strategy for implementation by subsystem 111 that was found to be particularly effective is to design the early reflection(s) for a given source direction and distance to originate from the same direction as the sound source, and to progressively fan out in space during the late response to eventually surround the listener.
(66) From the above findings, it is evident that important aspects of sound image control are provided by the early reflections, and by the manner in which they transition to the late BRIR response. For optimal virtualizer performance, the reflections (e.g., those determined by the output of subsystem 111) should therefore be controlled carefully, both within the early response and across its transition to the late response.
(67) Next, we describe a typical implementation of early reflection generation in more detail.
(68) Attack and decay envelope modification stage 126 modifies the attack and decay characteristics of the reflection prototype which is output from stage 125, by applying a window. A variety of window shapes are possible, but an exponentially-decaying window is typically suitable. Finally, HRTF stage 127 applies the HRTF (retrieved from HRTF database 102) to the modified reflection prototype output from stage 126.
(69) Subsystems 120 and 127 of
(70) Next, we describe the generation of the late response (e.g., by late response generator 114) in more detail.
(71) In typical implementations, the generation of the late response is based on a stochastic model that imparts essential temporal, spectral and spatial acoustic attributes to the candidate BRIR. As in a physical acoustic space, during the early reflection stage, reflections arrive at the ears sparsely such that the micro structure of each reflection is observable and affects auditory perception. In the late response stage, the echo density typically increases to the point where micro features of individual reflections are no longer observable. Instead, the macro attributes of the reverberation become the essential auditory cues. These frequency-dependent attributes include energy decay time, interaural coherence, and spectral distribution.
(72) The transition from the early response stage to the late response stage is a progressive process. Implementing such a transition in the generated late response helps focus sound source images, reduce spatial pumping, and improve externalization. In typical embodiments, the transition implementation involves controlling the temporal patterns of echo density, interaural time difference (“ITD”), and interaural level difference (“ILD”) (e.g., using echo generator 130, described below).
(73) Generating late responses with the transitional characteristics described above can be achieved by a stochastic echo generator (e.g., echo generator 130).
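A single-ear late-response sketch with the increasing echo density described above follows. The deterministic arrival-time schedule, random-sign echoes, and T60-based exponential decay envelope are illustrative assumptions, not the patent's generator.

```python
import math
import random

def late_response(n_samples, fs, t60, echo_rate_start, echo_rate_end, seed=0):
    """Stochastic late-response sketch: echoes arrive at a rate (echoes/s)
    that grows over the response (sparse early, dense late), each a
    random-sign pulse under an exponential envelope reaching -60 dB
    after t60 seconds."""
    rng = random.Random(seed)
    h = [0.0] * n_samples
    decay = math.log(1000.0) / (t60 * fs)   # 60 dB down after t60 seconds
    t = 0.0
    while True:
        frac = t / n_samples
        rate = echo_rate_start + frac * (echo_rate_end - echo_rate_start)
        t += fs / rate                      # mean spacing shrinks over time
        idx = int(t)
        if idx >= n_samples:
            break
        h[idx] += rng.choice((-1.0, 1.0)) * math.exp(-decay * t)
    return h
```

Counting nonzero samples in the first and second halves of the output confirms the intended sparse-to-dense echo density transition.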
(74) In other implementations of late response generator 114, other methods are performed to create similar transitional behavior. To introduce diffusion and decorrelation effects to the reflections for improved naturalness, a pair of multi-stage all-pass filters (APFs) may be applied to the left and right channels of the generated binaural response, respectively, as the final step performed by echo generator 130. The inventors have found that for best performance in common applications, the time-spreading effect of the APFs should be on the order of 1 ms, with the maximum binaural decorrelation possible. The APFs must also have the same group delay in order to maintain binaural balance.
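A possible realization of such an APF pair is a cascade of Schroeder all-pass sections. In the sketch below, the left and right channels use identical delay lengths (matching nominal delay, preserving binaural balance) but opposite gain signs per stage to decorrelate the channels; the delays sum to roughly 1 ms at 48 kHz. The specific delays and gains are illustrative assumptions, and flipping gain signs is only one of several ways to obtain decorrelation.

```python
import numpy as np

def schroeder_allpass(x, delay, g):
    """One Schroeder all-pass section:
    y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0
        yd = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + xd + g * yd
    return y

def decorrelate_pair(left, right, fs=48000):
    """Sketch: multi-stage APFs with identical delays in both channels
    but per-stage gains of opposite sign (an illustrative decorrelation
    choice). Total delay ~1 ms at fs = 48 kHz."""
    delays = [int(fs * t) for t in (0.00023, 0.00031, 0.00047)]
    gains = [0.6, -0.55, 0.5]
    for d, g in zip(delays, gains):
        left = schroeder_allpass(left, d, g)
        right = schroeder_allpass(right, d, -g)
    return left, right
```

Because each section is all-pass, the cascade spreads energy in time without altering the magnitude spectrum, which is what makes it attractive for adding diffusion without timbral coloration.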
(75) As noted earlier, the macro attributes of the late response have a profound perceptual impact, both spatially and timbrally. The energy decay time is an essential attribute that characterizes the acoustic environment. An excessively long decay time causes unnatural reverberation that degrades audio quality, and is especially detrimental to dialog clarity. On the other hand, an insufficient decay time reduces externalization and causes a mismatch with the acoustic space. Interaural coherence is essential to the focus of sound source images and to depth perception. Too high a coherence value causes the sound source image to become internalized, and too low a value causes it to spread or split. Ill-balanced coherence across frequency likewise causes the sound source image to stretch or split. The spectral distribution of the late response is essential to timbre and naturalness. The ideal spectral distribution is usually flat, with its highest level between 500 Hz and 1 kHz; it tapers off at the high-frequency end to follow natural acoustic characteristics, and at the low-frequency end to avoid combing artifacts. As an extra mechanism to reduce combing, the ramp-up of the late response is made slower at lower frequencies.
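Two of these macro attributes are straightforward to measure from a candidate BRIR. The sketch below estimates decay time from the Schroeder backward-integrated energy decay curve (a standard acoustics technique, not the patented metric) and broadband interaural coherence as the peak of the normalized cross-correlation between the two ear responses.

```python
import numpy as np

def decay_time(h, fs, db=60.0):
    """Estimate reverberation time from the Schroeder energy decay curve:
    fit the EDC between -5 dB and -25 dB, extrapolate to -60 dB."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]               # backward integration
    edc_db = 10 * np.log10(edc / edc[0] + 1e-12)
    i5 = np.argmax(edc_db <= -5.0)
    i25 = np.argmax(edc_db <= -25.0)
    slope = (edc_db[i25] - edc_db[i5]) / ((i25 - i5) / fs)  # dB per second
    return -db / slope

def interaural_coherence(hl, hr):
    """Broadband interaural coherence: peak of the normalized
    cross-correlation of the left- and right-ear responses."""
    xc = np.correlate(hl, hr, mode="full")
    return np.max(np.abs(xc)) / np.sqrt(np.sum(hl**2) * np.sum(hr**2) + 1e-12)
```

In practice these would be evaluated per frequency band (e.g., after a filter bank), since the text emphasizes that decay time, coherence, and level are all frequency-dependent.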
(76) To impose these macro attributes, the
(77) The late response portion of the candidate BRIR is combined (in subsystem 115 of
(78) In the
(79) One benefit of typical embodiments of the inventive numerically-optimized BRIR generation method is that they can readily generate a BRIR which meets any of a wide range of design criteria (e.g., the HRTF portion thereof has certain desired properties, and/or the BRIR has a desired direct-to-reverberation ratio). For example, it is well known that HRTFs vary considerably from one person to the next. Typical embodiments of the inventive method generate BRIRs that allow optimization of the virtual listening environment for a specific set of HRTFs associated with a specific listener. Alternatively or additionally, the physical environment in which a listener is situated may have specific properties such as a certain reverberation time that one wants to mimic in the virtual listening environment (and corresponding BRIRs). Such design criteria can be included as constraints in the optimization process. Yet another example is the situation in which a strong reflection is expected at the listener's position due to the presence of a desk or a wall. The generated BRIRs can be optimized based on the perceptual distortion metric given such constraints.
(80) It should be appreciated that in some embodiments, a binaural output signal generated in accordance with the invention is indicative of audio content that is intended to be perceived as emitting from “overhead” source locations (virtual source locations above the horizontal plane of the listener's ears) and/or audio content that is perceived as emitting from virtual source locations in the horizontal plane of the listener's ears. In either case, the BRIR employed to generate the binaural output signal would typically have an HRTF portion (for the direct response that corresponds to the sound source direction and distance), and a reflection (and/or reverb) portion for implementing reflections and late response derived from a model of a physical or virtual room.
(81) To render a binaural signal indicative of audio content perceived as emitting from “overhead” source locations, the rendering method employed would typically be the same as a conventional method for rendering a binaural signal indicative only of audio content intended to be perceived as emitting from virtual source locations in the horizontal plane of the listener's ears.
(82) The illusion of height provided by a BRIR which is simply an HRTF alone (without an early reflection or late response portion) can be increased by augmenting the BRIR to be indicative of early reflections from specific directions. In particular, the inventors have found that the ground reflection typically used (when the binaural output is to be indicative only of sources in the horizontal plane of the listener's ears) can reduce the height sensation when the binaural output is to be indicative of overhead sources. To prevent this, the BRIR can be designed in accordance with some embodiments of the invention to replace each ground reflection with two overhead reflections at the same azimuth as the overhead source but at higher elevation. The early reflection emanating from the same azimuth and elevation as the sound source is retained in the overhead model, bringing the total number of early reflections for overhead sources to three. To support virtualization of object channels (as well as speaker channels), interpolated BRIRs may be used, where the interpolated BRIRs are generated by interpolating between a small set of predetermined BRIRs (generated in accordance with an embodiment of the invention) which are indicative of different ground and overhead early reflections as a function of source position.
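The reflection-direction rule just described can be sketched as a small helper: horizontal-plane sources keep their ground reflection, while overhead sources replace it with two reflections at the source azimuth but higher elevations, for three early reflections in total. The specific elevation offsets (and the ground-reflection elevation) below are illustrative assumptions; the patent does not specify numeric values here.

```python
def early_reflection_directions(azimuth_deg, elevation_deg):
    """Sketch of the overhead early-reflection rule: for sources in the
    horizontal plane, use the source direction plus a ground reflection;
    for overhead sources, replace the ground reflection with two
    reflections at the same azimuth but higher elevations (offsets here
    are illustrative), retaining the reflection from the source direction."""
    if elevation_deg <= 0.0:
        return [
            (azimuth_deg, elevation_deg),   # reflection from source direction
            (azimuth_deg, -30.0),           # ground reflection (assumed angle)
        ]
    return [
        (azimuth_deg, elevation_deg),        # reflection from source direction
        (azimuth_deg, elevation_deg + 15.0), # elevated reflection 1 (assumed)
        (azimuth_deg, elevation_deg + 30.0), # elevated reflection 2 (assumed)
    ]
```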
(83) In another class of embodiments, the invention is a method for generating a binaural signal in response to a set of N channels of a multi-channel audio input signal, where N is a positive integer (e.g., N=1, or N is greater than 1), said method including steps of:
(84) (a) applying N (e.g., in the N subsystems 12, . . . , 14 of APU 10 of
(85) (b) combining the filtered signals (e.g., in elements 16 and 18 of APU 10 of
(86) (c) generating candidate binaural room impulse responses (candidate BRIRs) in accordance with a simulation model (e.g., the model implemented by subsystem 101 of the
(87) (d) generating performance metrics (e.g., in subsystem 107 of the
(88) (e) identifying (e.g., in subsystem 107 of the
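The design loop of steps (c) through (e) can be sketched as a simple search: a simulation model maps candidate parameters to candidate BRIRs, an objective function assigns each candidate a performance metric, and the best-scoring candidate is retained. `simulate` and `objective` are placeholder callables standing in for the simulation model and objective function; they are not the patented implementations.

```python
import numpy as np

def design_brir(simulate, objective, param_grid):
    """Sketch of steps (c)-(e): generate candidate BRIRs from a simulation
    model, score each with an objective function, keep the best.
    simulate: params -> candidate BRIR (e.g., a (2, taps) array)
    objective: candidate -> scalar performance metric (higher is better)"""
    best_brir, best_score = None, -np.inf
    for params in param_grid:
        candidate = simulate(params)   # step (c): candidate BRIR
        score = objective(candidate)   # step (d): performance metric
        if score > best_score:         # step (e): identify the best candidate
            best_brir, best_score = candidate, score
    return best_brir, best_score
```

A realistic objective would combine the macro-attribute measurements discussed earlier (decay time, interaural coherence, spectral distribution) into a perceptually weighted score, possibly with design-criteria constraints folded in as penalty terms.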
(89) There are many embodiments of a headphone virtualizer which applies BRIRs generated in accordance with an embodiment of the invention. Each virtualizer is configured to generate a 2-channel binaural output signal in response to an M-channel audio input signal (and so typically includes one or more down-mixing stages, each implementing a down-mixing matrix), and to apply a BRIR to each channel of the audio input signal that is downmixed to the 2 output channels. For performing virtualization on speaker channels (indicative of content corresponding to loudspeakers in fixed positions), one such virtualizer applies a BRIR to each speaker channel (so that the binaural output is indicative of content for a virtual loudspeaker corresponding to the speaker channel), each such BRIR having been predetermined offline. At runtime, each channel of the multi-channel input signal is convolved with its associated BRIR, and the results of the convolution operations are then downmixed into the 2-channel binaural output signal. The BRIRs are typically pre-scaled so that downmix coefficients equal to 1 can be used. Alternatively, to achieve a similar result with lower computational complexity, each input channel is convolved with a “direct and early reflection” portion of a single-channel BRIR, a downmix of the input channels is convolved with a late reverberation portion of a downmix BRIR (e.g., a late reverberation portion of one of the single-channel BRIRs), and the results of the convolution operations are then downmixed into the 2-channel binaural output signal.
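The lower-complexity alternative just described can be sketched as follows: each channel is convolved with its own direct-and-early-reflection BRIR portion, while a single shared late-response convolution is applied to a unity-coefficient downmix of all channels (assuming pre-scaled BRIRs, as the text notes). The function signature and array layout are illustrative assumptions.

```python
import numpy as np

def virtualize(channels, der_brirs, late_brir):
    """Sketch of the lower-complexity virtualizer structure.
    channels:  list of N 1-D input-channel arrays (equal length)
    der_brirs: list of N (2, taps) direct-and-early BRIR portions
    late_brir: one (2, taps) shared late-response portion"""
    n = len(channels[0])
    out = np.zeros((2, n))
    # Per-channel convolution with the direct-and-early portion.
    for x, brir in zip(channels, der_brirs):
        for ear in (0, 1):
            out[ear] += np.convolve(x, brir[ear])[:n]
    # One late-response convolution on the downmix of all channels.
    downmix = np.sum(channels, axis=0)   # unity downmix coefficients
    for ear in (0, 1):
        out[ear] += np.convolve(downmix, late_brir[ear])[:n]
    return out
```

The saving comes from the late response, which is typically the longest part of the BRIR, being convolved once per ear rather than once per channel per ear.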
(90) For rendering object channels of a multi-channel object-based audio input signal (each of which object channels may be indicative of content associated with a fixed or moving audio object), any of multiple approaches are possible. For example, in some embodiments each object channel of the multi-channel input signal is convolved with an associated BRIR (which has been predetermined, offline, in accordance with an embodiment of the invention) and the results of the convolution operations are then downmixed into the 2-channel binaural output signal. Alternatively, to achieve a similar result with lower computational complexity, each object channel is convolved with a “direct and early reflection” portion of a single-channel BRIR, a downmix of the object channels is convolved with a late reverberation portion of a downmix BRIR (e.g., a late reverberation portion of one of the single-channel BRIRs), and the results of the convolution operations are then downmixed into the 2-channel binaural output signal.
(91) Regardless of whether the input signal channels undergoing virtualization are speaker channels or object channels, the most straightforward virtualization approach is typically to implement the virtualizer to generate its binaural output to be indicative of the outputs of a sufficient number of virtual speakers to allow each sound source indicated by the binaural signal's content to be panned smoothly in 3D space between the locations of the virtual speakers. In the inventors' experience, a binaural signal indicative of output from seven virtual speakers in the horizontal plane of the assumed listener's ears is typically sufficient for good panning performance, and the binaural signal may also be indicative of output of a small number of overhead virtual speakers (e.g., four overhead virtual speakers) in virtual positions above the horizontal plane of the assumed listener's ears. With four such overhead virtual speakers and seven other virtual speakers, the binaural signal would be indicative of a total of 11 virtual speakers.
(92) The inventors have found that properly-designed BRIRs indicative of reflections optimized for one virtual source direction and distance can often be used for virtual sources in other positions in the same virtual environment (e.g., virtual room) with minimal loss of performance. In case of exceptions to this rule, BRIRs indicative of optimized reflections for each of a small number of different virtual source locations can be generated, and interpolation between them can be performed (e.g., in a virtualizer) as a function of sound source position, to generate a different interpolated BRIR for each needed virtual source location.
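The interpolation between predetermined BRIRs as a function of source position can be sketched, in its simplest one-dimensional form, as a linear crossfade between the two anchor BRIRs whose azimuths bracket the requested position. Time-domain crossfading is a simplifying assumption here; a practical design might interpolate delay and magnitude separately to avoid comb filtering between anchors.

```python
import numpy as np

def interpolate_brirs(pos, anchors, brirs):
    """Sketch of position-dependent BRIR interpolation: linearly blend
    the two predetermined BRIRs whose anchor positions (e.g., azimuths
    in degrees) bracket the requested position. Illustrative only."""
    anchors = np.asarray(anchors, dtype=float)
    order = np.argsort(anchors)
    anchors, brirs = anchors[order], [brirs[i] for i in order]
    j = int(np.searchsorted(anchors, pos, side="right"))
    j = min(max(j, 1), len(anchors) - 1)          # clamp to valid bracket
    w = (pos - anchors[j - 1]) / (anchors[j] - anchors[j - 1])
    w = min(max(w, 0.0), 1.0)                     # clamp outside the range
    return (1.0 - w) * brirs[j - 1] + w * brirs[j]
```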
(93) In some embodiments, the method generates a BRIR so as to maximize sound source externalization for the center channel (of a 5.1 or 7.1 channel audio input signal to be virtualized) under the constraint of neutral timbre. The center channel is widely regarded as the most difficult to virtualize, since the number of perceptual cues is reduced (there is no ITD or ILD, where ITD is interaural time difference, or the difference in arrival times between the two ears, and ILD is interaural level difference), visual cues are not always present to assist localization, and so on. It is contemplated that various embodiments of the invention generate BRIRs useful for virtualizing input signals having any of many different formats, e.g., input signals having 2.0, 5.1, 7.1, 7.1.2, or 7.1.4 speaker channel formats (where “7.1.x” denotes 7 channels for speakers in the horizontal plane of the listener's ears, x channels for overhead speakers, e.g., 4 channels for speakers in a square pattern overhead, and one Lfe channel).
(94) Typical embodiments do not assume that the input signal channels are speaker channels or object channels (i.e., they could be either). In choosing optimal BRIRs for virtualizing a multi-channel input signal whose channels consist of speaker channels only, an optimal BRIR may be chosen for each speaker channel (each of which, in turn, assumes a specific source direction relative to a listener). If the input signal to the virtualizer is expected to be an object-based audio program indicative of one or more sources, each panned through a wide range of positions, the binaural output signal would typically be indicative of more virtual speaker locations than in the case where the input signal comprises only a small number of speaker channels (and no object channels). Thus, more BRIRs would need to be determined (each for a different virtual speaker position) and applied to virtualize the object-based audio program than the speaker-channel input signal. In operation to virtualize a typical object-based audio program, it is contemplated that some embodiments of the inventive virtualizer would interpolate between predetermined BRIRs (each for one of a small number of virtual speaker positions) to generate interpolated BRIRs (each for one of a large number of virtual speaker positions), and apply the interpolated BRIRs to generate a binaural output indicative of a pan over a wide range of source positions.
(95) While specific embodiments of the present invention and applications of the invention have been described herein, it will be apparent to those of ordinary skill in the art that many variations on the embodiments and applications described herein are possible without departing from the scope of the invention described and claimed herein. It should be understood that while certain forms of the invention have been shown and described, the invention is not to be limited to the specific embodiments described and shown or the specific methods described.