G10L19/08

Processing method of audio signal using spectral envelope signal and excitation signal and electronic device including a plurality of microphones supporting the same

According to an embodiment, the specification discloses an electronic device comprising at least one processor configured to: receive a first audio signal and a second audio signal; detect a spectral envelope signal from the first audio signal and extract a feature point from the second audio signal; extend a high band of the second audio signal, based on the spectral envelope signal and the feature point, to generate a high-band extension signal; and mix the high-band extension signal with the first audio signal to produce a synthesized signal.
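The abstract above describes shaping an excitation derived from one signal with the spectral envelope of another. As an illustrative sketch only (the function, envelope smoother, and band layout are assumptions, not the patented method), the idea can be approximated in the frequency domain like this:

```python
import numpy as np

def extend_high_band(first, second):
    """Hypothetical sketch: translate the second signal's low band up to
    form a high-band excitation, shape it with the first signal's spectral
    envelope, and mix the result back with the first signal."""
    n = len(first)
    spec1 = np.fft.rfft(first)
    spec2 = np.fft.rfft(second)
    half = len(spec1) // 2
    # Spectral envelope of the first signal: a smoothed magnitude spectrum.
    envelope = np.convolve(np.abs(spec1), np.ones(8) / 8.0, mode="same")
    # High-band excitation: copy the second signal's low band upward.
    ext = np.zeros_like(spec1)
    ext[half:2 * half] = spec2[:half]
    # Shape the translated excitation with the spectral envelope.
    ext *= envelope / (np.abs(ext) + 1e-12)
    # Mix the high-band extension signal with the first audio signal.
    return first + np.fft.irfft(ext, n)
```

The envelope here is a crude moving-average magnitude smoother; a real codec would use LPC or cepstral envelopes, but the structure (envelope × translated excitation, then mix) matches the claim.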

AUDIO PROCESSOR AND METHOD FOR GENERATING A FREQUENCY ENHANCED AUDIO SIGNAL USING PULSE PROCESSING
20230395085 · 2023-12-07

An audio processor for generating a frequency enhanced audio signal from a source audio signal has: an envelope determiner for determining a temporal envelope of at least a portion of the source audio signal; an analyzer for analyzing the temporal envelope to determine temporal values of certain features of the temporal envelope; a signal synthesizer for generating a synthesis signal by placing pulses in relation to the determined temporal values, wherein the pulses are weighted using weights derived from the amplitudes of the temporal envelope at the positions where the pulses are placed; and a combiner for combining at least a band of the synthesis signal that is not included in the source audio signal with the source audio signal to obtain the frequency enhanced audio signal.
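The four claimed components (envelope determiner, analyzer, synthesizer, combiner) map naturally onto a short pipeline. The following is a minimal sketch under stated assumptions (moving-average envelope, simple local-maximum peak picking, crude spectral high-pass), not the patented implementation:

```python
import numpy as np

def enhance_with_pulses(source):
    """Hypothetical sketch of the claimed pipeline: determine a temporal
    envelope, find its peaks, place envelope-weighted pulses at those
    peaks, and add the high band of that pulse train to the source."""
    # Envelope determiner: rectified signal smoothed by a moving average.
    win = 32
    env = np.convolve(np.abs(source), np.ones(win) / win, mode="same")
    # Analyzer: temporal values are the local maxima of the envelope.
    peaks = np.where((env[1:-1] > env[:-2]) & (env[1:-1] >= env[2:]))[0] + 1
    # Signal synthesizer: pulses at the peaks, weighted by the envelope
    # amplitude at the positions where the pulses are placed.
    synth = np.zeros_like(source)
    synth[peaks] = env[peaks]
    # Combiner: keep only a band not present in the source (here a crude
    # frequency-domain high-pass) and add it to the source.
    spec = np.fft.rfft(synth)
    spec[: len(spec) // 2] = 0.0
    return source + np.fft.irfft(spec, len(source))
```

Because the pulse train is spiky, it is rich in harmonics, which is what makes it usable as raw material for the missing high band.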

Audio source parameterization
11152014 · 2021-10-19

The present document describes a method (600) for estimating source parameters of audio sources (101) from mix audio signals (102). The mix audio signals (102) comprise a plurality of frames. The mix audio signals (102) are representable as a mix audio matrix in a frequency domain, and the audio sources (101) are representable as a source matrix in the frequency domain. The method (600) comprises updating (601) an un-mixing matrix (221), which is configured to provide an estimate of the source matrix from the mix audio matrix, based on a mixing matrix (225), which is configured to provide an estimate of the mix audio matrix from the source matrix. Furthermore, the method (600) comprises updating (602) the mixing matrix (225) based on the un-mixing matrix (221) and on the mix audio signals (102). In addition, the method (600) comprises iterating (603) the updating steps (601, 602) until an overall convergence criterion is met.
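The alternating update of un-mixing and mixing matrices can be sketched as a simple fixed-point iteration. The specific updates below (pseudo-inverse for the un-mixing matrix, least-squares re-fit for the mixing matrix) are illustrative assumptions; the patent's actual update rules are not reproduced here:

```python
import numpy as np

def estimate_source_params(X, n_sources, n_iter=50, tol=1e-8):
    """Hypothetical sketch of the iteration: alternately update the
    un-mixing matrix W (here the pseudo-inverse of the mixing matrix A)
    and the mixing matrix A (a least-squares fit of X against the
    estimated sources S = W @ X) until A stops changing."""
    rng = np.random.default_rng(0)
    A = rng.standard_normal((X.shape[0], n_sources))  # mixing matrix (225)
    W = np.linalg.pinv(A)                             # un-mixing matrix (221)
    for _ in range(n_iter):
        W = np.linalg.pinv(A)           # update (601): S is estimated as W @ X
        S = W @ X
        A_new = X @ np.linalg.pinv(S)   # update (602): re-fit A from the estimate
        if np.linalg.norm(A_new - A) < tol:  # overall convergence criterion
            A = A_new
            break
        A = A_new
    return A, W
```

In a frequency-domain implementation this iteration would run per frequency bin on complex matrices; the real-valued version above just shows the control flow.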

AUDIO PROCESSOR AND METHOD FOR GENERATING A FREQUENCY ENHANCED AUDIO SIGNAL USING PULSE PROCESSING
20210287687 · 2021-09-16

An audio processor for generating a frequency enhanced audio signal from a source audio signal has: an envelope determiner for determining a temporal envelope of at least a portion of the source audio signal; an analyzer for analyzing the temporal envelope to determine temporal values of certain features of the temporal envelope; a signal synthesizer for generating a synthesis signal by placing pulses in relation to the determined temporal values, wherein the pulses are weighted using weights derived from the amplitudes of the temporal envelope at the positions where the pulses are placed; and a combiner for combining at least a band of the synthesis signal that is not included in the source audio signal with the source audio signal to obtain the frequency enhanced audio signal.

PHASE RECONSTRUCTION IN A SPEECH DECODER

Innovations in phase quantization during speech encoding and phase reconstruction during speech decoding are described. For example, to encode a set of phase values, a speech encoder omits higher-frequency phase values and/or represents at least some of the phase values as a weighted sum of basis functions. Or, as another example, to decode a set of phase values, a speech decoder reconstructs at least some of the phase values using a weighted sum of basis functions and/or reconstructs lower-frequency phase values and then uses at least some of them to synthesize higher-frequency phase values. In many cases, the innovations improve the performance of a speech codec in low-bitrate scenarios, even when encoded data is delivered over a network that suffers from insufficient bandwidth or transmission-quality problems.
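The weighted-sum-of-basis-functions idea amounts to projecting the phase values onto a small basis so only the weights need transmitting. A minimal sketch, assuming a cosine basis purely for illustration (the patent does not fix a particular basis here):

```python
import numpy as np

def encode_phases(phases, n_basis=4):
    """Hypothetical encoder side: fit the per-frequency phase values with
    a small weighted sum of cosine basis functions, so only `n_basis`
    weights are transmitted instead of every phase value."""
    k = np.arange(len(phases))
    basis = np.cos(np.pi * np.outer(k, np.arange(n_basis)) / len(phases))
    weights, *_ = np.linalg.lstsq(basis, phases, rcond=None)
    return weights

def decode_phases(weights, n_phases):
    """Hypothetical decoder side: rebuild the phase values from the
    transmitted basis weights."""
    k = np.arange(n_phases)
    basis = np.cos(np.pi * np.outer(k, np.arange(len(weights))) / n_phases)
    return basis @ weights
```

The compression win is the ratio of phase values to weights (here 32 values from 4 weights); the cost is whatever detail falls outside the span of the basis.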

Spatially biased sound pickup for binaural video recording

A method for producing a target directivity function that includes a set of spatially biased HRTFs. A set of left ear and right ear head related transfer functions (HRTFs) are selected. The left ear and right ear HRTFs are multiplied by an on-camera emphasis function (OCE) to produce the spatially biased HRTFs. The OCE may be designed to shape the sound profile of the HRTFs to provide emphasis in a desired location or direction, as a function of the orientation of the device while it is being used to make a video recording. Other aspects are also described and claimed.
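The HRTF-times-OCE multiplication is a per-direction gain applied before binaural rendering. The raised-cosine emphasis shape and the function names below are assumptions for illustration; the abstract only says the OCE "may be designed" to emphasize the on-camera direction:

```python
import numpy as np

def spatially_biased_hrtfs(hrtf_l, hrtf_r, azimuths_deg,
                           camera_az_deg=0.0, width_deg=60.0):
    """Hypothetical sketch: multiply each direction's left/right HRTF
    (arrays of shape [directions, frequencies]) by an on-camera emphasis
    (OCE) gain that peaks at the camera direction and falls off with
    angular distance."""
    diff = np.deg2rad(np.asarray(azimuths_deg, dtype=float) - camera_az_deg)
    ratio = np.clip(diff / np.deg2rad(width_deg), -1.0, 1.0)
    # Raised-cosine OCE: 1.0 on-camera, 0.0 at and beyond `width_deg`.
    oce = 0.5 * (1.0 + np.cos(np.pi * ratio))
    # Broadcast the per-direction gain over the frequency axis.
    return hrtf_l * oce[:, None], hrtf_r * oce[:, None]
```

A frequency-dependent OCE would simply replace the scalar per-direction gain with a per-direction curve; the multiply-then-render structure stays the same.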
