Patent classifications
G10L19/22
AUDIO ENCODER AND DECODER WITH DYNAMIC RANGE COMPRESSION METADATA
An audio processing unit (APU) is disclosed. The APU includes a buffer memory configured to store at least one frame of an encoded audio bitstream, where the encoded audio bitstream includes audio data and a metadata container. The metadata container includes a header and one or more metadata payloads after the header. The one or more metadata payloads include dynamic range compression (DRC) metadata, which is or includes profile metadata indicating whether the DRC metadata includes DRC control values for performing dynamic range compression, in accordance with at least one compression profile, on the audio content indicated by at least one block of the audio data.
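A minimal sketch of the container layout described above: a header followed by metadata payloads, one of which may carry DRC profile metadata. The payload identifier and the profile-flag bit used here are illustrative assumptions, not values from the patent or any bitstream standard.

```python
from dataclasses import dataclass

@dataclass
class MetadataPayload:
    payload_id: int   # identifies the payload type
    data: bytes       # payload body

@dataclass
class MetadataContainer:
    header: bytes
    payloads: list    # list of MetadataPayload, following the header

DRC_PAYLOAD_ID = 0x02  # hypothetical identifier for a DRC metadata payload

def has_drc_profile(container: MetadataContainer) -> bool:
    """Return True if a DRC payload's profile metadata (modelled here as
    the low bit of its first byte) signals that compression-profile
    control values are present."""
    for p in container.payloads:
        if p.payload_id == DRC_PAYLOAD_ID and p.data and (p.data[0] & 0x01):
            return True
    return False
```

A decoder-side consumer would check this flag per frame before applying any profile-specific gain values.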
Apparatus and method for selecting one of a first encoding algorithm and a second encoding algorithm
An apparatus for selecting one of a first encoding algorithm having a first characteristic and a second encoding algorithm having a second characteristic for encoding a portion of an audio signal to obtain an encoded version of the portion of the audio signal has a first estimator for estimating a first quality measure for the portion of the audio signal, which is associated with the first encoding algorithm, without actually encoding and decoding the portion of the audio signal using the first encoding algorithm. A second estimator is provided for estimating a second quality measure for the portion of the audio signal, which is associated with the second encoding algorithm, without actually encoding and decoding the portion of the audio signal using the second encoding algorithm. The apparatus has a controller for selecting the first or second encoding algorithm based on a comparison between the first and second quality measures.
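The open-loop selection described above (estimate each algorithm's quality without encoding and decoding, then compare) can be sketched as follows; the estimator and encoder callables are illustrative placeholders, not anything defined by the patent:

```python
def select_encoder(portion, estimate_q1, estimate_q2, encoder1, encoder2):
    """Pick the encoder whose *estimated* quality measure is higher for
    this portion, without running either codec on the portion first.

    estimate_q1/estimate_q2: callables mapping the portion to a quality
    score (e.g. an open-loop SNR or distortion proxy); encoder1/encoder2:
    the two candidate encoding algorithms."""
    q1 = estimate_q1(portion)
    q2 = estimate_q2(portion)
    return encoder1 if q1 >= q2 else encoder2
```

The point of the scheme is cost: a closed-loop selector would encode and decode the portion with both algorithms and compare the results, while this estimator-based controller needs only two cheap score computations per portion.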
Harmonic transposition in an audio coding method and system
The present invention relates to transposing signals in time and/or frequency, and in particular to coding of audio signals. More particularly, the present invention relates to high frequency reconstruction (HFR) methods including a frequency domain harmonic transposer. A method and system for generating a transposed output signal from an input signal using a transposition factor T is described. The system comprises an analysis window of length L_a, extracting a frame of the input signal, and an analysis transformation unit of order M transforming the frame samples into M complex coefficients, where M is a function of the transposition factor T. The system further comprises a nonlinear processing unit altering the phase of the complex coefficients by using the transposition factor T, a synthesis transformation unit of order M transforming the altered coefficients into M altered samples, and a synthesis window of length L_s, generating a frame of the output signal.
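A single-frame sketch of the analysis/nonlinear-processing/synthesis chain, assuming the nonlinear step multiplies each coefficient's phase by T while keeping its magnitude; the window choice and the omission of overlap-add between frames are simplifications for illustration:

```python
import numpy as np

def transpose_frame(frame, T, M):
    """Window one input frame, transform it into complex coefficients
    with an M-point real FFT, multiply each coefficient's phase by the
    transposition factor T (magnitudes preserved), and resynthesise
    M output samples. Overlap-add across frames is omitted here."""
    window = np.hanning(len(frame))               # stand-in analysis window
    coeffs = np.fft.rfft(frame * window, n=M)     # analysis transform of order M
    mag, phase = np.abs(coeffs), np.angle(coeffs)
    altered = mag * np.exp(1j * T * phase)        # nonlinear phase alteration
    return np.fft.irfft(altered, n=M)             # synthesis transform of order M
```

In a complete HFR transposer, successive frames extracted by the analysis window of length L_a would be processed this way and combined by a synthesis window of length L_s with overlap-add.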
Machine learning-based audio codec switching
Described herein are techniques, devices, and systems for selectively using a music-capable audio codec on-demand during a communication session. A user equipment (UE) may adaptively transition between using a first audio codec that provides a first audio bandwidth and a second audio codec (e.g., the EVS-FB codec) that provides a second audio bandwidth that is greater than the first audio bandwidth. The transition to the second audio codec may occur in response to determining that sound in the environment of the UE includes frequencies outside of a range of frequencies associated with a human voice, such as by determining that music is being played in the environment of the UE, which allows for selectively using a music-capable audio codec when it would be beneficial to do so.
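The trigger condition above (sound energy outside the human-voice frequency range, e.g. music) can be sketched as a spectral-energy heuristic. The voice band limits and the energy-ratio threshold are illustrative assumptions, not parameters from the patent or the EVS specification:

```python
import numpy as np

VOICE_BAND_HZ = (80.0, 4000.0)  # rough band assumed for human voice

def should_use_fullband_codec(samples, sample_rate, ratio_threshold=0.2):
    """Heuristic sketch: return True (switch to the wider-bandwidth
    codec, e.g. EVS-FB) when a significant share of spectral energy
    lies outside the assumed voice band, as when music is playing in
    the environment of the UE."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    outside = (freqs < VOICE_BAND_HZ[0]) | (freqs > VOICE_BAND_HZ[1])
    total = spectrum.sum()
    if total == 0.0:
        return False
    return spectrum[outside].sum() / total > ratio_threshold
```

A real UE would smooth this decision over many frames (and, per the title, could replace the hand-tuned threshold with a trained classifier) to avoid toggling codecs mid-call on transient noise.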
METHOD AND DEVICE FOR SPEECH/MUSIC CLASSIFICATION AND CORE ENCODER SELECTION IN A SOUND CODEC
A two-stage speech/music classification device and method classify an input sound signal and select a core encoder for encoding the sound signal. A first stage classifies the input sound signal into one of a number of final classes. A second stage extracts high-level features of the input sound signal and selects the core encoder for encoding the input sound signal in response to the extracted high-level features and the final class selected in the first stage.
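The two-stage structure can be sketched as a small pipeline; the class names, feature names, and core-encoder labels below are illustrative placeholders:

```python
def classify_and_select(signal, first_stage, extract_features, second_stage):
    """Two-stage sketch: the first stage maps the signal to a final
    class (e.g. 'speech' or 'music'); the second stage combines that
    class with high-level features extracted from the signal to pick
    a core encoder."""
    final_class = first_stage(signal)        # stage 1: final class
    features = extract_features(signal)      # stage 2: high-level features
    return second_stage(final_class, features)
```

For example, a second stage might route 'music' frames with high tonality to a transform-style core and everything else to an ACELP-style core.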
Signal filtering
In methods and systems for filtering an information input signal, a system may have: a first filter unit filtering an input signal at an initial subinterval in a current update interval according to parameters associated with the preceding update interval, the parameters being scaled by a first scaling factor changing towards 0; and a second filter unit filtering a second filter input signal, based on the output of the first filter unit, at the initial subinterval, according to parameters associated with the current update interval, the parameters being scaled by a second scaling factor changing from 0, or a value close to 0, toward a value more distant from 0.
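A simplified sketch of the hand-over idea: over the initial subinterval of the current update interval, the old-parameter filter's contribution is scaled by a factor decaying towards 0 while the new-parameter filter's contribution is scaled by a factor rising from near 0. For brevity this sketch crossfades the two filter outputs in parallel, rather than chaining the second filter's input off the first filter's output as the patent describes:

```python
def crossfaded_filter(signal, old_filter, new_filter, fade_len):
    """Filter `signal` sample by sample. For the first `fade_len`
    samples, blend the filter running with the preceding interval's
    parameters (weight decaying to 0) with the filter running with the
    current interval's parameters (weight rising from 0); afterwards,
    use only the new-parameter filter."""
    out = []
    for n, x in enumerate(signal):
        if n < fade_len:
            g = n / fade_len  # second scaling factor: rises from 0
            y = (1.0 - g) * old_filter(x) + g * new_filter(x)
        else:
            y = new_filter(x)
        out.append(y)
    return out
```

The effect is to suppress the discontinuity that would occur if the filter parameters were switched abruptly at the update-interval boundary.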
Switching Binaural Sound
A method provides binaural sound to a person through electronic earphones. The binaural sound localizes to a sound localization point (SLP) in empty space that is away from but proximate to the person. When an event occurs, the binaural sound switches or changes to stereo sound, to mono sound, or to altered binaural sound.
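The event-driven switch among rendering modes can be sketched as a small state map; the event names and transitions below are illustrative assumptions, not cases enumerated by the patent:

```python
RENDER_MODES = ("binaural", "stereo", "mono", "altered_binaural")

def handle_event(current_mode, event):
    """Sketch: map an event to the rendering mode the earphones should
    switch to; unknown events leave the mode unchanged. The specific
    event-to-mode pairs here are hypothetical."""
    transitions = {
        "incoming_call": "mono",              # collapse spatial audio for a call
        "listener_moved": "altered_binaural", # re-render for the new SLP geometry
        "battery_low": "stereo",              # drop to cheaper rendering
    }
    return transitions.get(event, current_mode)
```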