G10L19/16

TRANSMISSION-AGNOSTIC PRESENTATION-BASED PROGRAM LOUDNESS

This disclosure falls within the field of audio coding and, in particular, relates to a framework for providing loudness consistency among differing audio output signals. More specifically, the disclosure relates to methods, computer program products, and apparatus for encoding and decoding audio data bitstreams in order to attain a desired loudness level of an output audio signal.
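The core idea, a normalization gain that moves each program from its measured loudness to one shared target, can be sketched as follows. The function names and the simple dB gain model are illustrative assumptions, not the disclosed method.

```python
# Hypothetical sketch: apply a loudness-normalization gain so that programs
# with different measured loudness play back at a single target level.

def normalization_gain_db(measured_lufs: float, target_lufs: float) -> float:
    """Gain (dB) that moves a program from its measured loudness to the target."""
    return target_lufs - measured_lufs

def apply_gain(samples, gain_db: float):
    """Scale linear PCM samples by a dB gain."""
    scale = 10.0 ** (gain_db / 20.0)
    return [s * scale for s in samples]

# Two programs measured at different loudness, one -23 LUFS target:
gain_quiet = normalization_gain_db(-30.0, -23.0)   # +7 dB boost
gain_loud = normalization_gain_db(-16.0, -23.0)    # -7 dB cut
```

In practice the measured value would come from a standardized loudness meter and the target from decoder-side metadata; here both are plain numbers for clarity.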

AUDIO METADATA PROVIDING APPARATUS AND METHOD, AND MULTICHANNEL AUDIO DATA PLAYBACK APPARATUS AND METHOD TO SUPPORT DYNAMIC FORMAT CONVERSION

An audio metadata providing apparatus and method, and a multichannel audio data playback apparatus and method, to support dynamic format conversion are provided. Dynamic format conversion information may include information about a plurality of format conversion schemes used to convert a first format, set by an author of multichannel audio data, into a second format based on the playback environment of the multichannel audio data, each scheme being set for a corresponding playback period of the multichannel audio data. The audio metadata providing apparatus may provide audio metadata including the dynamic format conversion information. The multichannel audio data playback apparatus may identify the dynamic format conversion information from the audio metadata, convert the first format of the multichannel audio data into the second format based on the identified information, and play back the multichannel audio data in the second format.
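The metadata described above, a conversion scheme assigned to each playback period, can be sketched as a simple lookup. The record layout and scheme names are assumptions made for the example.

```python
# Illustrative sketch: dynamic format conversion metadata that assigns a
# format-conversion scheme to each playback period, plus a lookup that picks
# the scheme active at a given playback time.

from dataclasses import dataclass

@dataclass
class ConversionPeriod:
    start: float   # period start, seconds
    end: float     # period end, seconds
    scheme: str    # identifier of the format conversion scheme

def scheme_at(periods, t: float) -> str:
    """Return the conversion scheme set for playback time t."""
    for p in periods:
        if p.start <= t < p.end:
            return p.scheme
    raise ValueError(f"no conversion scheme covers t={t}")

metadata = [
    ConversionPeriod(0.0, 10.0, "downmix_22_2_to_5_1"),
    ConversionPeriod(10.0, 30.0, "downmix_22_2_to_stereo"),
]
```

A playback device would call `scheme_at` for the current position and run the named conversion before rendering.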

System for maintaining reversible dynamic range control information associated with parametric audio coders

On the basis of a bitstream (P), an n-channel audio signal (X) is reconstructed by deriving an m-channel core signal (Y) and multichannel coding parameters (α) from the bitstream, where 1≤m<n. Also derived from the bitstream are pre-processing dynamic range control, DRC, parameters (DRC2) quantifying an encoder-side dynamic range limiting of the core signal. The n-channel audio signal is obtained by parametric synthesis in accordance with the multichannel coding parameters and while cancelling any encoder-side dynamic range limiting based on the pre-processing DRC parameters. In particular embodiments, the reconstruction further includes use of compensated post-processing DRC parameters quantifying a potential decoder-side dynamic range compression. Cancellation of an encoder-side range limitation and range compression are preferably performed by different decoder-side components. Cancellation and compression may be coordinated by a DRC pre-processor.
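The reversibility at the heart of this entry, an encoder-side range limitation that the decoder undoes from transmitted DRC parameters, can be sketched with a per-block gain. The limiter rule and parameter representation are simplifying assumptions, not the claimed coder.

```python
# Minimal sketch of reversible dynamic range limiting: the encoder limits the
# core signal with a per-block gain and transmits that gain as a pre-processing
# DRC parameter; the decoder cancels the limiting with the inverse gain.

def encoder_limit(block, limit=0.5):
    """Apply a simple per-block peak limiter; return the limited block and the gain used."""
    peak = max(abs(s) for s in block)
    gain = min(1.0, limit / peak) if peak > 0 else 1.0
    return [s * gain for s in block], gain

def decoder_cancel(block, drc_gain):
    """Cancel the encoder-side limiting using the transmitted DRC parameter."""
    return [s / drc_gain for s in block]

original = [0.2, -0.8, 0.4]
limited, g = encoder_limit(original)   # gain = 0.5 / 0.8 = 0.625
restored = decoder_cancel(limited, g)  # matches the original block
```

In the disclosed system the cancellation would happen during parametric multichannel synthesis, with a separate component handling decoder-side compression; this sketch shows only the cancellation step.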

Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Smoothing

An apparatus for processing an audio scene representing a sound field, the audio scene having information on a transport signal and a first set of parameters. The apparatus has a parameter processor for processing the first set of parameters to obtain a second set of parameters. The parameter processor is configured to calculate at least one raw parameter for each output time frame using at least one parameter of the first set for the input time frame, to calculate smoothing information, such as a smoothing factor, for each raw parameter in accordance with a smoothing rule, and to apply the corresponding smoothing information to the corresponding raw parameter to derive the parameter of the second set for the output time frame. The apparatus further has an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.
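The raw-parameter-plus-smoothing-rule structure can be sketched with first-order recursive smoothing. The specific rule below (smooth strongly on large jumps) is a placeholder assumption, not the disclosed smoothing rule.

```python
# Sketch of per-parameter smoothing: each raw parameter for an output frame is
# blended with the previous smoothed value using a factor chosen by a rule.

def smoothing_factor(raw, previous, max_jump=0.5):
    """Placeholder rule: smooth strongly when the parameter jumps, weakly otherwise."""
    return 0.9 if abs(raw - previous) > max_jump else 0.3

def smooth_parameters(raw_params, initial=0.0):
    """Apply the per-frame smoothing factor recursively over the raw parameters."""
    smoothed, prev = [], initial
    for raw in raw_params:
        a = smoothing_factor(raw, prev)
        prev = a * prev + (1.0 - a) * raw
        smoothed.append(prev)
    return smoothed
```

A real parameter processor would run this per spatial parameter (e.g. per band) and feed the second, smoothed set to the output interface.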

SPEECH CODING METHOD AND APPARATUS, SPEECH DECODING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
20230238009 · 2023-07-27

This application relates to a speech coding method performed by a computer device. The method includes: obtaining initial frequency bandwidth feature information corresponding to a speech signal; performing feature compression on initial feature information corresponding to a second band in the initial frequency bandwidth feature information to obtain target feature information corresponding to a compressed band, a frequency interval of the second band being greater than a frequency interval of the compressed band; obtaining, based on the target feature information corresponding to the compressed band, a compressed speech signal corresponding to the speech signal; and coding the compressed speech signal to obtain coded speech data corresponding to the speech signal, a target sampling rate corresponding to the compressed speech signal being less than a sampling rate corresponding to the speech signal.
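The compression step, mapping a wide second band onto a narrower compressed band so the signal can be coded at a lower sampling rate, can be sketched by averaging groups of adjacent feature bins. The grouping scheme is an assumption for illustration; the application does not specify this particular mapping.

```python
# Hypothetical sketch of the band-compression step: feature bins of a wide
# high band are compressed into fewer bins (a narrower compressed band) by
# averaging non-overlapping groups of adjacent bins.

def compress_band(features, factor):
    """Compress a band's feature bins by averaging non-overlapping groups."""
    return [
        sum(features[i:i + factor]) / len(features[i:i + factor])
        for i in range(0, len(features), factor)
    ]

low_band = [0.1, 0.2, 0.3, 0.4]                 # narrow first band, kept as-is
high_band = [1.0, 3.0, 2.0, 4.0, 6.0, 8.0]      # wide second band
compressed = low_band + compress_band(high_band, factor=2)
```

The concatenated result has fewer total bins than the input, which is what allows the compressed speech signal to use a lower target sampling rate before coding.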

Low-frequency emphasis for LPC-based coding in frequency domain

The invention provides an audio encoder including a combination of a linear predictive coding filter having a plurality of linear predictive coding coefficients and a time-frequency converter, wherein the combination is configured to filter and to convert a frame of the audio signal into a frequency domain in order to output a spectrum based on the frame and on the linear predictive coding coefficients; a low frequency emphasizer configured to calculate a processed spectrum based on the spectrum, wherein spectral lines of the processed spectrum representing a lower frequency than a reference spectral line are emphasized; and a control device configured to control the calculation of the processed spectrum by the low frequency emphasizer depending on the linear predictive coding coefficients of the linear predictive coding filter.
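The emphasis structure, boost spectral lines below a reference line by an amount controlled from the LPC coefficients, can be sketched as below. The control mapping from coefficients to gain is a made-up placeholder; the invention's actual control rule is not reproduced here.

```python
# Minimal sketch of low-frequency emphasis: spectral lines below a reference
# line are boosted, with the boost controlled by a value derived from the LPC
# coefficients of the frame.

def lf_emphasis_gain(lpc_coeffs):
    """Placeholder control: stronger emphasis when the first LPC coefficient
    suggests a low-frequency-dominated frame (assumed heuristic)."""
    return 2.0 if lpc_coeffs and lpc_coeffs[0] < 0 else 1.2

def emphasize_low_frequencies(spectrum, ref_line, gain):
    """Boost all spectral lines below the reference spectral line."""
    return [x * gain if i < ref_line else x for i, x in enumerate(spectrum)]

spectrum = [1.0, 1.0, 1.0, 1.0]
processed = emphasize_low_frequencies(spectrum, ref_line=2,
                                      gain=lf_emphasis_gain([-0.9, 0.1]))
# → [2.0, 2.0, 1.0, 1.0]
```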

Multi-stride packet payload mapping for robust transmission of data
11716294 · 2023-08-01

Systems and methods for packet payload mapping for robust transmission of data are described. For example, methods may include: receiving, using a network interface, packets that each include a primary frame and one or more preceding frames from the sequence of frames of data, the preceding frames being separated from the primary frame in the sequence by respective multiples of a stride parameter; storing the frames of the packets in a buffer whose entries each hold the primary frame and the one or more preceding frames of a packet; reading a first frame from the buffer as the primary frame from one of the entries; determining that a packet whose primary frame is the next frame in the sequence has been lost; and, responsive to the determination, reading the next frame from the buffer as a preceding frame from one of the entries.
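The stride-based redundancy can be sketched as follows: each packet carries its primary frame n plus copies of frames n - stride and n - 2*stride, so a lost primary frame can be read from the preceding-frame slot of a later packet. The packet layout and dictionary fields are assumptions made for the example.

```python
# Sketch of multi-stride payload mapping and loss recovery.

STRIDE = 4

def make_packet(frames, n, depth=2):
    """Packet for primary frame n with `depth` preceding frames at stride multiples."""
    preceding = [frames[n - k * STRIDE]
                 for k in range(1, depth + 1) if n - k * STRIDE >= 0]
    return {"primary_index": n, "primary": frames[n], "preceding": preceding}

def recover_frame(received_packets, lost_index):
    """Find a lost frame among the preceding-frame slots of received packets."""
    for pkt in received_packets:
        for k, frame in enumerate(pkt["preceding"], start=1):
            if pkt["primary_index"] - k * STRIDE == lost_index:
                return frame
    return None

frames = [f"frame{i}" for i in range(12)]
packets = [make_packet(frames, n) for n in range(12)]
received = [p for p in packets if p["primary_index"] != 5]  # packet 5 lost
```

Because copies are spaced by multiples of the stride rather than being adjacent, a burst loss of up to STRIDE consecutive packets still leaves each lost frame recoverable from a later packet.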

SYSTEMS, METHODS AND APPARATUS FOR CONVERSION FROM CHANNEL-BASED AUDIO TO OBJECT-BASED AUDIO

Embodiments are disclosed for channel-based audio (CBA) (e.g., 22.2-ch audio) to object-based audio (OBA) conversion. The conversion includes converting CBA metadata to object audio metadata (OAMD) and reordering the CBA channels based on channel shuffle information derived in accordance with channel ordering constraints of the OAMD. The OBA with reordered channels is rendered in a playback device using the OAMD or in a source device, such as a set-top box or audio/video recorder. In an embodiment, the CBA metadata includes signaling that indicates a specific OAMD representation to be used in the conversion of the metadata. In an embodiment, pre-computed OAMD is transmitted in a native audio bitstream (e.g., AAC) for transmission (e.g., over HDMI) or for rendering in a source device. In an embodiment, pre-computed OAMD is transmitted in a transport layer bitstream (e.g., ISO BMFF, MPEG4 audio bitstream) to a playback device or source device.
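The channel-reordering step can be sketched as applying a shuffle list derived from the OAMD ordering constraints, where each output position names the index of the source channel. The shuffle values and channel labels below are made up for the example.

```python
# Illustrative sketch of the channel-reordering step in CBA-to-OBA conversion.

def reorder_channels(cba_channels, shuffle):
    """Reorder channel-based audio channels per the channel shuffle information."""
    return [cba_channels[src] for src in shuffle]

cba = ["FL", "FR", "C", "LFE", "SL", "SR"]
shuffle = [2, 0, 1, 5, 4, 3]                 # hypothetical OAMD-constrained order
oba_ordered = reorder_channels(cba, shuffle)  # ["C", "FL", "FR", "SR", "SL", "LFE"]
```

In the disclosed system the shuffle information is derived once from the OAMD channel-ordering constraints and then applied to the audio channels before rendering or transmission.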

Speech encoding using a pre-encoded database

Methods, systems, and devices for encoding are described. A device, which may be otherwise known as user equipment (UE), may support standards-compatible audio encoding (e.g., speech encoding) using a pre-encoded database. The device may receive a digital representation of an audio signal and identify, based on receiving the digital representation of the audio signal, a database that is pre-encoded according to a coding standard and that includes a quantity of digital representations of other audio signals. The device may encode the digital representation of the audio signal using a machine learning scheme and information from the database pre-encoded according to the coding standard. The device may generate a bitstream of the digital representation that is compatible with the coding standard based on encoding the digital representation of the audio signal, and output a representation of the bitstream.
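One way to picture the database-assisted step is a nearest-neighbor lookup over pre-encoded entries, reusing the closest entry's standard-compatible payload as a starting point. The record layout and squared-distance metric are assumptions; the actual selection is described only as a machine learning scheme.

```python
# Hedged sketch: look up the pre-encoded database entry whose stored feature
# vector is closest to the input signal's features.

def nearest_entry(database, features):
    """Return the pre-encoded record closest to the input features."""
    def dist(entry):
        return sum((a - b) ** 2 for a, b in zip(entry["features"], features))
    return min(database, key=dist)

database = [
    {"features": [0.1, 0.9], "coded": b"\x01\x02"},  # pre-encoded per the standard
    {"features": [0.8, 0.2], "coded": b"\x03\x04"},
]
match = nearest_entry(database, [0.75, 0.25])  # picks the second record
```

Because the database entries were encoded according to the coding standard ahead of time, whatever the device emits from them remains compatible with standard decoders.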