Rendering of multichannel audio using interpolated matrices

09826327 · 2017-11-21

Abstract

Methods that use interpolated primitive matrices to decode encoded audio to recover (losslessly) content of a multichannel audio program and/or to recover at least one downmix of such content, and encoding methods for generating such encoded audio. In some embodiments, a decoder performs interpolation on a set of seed primitive matrices to determine interpolated matrices for use in rendering channels of the program. Other aspects are a system or device configured to implement any embodiment of the method.

Claims

1. A method for encoding an N-channel audio program, wherein the program is specified over a time interval, the time interval includes a subinterval from a time t1 to a time t2, and a time-varying mix, A(t), of N encoded signal channels to M output channels has been specified over the time interval, where M is less than or equal to N, said method including steps of: determining a first cascade of N×N primitive matrices which, when applied to samples of the N encoded signal channels, implements a first mix of audio content of the N encoded signal channels to the M output channels, wherein the first mix approximates A(t1), and wherein an N×N primitive matrix is defined as a matrix in which N−1 rows contain off-diagonal elements equal to zero and on-diagonal elements with an absolute value of 1; determining interpolation values which, with the first cascade of primitive matrices and an interpolation function defined over the subinterval, are indicative of a sequence of cascades of N×N updated primitive matrices, such that each of the cascades of updated primitive matrices, when applied to samples of the N encoded signal channels, implements an updated mix, associated with a different time in the subinterval, of the N encoded signal channels to the M output channels, wherein each said updated mix approximates the time-varying mix, A(t), at the time in the subinterval associated with the updated mix; and generating an encoded bitstream which is indicative of encoded audio content, the interpolation values, and the first cascade of primitive matrices.
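The unit primitive matrix defined in claim 1 (identity except for one row, with ±1 on the diagonal) is what makes finite-precision matrixing losslessly invertible: the off-diagonal contribution to the one non-trivial channel can be quantized, and the inverse subtracts back the identical quantized term. A minimal sketch, with invented coefficients, channel values, and quantizer resolution:

```python
def quantize(x, frac_bits=14):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def apply_primitive(samples, row, coeffs, inverse=False):
    """Apply a unit primitive matrix (or its inverse) to one multichannel sample.

    Only channel `row` is modified: it gains (or, for the inverse, loses)
    the quantized weighted sum of the other channels. Because the other
    channels pass through unchanged, the inverse recomputes the exact same
    quantized term, so the round trip is lossless despite finite precision.
    """
    out = list(samples)
    contrib = sum(c * s for i, (c, s) in enumerate(zip(coeffs, samples))
                  if i != row)
    contrib = quantize(contrib)
    out[row] = samples[row] - contrib if inverse else samples[row] + contrib
    return out

x = [1000, -250, 37, 4096]           # integer PCM samples, one per channel
alphas = [0.5, -0.25, 0.125, 1.0]    # non-trivial row; diagonal entry is 1
y = apply_primitive(x, row=3, coeffs=alphas)                 # "encode"
z = apply_primitive(y, row=3, coeffs=alphas, inverse=True)   # exact inverse
assert z == x                        # lossless despite the quantizer
```

Only one channel carries the quantization effect at a time, which is why cascades of such matrices (claims 3 and 4) remain invertible step by step.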

2. The method of claim 1, wherein each of the primitive matrices is a unit primitive matrix.

3. The method of claim 2, also including a step of generating the encoded audio content by performing matrix operations on samples of the program's N channels, including by applying a sequence of matrix cascades to the samples, wherein each matrix cascade in the sequence is a cascade of primitive matrices, and the sequence of matrix cascades includes a first inverse matrix cascade which is a cascade of inverses of the primitive matrices of the first cascade.

4. The method of claim 2, also including a step of generating the encoded audio content by performing matrix operations on samples of the program's N channels, including by applying a sequence of matrix cascades to the samples, wherein each matrix cascade in the sequence is a cascade of primitive matrices, and each matrix cascade in the sequence is the inverse of a corresponding cascade of the cascades of N×N updated primitive matrices, and N=M, so that the M output channels are the same as the N channels of the program recovered losslessly.

5. The method of claim 1, wherein the first cascade of primitive matrices implements a seed primitive matrix, and the interpolation values are indicative of a seed delta matrix for the seed primitive matrix.

6. The method of claim 4, wherein a time-varying downmix, A2(t), of audio content or encoded content of the program to M1 speaker channels has also been specified over the time interval, where M1 is an integer less than M, and the method also includes a step of: determining a second cascade of M1×M1 primitive matrices which, when applied to samples of M1 channels of the encoded audio content at each time instant t in the interval, implements a downmix of the N-channel audio program to the M1 speaker channels, wherein the downmix approximates the time-varying mix, A2(t).

7. A method for recovery of M channels of an N-channel audio program, wherein the program is specified over a time interval, the time interval includes a subinterval from a time t1 to a time t2, and a time-varying mix, A(t), of N encoded signal channels to M output channels has been specified over the time interval, said method including steps of: obtaining an encoded bitstream which is indicative of encoded audio content, interpolation values, and a first cascade of N×N primitive matrices, wherein an N×N primitive matrix is defined as a matrix in which N−1 rows contain off-diagonal elements equal to zero and on-diagonal elements with an absolute value of 1; and performing interpolation to determine a sequence of cascades of N×N updated primitive matrices, from the interpolation values, the first cascade of primitive matrices, and an interpolation function over the subinterval, wherein the first cascade of N×N primitive matrices, when applied to samples of N encoded signal channels of the encoded audio content, implements a first mix of audio content of the N encoded signal channels to the M output channels, wherein the first mix approximates A(t1), and the interpolation values, with the first cascade of primitive matrices, and the interpolation function, are indicative of a sequence of cascades of N×N updated primitive matrices, such that each of the cascades of updated primitive matrices, when applied to samples of the N encoded signal channels of the encoded audio content, implements an updated mix, associated with a different time in the subinterval, of the N encoded signal channels to the M output channels, wherein each said updated mix approximates the time-varying mix, A(t), at the time in the subinterval associated with the updated mix.

8. The method of claim 7, wherein each of the primitive matrices is a unit primitive matrix.

9. The method of claim 8, wherein the encoded audio content has been generated by performing matrix operations on samples of the program's N channels, including by applying a sequence of matrix cascades to the samples, wherein each matrix cascade in the sequence is a cascade of primitive matrices, and each matrix cascade in the sequence is the inverse of a corresponding cascade of the cascades of N×N updated primitive matrices, and N=M, so that the M output channels are the same as the N channels of the program recovered losslessly.

10. The method of claim 9, wherein a time-varying downmix, A2(t), of audio content or encoded content of the program to M1 speaker channels has also been specified over the time interval, where M1 is an integer less than N, and the method also includes steps of: receiving a second cascade of M1×M1 primitive matrices; and applying the second cascade of M1×M1 primitive matrices to samples of M1 channels of the encoded audio content at each time instant t in the interval to implement a downmix of the N-channel audio program to the M1 speaker channels, wherein the downmix approximates the time-varying mix, A2(t).

11. The method of claim 7, wherein the first cascade of primitive matrices implements a seed primitive matrix, and the interpolation values are indicative of a seed delta matrix for the seed primitive matrix.

12. The method of claim 7, said method also including a step of: applying at least one of the cascades of updated N×N primitive matrices to samples of the encoded audio content, including by applying a seed primitive matrix and a seed delta matrix separately to the samples of the encoded audio content to generate transformed samples, and linearly combining the transformed samples in accordance with the interpolation function, thereby generating recovered samples indicative of samples of the M channels of the N-channel audio program.
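Claim 12 relies on linearity: applying the seed primitive matrix P and the seed delta matrix D to the samples separately, then combining with the interpolation factor f(t), gives the same result as building the interpolated matrix P + f(t)·D first and applying it once. A sketch of that identity, with invented 3×3 matrices and a linear f(t):

```python
def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def madd(m1, m2, f):
    """Elementwise m1 + f * m2."""
    return [[a + f * b for a, b in zip(r1, r2)] for r1, r2 in zip(m1, m2)]

P = [[1, 0, 0],            # seed primitive matrix: identity except row 2
     [0, 1, 0],
     [0.5, -0.25, 1]]
D = [[0, 0, 0],            # seed delta matrix: rate of change of row 2
     [0, 0, 0],
     [0.125, 0.0625, 0]]
x = [640, -320, 1024]      # one sample per channel

def f(t, t1=0.0, t2=1.0):
    return (t - t1) / (t2 - t1)   # illustrative linear interpolation function

t = 0.5
direct = matvec(madd(P, D, f(t)), x)          # interpolated matrix, then apply
combined = [p + f(t) * d                      # apply P and D separately,
            for p, d in zip(matvec(P, x), matvec(D, x))]  # then combine
assert direct == combined
```

With dyadic coefficients, as here, the two orderings agree exactly; in a real fixed-point implementation the combination order is part of the defined arithmetic.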

13. The method of claim 7, wherein the interpolation function is constant over some intervals of the encoded bitstream, and each most recently updated one of the cascades of N×N updated primitive matrices is updated by interpolation only during an interval of the encoded bitstream in which the interpolation function is not substantially constant.

14. The method of claim 7, wherein the interpolation values include normalized delta values representable with Y bits, an indication of this number of bits of precision, and precision values, where the normalized delta values are indicative of normalized versions of delta values, the delta values are indicative of rates of change of coefficients of the primitive matrices, and the precision values are indicative of an increase in precision required to represent the delta values relative to the precision required to represent the coefficients of the primitive matrices.

15. The method of claim 14, wherein the delta values are derived by scaling the normalized delta values by a scale factor that is dependent on the resolution of the coefficients of the primitive matrices and the precision values.
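The bookkeeping in claims 14 and 15 can be sketched as follows: a delta is transmitted as a Y-bit integer ("normalized delta") plus a precision value saying how many extra fractional bits it carries beyond the matrix-coefficient resolution, and the decoder rescales accordingly. The coefficient resolution, bit widths, and function names below are invented for illustration; no bitstream layout is implied.

```python
COEFF_FRAC_BITS = 14   # assumed resolution of the primitive-matrix coefficients

def encode_delta(delta, y_bits, extra_precision):
    """Quantize a delta to a Y-bit normalized integer at boosted precision."""
    frac_bits = COEFF_FRAC_BITS + extra_precision
    n = round(delta * (1 << frac_bits))
    assert -(1 << (y_bits - 1)) <= n < (1 << (y_bits - 1)), "does not fit in Y bits"
    return n

def decode_delta(normalized, extra_precision):
    """Rescale: the scale factor depends on both the coefficient resolution
    and the transmitted precision value, as in claim 15."""
    return normalized / (1 << (COEFF_FRAC_BITS + extra_precision))

delta = 3 / (1 << 18)                          # a small per-sample rate of change
n = encode_delta(delta, y_bits=16, extra_precision=4)
assert decode_delta(n, extra_precision=4) == delta
```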

16. The method of claim 7, also including steps of: extracting a check word from the encoded bitstream, and verifying whether channels of a segment of the audio program have been correctly recovered, by comparing a second check word, derived from the recovered audio samples, against the check word extracted from the encoded bitstream.

17. An audio encoder configured to encode an N-channel audio program, wherein the program is specified over a time interval, the time interval includes a subinterval from a time t1 to a time t2, and a time-varying mix, A(t), of N encoded signal channels to M output channels has been specified over the time interval, where M is less than or equal to N, and wherein said encoder is configured to perform the method of claim 1.

18. A decoder configured to implement recovery of an N-channel audio program, wherein the program is specified over a time interval, the time interval includes a subinterval from a time t1 to a time t2, and a time-varying mix, A(t), of N encoded signal channels to M output channels has been specified over the time interval, and wherein said decoder is configured to perform the method of claim 7.

19. A non-transitory computer readable storage medium comprising a program of instructions, which, when executed by an audio encoder, cause the audio encoder to perform the method of claim 1.

20. A non-transitory computer readable storage medium comprising a program of instructions, which, when executed by a decoder, cause the decoder to perform the method of claim 7.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a block diagram of elements of a conventional system including an encoder, a delivery subsystem, and a decoder.

(2) FIG. 2A is a diagram of conventional encoder circuitry for performing lossless matrixing operations via primitive matrices implemented with finite precision arithmetic.

(3) FIG. 2B is a diagram of conventional decoder circuitry for performing lossless matrixing operations via primitive matrices implemented with finite precision arithmetic.

(4) FIG. 3 is a block diagram of circuitry employed in an embodiment of the invention to apply a 4×4 primitive matrix (implemented with finite precision arithmetic) to four channels of an audio program. The primitive matrix is a seed primitive matrix, whose one non-trivial row comprises elements α0, α1, α2, and α3.

(5) FIG. 4 is a block diagram of circuitry employed in an embodiment of the invention to apply a 3×3 primitive matrix (implemented with finite precision arithmetic) to three channels of an audio program. The primitive matrix is an interpolated primitive matrix, generated from a seed primitive matrix Pk(t1) whose one non-trivial row comprises elements α0, α1, and α2, a seed delta matrix Δk(t1) whose one non-trivial row comprises elements δ0, δ1, and δ2, and an interpolation function f(t).

(6) FIG. 5 is a block diagram of an embodiment of the inventive system including an embodiment of the inventive encoder, a delivery subsystem, and an embodiment of the inventive decoder.

(7) FIG. 6 is a block diagram of another embodiment of the inventive system including an embodiment of the inventive encoder, a delivery subsystem, and an embodiment of the inventive decoder.

(7) FIG. 7 is a graph of the sum of squared errors between an achieved specification and a true specification at different instants of time, t, using interpolated primitive matrices (the curve labeled “Interpolated Matrixing”) and with piecewise constant (not interpolated) primitive matrices (the curve labeled “Non-interpolated Matrixing”).

NOTATION AND NOMENCLATURE

(9) Throughout this disclosure, including in the claims, the expression performing an operation “on” a signal or data (e.g., filtering, scaling, transforming, or applying gain to, the signal or data) is used in a broad sense to denote performing the operation directly on the signal or data, or on a processed version of the signal or data (e.g., on a version of the signal that has undergone preliminary filtering or pre-processing prior to performance of the operation thereon).

(10) Throughout this disclosure including in the claims, the expression “system” is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem that implements a decoder may be referred to as a decoder system, and a system including such a subsystem (e.g., a system that generates Y output signals in response to multiple inputs, in which the subsystem generates M of the inputs and the other Y-M inputs are received from an external source) may also be referred to as a decoder system.

(11) Throughout this disclosure including in the claims, the term “processor” is used in a broad sense to denote a system or device programmable or otherwise configurable (e.g., with software or firmware) to perform operations on data (e.g., audio, or video or other image data). Examples of processors include a field-programmable gate array (or other configurable integrated circuit or chip set), a digital signal processor programmed and/or otherwise configured to perform pipelined processing on audio or other sound data, a programmable general purpose processor or computer, and a programmable microprocessor chip or chip set.

(12) Throughout this disclosure including in the claims, the expression “metadata” refers to separate and different data from corresponding audio data (audio content of a bitstream which also includes metadata). Metadata is associated with audio data, and indicates at least one feature or characteristic of the audio data (e.g., what type(s) of processing have already been performed, or should be performed, on the audio data, or the trajectory of an object indicated by the audio data). The association of the metadata with the audio data is time-synchronous. Thus, present (most recently received or updated) metadata may indicate that the corresponding audio data contemporaneously has an indicated feature and/or comprises the results of an indicated type of audio data processing.

(13) Throughout this disclosure including in the claims, the term “couples” or “coupled” is used to mean either a direct or indirect connection. Thus, if a first device couples to a second device, that connection may be through a direct connection, or through an indirect connection via other devices and connections.

(14) Throughout this disclosure including in the claims, the following expressions have the following definitions:

(15) speaker and loudspeaker are used synonymously to denote any sound-emitting transducer. This definition includes loudspeakers implemented as multiple transducers (e.g., woofer and tweeter);

(16) speaker feed: an audio signal to be applied directly to a loudspeaker, or an audio signal that is to be applied to an amplifier and loudspeaker in series;

(17) channel (or “audio channel”): a monophonic audio signal. Such a signal can typically be rendered in such a way as to be equivalent to application of the signal directly to a loudspeaker at a desired or nominal position. The desired position can be static, as is typically the case with physical loudspeakers, or dynamic;

(18) audio program: a set of one or more audio channels (at least one speaker channel and/or at least one object channel) and optionally also associated metadata (e.g., metadata that describes a desired spatial audio presentation);

(19) speaker channel (or “speaker-feed channel”): an audio channel that is associated with a named loudspeaker (at a desired or nominal position), or with a named speaker zone within a defined speaker configuration. A speaker channel is rendered in such a way as to be equivalent to application of the audio signal directly to the named loudspeaker (at the desired or nominal position) or to a speaker in the named speaker zone;
object channel: an audio channel indicative of sound emitted by an audio source (sometimes referred to as an audio “object”). Typically, an object channel determines a parametric audio source description (e.g., metadata indicative of the parametric audio source description is included in or provided with the object channel). The source description may determine sound emitted by the source (as a function of time), the apparent position (e.g., 3D spatial coordinates) of the source as a function of time, and optionally at least one additional parameter (e.g., apparent source size or width) characterizing the source; and

(20) object based audio program: an audio program comprising a set of one or more object channels (and optionally also comprising at least one speaker channel) and optionally also associated metadata (e.g., metadata indicative of a trajectory of an audio object which emits sound indicated by an object channel, or metadata otherwise indicative of a desired spatial audio presentation of sound indicated by an object channel, or metadata indicative of an identification of at least one audio object which is a source of sound indicated by an object channel).

DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION

(21) Examples of embodiments of the invention will be described with reference to FIGS. 3, 4, 5, and 6.

(22) FIG. 5 is a block diagram of an embodiment of the inventive audio data processing system which includes encoder 40 (an embodiment of the inventive encoder), delivery subsystem 41 (which may be identical to delivery subsystem 31 of FIG. 1), and decoder 42 (an embodiment of the inventive decoder), coupled together as shown. Although subsystem 42 is referred to herein as a “decoder,” it should be understood that it may be implemented as a playback system including a decoding subsystem (configured to parse and decode a bitstream indicative of an encoded multichannel audio program) and other subsystems configured to implement rendering and at least some steps of playback of the decoding subsystem's output. Some embodiments of the invention are decoders which are not configured to perform rendering and/or playback (and which would typically be used with a separate rendering and/or playback system). Some embodiments of the invention are playback systems (e.g., a playback system including a decoding subsystem and other subsystems configured to implement rendering and at least some steps of playback of the decoding subsystem's output).

(23) In the FIG. 5 system, encoder 40 is configured to encode an 8-channel audio program (e.g., a traditional set of 7.1 speaker feeds) as an encoded bitstream including two substreams, and decoder 42 is configured to decode the encoded bitstream to render either the original 8-channel program (losslessly) or a 2-channel downmix of the original 8-channel program. Encoder 40 is coupled and configured to generate the encoded bitstream and to assert the encoded bitstream to delivery system 41.

(24) Delivery system 41 is coupled and configured to deliver (e.g., by storing and/or transmitting) the encoded bitstream to decoder 42. In some embodiments, system 41 implements delivery of (e.g., transmits) an encoded multichannel audio program over a broadcast system or a network (e.g., the internet) to decoder 42. In some embodiments, system 41 stores an encoded multichannel audio program in a storage medium (e.g., a disk or set of disks), and decoder 42 is configured to read the program from the storage medium.

(25) The block labeled “InvChAssign1” in encoder 40 is configured to perform channel permutation (equivalent to multiplication by a permutation matrix) on the channels of the input program. The permuted channels then undergo encoding in stage 43, which outputs eight encoded signal channels. The encoded signal channels may (but need not) correspond to playback speaker channels. The encoded signal channels are sometimes referred to as “internal” channels, since a decoder (and/or rendering system) typically decodes and renders the content of the encoded signal channels to recover the input audio, so that the encoded signal channels are “internal” to the encoding/decoding system. The encoding performed in stage 43 is equivalent to multiplication of each set of samples of the permuted channels by an encoding matrix (implemented as a cascade of matrix multiplications, identified as Pn⁻¹, . . . , P1⁻¹, P0⁻¹).

(26) Although n may be equal to 7 in the exemplary embodiment, in this embodiment and in variations thereon the input audio program comprises an arbitrary number (N, or X) of channels, where N (or X) is any integer greater than one, and n in FIG. 5 may be n=N−1 (or n=X−1 or another value). In such alternative embodiments, the encoder is configured to encode the multichannel audio program as an encoded bitstream including some number of substreams, and the decoder is configured to decode the encoded bitstream to render either the original multichannel program (losslessly) or one or more downmixes of the original multichannel program. For example, the encoding stage (corresponding to stage 43) of such an alternative embodiment may apply a cascade of N×N primitive matrices to samples of the program's channels, to generate N encoded signal channels that can be converted to a first mix of M output channels, wherein the first mix is consistent with a time-varying mix A(t), specified over an interval, in the sense that the first mix is at least substantially equal to A(t1), where t1 is a time in the interval. The decoder may create the M output channels by applying a cascade of N×N primitive matrices received as part of the encoded audio content. The encoder in such an alternative embodiment may also generate a second cascade of M1×M1 primitive matrices (where M1 is an integer less than N), which is also included in the encoded audio content. A decoder may apply the second cascade to M1 encoded signal channels to implement a downmix of the N-channel program to M1 speaker channels, wherein the downmix is consistent with another time-varying mix, A2(t), in the sense that the downmix is at least substantially equal to A2(t1). The encoder in such an alternative embodiment would also generate interpolation values (in accordance with any embodiment of the present invention) and include the interpolation values in the encoded bitstream output from the encoder, for use by a decoder to decode and render content of the encoded bitstream in accordance with the time-varying mix, A(t), and/or to decode and render a downmix of content of the encoded bitstream in accordance with the time-varying mix, A2(t).
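The encode/decode round trip described above can be sketched end to end: the encoder applies the inverses of the primitive matrices in reverse order, the decoder re-applies the matrices forward to recover the input exactly, and a smaller cascade over a subset of the encoded channels yields a downmix substream. All coefficients, the cascade length, and the 4-channel/2-channel split below are invented for illustration.

```python
def apply_unit_primitive(samples, row, coeffs, inverse=False):
    """Apply one unit primitive matrix (or its inverse): only channel `row`
    changes, by +/- the weighted sum of the other channels."""
    out = list(samples)
    c = sum(coeffs[i] * samples[i] for i in range(len(samples)) if i != row)
    out[row] = samples[row] - c if inverse else samples[row] + c
    return out

# Decoder-side cascade: (non-trivial row, its coefficients) per primitive matrix.
cascade = [(0, [0, 0.5, -0.25, 0.125]),
           (2, [0.0625, -0.5, 0, 0.25]),
           (3, [0.125, 0.125, -0.0625, 0])]

x = [512, -1024, 96, 2048]            # 4-channel input samples

# Encode: apply the inverse of each primitive matrix, in reverse order.
enc = x
for row, coeffs in reversed(cascade):
    enc = apply_unit_primitive(enc, row, coeffs, inverse=True)

# Lossless decode: apply the cascade forward.
dec = enc
for row, coeffs in cascade:
    dec = apply_unit_primitive(dec, row, coeffs)
assert dec == x

# Downmix substream: a 2x2 primitive step over the first two encoded channels.
downmix = [enc[0] + 0.5 * enc[1], enc[1]]
```

Because each primitive matrix touches one channel, inverting the cascade is just applying per-matrix inverses in the opposite order, which is what makes the lossless substream cheap to undo.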

(27) The description of FIG. 5 will sometimes refer to the multichannel signal input to the inventive encoder as an 8-channel input signal for specificity, but the description (with trivial variations apparent to those of ordinary skill) also applies to the general case: replace references to an 8-channel input signal with references to an N-channel input signal, replace references to cascades of 8-channel (or 2-channel) primitive matrices with references to M-channel (or M1-channel) primitive matrices, and replace references to lossless recovery of an 8-channel input signal with references to lossless recovery of an M-channel audio signal (where the M-channel audio signal has been determined by performing matrix operations to apply a time-varying mix, A(t), to an N-channel input audio signal to determine M encoded signal channels).

(28) With reference to encoder stage 43 of FIG. 5, each matrix Pn⁻¹, . . . , P1⁻¹, and P0⁻¹ (and thus the cascade applied by stage 43) is determined in subsystem 44, and is updated from time to time (typically infrequently) in accordance with a time-varying mix of the program's N (where N=8) channels to N encoded signal channels which has been specified over the time interval.

(29) Matrix determination subsystem 44 is configured to generate data indicative of the coefficients of two sets of output matrices (one set corresponding to each of two substreams of the encoded channels). Each set of output matrices is updated from time to time, so that the coefficients are also updated from time to time. One set of output matrices consists of two rendering matrices, P0²(t), P1²(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension 2×2, and is for rendering a first substream (a downmix substream) comprising two of the encoded audio channels of the encoded bitstream (to render a two-channel downmix of the eight-channel input audio). The other set of output matrices consists of eight rendering matrices, P0(t), P1(t), . . . , Pn(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension 8×8, and is for rendering a second substream comprising all eight of the encoded audio channels of the encoded bitstream (for lossless recovery of the eight-channel input audio program). For each time, t, a cascade of the rendering matrices, P0²(t), P1²(t), can be interpreted as a rendering matrix for the channels of the first substream that renders the two-channel downmix from the two encoded signal channels in the first substream, and similarly a cascade of the rendering matrices, P0(t), P1(t), . . . , Pn(t), can be interpreted as a rendering matrix for the channels of the second substream.

(30) The coefficients (of each rendering matrix) that are output from subsystem 44 to packing subsystem 45 are metadata indicating relative or absolute gain of each channel to be included in a corresponding mix of channels of the program. The coefficients of each rendering matrix (for an instant of time during the program) represent how much each of the channels of a mix should contribute to the mix of audio content (at the corresponding instant of the rendered mix) indicated by the speaker feed for a particular playback system speaker.

(31) The eight encoded audio channels (output from encoding stage 43), the output matrix coefficients (generated by subsystem 44), and typically also additional data are asserted to packing subsystem 45, which assembles them into the encoded bitstream which is then asserted to delivery system 41.

(32) The encoded bitstream includes data indicative of the eight encoded audio channels, the two sets of time-varying output matrices (one set corresponding to each of two substreams of the encoded channels), and typically also additional data (e.g., metadata regarding the audio content).

(33) In operation, encoder 40 (and alternative embodiments of the inventive encoder, e.g., encoder 100 of FIG. 6) encodes an N-channel audio program whose samples correspond to a time interval, where the time interval includes a subinterval from a time t1 to a time t2. When a time-varying mix, A(t), of N encoded signal channels to M output channels has been specified over the time interval, the encoder performs steps of:

(34) determining a first cascade of N×N primitive matrices (e.g., matrices P0(t1), P1(t1), . . . , Pn(t1), for the time t1) which, when applied to samples of the N encoded signal channels, implements a first mix of audio content of the N encoded signal channels to the M output channels, wherein the first mix is consistent with the time-varying mix, A(t), in the sense that the first mix is at least substantially equal to A(t1);

(35) generating encoded audio content (e.g., the output of encoder 40's stage 43, or the output of encoder 100's stage 103) by performing matrix operations on samples of the program's N channels, including by applying a sequence of matrix cascades to the samples, wherein each matrix cascade in the sequence is a cascade of primitive matrices, and the sequence of matrix cascades includes a first inverse matrix cascade which is a cascade of inverses of the primitive matrices of the first cascade;

(36) determining interpolation values (e.g., interpolation values included in the output of encoder 40's stage 44, or in the output of encoder 100's stage 103) which, with the first cascade of primitive matrices (e.g., included in the output of stage 44 or stage 103) and an interpolation function defined over the subinterval, are indicative of a sequence of cascades of N×N updated primitive matrices, such that each of the cascades of updated primitive matrices, when applied to samples of the N encoded signal channels, implements an updated mix, associated with a different time in the subinterval, of the N encoded signal channels to the M output channels, wherein each said updated mix is consistent with the time-varying mix, A(t). Preferably, but not necessarily (in all embodiments), each updated mix is consistent with the time-varying mix in the sense that the updated mix associated with any time t3 in the subinterval is at least substantially equal to A(t3); and

(37) generating an encoded bitstream (e.g., the output of encoder 40's stage 45 or the output of encoder 100's stage 104) which is indicative of the encoded audio content, the interpolation values, and the first cascade of primitive matrices.
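One simple way the encoder steps above might derive interpolation values, sketched under the assumption of a linear interpolation function over the subinterval [t1, t2]: choose the seed delta so that the interpolated non-trivial row lands exactly on the row required at t2, so the interpolated matrices track the time-varying mix between updates. The target rows here are invented.

```python
t1, t2 = 0.0, 1.0
row_t1 = [0.5, -0.25, 0.125]   # non-trivial row of the seed primitive matrix
row_t2 = [0.75, -0.5, 0.25]    # row that realizes the desired mix at t2

# Seed delta row: the coefficient change to be spread over the subinterval.
delta_row = [b - a for a, b in zip(row_t1, row_t2)]

def interpolated_row(t):
    """Non-trivial row of the interpolated primitive matrix at time t."""
    f = (t - t1) / (t2 - t1)   # assumed linear interpolation function
    return [a + f * d for a, d in zip(row_t1, delta_row)]

assert interpolated_row(t1) == row_t1   # matches the seed at t1
assert interpolated_row(t2) == row_t2   # lands on the target row at t2
```

Only the delta row and (implicitly) the interpolation function need to be transmitted; the decoder reconstructs every intermediate row itself.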

(38) With reference to stage 44 of FIG. 5, each set of output matrices (set P0², P1² or set P0, P1, . . . , Pn) is updated from time to time. The first set of matrices P0², P1² that is output (at a first time, t1) is a seed matrix (implemented as a cascade of unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of two channels of the encoded output of stage 43, corresponding to the first time). The first set of matrices P0, P1, . . . , Pn that is output (at the first time, t1) is also a seed matrix (implemented as a cascade of unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of all eight channels of the encoded output of stage 43 corresponding to the first time). Each updated set of matrices P0², P1² that is output from stage 44 is an updated seed matrix (implemented as a cascade of unit primitive matrices, which may also be referred to as a cascade of unit seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of two channels of the encoded output of stage 43, corresponding to the update time). Each updated set of matrices P0, P1, . . . , Pn that is output from stage 44 is also an updated seed matrix (implemented as a cascade of unit primitive matrices, which may also be referred to as a cascade of unit seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of all eight channels of the encoded output of stage 43 corresponding to the update time).

(39) Subsystem 44 also outputs interpolation values, which (with an interpolation function for each seed matrix) enable decoder 42 to generate interpolated versions of the seed matrices (corresponding to times after the first time, t1, and between the update times). The interpolation values (which may include data indicative of each interpolation function) are included by stage 45 in the encoded bitstream output from encoder 40. Examples of such interpolation values are described below (the interpolation values may include a delta matrix for each seed matrix).

(40) With reference to decoder 42 of FIG. 5, parsing subsystem 46 (of decoder 42) is configured to accept (read or receive) the encoded bitstream from delivery system 41 and to parse the encoded bitstream. Subsystem 46 is operable to assert the substreams of the encoded bitstream (including a “first” substream comprising only two encoded channels of the encoded bitstream), and output matrices (P.sub.0.sup.2, P.sub.1.sup.2) corresponding to the first substream, to matrix multiplication stage 48 (for processing which results in a 2-channel downmix presentation of content of the original 8-channel input program). Subsystem 46 is also operable to assert the substreams of the encoded bitstream (a “second” substream comprising all eight encoded channels of the encoded bitstream), and corresponding output matrices (P.sub.0, P.sub.1, . . . , P.sub.n) to matrix multiplication stage 47 for processing which results in lossless reproduction of the original 8-channel program.

(41) Parsing subsystem 46 (and parsing subsystem 105 in FIG. 6) may include (and/or implement) additional lossless encoding and decoding tools (for example, LPC coding, Huffman coding, and so on).

(42) Interpolation stage 60 is coupled to receive each seed matrix for the second substream (i.e., the initial set of primitive matrices, P.sub.0, P.sub.1, . . . , P.sub.n, for time t1, and each updated set of primitive matrices, P.sub.0, P.sub.1, . . . , P.sub.n) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each seed matrix. Stage 60 is coupled and configured to pass through (to stage 47) each such seed matrix, and to generate (and assert to stage 47) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).

(43) Interpolation stage 61 is coupled to receive each seed matrix for the first substream (i.e., the initial set of primitive matrices, P.sub.0.sup.2 and P.sub.1.sup.2, for time t1, and each updated set of primitive matrices, P.sub.0.sup.2 and P.sub.1.sup.2) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each such seed matrix. Stage 61 is coupled and configured to pass through (to stage 48) each such seed matrix, and to generate (and assert to stage 48) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).

(44) Stage 48 multiplies each pair of audio samples of the two channels (of the encoded bitstream) which correspond to the channels of the first substream by the most recently updated cascade of the matrices P.sub.0.sup.2 and P.sub.1.sup.2 (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0.sup.2 and P.sub.1.sup.2 generated by stage 61), and each resulting set of two linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block titled “ChAssign0” to yield each pair of samples of the required 2-channel downmix of the 8 original audio channels. The cascade of matrixing operations performed in encoder 40 and decoder 42 is equivalent to application of a downmix matrix specification that transforms the 8 input audio channels to the 2-channel downmix.

(45) Stage 47 multiplies each vector of eight audio samples (one from each of the full set of eight channels of the encoded bitstream) by the most recently updated cascade of the matrices P.sub.0, P.sub.1, . . . , P.sub.n (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0, P.sub.1, . . . , P.sub.n generated by stage 60), and each resulting set of eight linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block labeled “ChAssign1” to yield each set of eight samples of the losslessly recovered original 8-channel program. In order that the output 8-channel audio is exactly the same as the input 8-channel audio (to achieve the “lossless” characteristic of the system), the matrixing operations performed in encoder 40 should be exactly (including quantization effects) the inverse of the matrixing operations performed in decoder 42 on the second substream of the encoded bitstream (i.e., each multiplication in stage 47 of decoder 42 by a cascade of matrices P.sub.0, P.sub.1, . . . , P.sub.n). Thus, in FIG. 5, the matrixing operations in stage 43 of encoder 40 are identified as a cascade of the inverse matrices of the matrices P.sub.0, P.sub.1, . . . , P.sub.n, in the opposite sequence to that applied in stage 47 of decoder 42, namely: P.sub.n.sup.−1, . . . , P.sub.1.sup.−1, P.sub.0.sup.−1.
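The losslessness of this encoder/decoder pairing can be illustrated with a short sketch (Python, with illustrative channel values and a hypothetical quantizer; this is not the TrueHD arithmetic): because a unit primitive matrix alters only one channel, the encoder and decoder compute the same quantized dot product over the unaltered channels, so the subtraction and addition cancel exactly.

```python
# Sketch: lossless inversion of a unit primitive matrix under finite-precision
# arithmetic. The quantizer step and all values are illustrative assumptions.

def quantize(v, step=1.0 / (1 << 14)):
    """Round the dot product to a finite-precision grid."""
    return round(v / step) * step

def apply_unit_primitive(x, k, alphas, sign=+1):
    """y = P x (sign=+1) or y = P^-1 x (sign=-1); only channel k changes."""
    dot = quantize(sum(a * x[j] for j, a in enumerate(alphas) if j != k))
    y = list(x)
    y[k] = x[k] + sign * dot
    return y

x = [0.25, -0.5, 0.125, 1.0]       # one sample from each of four channels
alphas = [0.3, 0.0, -0.7, 0.2]     # non-trivial row; diagonal entry is 1
k = 1

encoded = apply_unit_primitive(x, k, alphas, sign=-1)        # encoder: P^-1
decoded = apply_unit_primitive(encoded, k, alphas, sign=+1)  # decoder: P
assert decoded == x   # bit-exact: the same quantized dot product cancels
```

The cancellation is exact because channels j ≠ k are passed through unchanged, so both directions quantize an identical dot product.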

(46) Thus, stage 47 (with the permutation stage, ChAssign1) is a matrix multiplication subsystem coupled and configured to apply sequentially each cascade of primitive matrices output from interpolation stage 60 to the encoded audio content extracted from the encoded bitstream, to recover losslessly the N channels of at least a segment of the multichannel audio program that was encoded by encoder 40.

(47) Permutation stage ChAssign1 of decoder 42 applies to the output of stage 47 the inverse of the channel permutation applied by encoder 40 (i.e., the permutation matrix represented by stage “ChAssign1” of decoder 42 is the inverse of that represented by element “InvChAssign1” of encoder 40).

(48) In variations on subsystems 40 and 42 of the system shown in FIG. 5, one or more of the elements are omitted or additional audio data processing units are included.

(49) In variations on the described embodiment of decoder 42, the inventive decoder is configured to perform lossless recovery of N channels of encoded audio content from an encoded bitstream indicative of N encoded signal channels, where the N channels of audio content are themselves a downmix of audio content of an X-channel input audio program (where X is an arbitrary integer and N is less than X), generated by performing matrix operations on the X-channel input audio program to apply a time-varying mix to the X channels of the input audio program, thereby determining the N channels of encoded audio content of the encoded bitstream. In such variations, the decoder performs interpolation on primitive N×N matrices provided with (e.g., included in) the encoded bitstream.

(50) In a class of embodiments, the invention is a method for rendering a multichannel audio program, including by performing a linear transformation (matrix multiplication) on samples of channels of the program (e.g., to generate a downmix of content of the program). The linear transformation is time dependent in the sense that the linear transformation to be performed at one time during the program (i.e., on samples of the channels corresponding to that time) differs from the linear transformation to be performed at another time during the program. In some embodiments, the method employs at least one seed matrix (which may be implemented as a cascade of unit primitive matrices) which determines the linear transformation to be performed at a first time during the program (i.e., on samples of the channels corresponding to the first time), and implements interpolation to determine at least one interpolated version of the seed matrix which determines the linear transformation to be performed at a second time during the program. In typical embodiments, the method is performed by a decoder (e.g., decoder 42 of FIG. 5 or decoder 102 of FIG. 6) which is included in, or associated with, a playback system. Typically, the decoder is configured to perform lossless recovery of audio content of an encoded audio bitstream indicative of the program, and the seed matrix (and each interpolated version of the seed matrix) is implemented as a cascade of primitive matrices (e.g., unit primitive matrices).

(51) Typically, rendering matrix updates (updates of the seed matrix) occur infrequently (e.g., a sequence of updated versions of the seed matrix is included in the encoded audio bitstream delivered to the decoder, but there are long time intervals between the segments of the program corresponding to consecutive ones of such updated versions), and a desired rendering trajectory (e.g., a desired sequence of mixes of content of channels of the program) between seed matrix updates is specified parametrically (e.g., by metadata included in the encoded audio bitstream delivered to the decoder).

(52) Each seed matrix (of a sequence of updated seed matrices) will be denoted as A(t.sub.j), or P.sub.k(t.sub.j) if it is a primitive matrix, where t.sub.j is the time (in the program) corresponding to the seed matrix (i.e., the time corresponding to the “j”th seed matrix). Where the seed matrix is implemented as a cascade of primitive matrices, P.sub.k(t.sub.j), the index k indicates the position in the cascade of each primitive matrix. Typically, the “k”th matrix, P.sub.k(t.sub.j), in a cascade of primitive matrices operates on the “k”th channel.

(53) When the linear transformation (e.g., downmix specification), A(t), is rapidly varying, an encoder (e.g., a conventional encoder) would need to transmit updated seed matrices frequently in order to achieve a close approximation of A(t).

(54) Consider a sequence of primitive matrices P.sub.k(t1), P.sub.k(t2), P.sub.k(t3), . . . , which operate on the same channel k but at different time instants t1, t2, t3, . . . . Rather than send updated primitive matrices at each of these instants, an embodiment of the inventive method sends, at time t1 (i.e., includes in an encoded bitstream in a position corresponding to time t1), a seed primitive matrix P.sub.k(t1) and a seed delta matrix Δ.sub.k(t1) that defines the rate of change of matrix coefficients. For example, the seed primitive matrix and seed delta matrix may have the form:

(55) P.sub.k(t1) is the N×N identity matrix with row k replaced by the row [α.sub.0, α.sub.1, . . . , α.sub.k, . . . , α.sub.N-1], and Δ.sub.k(t1) is the N×N zero matrix with row k replaced by the row [δ.sub.0, δ.sub.1, . . . , δ.sub.k, . . . , δ.sub.N-1]:

    P.sub.k(t1) = [ 1        0        . . .    0         ]
                  [          . . .                       ]
                  [ α.sub.0  α.sub.1  . . .  α.sub.N-1   ]
                  [          . . .                       ]
                  [ 0        0        . . .    1         ]

    Δ.sub.k(t1) = [ 0        0        . . .    0         ]
                  [          . . .                       ]
                  [ δ.sub.0  δ.sub.1  . . .  δ.sub.N-1   ]
                  [          . . .                       ]
                  [ 0        0        . . .    0         ]
Since P.sub.k(t1) is a primitive matrix, it is identical to the identity matrix of dimension N×N except for one (non-trivial) row (i.e., the row comprising elements α.sub.0, α.sub.1, α.sub.2, . . . α.sub.N-1 in the example). In the example, matrix Δ.sub.k(t1) comprises zeros, except for one (non-trivial) row (i.e., the row comprising elements δ.sub.0, δ.sub.1, . . . δ.sub.N-1 in the example). Element α.sub.k denotes the one of elements α.sub.0, α.sub.1, α.sub.2, . . . α.sub.N-1 which occurs on the diagonal of P.sub.k(t1), and element δ.sub.k denotes the one of elements δ.sub.0, δ.sub.1, . . . δ.sub.N-1 which occurs on the diagonal of Δ.sub.k(t1).
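The structure just described can be sketched as follows (Python; the function names and the 4×4 values are illustrative, not from the patent):

```python
# Sketch: construct an N x N seed primitive matrix (identity except row k)
# and a seed delta matrix (zero except row k), per paragraph (55).

def seed_primitive(n, k, alphas):
    p = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    p[k] = list(alphas)            # non-trivial row alpha_0 .. alpha_{N-1}
    return p

def seed_delta(n, k, deltas):
    d = [[0.0] * n for _ in range(n)]
    d[k] = list(deltas)            # non-trivial row delta_0 .. delta_{N-1}
    return d

# alpha_k = 1 makes P a unit primitive matrix; delta_k = 0 keeps every
# interpolated version a unit primitive matrix (see paragraph (57)).
P = seed_primitive(4, 2, [0.5, -0.25, 1.0, 0.125])
D = seed_delta(4, 2, [0.01, 0.0, 0.0, -0.02])
```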

(56) Thus, the primitive matrix at a time instant t (occurring after time t1) is interpolated (e.g., by stage 60 or 61 of decoder 42, or stage 110, 111, 112, or 113 of decoder 102) as:
P.sub.k(t)=P.sub.k(t1)+f(t)Δ.sub.k(t1),
where f(t) is the interpolation factor for time t, and f(t1)=0. For instance, if linear interpolation is desired, the function f(t) may be of the form f(t)=a*(t−t1), where a is a constant. If the interpolation is implemented in a decoder, the decoder must be configured to know the function f(t). For example, metadata determining the function f(t) may be delivered to the decoder with the encoded audio bitstream to be decoded and rendered.
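A minimal sketch of this interpolation rule, assuming the linear choice of f(t) (Python; all names and values are illustrative):

```python
# Sketch: P_k(t) = P_k(t1) + f(t) * Delta_k(t1), with f(t) = a * (t - t1).

def interpolate(seed, delta, f_t):
    """Elementwise seed + f(t) * delta over the full matrix."""
    return [[s + f_t * d for s, d in zip(srow, drow)]
            for srow, drow in zip(seed, delta)]

def f_linear(t, t1, a):
    return a * (t - t1)            # f(t1) = 0, so P_k(t1) is unchanged

seed  = [[1.0, 0.0], [0.4, 1.0]]   # 2x2 unit primitive matrix, row 1 non-trivial
delta = [[0.0, 0.0], [0.05, 0.0]]  # rate of change of the non-trivial row

P_t = interpolate(seed, delta, f_linear(t=8, t1=0, a=0.125))
# eight samples after t1, the off-diagonal coefficient has moved from 0.4 toward 0.45
```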

(57) The above describes the general case of interpolation of primitive matrices. With element α.sub.k equal to 1, P.sub.k(t1) is a unit primitive matrix, which is amenable to lossless inversion. However, in order to maintain losslessness at each time instant we would also need to set δ.sub.k=0, so that the interpolated primitive matrix at each instant remains a unit primitive matrix amenable to lossless inversion.

(58) Note that P.sub.k(t)x(t)=P.sub.k(t1)x(t)+f(t)(Δ.sub.k(t1)x(t)). Thus, rather than updating the seed primitive matrix at each time instant t, one could equivalently calculate two intermediate sets of channels, P.sub.k(t1)x(t) and Δ.sub.k(t1)x(t), and combine them with the interpolation factor f(t). This approach is typically less computationally expensive than updating the primitive matrix at each instant, where each delta coefficient has to be multiplied by the interpolation factor.
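The equivalence noted above can be checked numerically (Python sketch with illustrative 3×3 values):

```python
# Sketch: P_k(t) x == P_k(t1) x + f(t) * (Delta_k(t1) x), so the decoder can
# form two intermediate channel sets once and mix them per sample.

def matvec(m, x):
    return [sum(a * b for a, b in zip(row, x)) for row in m]

seed  = [[1.0, 0.0, 0.0], [0.2, 1.0, -0.1], [0.0, 0.0, 1.0]]
delta = [[0.0, 0.0, 0.0], [0.01, 0.0, 0.02], [0.0, 0.0, 0.0]]
x = [1.0, 2.0, 4.0]
f_t = 0.5

# approach 1: rebuild the interpolated matrix, then apply it
direct = matvec([[s + f_t * d for s, d in zip(sr, dr)]
                 for sr, dr in zip(seed, delta)], x)

# approach 2: two intermediate channel sets, mixed with f(t)
u, v = matvec(seed, x), matvec(delta, x)
mixed = [ui + f_t * vi for ui, vi in zip(u, v)]

assert all(abs(a - b) < 1e-9 for a, b in zip(direct, mixed))
```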

(59) Yet another equivalent approach is to split f(t) into an integer r and a fraction f(t)−r, and then achieve the required application of the interpolated primitive matrix as:
P.sub.k(t)x(t)=(P.sub.k(t1)+rΔ.sub.k(t1))x(t)+(f(t)−r)(Δ.sub.k(t1)x(t)).  (2)
This latter approach (using equation (2)) would thus be a mixture of the two approaches discussed earlier.

(60) In TrueHD, 0.833 ms (40 samples at 48 kHz) worth of audio is defined as an access unit. If the delta matrix Δ.sub.k is defined as the rate of change of the primitive matrix P.sub.k per access unit, and if we define f(t)=(t−t1)/T, where T is the length of the access unit, then r in equation (2) increases by 1 every access unit, and f(t)−r is simply a function of the offset of a sample within an access unit. Thus the fractional value f(t)−r need not necessarily be calculated and can be derived simply from a look-up table indexed by offsets within an access unit. At the end of each access unit, P.sub.k(t1)+rΔ.sub.k(t1) is updated by the addition of Δ.sub.k(t1). In general T need not correspond to an access unit and may instead be any fixed segmentation of the signal, for instance, it could be a block of length 8 samples.
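The integer/fraction bookkeeping described above can be sketched as follows (Python; the table-driven fraction follows the text, while the sample-index interface is an assumption for illustration):

```python
# Sketch: with f(t) = (t - t1) / T and T samples per access unit, the integer
# part r grows by 1 per access unit and the fraction f(t) - r depends only on
# the sample's offset within its access unit, so it can come from a table.

T = 40                                       # samples per access unit (48 kHz TrueHD)
frac_table = [off / T for off in range(T)]   # lookup table of f(t) - r values

def split_f(sample_index):
    """Return (r, f(t) - r) for the sample at the given offset from t1."""
    r, offset = divmod(sample_index, T)
    return r, frac_table[offset]

r, frac = split_f(95)    # sample 95 = 2 full access units + offset 15
```

At the end of each access unit, only the addition P.sub.k(t1)+rΔ.sub.k(t1) ← P.sub.k(t1)+(r+1)Δ.sub.k(t1) needs to be performed; the fractional weight is read from the table.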

(61) A further simplification, albeit an approximation, would be to ignore the fractional part f(t)−r altogether and periodically update P.sub.k(t1)+rΔ.sub.k(t1). This essentially yields a piece-wise constant matrix update, but without the requirement for transmitting primitive matrices often.

(62) FIG. 3 is a block diagram of circuitry employed in an embodiment of the invention to apply a 4×4 primitive matrix (implemented with finite precision arithmetic) to four channels of an audio program. The primitive matrix is a seed primitive matrix, whose one non-trivial row comprises elements α.sub.0, α.sub.1, α.sub.2, and α.sub.3. It is contemplated that four such primitive matrices, each for transforming samples of a different one of the four channels, would be cascaded to transform samples of all four of the channels. Such circuitry could be used when the primitive matrices are first updated via interpolation, and the updated primitive matrices are then applied to the audio data.

(63) FIG. 4 is a block diagram of circuitry employed in an embodiment of the invention to apply a 3×3 primitive matrix (implemented with finite precision arithmetic) to three channels of an audio program. The primitive matrix is an interpolated primitive matrix, generated in accordance with an embodiment of the invention from a seed primitive matrix P.sub.k(t1) whose one non-trivial row comprises elements α.sub.0, α.sub.1, and α.sub.2, a seed delta matrix Δ.sub.k(t1) whose non-trivial row comprises elements δ.sub.0, δ.sub.1, and δ.sub.2, and an interpolation function f(t). Thus, the primitive matrix at a time instant t (occurring after time t1) is interpolated as: P.sub.k(t)=P.sub.k(t1)+f(t)Δ.sub.k(t1), where f(t) is an interpolation factor for time t (the value of interpolation function f(t) at the time t), and f(t1)=0. It is contemplated that three such primitive matrices, each for transforming samples of a different one of the three channels, would be cascaded to transform samples of all three of the channels. Such circuitry could be used when a seed or partially updated primitive matrix is applied to the audio data, the delta matrix is applied to the audio data, and the two results are combined using the interpolation factor.

(64) The FIG. 3 circuitry is configured to apply the seed primitive matrix to four audio program channels S1, S2, S3, and S4 (i.e., to multiply samples of the channels by the matrix). More specifically, a sample of channel S1 is multiplied by coefficient α.sub.0 (identified as “m_coeff[p,0]”) of the matrix, a sample of channel S2 is multiplied by coefficient α.sub.1 (identified as “m_coeff[p,1]”) of the matrix, a sample of channel S3 is multiplied by coefficient α.sub.2 (identified as “m_coeff[p,2]”) of the matrix, and a sample of channel S4 is multiplied by coefficient α.sub.3 (identified as “m_coeff[p,3]”) of the matrix. The products are summed in summation element 10, and each sum output from element 10 is then quantized in quantization stage Qss to generate the quantized value which is the transformed version (included in channel S2′) of the sample of channel S2. In a typical implementation, each sample of each of channels S1, S2, S3, and S4 comprises 24 bits (as indicated in FIG. 3), and the output of each multiplication element comprises 38 bits (as also indicated in FIG. 3), and quantization stage Qss outputs a 24 bit quantized value in response to each 38-bit value which is input thereto.
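The FIG. 3 datapath can be sketched with integer arithmetic (Python; the 24-bit sample and 38-bit product widths are from the text, while the coefficient fixed-point format and the Qss rule shown here are assumptions for illustration):

```python
# Sketch: one non-trivial row of a primitive matrix applied as in FIG. 3 --
# multiply each channel sample by a fixed-point coefficient, sum the products
# (summation element 10), and requantize (Qss) back to the sample width.

COEFF_BITS = 14          # assumed fractional precision of m_coeff[p,i]

def qss(acc):
    """Qss stand-in: drop the coefficient fraction bits (arithmetic shift)."""
    return acc >> COEFF_BITS

def fig3_row(samples, coeffs):
    acc = sum(s * c for s, c in zip(samples, coeffs))   # wide accumulator
    return qss(acc)

samples = [100000, -250000, 50000, 8191]            # 24-bit channel samples
coeffs  = [1 << 13, 1 << 14, -(1 << 12), 0]         # 0.5, 1.0, -0.25, 0.0 in Q14
out = fig3_row(samples, coeffs)    # 0.5*100000 + 1.0*(-250000) - 0.25*50000
```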

(65) The FIG. 4 circuitry is configured to apply the interpolated primitive matrix to three audio program channels C1, C2, and C3 (i.e., to multiply samples of the channels by the matrix). More specifically, a sample of channel C1 is multiplied by coefficient α.sub.0 (identified as “m_coeff[p,0]”) of the seed primitive matrix, a sample of channel C2 is multiplied by coefficient α.sub.1 (identified as “m_coeff[p,1]”) of the seed primitive matrix, and a sample of channel C3 is multiplied by coefficient α.sub.2 (identified as “m_coeff[p,2]”) of the seed primitive matrix. The products are summed in summation element 12, and each sum output from element 12 is then added (in stage 14) to the corresponding value output from interpolation factor stage 13. The value output from stage 14 is quantized in quantization stage Qss to generate the quantized value which is the transformed version (included in channel C3′) of the sample of channel C3.

(66) The same sample of channel C1 is multiplied by coefficient δ.sub.0 (identified as “delta_cf[p,0]”) of the seed delta matrix, the sample of channel C2 is multiplied by coefficient δ.sub.1 (identified as “delta_cf[p,1]”) of the seed delta matrix, and the sample of channel C3 is multiplied by coefficient δ.sub.2 (identified as “delta_cf[p,2]”) of the seed delta matrix. The products are summed in summation element 11, and each sum output from element 11 is then quantized in quantization stage Qfine to generate a quantized value which is then multiplied (in interpolation factor stage 13) by the current value of the interpolation function, f(t).

(67) In a typical implementation of FIG. 4, each sample of each of channels C1, C2, and C3 comprises 32 bits (as indicated in FIG. 4), and the output of each of summation elements 11, 12, and 14 comprises 50 bits (as also indicated in FIG. 4), and each of quantization stages Qfine and Qss outputs a 32 bit quantized value in response to each 50-bit value which is input thereto.
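The parallel seed/delta datapath of FIG. 4 can be sketched as follows (Python; the quantizers shown are simplified stand-ins for Qfine and Qss, and all coefficient values are illustrative):

```python
# Sketch: FIG. 4 computes one dot product per matrix -- seed row (element 12)
# and delta row (element 11) -- quantizes the delta path (Qfine), scales it by
# the interpolation factor (stage 13), adds (stage 14), and requantizes (Qss).

def dot(coeffs, samples):
    return sum(c * s for c, s in zip(coeffs, samples))

def fig4_sample(samples, alpha_row, delta_row, f_t, q=1e-6):
    seed_sum  = dot(alpha_row, samples)       # summation element 12
    delta_sum = dot(delta_row, samples)       # summation element 11
    fine = round(delta_sum / q) * q           # Qfine (stand-in)
    combined = seed_sum + f_t * fine          # stages 13 and 14
    return round(combined / q) * q            # Qss -> transformed sample (C3')

y = fig4_sample([0.5, -0.25, 1.0],
                alpha_row=[0.3, 0.1, 1.0],    # non-trivial seed row
                delta_row=[0.02, 0.0, -0.01],
                f_t=0.5)
```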

(68) For example, a variation on the FIG. 4 circuit could transform a vector of samples of x audio channels, where x = 2, 4, 8, or N. A cascade of x such variations on the FIG. 4 circuit could perform matrix multiplication of such x channels by an x×x seed matrix (or an interpolated version of such a seed matrix). For example, such a cascade could implement stages 60 and 47 of decoder 42 (where x=8), or stages 61 and 48 of decoder 42 (where x=2), or stages 113 and 109 of decoder 102 (where x=N), or stages 112 and 108 of decoder 102 (where x=8), or stages 111 and 107 of decoder 102 (where x=6), or stages 110 and 106 of decoder 102 (where x=2).

(69) In the FIG. 4 embodiment, the seed primitive matrix and the seed delta matrix are applied in parallel to each set (vector) of input samples (each such vector including one sample from each of the input channels).

(70) With reference to FIG. 6, we next describe an embodiment of the invention in which the audio program to be decoded is an N-channel object-based audio program.

(71) The FIG. 6 system includes encoder 100 (an embodiment of the inventive encoder), delivery subsystem 31, and decoder 102 (an embodiment of the inventive decoder), coupled together as shown. Although subsystem 102 is referred to herein as a “decoder,” it should be understood that it may be implemented as a playback system including a decoding subsystem (configured to parse and decode a bitstream indicative of an encoded multichannel audio program) and other subsystems configured to implement rendering and at least some steps of playback of the decoding subsystem's output. Some embodiments of the invention are decoders which are not configured to perform rendering and/or playback (and which would typically be used with a separate rendering and/or playback system). Some embodiments of the invention are playback systems (e.g., a playback system including a decoding subsystem and other subsystems configured to implement rendering and at least some steps of playback of the decoding subsystem's output).

(72) In the FIG. 6 system, encoder 100 is configured to encode the N-channel object-based audio program as an encoded bitstream including four substreams, and decoder 102 is configured to decode the encoded bitstream to render either the original N-channel program (losslessly), or an 8-channel downmix of the original N-channel program, or a 6-channel downmix of the original N-channel program, or a 2-channel downmix of the original N-channel program. Encoder 100 is coupled and configured to generate the encoded bitstream and to assert the encoded bitstream to delivery system 31.

(73) Delivery system 31 is coupled and configured to deliver (e.g., by storing and/or transmitting) the encoded bitstream to decoder 102. In some embodiments, system 31 implements delivery of (e.g., transmits) an encoded multichannel audio program over a broadcast system or a network (e.g., the internet) to decoder 102. In some embodiments, system 31 stores an encoded multichannel audio program in a storage medium (e.g., a disk or set of disks), and decoder 102 is configured to read the program from the storage medium.

(74) The block labeled “InvChAssign3” in encoder 100 is configured to perform channel permutation (equivalent to multiplication by a permutation matrix) on the channels of the input program. The permutated channels then undergo encoding in stage 101, which outputs N encoded signal channels. The encoded signal channels may (but need not) correspond to playback speaker channels. The encoded signal channels are sometimes referred to as “internal” channels since a decoder (and/or rendering system) typically decodes and renders the content of the encoded signal channels to recover the input audio, so that the encoded signal channels are “internal” to the encoding/decoding system. The encoding performed in stage 101 is equivalent to multiplication of each set of samples of the permutated channels by an encoding matrix (implemented as a cascade of matrix multiplications, identified as P.sub.n.sup.−1, . . . , P.sub.1.sup.−1, P.sub.0.sup.−1).

(75) Each matrix P.sub.n.sup.−1, . . . , P.sub.1.sup.−1, and P.sub.0.sup.−1 (and thus the cascade applied by stage 101) is determined in subsystem 103, and is updated from time to time (typically infrequently) in accordance with a time-varying mix of the program's N channels to N encoded signal channels that has been specified over the relevant time interval.

(76) In variations on the exemplary embodiment of FIG. 6, the input audio program comprises an arbitrary number (N, or X, where X is greater than N) of channels. In such variations, the N multichannel audio program channels that are indicated by the encoded bitstream output from the encoder, which may be losslessly recovered by the decoder, may be N channels of audio content which have been generated from the X-channel input audio program by performing matrix operations on the X-channel input audio program to apply a time-varying mix to the X channels of the input audio program, thereby determining the encoded audio content of the encoded bitstream.

(77) Matrix determination subsystem 103 of FIG. 6 is configured to generate data indicative of the coefficients of four sets of output matrices (one set corresponding to each of four substreams of the encoded channels). Each set of output matrices is updated from time to time, so that the coefficients are also updated from time to time.

(78) One set of output matrices consists of two rendering matrices, P.sub.0.sup.2(t), P.sub.1.sup.2(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension 2×2, and is for rendering a first substream (a downmix substream) comprising two of the encoded audio channels of the encoded bitstream (to render a two-channel downmix of the input audio). Another set of output matrices may consist of as many as six rendering matrices, P.sub.0.sup.6(t), P.sub.1.sup.6(t), P.sub.2.sup.6(t), P.sub.3.sup.6(t), P.sub.4.sup.6(t), and P.sub.5.sup.6(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension 6×6, and is for rendering a second substream (a downmix substream) comprising six of the encoded audio channels of the encoded bitstream (to render a six-channel downmix of the input audio). Another set of output matrices consists of as many as eight rendering matrices, P.sub.0.sup.8(t), P.sub.1.sup.8(t), . . . , P.sub.7.sup.8(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension 8×8, and is for rendering a third substream (a downmix substream) comprising eight of the encoded audio channels of the encoded bitstream (to render an eight-channel downmix of the input audio).

(79) The other set of output matrices consists of N rendering matrices, P.sub.0(t), P.sub.1(t), . . . , P.sub.n(t), each of which is a primitive matrix (preferably a unit primitive matrix) of dimension N×N, and is for rendering a fourth substream comprising all of the encoded audio channels of the encoded bitstream (for lossless recovery of the N-channel input audio program). For each time, t, a cascade of the rendering matrices, P.sub.0.sup.2(t), P.sub.1.sup.2(t), can be interpreted as a rendering matrix for the channels of the first substream, a cascade of the rendering matrices, P.sub.0.sup.6(t), P.sub.1.sup.6(t), . . . , P.sub.5.sup.6(t), can also be interpreted as a rendering matrix for the channels of the second substream, a cascade of the rendering matrices, P.sub.0.sup.8(t), P.sub.1.sup.8(t), . . . , P.sub.7.sup.8(t), can also be interpreted as a rendering matrix for the channels of the third substream, and a cascade of the rendering matrices, P.sub.0(t), P.sub.1(t), . . . , P.sub.n(t), is equivalent to a rendering matrix for the channels of the fourth substream.
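The interpretation of a cascade as a single rendering matrix follows because applying the primitive matrices in sequence is the same as multiplying by their product. A small sketch (Python, with illustrative 2×2 values for the first substream):

```python
# Sketch: the cascade P1(P0 x) equals (P1 * P0) x, so a cascade of primitive
# matrices can be interpreted as one rendering matrix for the substream.

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# two 2x2 unit primitive matrices, each non-trivial in a different row
P0 = [[1.0, 0.5], [0.0, 1.0]]
P1 = [[1.0, 0.0], [-0.25, 1.0]]

render = matmul(P1, P0)   # applying P0 then P1 == multiplying by P1 * P0 once
```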

(80) The coefficients (of each rendering matrix) that are output from subsystem 103 to packing subsystem 104 are metadata indicating relative or absolute gain of each channel to be included in a corresponding mix of channels of the program. The coefficients of each rendering matrix (for an instant of time during the program) represent how much each of the channels of a mix should contribute to the mix of audio content (at the corresponding instant of the rendered mix) indicated by the speaker feed for a particular playback system speaker.

(81) The N encoded audio channels (output from encoding stage 101), the output matrix coefficients (generated by subsystem 103), and typically also additional data (e.g., for inclusion as metadata in the encoded bitstream) are asserted to packing subsystem 104, which assembles them into the encoded bitstream which is then asserted to delivery system 31.

(82) The encoded bitstream includes data indicative of the N encoded audio channels, the four sets of time-varying output matrices (one set corresponding to each of four substreams of the encoded channels), and typically also additional data (e.g., metadata regarding the audio content).

(83) Stage 103 of encoder 100 updates each set of output matrices (e.g., set P.sub.0.sup.2, P.sub.1.sup.2, or set P.sub.0, P.sub.1, . . . , P.sub.n) from time to time. The first set of matrices P.sub.0.sup.2, P.sub.1.sup.2 that is output (at a first time, t1) is a seed matrix (implemented as a cascade of primitive matrices, e.g., unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of two channels of the encoded output of stage 101 corresponding to the first time). The first set of matrices P.sub.0.sup.6(t), P.sub.1.sup.6(t), . . . , P.sub.5.sup.6(t) that is output (at time t1) is a seed matrix (implemented as a cascade of primitive matrices, e.g., unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of six channels of the encoded output of stage 101 corresponding to the first time). The first set of matrices P.sub.0.sup.8(t), P.sub.1.sup.8(t), . . . , P.sub.7.sup.8(t) that is output (at time t1) is a seed matrix (implemented as a cascade of primitive matrices, e.g., unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of eight channels of the encoded output of stage 101 corresponding to the first time). The first set of matrices P.sub.0, P.sub.1, . . . , P.sub.n that is output (at time t1) is a seed matrix (implemented as a cascade of unit primitive matrices) which determines a linear transformation to be performed at the first time during the program (i.e., on samples of all channels of the encoded output of stage 101 corresponding to the first time).

(84) Each updated set of matrices P.sub.0.sup.2, P.sub.1.sup.2 that is output from stage 103 is an updated seed matrix (implemented as a cascade of primitive matrices, which may also be referred to as a cascade of seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of two channels of the encoded output of stage 101 corresponding to the update time). Each updated set of matrices P.sub.0.sup.6(t), P.sub.1.sup.6(t), . . . , P.sub.5.sup.6(t) that is output from stage 103 is an updated seed matrix (implemented as a cascade of primitive matrices, which may also be referred to as a cascade of seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of six channels of the encoded output of stage 101 corresponding to the update time). Each updated set of matrices P.sub.0.sup.8(t), P.sub.1.sup.8(t), . . . , P.sub.7.sup.8(t) that is output from stage 103 is an updated seed matrix (implemented as a cascade of primitive matrices, which may also be referred to as a cascade of seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of eight channels of the encoded output of stage 101 corresponding to the update time). Each updated set of matrices P.sub.0, P.sub.1, . . . , P.sub.n that is output from stage 103 is also an updated seed matrix (implemented as a cascade of unit primitive matrices, which may also be referred to as a cascade of unit seed primitive matrices) which determines a linear transformation to be performed at the update time during the program (i.e., on samples of all channels of the encoded output of stage 101 corresponding to the update time).

(85) Output stage 103 is also configured to output interpolation values, which (with an interpolation function for each seed matrix) enable decoder 102 to generate interpolated versions of the seed matrices (corresponding to times after the first time, t1, and between the update times). The interpolation values (which may include data indicative of each interpolation function) are included by stage 104 in the encoded bitstream output from encoder 100. Examples of such interpolation values are described elsewhere herein (the interpolation values may include a delta matrix for each seed matrix).

(86) With reference to decoder 102 of FIG. 6, parsing subsystem 105 is configured to accept (read or receive) the encoded bitstream from delivery system 31 and to parse the encoded bitstream. Subsystem 105 is operable to assert a first substream (comprising only two encoded channels of the encoded bitstream), and output matrices (P.sub.0.sup.2, P.sub.1.sup.2) corresponding to the first substream, to matrix multiplication stage 106 (for processing which results in a 2-channel downmix presentation of content of the original N-channel input program). Subsystem 105 is operable to assert a second substream (comprising six encoded channels of the encoded bitstream), and output matrices (P.sub.0.sup.6(t), P.sub.1.sup.6(t), . . . , P.sub.n.sup.6(t)) corresponding to the second substream, to matrix multiplication stage 107 (for processing which results in a 6-channel downmix presentation of content of the original N-channel input program). Subsystem 105 is operable to assert a third substream (comprising eight encoded channels of the encoded bitstream), and output matrices (P.sub.0.sup.8(t), P.sub.1.sup.8(t), . . . , P.sub.n.sup.8(t)) corresponding to the third substream, to matrix multiplication stage 108 (for processing which results in an eight-channel downmix presentation of content of the original N-channel input program). Subsystem 105 is also operable to assert the fourth (top) substream (comprising all encoded channels of the encoded bitstream), and corresponding output matrices (P.sub.0, P.sub.1, . . . , P.sub.n), to matrix multiplication stage 109 (for processing which results in lossless reproduction of the original N-channel program).

(87) Interpolation stage 113 is coupled to receive each seed matrix for the fourth substream (i.e., the initial set of primitive matrices, P.sub.0, P.sub.1, . . . , P.sub.n, for time t1, and each updated set of primitive matrices, P.sub.0, P.sub.1, . . . , P.sub.n) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each seed matrix. Stage 113 is coupled and configured to pass through (to stage 109) each such seed matrix, and to generate (and assert to stage 109) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).

(88) Interpolation stage 112 is coupled to receive each seed matrix for the third substream (i.e., the initial set of primitive matrices, P.sub.0.sup.8, P.sub.1.sup.8, . . . , P.sub.n.sup.8, for time t1, and each updated set of primitive matrices, P.sub.0.sup.8, P.sub.1.sup.8, . . . , P.sub.n.sup.8) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each such seed matrix. Stage 112 is coupled and configured to pass through (to stage 108) each such seed matrix, and to generate (and assert to stage 108) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).

(89) Interpolation stage 111 is coupled to receive each seed matrix for the second substream (i.e., the initial set of primitive matrices, P.sub.0.sup.6, P.sub.1.sup.6, . . . , P.sub.n.sup.6, for time t1, and each updated set of primitive matrices, P.sub.0.sup.6, P.sub.1.sup.6, . . . , P.sub.n.sup.6) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each such seed matrix. Stage 111 is coupled and configured to pass through (to stage 107) each such seed matrix, and to generate (and assert to stage 107) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).

(90) Interpolation stage 110 is coupled to receive each seed matrix for the first substream (i.e., the initial set of primitive matrices, P.sub.0.sup.2 and P.sub.1.sup.2, for time t1, and each updated set of primitive matrices, P.sub.0.sup.2 and P.sub.1.sup.2) included in the encoded bitstream, and the interpolation values (also included in the encoded bitstream) for generating interpolated versions of each such seed matrix. Stage 110 is coupled and configured to pass through (to stage 106) each such seed matrix, and to generate (and assert to stage 106) interpolated versions of each such seed matrix (each interpolated version corresponding to a time after the first time, t1, and before the first seed matrix update time, or between subsequent seed matrix update times).
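The interpolation performed by stages 110-113 can be sketched as follows. This is an illustrative NumPy model, not the TrueHD implementation; the seed, delta, and f(t) values are invented for the example:

```python
import numpy as np

def interpolate_seed(p_seed, delta, f_t):
    """Interpolated version of a seed primitive matrix: P_seed + f(t) * Delta.
    Both matrices have the same single non-trivial row, so the result is
    again a primitive matrix operating on the same channel."""
    return p_seed + f_t * delta

# Seed primitive matrix whose non-trivial row is row 1, and its delta
# (the per-access-unit rate of change of that row's coefficients):
p_seed = np.array([[1.0,  0.0,  0.0],
                   [0.5,  1.0, -0.2],
                   [0.0,  0.0,  1.0]])
delta  = np.array([[0.0,  0.0,  0.0],
                   [0.01, 0.0,  0.005],
                   [0.0,  0.0,  0.0]])

p_10 = interpolate_seed(p_seed, delta, f_t=10)  # 10 access units after t1
# Only row 1 has moved; the unit diagonal is preserved.
```

A decoder stage would apply `p_10` (rather than `p_seed`) to samples falling between two seed matrix update times.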

(91) Stage 106 multiplies each vector of two audio samples of the two encoded channels of the first substream by the most recently updated cascade of the matrices P.sub.0.sup.2 and P.sub.1.sup.2 (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0.sup.2 and P.sub.1.sup.2 generated by stage 110), and each resulting set of two linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block titled “ChAssign0” to yield each pair of samples of the required 2-channel downmix of the N original audio channels. The cascade of matrixing operations performed in encoder 100 and decoder 102 is equivalent to application of a downmix matrix specification that transforms the N input audio channels to the 2-channel downmix.

(92) Stage 107 multiplies each vector of six audio samples of the six encoded channels of the second substream by the most recently updated cascade of the matrices P.sub.0.sup.6, . . . , P.sub.n.sup.6 (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0.sup.6, . . . , P.sub.n.sup.6 generated by stage 111), and each resulting set of six linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block titled “ChAssign1” to yield each set of samples of the required 6 channel downmix of the N original audio channels. The cascade of matrixing operations performed in encoder 100 and decoder 102 is equivalent to application of a downmix matrix specification that transforms the N input audio channels to the 6-channel downmix.

(93) Stage 108 multiplies each vector of eight audio samples of the eight encoded channels of the third substream by the most recently updated cascade of the matrices P.sub.0.sup.8, . . . , P.sub.n.sup.8 (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0.sup.8, . . . , P.sub.n.sup.8 generated by stage 112), and each resulting set of eight linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block titled “ChAssign2” to yield each set of samples of the required eight-channel downmix of the N original audio channels. The cascade of matrixing operations performed in encoder 100 and decoder 102 is equivalent to application of a downmix matrix specification that transforms the N input audio channels to the 8-channel downmix.

(94) Stage 109 multiplies each vector of N audio samples (one from each of the full set of N encoded channels of the encoded bitstream) by the most recently updated cascade of the matrices P.sub.0, P.sub.1, . . . , P.sub.n (e.g., a cascade of the most recent interpolated versions of matrices P.sub.0, P.sub.1, . . . , P.sub.n generated by stage 113) and each resulting set of N linearly transformed samples undergoes channel permutation (equivalent to multiplication by a permutation matrix) represented by the block titled “ChAssign3” to yield each set of N samples of the losslessly recovered original N-channel program. In order that the output N channel audio is exactly the same as the input N channel audio (to achieve the “lossless” characteristic of the system), the matrixing operations performed in encoder 100 should be exactly (including quantization effects) the inverse of the matrixing operations performed in decoder 102 on the fourth substream of the encoded bitstream (i.e., each multiplication in stage 109 of decoder 102 by a cascade of matrices P.sub.0, P.sub.1, . . . , P.sub.n). Thus, in FIG. 6, the matrixing operations in stage 103 of encoder 100 are identified as a cascade of the inverse matrices of the matrices P.sub.0, P.sub.1, . . . , P.sub.n, in the opposite sequence applied in stage 109 of decoder 102, namely: P.sub.n.sup.−1, . . . , P.sub.1.sup.−1,P.sub.0.sup.−1.
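The inverse relationship between the encoder and decoder cascades can be illustrated with random unit primitive matrices. This is a floating-point NumPy sketch; bit-exact TrueHD losslessness additionally relies on the quantized structure of unit primitive matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit_primitive(n, row):
    """A unit primitive matrix: identity except for one non-trivial row,
    whose diagonal entry is still 1 (so its determinant is 1)."""
    p = np.eye(n)
    p[row] = rng.normal(size=n)
    p[row, row] = 1.0
    return p

n = 4
mats = [unit_primitive(n, r) for r in range(n)]  # P0, P1, ..., Pn

# Decoder (stage 109) applies the cascade P0 P1 ... Pn; the encoder
# (stage 103) applies the inverses in the opposite sequence.
decoder_cascade = np.linalg.multi_dot(mats)
encoder_cascade = np.linalg.multi_dot([np.linalg.inv(p) for p in reversed(mats)])

x = rng.normal(size=n)                   # one sample per input channel
recovered = decoder_cascade @ (encoder_cascade @ x)
assert np.allclose(recovered, x)         # the program is recovered
```

Because each unit primitive matrix has determinant 1, every cascade built from them is invertible regardless of the off-diagonal coefficients.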

(95) In some implementations, parsing subsystem 105 is configured to extract a check word from the encoded bitstream, and stage 109 is configured to verify whether the N channels (of at least one segment of a multichannel audio program) recovered by stage 109 have been correctly recovered, by comparing a second check word derived (e.g., by stage 109) from audio samples generated by stage 109 against the check word extracted from the encoded bitstream.

(96) Stage “ChAssign3” of decoder 102 applies to the output of stage 109 the inverse of the channel permutation applied by encoder 100 (i.e., the permutation matrix represented by stage “ChAssign3” of decoder 102 is the inverse of that represented by element “InvChAssign3” of encoder 100).

(97) In variations on subsystems 100 and 102 of the system shown in FIG. 6, one or more of the elements are omitted or additional audio data processing units are included.

(98) The rendering matrix coefficients P.sub.0.sup.8, . . . , P.sub.n.sup.8 (or P.sub.0.sup.6, . . . , P.sub.n.sup.6, or P.sub.0.sup.2 and P.sub.1.sup.2) asserted to stage 108 (or 107 or 106) of decoder 102 are metadata (e.g., spatial position metadata) of the encoded bitstream which are indicative of (or may be processed with other data to be indicative of) the relative or absolute gain of each speaker channel to be included in a downmix of the channels of the original N-channel content encoded by encoder 100.

(99) In contrast, the configuration of the playback speaker system to be employed to render a full set of channels of an object-based audio program (which is losslessly recovered by decoder 102) is typically unknown at the time the encoded bitstream is generated by encoder 100. The N channels losslessly recovered by decoder 102 may need to be processed (e.g., in a rendering system included in decoder 102 (but not shown in FIG. 6) or coupled to decoder 102) with other data (e.g., data indicative of configuration of a particular playback speaker system) to determine how much each channel of the program should contribute to a mix of audio content (at each instant of the rendered mix) indicated by the speaker feed for a particular playback system speaker. Such a rendering system may process spatial trajectory metadata in (or associated with) each losslessly recovered object channel, to determine the speaker feeds for the speakers of the particular playback speaker system to be employed for playback of the losslessly recovered content.

(100) In some embodiments of the inventive encoder, the encoder is provided with (or generates) a dynamically varying specification A(t) that specifies how to transform all channels of an N-channel audio program (e.g., an object-based audio program) into a set of N encoded channels, and at least one dynamically varying downmix specification that specifies each downmix of content of the N encoded channels to an M1-channel presentation (where M1 is less than N, e.g., M1=2, or M1=8, when N is greater than 8). In some embodiments, the encoder's job is to pack the encoded audio and data indicative of each such dynamically varying specification into an encoded bitstream having a predetermined format (e.g., a TrueHD bitstream). For example, this may be done such that a legacy decoder (e.g., a legacy TrueHD decoder) is able to recover at least one downmix presentation (having M1 channels), while an enhanced decoder may be used to recover (losslessly) the original N-channel audio program. Given the dynamically varying specifications, the encoder may assume that the decoder will determine interpolated primitive matrices P.sub.0, P.sub.1, . . . , P.sub.n from interpolation values (e.g., seed primitive matrix and seed delta matrix information) included in the encoded bitstream to be delivered to the decoder. The decoder then performs interpolation to determine the interpolated primitive matrices which invert the encoder's operations that produced the encoded audio content of the encoded bitstream (e.g., to recover losslessly the content that was encoded, by undergoing matrix operations, in the encoder).
Optionally the encoder may choose the primitive matrices for the lower substreams (i.e., the substreams indicative of downmixes of content of a top, N-channel substream) to be non-interpolated primitive matrices (and include a sequence of sets of such non-interpolated primitive matrices in the encoded bitstream), while also assuming that the decoder will determine interpolated primitive matrices (P.sub.0, P.sub.1, . . . , P.sub.n) for lossless recovery of the content of the top (N-channel) substream from interpolation values (e.g., seed primitive matrix and seed delta matrix information) included in the encoded bitstream to be delivered to the decoder.

(101) For example, an encoder (e.g., stage 44 of encoder 40, or stage 103 of encoder 100) may be configured to choose seed primitive and seed delta matrices (for use with an interpolation function, f(t)) by sampling the specification A(t) at different time instants t1, t2, t3, . . . (which may be closely spaced), deriving the corresponding seed primitive matrices (e.g., as in a conventional TrueHD encoder), and then calculating the rate of change of individual elements in the seed primitive matrices to determine the interpolation values (e.g., “delta” information indicative of a sequence of seed delta matrices). The first set of seed primitive matrices would be the primitive matrices derived from the specification for the first of such time instants, A(t1). It is possible that a subset of the primitive matrices may not change at all over time, in which case the decoder would respond to appropriate control information in the encoded bitstream by zeroing out any corresponding delta information (i.e., by setting the rate of change of such subset of primitive matrices to zero).
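A sketch of this seed/delta derivation, with hypothetical 2×2 matrices and K = 15 access units between the sampled design times:

```python
import numpy as np

def seed_and_delta(p_t1, p_t2, k):
    """Seed primitive matrix and its "delta": the per-access-unit rate of
    change of its elements between two sampled design times, k access
    units apart. If the matrix does not change over time, the delta is
    all zeros (and control information can signal that it is absent)."""
    p_t1, p_t2 = np.asarray(p_t1), np.asarray(p_t2)
    return p_t1, (p_t2 - p_t1) / k

seed, delta = seed_and_delta([[1.0, 0.0], [0.5, 1.0]],  # derived from A(t1)
                             [[1.0, 0.0], [0.8, 1.0]],  # derived from A(t2)
                             k=15)
# delta[1][0] == (0.8 - 0.5) / 15 == 0.02 per access unit
```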

(102) Variations on the FIG. 6 embodiment of the inventive encoder and decoder may omit interpolation for some (i.e., at least one) of the substreams of the encoded bitstream. For example, interpolation stages 110, 111, and 112 may be omitted, and the corresponding matrices P.sub.0.sup.2, P.sub.1.sup.2, and P.sub.0.sup.6, P.sub.1.sup.6, . . . P.sub.n.sup.6, and P.sub.0.sup.8, P.sub.1.sup.8, . . . P.sub.n.sup.8, may be updated (in the encoded bitstream) with sufficient frequency so that interpolation between instants at which they are updated is unnecessary. For another example, if matrices P.sub.0.sup.6, P.sub.1.sup.6, . . . P.sub.n.sup.6, are updated with sufficient frequency so that interpolation at times between the updates is unnecessary, interpolation stage 111 is unnecessary and may be omitted. Thus, a conventional decoder (not configured in accordance with the invention to perform interpolation) could render the 6-channel downmix presentation in response to the encoded bitstream.

(103) As noted above, dynamic rendering matrix specifications (e.g., A(t)) may stem not only from the need to render object-based audio programs, but also from the need to implement clip protection. Interpolated primitive matrices may enable a faster ramp to and release from clip protection of a downmix, as well as lowering the data rate required to convey the matrixing coefficients.

(104) We next describe an example of operation of an implementation of the FIG. 6 system. In this case, the N-channel input program is a three-channel object-based audio program including a bed channel, C, and two object channels, U and V. It is desired that the program be encoded for transport via a TrueHD stream having two substreams, such that a 2-channel downmix (a rendering of the program to a two-channel speaker setup) can be retrieved using the first substream and the original 3-channel input program can be recovered losslessly by using both substreams.

(105) Let the rendering equation (or downmix equation) from the input program to the 2 channel mix be given by:

(106) A.sub.2(t) = [ 0.707 sin(vt) cos(vt) ; 0.707 cos(vt) sin(vt) ] (rows separated by semicolons),
where the first column corresponds to the gains of the bed channel (a center channel, C) that feeds equally into the L and R channels. The second and third columns, respectively, correspond to the object channel U and the object channel V. The first row corresponds to the L channel of the 2-channel downmix and the second row corresponds to the R channel. The two objects are moving towards each other at a speed determined by v.

(107) We will examine the rendering matrices at three different time instants t1, t2 and t3.

(108) In this example we will assume t1=0, i.e.,

A.sub.2(t1) = [ 0.707 0 1 ; 0.707 1 0 ].
In other words at t1, object U completely feeds into R and object V completely mixes down into L. As the objects move towards each other their contribution to the farther speaker increases. To develop the example further, let us say that

(110) v = (pi/4) × (1/(40T)),
where T is the length of an access unit (typically 0.8333 ms, or 40 samples at a 48 kHz sampling rate). Thus at t=40T the two objects are at the center of the scene. We will now consider t2=15T and t3=30T, so that:

(111)
A.sub.2(t2) = [ 0.707 0.2903 0.9569 ; 0.707 0.9569 0.2903 ]
A.sub.2(t3) = [ 0.707 0.5556 0.8315 ; 0.707 0.8315 0.5556 ]
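These entries can be checked directly from v = (pi/4) × (1/(40T)). The following quick Python check (with T normalized to 1) evaluates A.sub.2 at 15 and 30 access units after t1 = 0:

```python
import math

T = 1.0                            # one access unit (40 samples at 48 kHz)
v = (math.pi / 4) * (1 / (40 * T))

def A2(t):
    """The time-varying 2-channel downmix specification from the example."""
    return [[0.707, math.sin(v * t), math.cos(v * t)],
            [0.707, math.cos(v * t), math.sin(v * t)]]

for t, expect in [(15 * T, (0.2903, 0.9569)), (30 * T, (0.5556, 0.8315))]:
    s, c = A2(t)[0][1], A2(t)[0][2]
    assert (round(s, 4), round(c, 4)) == expect
```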

(112) Let us consider decomposing the provided specification A.sub.2(t) into input and output primitive matrices. For the sake of simplicity let us assume that matrices P.sub.0.sup.2, P.sub.1.sup.2 are identity matrices and chAssign0 (in the decoder 102) is the identity channel assignment, i.e., equal to the trivial permutation (identity matrix).

(113) We can see that:

(114) [ 0.707 0 1 ; 0.707 1 0 ; 1 -1.414 4.243 ] = P.sub.0.sup.-1(t1) · P.sub.1.sup.-1(t1) · P.sub.2.sup.-1(t1) · InvChAssign1(t1) (rows separated by semicolons), where
P.sub.0.sup.-1(t1) = [ 1 0 0 ; 2 1 -0.707 ; 0 0 1 ],
P.sub.1.sup.-1(t1) = [ 1 -2 0.707 ; 0 1 0 ; 0 0 1 ],
P.sub.2.sup.-1(t1) = [ 1 0 0 ; 0 1 0 ; -1.414 4.243 1 ], and
InvChAssign1(t1) = [ 0 1 0 ; 0 0 1 ; 1 0 0 ].
The first two rows of the above product are exactly the specification A.sub.2(t1). In other words, the primitive matrices P.sub.0.sup.−1(t1), P.sub.1.sup.−1(t1), P.sub.2.sup.−1(t1), and the channel assignment indicated by InvChAssign1(t1) together transform the input channels C, Object U, and Object V into three internal channels, the first two of which are exactly the required downmixes L and R. Thus the above decomposition of A.sub.2(t1) into the primitive matrices P.sub.0.sup.−1(t1), P.sub.1.sup.−1(t1), P.sub.2.sup.−1(t1), and channel assignment InvChAssign1(t1) is a valid choice of input primitive matrices if the output primitive matrices and channel assignment for the two-channel presentation have been chosen to be identity matrices. Note that the input primitive matrices are losslessly invertible to retrieve C, Object U and Object V by a decoder that operates on all three internal channels. A two-channel decoder, however, would only need internal channels 1 and 2 and apply the output primitive matrices P.sub.0.sup.2, P.sub.1.sup.2 and chAssign0, which in this case are all identity.
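The decomposition can be verified numerically. The following NumPy check uses the matrices above; the small residuals are due to the rounded coefficients (e.g., 0.707 ≈ 1/√2):

```python
import numpy as np

p0_inv = np.array([[1, 0, 0], [2, 1, -0.707], [0, 0, 1]])
p1_inv = np.array([[1, -2, 0.707], [0, 1, 0], [0, 0, 1]])
p2_inv = np.array([[1, 0, 0], [0, 1, 0], [-1.414, 4.243, 1]])
inv_ch_assign = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])

product = p0_inv @ p1_inv @ p2_inv @ inv_ch_assign
# The first two rows reproduce A_2(t1) to within the rounding of the
# published coefficients:
assert np.allclose(product[:2], [[0.707, 0, 1], [0.707, 1, 0]], atol=1e-3)
```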

(115) Similarly we can identify:
[ 0.707 0.2903 0.9569 ; 0.707 0.9569 0.2903 ; 1 -1.004 4.890 ] = P.sub.0.sup.-1(t2) · P.sub.1.sup.-1(t2) · P.sub.2.sup.-1(t2) · InvChAssign1(t2), with
P.sub.0.sup.-1(t2) = [ 1 0 0 ; 1.666 1 -0.4713 ; 0 0 1 ],
P.sub.1.sup.-1(t2) = [ 1 -2.5 0.707 ; 0 1 0 ; 0 0 1 ],
P.sub.2.sup.-1(t2) = [ 1 0 0 ; 0 1 0 ; -1.003 4.889 1 ], and
InvChAssign1(t2) = [ 0 1 0 ; 0 0 1 ; 1 0 0 ],
where the first two rows are identical to A(t2), and

(116) [ 0.707 0.5556 0.8315 ; 0.707 0.8315 0.5556 ; 1 -0.628 7.717 ] = P.sub.0.sup.-1(t3) · P.sub.1.sup.-1(t3) · P.sub.2.sup.-1(t3) · InvChAssign1(t3), with
P.sub.0.sup.-1(t3) = [ 1 0 0 ; 1.2759 1 -0.1950 ; 0 0 1 ],
P.sub.1.sup.-1(t3) = [ 1 -4.624 0.707 ; 0 1 0 ; 0 0 1 ],
P.sub.2.sup.-1(t3) = [ 1 0 0 ; 0 1 0 ; -0.628 7.717 1 ], and
InvChAssign1(t3) = [ 0 1 0 ; 0 0 1 ; 1 0 0 ],
where the first two rows are identical to A(t3).

(117) A legacy TrueHD encoder (which does not implement the present invention) may choose to transmit the (inverses of the) primitive matrices designed above at t1, t2, and t3, i.e., {P.sub.0(t1),P.sub.1(t1),P.sub.2(t1)}, {P.sub.0(t2),P.sub.1(t2),P.sub.2(t2)}, {P.sub.0(t3),P.sub.1(t3),P.sub.2(t3)}. In this case the specification at any time t between t1 and t2 is approximated by A(t1), and between t2 and t3 by A(t2).

(118) In the exemplary embodiment of the FIG. 6 system, the primitive matrix P.sub.0.sup.−1(t) at t=t1, t=t2, or t=t3 operates on the same channel (channel 2), i.e., the non-trivial row in all three cases is the second row. The same is true of P.sub.1.sup.−1(t) and P.sub.2.sup.−1(t). Further, InvChAssign1 is the same at each of the time instants.

(119) Thus, to implement encoding by the exemplary embodiment of encoder 100 of FIG. 6, we can calculate the following delta matrices:

(120)
Δ.sub.0(t1) = (P.sub.0(t2) − P.sub.0(t1))/15 = [ 0 0 0 ; 0.0222 0 -0.0157 ; 0 0 0 ]
Δ.sub.1(t1) = (P.sub.1(t2) − P.sub.1(t1))/15 = [ 0 0.0333 0 ; 0 0 0 ; 0 0 0 ]
Δ.sub.2(t1) = (P.sub.2(t2) − P.sub.2(t1))/15 = [ 0 0 0 ; 0 0 0 ; -0.0274 -0.0431 0 ]
and
Δ.sub.0(t2) = (P.sub.0(t3) − P.sub.0(t2))/15 = [ 0 0 0 ; 0.0261 0 -0.0184 ; 0 0 0 ]
Δ.sub.1(t2) = (P.sub.1(t3) − P.sub.1(t2))/15 = [ 0 0.1416 0 ; 0 0 0 ; 0 0 0 ]
Δ.sub.2(t2) = (P.sub.2(t3) − P.sub.2(t2))/15 = [ 0 0 0 ; 0 0 0 ; -0.0250 -0.1885 0 ]

(121) In contrast to the legacy TrueHD encoder, an interpolated-matrixing enabled TrueHD encoder (the exemplary embodiment of encoder 100 of FIG. 6) may choose to send the seed (primitive and delta) matrices {P.sub.0(t1),P.sub.1(t1),P.sub.2(t1)},{Δ.sub.0(t1),Δ.sub.1(t1),Δ.sub.2(t1)},{Δ.sub.0(t2),Δ.sub.1(t2),Δ.sub.2(t2)}.
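These deltas follow directly from the coefficients of the worked example (recall that each primitive matrix P negates the off-diagonal entries of its inverse P.sup.-1 shown earlier). A quick Python check:

```python
K = 15                                  # t2 - t1, in access units

# Non-trivial rows of P1 and P2 (not their inverses) at t1 and t2:
p1_row_t1, p1_row_t2 = [1, 2, -0.707], [1, 2.5, -0.707]
p2_row_t1, p2_row_t2 = [1.414, -4.243, 1], [1.003, -4.889, 1]

delta1 = [round((b - a) / K, 4) for a, b in zip(p1_row_t1, p1_row_t2)]
delta2 = [round((b - a) / K, 4) for a, b in zip(p2_row_t1, p2_row_t2)]
# delta1 -> [0.0, 0.0333, 0.0]; delta2 -> [-0.0274, -0.0431, 0.0]
```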

(122) The primitive matrices at any intermediate time instant are derived by interpolation. The achieved downmix equations at a given time t between t1 and t2 can be derived as the first two rows of the product:

(123) (P.sub.0.sup.-1(t1) − Δ.sub.0(t1)·(t/T)) · (P.sub.1.sup.-1(t1) − Δ.sub.1(t1)·(t/T)) · (P.sub.2.sup.-1(t1) − Δ.sub.2(t1)·(t/T)) · [ 0 1 0 ; 0 0 1 ; 1 0 0 ], and
between t2 and t3, as

(124) (P.sub.0.sup.-1(t2) − Δ.sub.0(t2)·((t − t2)/T)) · (P.sub.1.sup.-1(t2) − Δ.sub.1(t2)·((t − t2)/T)) · (P.sub.2.sup.-1(t2) − Δ.sub.2(t2)·((t − t2)/T)) · [ 0 1 0 ; 0 0 1 ; 1 0 0 ].

(125) In the above, the matrices {P.sub.0(t2),P.sub.1(t2),P.sub.2(t2)} are not actually transmitted, but are derived as the primitive matrices at the last point of interpolation with the delta matrices {Δ.sub.0(t1),Δ.sub.1(t1),Δ.sub.2(t1)}.
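For a single coefficient this looks as follows (a scalar sketch using the rounded delta value from the example above):

```python
p1_coeff_t1 = 2.0       # the non-trivial (1,2) coefficient of P1 at t1
delta1 = 0.0333         # the corresponding entry of Delta1(t1)

# Last point of interpolation, 15 access units after t1:
p1_coeff_t2 = p1_coeff_t1 + 15 * delta1
assert abs(p1_coeff_t2 - 2.5) < 1e-2    # ≈ 2.5, the value designed for t2
```

The small residual comes from the rounding of the delta to four decimal places.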

(126) We thus know the achieved downmix equations at each instant “t” for both of the above scenarios, and can therefore calculate the mismatch between the approximation at a given time “t” and the true specification for that time instant. FIG. 7 is a graph of the sum of squared errors between the achieved specification and the true specification at different instants of time t, using interpolation of primitive matrices (the curve labeled “Interpolated Matrixing”) and with piecewise-constant (non-interpolated) primitive matrices (the curve labeled “Non-interpolated Matrixing”). It is apparent from FIG. 7 that interpolated matrixing achieves the specification A.sub.2(t) significantly more closely than non-interpolated matrixing in the region of 0-600 samples (t1 to t2). To achieve the same level of distortion with non-interpolated matrixing, one might have had to send matrix updates at multiple points in between t1 and t2.

(127) Non-interpolated matrixing may result in an achieved downmix that is closer to the true specification at some intermediate time instants (e.g., between 600 and 900 samples in the FIG. 7 example), but the error in non-interpolated matrixing builds up continuously as the next matrix update approaches, while the error with interpolated matrixing diminishes near the update points (in this case at t3=30T=1200 samples). The error in interpolated matrixing could be further reduced by sending yet another delta update in between t2 and t3.
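The qualitative behavior in FIG. 7 can be reproduced for a single coefficient, here the sin(vt) gain of object U. This is an illustrative Python sketch with T = 1 access unit, not a reproduction of the patent's exact error curves:

```python
import math

v = (math.pi / 4) / 40                  # per access unit (T = 1)
t1, t2 = 0, 15                          # update points, in access units

def true_gain(t):
    return math.sin(v * t)              # the continuously varying gain

slope = (true_gain(t2) - true_gain(t1)) / (t2 - t1)

# Sum of squared errors over t1..t2 for the two strategies:
err_hold = sum((true_gain(t) - true_gain(t1)) ** 2 for t in range(t1, t2))
err_lerp = sum((true_gain(t) - (true_gain(t1) + slope * (t - t1))) ** 2
               for t in range(t1, t2))
assert err_lerp < err_hold    # interpolation tracks sin(vt) far more closely
```

Holding the t1 value accumulates error linearly in the gain change, whereas the linear ramp errs only by the (small) curvature of sin(vt) over the interval.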

(128) Various embodiments of the invention implement one or more of the following features:

(129) 1. transformation of one set of audio channels to an equal number of other audio channels by applying a sequence of primitive matrices (preferably, unit primitive matrices), where each of at least some of the primitive matrices is an interpolated primitive matrix calculated as a linear combination (determined in accordance with an interpolation function) of a seed primitive matrix and a seed delta matrix operating on the same audio channel. The linear combination coefficient is determined by the interpolation function (i.e., each coefficient of an interpolated primitive matrix has the form A+f(t)B, where A is a coefficient of the seed primitive matrix, B is the corresponding coefficient of the seed delta matrix, and f(t) is the value of the interpolation function at the time, t, associated with the interpolated primitive matrix). In some cases, the transformation is performed on encoded audio content of an encoded bitstream to implement lossless recovery of audio content which has been encoded to generate the encoded bitstream;

(130) 2. a transformation according to above feature 1, in which application of an interpolated primitive matrix is achieved by applying the seed primitive matrix and seed delta matrix separately on the audio channels to be transformed, and linearly combining the resultant audio samples (e.g., the matrix multiplications by the seed primitive matrix are performed in parallel with the matrix multiplications by the seed delta matrix, as in the FIG. 4 circuit);

(131) 3. a transformation according to above feature 1, in which the interpolation factor is held substantially constant over some intervals (e.g., short intervals) of samples of an encoded bitstream, and the most recent seed primitive matrix is updated (by interpolation) only during intervals in which the interpolation factor changes (e.g., in order to reduce the complexity of processing in a decoder);

(132) 4. a transformation according to above feature 1, in which the interpolated primitive matrices are unit primitive matrices. In this case, multiplication by a cascade of unit primitive matrices (in an encoder) followed by multiplication (in a decoder) by a cascade of their inverses can be implemented losslessly with finite precision processing;

(133) 5. a transformation according to above feature 1, wherein the transform is performed in an audio decoder which extracts encoded audio channels and seed matrices from an encoded bitstream, wherein the decoder is preferably configured to verify whether the decoded (post-matrixed) audio has been correctly determined, by comparing a check word derived from the post-matrixed audio against a check word extracted from the encoded bitstream;

(134) 6. a transformation according to above feature 1, wherein the transform is performed in a decoder of a lossless audio coding system which extracts encoded audio channels and seed matrices from an encoded bitstream, and the encoded audio channels have been generated by a corresponding encoder that applies the lossless inverse primitive matrices to input audio, thereby encoding the input audio losslessly into the bitstream;

(135) 7. a transformation according to above feature 1, wherein the transform is performed in a decoder which multiplies received encoded channels by a cascade of primitive matrices, and only a subset of the primitive matrices is determined by interpolation (i.e., updated versions of the other primitive matrices may be delivered to the decoder from time to time, but the decoder does not perform interpolation to update them);

(136) 8. a transformation according to above feature 1, wherein the seed primitive matrices, seed delta matrices, and interpolation function are chosen such that a subset of the encoded channels created by an encoder can be transformed via matrixing operations performed (using the matrices and interpolation function) by a decoder to achieve specific downmixes of the original audio encoded by the encoder;

(137) 9. a transformation according to above feature 8, where the original audio is an object-based audio program, and the specific downmixes correspond to rendering of channels of the program to static speaker layouts (e.g., stereo, or 5.1 channel, or 7.1 channel);

(138) 10. a transformation according to above feature 9, where audio objects indicated by the program are dynamic so that the downmix specifications to a particular static speaker layout change instantaneously, with the instantaneous change accommodated by performing interpolated matrixing on the encoded audio channels to create a downmix presentation;

(139) 11. a transformation according to above feature 1, wherein an interpolation-enabled decoder (configured to perform interpolation in accordance with an embodiment of the invention) is also capable of decoding substreams of an encoded bitstream that conform to a legacy syntax, without performing interpolation to determine any interpolated matrix;

(140) 12. a transformation according to above feature 1, where the primitive matrices are designed to exploit inter-channel correlation to achieve better compression; and

(141) 13. a transformation according to above feature 1, wherein interpolated matrixing is used to achieve dynamic downmix specifications designed for clip protection.
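Feature 4 above can be sketched as follows (an illustrative Python model of lossless finite-precision matrixing, not the TrueHD implementation). Because a unit primitive matrix modifies only one channel, and the quantized contribution it adds depends only on the other, unmodified channels, the decoder can recompute and subtract exactly the same quantity:

```python
import math

def apply_unit_primitive(samples, row, coeffs, inverse=False):
    """Apply a unit primitive matrix (or its exact inverse) to integer
    samples. Only channel `row` changes: it gains (or sheds) a quantized
    combination of the *other* channels, so the step is exactly invertible
    even though the coefficients are applied with finite precision."""
    out = list(samples)
    contrib = math.floor(sum(c * samples[ch] for ch, c in coeffs.items()))
    out[row] = samples[row] - contrib if inverse else samples[row] + contrib
    return out

x = [1200, -350, 87]            # integer PCM samples, one per channel
coeffs = {0: 0.75, 2: -0.33}    # off-diagonal coefficients of row 1

y = apply_unit_primitive(x, row=1, coeffs=coeffs)                # encoder
z = apply_unit_primitive(y, row=1, coeffs=coeffs, inverse=True)  # decoder
assert z == x                   # bit-exact recovery
```

Cascading several such steps (each on its own channel) remains losslessly invertible, since each step is inverted exactly in the reverse order.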

(142) Given that downmix matrices generated using interpolation in accordance with an embodiment of the invention (for recovering downmix presentations from an encoded bitstream) typically change continuously when the source audio is an object-based audio program, the seed primitive matrices employed (i.e., included in the encoded bitstream) in typical embodiments of the invention need to be updated often to recover such downmix presentations.

(143) If seed primitive matrices are updated frequently, in order to closely approximate a continuously varying matrix specification, the encoded bitstream typically includes data indicative of a sequence of cascades of seed primitive matrix sets, {P.sub.0(t1),P.sub.1(t1), . . . , P.sub.n(t1)}, {P.sub.0(t2),P.sub.1(t2), . . . , P.sub.n(t2)}, {P.sub.0(t3),P.sub.1(t3), . . . , P.sub.n(t3)}, and so on. This allows a decoder to recover the specified cascade of matrices at each of the updating time instants t1, t2, t3, . . . . Since the rendering matrices specified in systems for rendering object-based audio programs typically vary continuously in time, each seed primitive matrix (in a sequence of cascades of seed primitive matrices included in the encoded bitstream) may have the same primitive matrix configuration (at least over an interval of the program). The coefficients in the primitive matrices may themselves change over time but the matrix configuration does not change (or does not change as frequently as do the coefficients). The matrix configuration for each cascade may be determined by such parameters as

(144) 1. the number of primitive matrices in the cascade,

(145) 2. the order of channels that they manipulate,

(146) 3. the order of magnitude of coefficients in them,

(147) 4. the resolution (in bits) required to represent the coefficients, and

(148) 5. the positions of coefficients that are identically zero.

(149) The parameters which indicate such a primitive matrix configuration may remain unchanged during an interval of many seed matrix updates. One or more of such parameters may need to be transmitted via the encoded bitstream to the decoder in order for the decoder to operate as desired. Since such configuration parameters may not change as frequently as the primitive matrix updates themselves, in some embodiments the encoded bitstream syntax independently specifies whether matrix configuration parameters are transmitted alongside an update to the matrix coefficients of a set of seed matrices. In contrast, in conventional TrueHD encoding matrix updates (indicated by an encoded bitstream) are necessarily accompanied by configuration updates. In contemplated embodiments of the invention, the decoder retains and uses the last received matrix configuration information if an update is received only for matrix coefficients (i.e., without a matrix configuration update).

(150) While it is envisioned that interpolated matrixing will typically allow a low seed matrix update rate, the contemplated embodiments (in which a matrix configuration update may or may not accompany each seed matrix update) are expected to efficiently transmit configuration information and further reduce the bit rate required for updating rendering matrices. In the contemplated embodiments, the configuration parameters may include parameters relevant to each seed primitive matrix, and/or parameters relevant to transmitted delta matrices.

(151) In order to minimize the overall transmitted bit rate, the encoder may implement a tradeoff between updating the matrix configuration and spending a few more bits on matrix coefficient updates while maintaining the matrix configuration unchanged.

(152) Interpolated matrixing may be achieved by transmitting slope information to traverse from one primitive matrix for an encoded channel to another that operates on the same channel. The slope may be transmitted as the rate of change of matrix coefficients per access unit (“AU”). If m1 and m2 are primitive matrix coefficients for times which are K access units apart, then the slope to interpolate from m1 to m2 may be defined as delta=(m2−m1)/K.
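As a concrete sketch of the slope definition above, using plain Python floats rather than the fixed-point arithmetic an actual decoder would use:

```python
def interpolated_coeffs(m1, m2, K):
    """Coefficient values at access units 0..K, stepping by the per-AU
    slope delta = (m2 - m1) / K, so the value at AU 0 is m1 and the
    value at AU K is m2."""
    delta = (m2 - m1) / K
    return [m1 + k * delta for k in range(K + 1)]
```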

(153) If coefficients m1 and m2 comprise bits having the format m1=a.bcdefg and m2=a.bcuvwx, where both coefficients are specified with a specific number (denoted "frac_bits") of bits of precision, then the slope "delta" would be a value of the form 0.0000mnop (with extra leading zeros and higher precision required because deltas are specified on a per-AU basis). The additional precision required to represent the slope "delta" may be denoted "delta_precision". If an embodiment of the invention included each delta value directly in the encoded bitstream, the bitstream would need to carry values having a number of bits, "B," satisfying B=frac_bits+delta_precision. Clearly it is inefficient to transmit the leading zeros after the decimal point. Thus, in some embodiments, what is coded in the encoded bitstream (which is delivered to the decoder) is a normalized delta (an integer) of the form mnopqr, represented with delta_bits magnitude bits plus one sign bit. The delta_bits and delta_precision values may be transmitted in the encoded bitstream as part of the configuration information for the delta matrices. In such embodiments, the decoder is configured to derive the required delta as
delta=(normalized delta in bitstream)*2^−(frac_bits+delta_precision).

(154) Thus, in some embodiments, the interpolation values included in the encoded bitstream include normalized delta values having Y bits of precision (where Y=frac_bits), and precision values. The normalized delta values are indicative of normalized versions of delta values, where the delta values are indicative of rates of change of coefficients of the primitive matrices, each of the coefficients of the primitive matrices has Y bits of precision, and the precision values are indicative of the increase in precision (i.e., “delta_precision”) required to represent the delta values relative to the precision required to represent the coefficients of the primitive matrices. The delta values may be derived by scaling the normalized delta values by a scale factor that is dependent on the resolution of the coefficients of the primitive matrices and the precision values.
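The normalization and rescaling described above can be sketched as follows. This is an illustrative fixed-point model; the variable names follow the text, but the rounding choice is an assumption:

```python
def encode_normalized_delta(m1, m2, K, frac_bits, delta_precision):
    """Quantize the per-AU slope (m2 - m1) / K to an integer with
    frac_bits + delta_precision fractional bits removed; the sign is
    carried in the integer (corresponding to the transmitted sign bit)."""
    delta = (m2 - m1) / K
    return round(delta * 2 ** (frac_bits + delta_precision))

def decode_delta(normalized, frac_bits, delta_precision):
    """Decoder-side scaling: delta = normalized * 2^-(frac_bits + delta_precision)."""
    return normalized * 2.0 ** -(frac_bits + delta_precision)
```

For example, with frac_bits = 6 and delta_precision = 4, coefficients m1 = 40/64 and m2 = 43/64 that are K = 8 access units apart yield a normalized delta of 6, and m1 + K * decode_delta(6, 6, 4) recovers m2 exactly.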

(155) Embodiments of the invention may be implemented in hardware, firmware, or software, or a combination thereof (e.g., as a programmable logic array). For example, encoder 40 or 100, or decoder 42 or 102, or subsystems 47, 48, 60, and 61 of decoder 42, or subsystems 110-113 and 106-109 of decoder 102, may be implemented in appropriately programmed (or otherwise configured) hardware or firmware, e.g., as a programmed general purpose processor, digital signal processor, or microprocessor. Unless otherwise specified, the algorithms or processes included as part of the invention are not inherently related to any particular computer or other apparatus. In particular, various general-purpose machines may be used with programs written in accordance with the teachings herein, or it may be more convenient to construct more specialized apparatus (e.g., integrated circuits) to perform the required method steps. Thus, the invention may be implemented in one or more computer programs executing on one or more programmable computer systems (e.g., a computer system which implements encoder 40 or 100, or decoder 42 or 102, or subsystems 47, 48, 60, and/or 61 of decoder 42, or subsystems 110-113 and 106-109 of decoder 102), each comprising at least one processor, at least one data storage system (including volatile and non-volatile memory and/or storage elements), at least one input device or port, and at least one output device or port. Program code is applied to input data to perform the functions described herein and generate output information. The output information is applied to one or more output devices, in known fashion.

(156) Each such program may be implemented in any desired computer language (including machine, assembly, or high level procedural, logical, or object oriented programming languages) to communicate with a computer system. In any case, the language may be a compiled or interpreted language.

(157) For example, when implemented by computer software instruction sequences, various functions and steps of embodiments of the invention may be implemented by multithreaded software instruction sequences running in suitable digital signal processing hardware, in which case the various devices, steps, and functions of the embodiments may correspond to portions of the software instructions.

(158) Each such computer program is preferably stored on or downloaded to a storage medium or device (e.g., solid state memory or media, or magnetic or optical media) readable by a general or special purpose programmable computer, for configuring and operating the computer when the storage medium or device is read by the computer system to perform the procedures described herein. The inventive system may also be implemented as a computer-readable storage medium, configured with (i.e., storing) a computer program, where the storage medium so configured causes a computer system to operate in a specific and predefined manner to perform the functions described herein.

(159) While implementations have been described by way of example and in terms of exemplary specific embodiments, it is to be understood that implementations of the invention are not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.