Video transmission system with color gamut partitioning and method of operation thereof
09843812 · 2017-12-12
International classification
H04B1/66
ELECTRICITY
Abstract
A video transmission system and the method of operation thereof includes: a video transmission unit for receiving a first video frame from an input device, the first video frame having base frame parameters; dividing a color gamut into uniform regions for collecting color data from pixels of the base frame parameters; collecting pixel statistics from each of the uniform regions from the base frame parameters; determining chroma partition coordinates from the pixel statistics; deriving a search pattern of search points based on the chroma partition coordinates; and selecting the search point from the search pattern for color mapping of the first video frame.
Claims
1. A method of operation of a video transmission system comprising: receiving a first video frame from an input device, the first video frame having base frame parameters; dividing a color gamut into uniform regions for collecting color data from pixels of the base frame parameters; collecting pixel statistics from each of the uniform regions from the base frame parameters; determining chroma partition coordinates from the pixel statistics; deriving a search pattern of search points based on the chroma partition coordinates; and selecting a search point from the search pattern for color mapping of the first video frame.
2. The method as claimed in claim 1 wherein selecting the search point from the search pattern includes: combining the pixel statistics of each of the uniform regions for forming merged statistics of non-uniform regions; determining merged prediction parameters and a merged prediction error of each of the non-uniform regions; selecting a selected partition within the search pattern having the merged prediction error with the least square error.
3. The method as claimed in claim 1 further comprising outputting a first encoded video frame from a base layer encoder unit to a video stream transport.
4. The method as claimed in claim 1 wherein dividing the color gamut into the uniform regions for collecting color data includes partitioning a chrominance square into M×N regions, where M and N are powers of 2.
5. The method as claimed in claim 1 further comprising dividing the color gamut into non-uniform regions for color mapping, the non-uniform regions include partitioning a chrominance square into 2×2 regions.
6. A method of operation of a video transmission system comprising: receiving a first video frame from an input device, the first video frame having base frame parameters; dividing a color gamut into uniform regions for collecting color data from pixels of the base frame parameters, the color data includes luminance and chrominance; collecting pixel statistics from each of the uniform regions from the base frame parameters; determining chroma partition coordinates from the pixel statistics; deriving a search pattern of search points based on the chroma partition coordinates; and selecting a search point from the search pattern for color mapping of the first video frame.
7. The method as claimed in claim 6 further comprising sending a first encoded video frame to a bit stream multiplex unit for transmission to a video decoder unit.
8. The method as claimed in claim 6 further comprising activating a reference capture unit includes combining a subsequent encoded video frame with a first encoded video frame for calculating the decoded video stream.
9. The method as claimed in claim 6 further comprising determining the chrominance partition coordinates for non-uniform regions includes determining a chroma blue partition and a chroma red partition based on an average value of a chroma blue range and an average value of a chroma red range of the first video frame.
10. The method as claimed in claim 6 further comprising generating a resampled color frame reference for matching subsequent video frames.
11. A video transmission system comprising a video transmission unit for: receiving a first video frame from an input device, the first video frame having base frame parameters; dividing a color gamut into uniform regions for collecting color data from pixels of the base frame parameters; collecting pixel statistics from each of the uniform regions from the base frame parameters; determining chroma partition coordinates from the pixel statistics; deriving a search pattern of search points based on the chroma partition coordinates; and selecting a search point from the search pattern for color mapping of the first video frame.
12. The system as claimed in claim 11 wherein the video transmission unit is for: combining the pixel statistics of each of the uniform regions for forming merged statistics of non-uniform regions; determining merged prediction parameters and a merged prediction error of each of the non-uniform regions; selecting a selected partition within the search pattern having the merged prediction error with the least square error.
13. The system as claimed in claim 11 wherein the video transmission unit is for outputting a first encoded video frame from a base layer encoder unit to a video stream transport.
14. The system as claimed in claim 11 wherein the video transmission unit is for dividing the color gamut into the uniform regions for collecting color data includes partitioning a chrominance square into M×N regions, where M and N are powers of 2.
15. The system as claimed in claim 11 wherein the video transmission unit is for dividing the color gamut into non-uniform regions for color mapping, the non-uniform regions include partitioning a chrominance square into 2×2 regions.
16. The system as claimed in claim 11 wherein the video transmission unit is for dividing the color gamut into uniform regions for collecting color data from pixels of the base frame parameters, the color data includes luminance and chrominance.
17. The system as claimed in claim 16 wherein the video transmission unit is for sending a first encoded video frame to a bit stream multiplex unit for transmission to a video decoder unit.
18. The system as claimed in claim 16 wherein the video transmission unit is for activating a reference capture unit includes combining a subsequent encoded video frame with a first encoded video frame for calculating the decoded video stream.
19. The system as claimed in claim 16 wherein the video transmission unit is for determining the chrominance partition coordinates for non-uniform regions includes determining a chroma blue partition and a chroma red partition based on an average value of a chroma blue range and an average value of a chroma red range of the first video frame.
20. The system as claimed in claim 16 wherein the video transmission unit is for generating a resampled color frame reference for matching subsequent video frames.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
BEST MODE FOR CARRYING OUT THE INVENTION
(11) The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of the embodiments of the present invention.
(12) In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring the embodiments of the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.
(13) The drawings showing embodiments of the system are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing FIGs. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the FIGs. is arbitrary for the most part. Generally, the invention can be operated in any orientation.
(14) Where multiple embodiments are disclosed and described, having some features in common, for clarity and ease of illustration, description, and comprehension thereof, similar and like features one to another will ordinarily be described with similar reference numerals.
(15) The term “module” referred to herein can include software, hardware, or a combination thereof in an embodiment of the present invention in accordance with the context in which the term is used. For example, the software can be machine code, firmware, embedded code, and application software. Also for example, the hardware can be circuitry, processor, computer, integrated circuit, integrated circuit cores, a pressure sensor, an inertial sensor, a microelectromechanical system (MEMS), passive devices, or a combination thereof.
(16) The term “unit” referred to herein means a hardware device, such as an application specific integrated circuit, combinational logic, core logic, integrated analog circuitry, or a dedicated state machine. The color components of a pixel within a video frame include the three values for Luminance (Y) and Chrominance (Cb and Cr).
(17) Referring now to
(18) The video stream transport 106 can be a wired connection, a wireless connection, a digital video disk (DVD), FLASH memory, or the like. The video stream transport 106 can capture the coded video stream from the video transmission unit 102.
(19) The video transmission unit 102 can include a control unit 103 for performing system operations and for controlling other hardware components. For example, the control unit 103 can include a processor, an embedded processor, a microprocessor, a hardware control logic, a hardware finite state machine (FSM), a digital signal processor (DSP), or a combination thereof. The control unit 103 can provide the intelligence of the video system, can control and operate the various sub-units of the video transmission unit 102, and can execute any system software and firmware.
(20) The video transmission unit 102 can include the various sub-units described below. An embodiment of the video transmission unit 102 can include a first input color space unit 112, which can receive a first video frame 113 or a portion of a video frame. An input device 109 can be coupled to the video transmission unit 102 for sending a source video signal of full color to the video transmission unit 102. The video input sent from the input device 109 can include color data 111, which can include Luminance (Y) values and Chrominance (Cb and Cr) values for the whole picture.
(21) The first input color space unit 112 can be coupled to a base layer encoder unit 114, which can determine a Luma (Y) level for the first video frame 113 captured by the first input color space unit 112. The base layer encoder unit 114 can output a first encoded video frame 116 as a reference for the contents of the video stream transport 106. The first encoded video frame 116 can be loaded into the reference capture unit 107, of the video decoder unit 104, in order to facilitate the decoding of the contents of the video stream transport 106.
(22) The base layer encoder unit 114 can extract a set of base frame parameters 117 from the first video frame 113 during the encoding process. The base frame parameters 117 can include range values of Luminance (Y) and Chrominance (Cb and Cr).
(23) The base layer encoder unit 114 can be coupled to a base layer reference register 118, which captures and holds the base frame parameters 117 of the first video frame 113 held in the first input color space unit 112. The base layer reference register 118 maintains the base frame parameters 117 including values of the Luminance and Chrominance derived from the first input color space unit 112.
(24) The base layer reference register 118 can provide a reference frame parameter 119, which includes range values of Luminance (Y) and Chrominance (Cb and Cr) and is coupled to a color mapping unit 120 and the phase alignment unit 150. The color mapping unit 120 can map the Luminance (Y) and Chrominance (Cb and Cr) in the reference frame parameter 119 to the Luminance (Y′) and Chrominance (Cb′ and Cr′) in the color reference frame 121.
(25) The color reference frame 121 can provide a basis for a resampling unit 122 to generate a resampled color frame reference 124. The resampling unit 122 can interpolate the color reference frame 121 on a pixel-by-pixel basis to modify the resolution or bit depth of the resampled color frame reference 124. The resampled color frame reference 124 can match the color space, resolution, or bit depth of subsequent video frames 128 based on the Luminance and Chrominance of the first video frame 113 associated with the same encode time.
(26) The first video frame 113 and the subsequent video frames 128 can be matched on a frame-by-frame basis, but can differ in the color space, resolution, or bit depth. By way of an example, the reference frame parameter 119 can be captured in BT.709 color HD format using 8 bits per pixel, while the resampled color frame reference 124 can be presented in BT.2020 color 4K format using 10 bits per pixel. It is understood that the frame configuration of the resampled color frame reference 124 and the subsequent video frames 128 are the same.
(27) The phase alignment unit 150 is coupled to the base layer reference register 118 and to the mapping parameter determination unit 152 for aligning luma sample locations with chroma sample locations and chroma sample locations with luma sample locations. Chroma sample locations are usually misaligned with luma sample locations. For example, in input video in the 4:2:0 chroma format, the spatial resolution of the chroma components is half of that of the luma component in both the horizontal and vertical directions.
(28) The sample locations can be aligned by phase shifting operations that rely on addition steps. It has been found that phase alignment of the luma (Y) sample locations, chroma blue (Cb) sample locations, and chroma red (Cr) sample locations of the reference frame parameter 119 can be implemented with several additions and shifts per sample, which has much less impact on computational complexity than multiplication steps.
(29) A second input color space unit 126 can receive the subsequent video frames 128 of the same video scene as the first video frame 113 in the same or different color space, resolution, or bit depth. The subsequent video frames 128 can be coupled to an enhancement layer encoding unit 130. Since the colors in the video scene represented by the first video frame 113 and the subsequent video frames 128 are related, the enhancement layer encoding unit 130 can differentially encode only the difference between the resampled color frame reference 124 and the subsequent video frames 128. This can result in a compressed version of a subsequent encoded video frame 132 because the differentially encoded version of the subsequent video frames 128 can be applied to the first encoded video frame 116 for decoding the subsequent video frames 128 while transferring fewer bits across the video stream transport 106.
(30) A matrix mapping with cross-color prediction for Luminance (Y) and Chrominance (Cb and Cr) can be calculated by the color mapping unit 120 using the equations as follows:
(31)
Y′ = g00×Y + g01×cb + g02×cr + b0 (Equation 1a)
Cb′ = g10×y + g11×Cb + g12×Cr + b1, Cr′ = g20×y + g21×Cb + g22×Cr + b2 (Equation 1b)
(32) where the variables g** and b* are mapping parameters. For each region, the output Y′, Cb′, and Cr′ of the color mapping process is computed from the input using Equation 1a and Equation 1b above. The y of the first input color space unit 112 is obtained by phase alignment of Y to the sampling position of Cb and Cr of the first input color space unit 112. The cb and cr of the first input color space unit 112 are obtained by phase alignment of Cb and Cr to the sampling position of Y of the first input color space unit 112. It has been found that the color mapping unit 120 can use Equation 1a and Equation 1b to map color between a base layer and an enhancement layer using addition and shifting steps instead of more compute-intensive trilinear and tetrahedral interpolation.
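For illustration, the per-region mapping of Equation 1a and Equation 1b can be sketched in code as follows. This is a minimal sketch, not the patented implementation; the function name, argument layout, and the identity parameters are hypothetical and chosen only to make the example verifiable.

```python
# Illustrative sketch of the Equation 1a/1b cross-color mapping for one
# pixel, assuming the mapping parameters g and b for the pixel's region
# have already been determined by the mapping parameter determination unit.

def map_color(Y, cb, cr, y, Cb, Cr, g, b):
    """Map base-layer color toward the enhancement-layer color space.

    (Y, cb, cr): luma sample with chroma phase-aligned to the luma position.
    (y, Cb, Cr): chroma samples with luma phase-aligned to the chroma position.
    g: 3x3 mapping parameters, b: 3 offsets.
    """
    Y_out = g[0][0] * Y + g[0][1] * cb + g[0][2] * cr + b[0]   # Equation 1a
    Cb_out = g[1][0] * y + g[1][1] * Cb + g[1][2] * Cr + b[1]  # Equation 1b
    Cr_out = g[2][0] * y + g[2][1] * Cb + g[2][2] * Cr + b[2]
    return Y_out, Cb_out, Cr_out

# Identity parameters leave the pixel unchanged.
g_id = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
b_id = [0, 0, 0]
print(map_color(100, 60, 80, 100, 60, 80, g_id, b_id))  # (100, 60, 80)
```

Because the mapping is per region, an encoder would select g and b by the region R(l, m, n) containing the phase-aligned input sample.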
(33) The phase alignment operations performed inside the color mapping unit 120 are the same as the phase alignment performed by the phase alignment unit 150 as defined by standard Scalable High Efficiency Video coding (SHVC). The phase alignment of the sampling positions will be explained in further detail below.
(34) A downscale unit 154 is used to downscale input from the second input color space unit 126. The subsequent video frames 128 from the second input color space unit 126 can be sent to the downscale unit 154 before creation of an enhancement layer by the enhancement layer encoding unit 130. The subsequent video frames 128 are downscaled to a much lower resolution to match the resolution provided by the base layer encoder unit 114. The downscale unit 154 is coupled to the mapping parameter determination unit 152 for sending a downscaled video input 155 to the mapping parameter determination unit 152.
(35) The mapping parameter determination unit 152 is used to determine mapping parameters 157 and to estimate the corresponding prediction errors between the downscaled video input 155 and the output of the color reference frame 121 from the color mapping unit 120. The mapping parameter determination unit 152 determines the parameters of a 3D lookup table color gamut scalability (CGS) model on the inputted video layers.
(36) The mapping parameter determination unit 152 can be coupled to the color mapping unit 120. Using 3D lookup tables based on uniform and non-uniform partitioning, the mapping parameter determination unit 152 can determine more accurate mapping parameters. The mapping parameters 157 are sent to the color mapping unit 120.
(37) The color mapping unit 120 can use the mapping parameters 157 from the mapping parameter determination unit 152. The color mapping unit 120 uses the mapping parameters 157 to color map color data in the reference frame parameter 119 to the color data in the color reference frame 121. The mapping parameter determination unit 152 will be explained in further detail below.
(38) A bit stream multiplex unit 134 multiplexes the first encoded video frame 116 and the subsequent encoded video frame 132 for the generation of the video stream transport 106. During operation, the bit stream multiplex unit 134 can pass a look-up table (LUT), generated by the mapping parameter determination unit 152, that can be used as a reference to predict the subsequent encoded video frame 132 from the first encoded video frame 116. It has been found that this can minimize the amount of data sent in the video stream transport 106 because the pixels in the first input color space unit 112 and the second input color space unit 126 represent the same scene, so the pixels in the first input color space unit 112 can be used to predict the pixels in the second input color space unit 126.
(39) Since the color relationship between the first video frame 113 and the subsequent video frames 128 may not change drastically over time, the LUT for mapping color from the reference frame parameter 119 to the color reference frame 121 is only transferred at the beginning of the video scene or when an update is needed. The LUT can be used for any follow-on frames in the video scene until it is updated.
(40) It has been discovered that an embodiment of the video transmission unit 102 can reduce the transfer overhead and therefore compress the transfer of the video stream transport 106 by transferring the first encoded video frame 116 as a reference. This allows the subsequent encoded video frame 132 of the same video scene to be transferred to indicate only the changes with respect to the associated first encoded video frame 116.
(41) Referring now to
(42) The cube shown in
(43) The color (Y, Cb, Cr) of a pixel 211 at a location in a picture is a point in the cube, which is defined by [0, Ymax)×[0, Cbmax)×[0, Crmax), where Ymax = 2^BitDepthY and Cbmax = Crmax = 2^BitDepthC. The exemplary partitioning of the color gamut 201 as a CGS model depicts the luminance axis 202 that can be divided into eight uniform steps.
(44) The chroma blue axis 204 can proceed away from the luminance axis 202 at a 90 degree angle. The Chroma red axis 206 can proceed away from the luminance axis 202 and the chroma blue axis 204 at a 90 degree angle to both. For example, the representation of the color gamut 201 is shown as a cube divided into 8×2×2 regions.
(45) The exemplary partitioning shows a method of using an 8×2×2 non-uniform partitioning of the color gamut 201. The partitioning of Y, or the luminance axis 202, is divided into eight uniform regions, whereas the partitioning of the chroma blue axis 204 and the chroma red axis 206 can be divided jointly into four regions, where the regions of the chroma blue axis 204 and the chroma red axis 206 are non-uniform.
(46) The partitioning of the chroma blue axis 204 and the chroma red axis 206 can be indicated by the coordinates of a point (m, n), or the chroma partition coordinates 250. The partition for Cb is m and the partition for Cr is n, where 0 ≤ m < Cbmax and 0 ≤ n < Crmax. The partitions of Cb and Cr are independent of Y and are signaled relative to the mid position.
(47) Using the color space model, color statistics of the pixels can be collected and mapped from information provided by the base layer reference register 118. These color statistical values can fall within a luminance range 260, a chroma blue range 262, and a chroma red range 264. These color statistical range values are used by the system to provide a prediction for the encoding process of the subsequent encoded video frame 132 of
(48) The relative positions (m − Cbmax/2) and (n − Crmax/2) are signaled, where Cbmax = Crmax = 2^BitDepthC. Each pixel collected from the base frame parameters 117 is assigned into one of the regions created by the chroma partition coordinates 250. To locate the chroma partition boundaries, the offsets relative to the uniform partitioning shown in dotted lines (the centerlines), which correspond to zero chroma values, are signaled for the two chroma components, respectively, in the bitstream.
(49) For example, a blue offset 252 represents the value of the offset for Cb, and a red offset 254 represents the value of the offset for Cr. Generally, the partition offsets of the chroma components are highly content dependent. To determine suboptimal offsets at the encoder side with minimal computation, it has been found that the average values of all samples of each chroma component in the base layer reconstructed picture can be calculated and used as the positions of the non-uniform partition boundaries.
(50) Consequently, the offsets are calculated relative to the centerlines and signaled as part of the color mapping parameters. The determination of the chroma partition coordinates 250 will be explained in further detail below.
(51) It has been discovered that the partitioning of the color gamut 201 into 8×2×2 non-uniform regions can further minimize the amount of data encoded in the subsequent encoded video frame 132 because only a single partition value for the chroma blue axis 204 and a single partition value for the chroma red axis 206 can be transferred, as opposed to a different partition for each of the luminance regions of the luminance axis 202. This reduction in coding overhead results in a better balance of bitrate and the detail transferred in the video stream transport 106 of
(52) Referring now to
(53) The chrominance square 302 includes the chrominance values, excluding the luminance values, of a pixel from (0, 0) to (Cbmax, Crmax). The chrominance square 302 can be partitioned into a plurality of the non-uniform regions 303, such as the 2×2 regions shown by the fine dotted lines.
(54) In this example, the partition of the color gamut 201 can be figuratively represented by a square divided into four non-uniform regions. The partitioning of the chrominance square 302 is designated by the chroma partition coordinates 250 of (m, n), which designate a chroma blue partition 304 and a chroma red partition 306.
(55) A collection of chrominance statistics from the base frame parameters 117 can be used to determine the chroma partition coordinates 250. The coordinate of m equals the average of Cb collected from the base frame parameters 117 for all of the pixels in the image. The average values of Cb can be rounded into an integer and represented by the equation:
m = round( (Σ Cb) / P ), where P is the number of pixels in the image
(56) The coordinate of n equals the average of Cr collected from the base frame parameters 117 for all of the pixels in the same image. The average values of Cr can be rounded into an integer and represented by the equation:
n = round( (Σ Cr) / P ), where P is the number of pixels in the image
(57) The chroma partition coordinates 250 determine the non-uniform partitioning of the chrominance square 302 into four regions. The values of (m, n) are converted into offset values 308 relative to the midpoints of the ranges of Cb and Cr. For the color mapping parameters, the offsets are calculated relative to the centerlines and signaled.
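The average-based 2×2 chroma partition of paragraphs (55) through (57) can be sketched as follows. This is a hedged illustration, not the patented encoder; the function and variable names are assumptions, and only the rounded-average coordinates and centerline-relative offsets follow the text above.

```python
# Sketch of the average-based non-uniform 2x2 chroma partition:
# m and n are the rounded averages of Cb and Cr over the frame, and the
# signaled offsets are taken relative to the mid positions (centerlines).

def chroma_partition(cb_samples, cr_samples, bit_depth_c=8):
    cb_max = cr_max = 1 << bit_depth_c
    # Chroma partition coordinates 250: rounded per-component averages.
    m = round(sum(cb_samples) / len(cb_samples))
    n = round(sum(cr_samples) / len(cr_samples))
    # Offsets signaled relative to the centerlines (mid positions).
    blue_offset = m - cb_max // 2
    red_offset = n - cr_max // 2
    return m, n, blue_offset, red_offset

cb = [120, 130, 140, 150]
cr = [100, 110, 90, 100]
print(chroma_partition(cb, cr))  # (135, 100, 7, -28)
```

A decoder receiving the two offsets can recover (m, n) by adding back the mid positions, so only two small values per scene need to be signaled.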
(58) Referring now to
(59) The chrominance square 302 can include uniform partitions of the chroma blue axis 204 and the chroma red axis 206. The number of partitions for the chroma blue axis 204 can be an even integer of M and the number of partitions for the chroma red axis 206 can be an even integer of N. For illustrative purposes, the chrominance square 302 can include uniform M×N partitions of 32×32, where M and N are powers of two. It is understood that M×N can be 2×2, 4×4, 8×8, and 16×16, as examples.
(60) Each intersection of the 32×32 square can include a grid point 402. The grid point 402 is the designated partition for the estimated chrominance value (Cb, Cr) of a pixel. The grid point 402 value for the chroma blue axis 204 can be represented by (u). The grid point 402 for the chroma red axis 206 can be represented by (v).
(61) It has been found that to reduce computation time, the search points for the chroma partition coordinates 250 can be limited to the grid points:
(62) (u × Cbmax/M, v × Crmax/N), where 0 ≤ u < M and where 0 ≤ v < N.
(63) Referring now to
(64) The uniform M×N partitioning of the chrominance square 302 can be used to find a more accurate 2×2 non-uniform partitioning than the partition designated by the average values of Cb and Cr. To improve the partition designated by the average values of Cb and Cr, the average values of Cb and Cr, which were collected from the base frame parameters 117, can be quantized to the nearest grid point:
ū = ⌊m × M / Cbmax⌋, v̄ = ⌊n × N / Crmax⌋
(65) where the boundary or limits of the search points for the chroma partition coordinates 250 are limited to (u × Cbmax/M, v × Crmax/N), where 0 ≤ u < M and 0 ≤ v < N. It has been found that computation is reduced because searches can be limited to these boundaries instead of conducting a search at all possible partition coordinates.
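The quantization of the average chroma values onto the uniform M×N grid can be sketched as follows. The helper name and return layout are illustrative assumptions; the floor-division mapping and the grid-point boundaries follow paragraphs (64) and (65).

```python
# Sketch of quantizing the average chroma partition coordinates (m, n)
# onto the uniform MxN grid of allowed search-point boundaries.

def quantize_to_grid(m, n, M=32, N=32, bit_depth_c=8):
    cb_max = cr_max = 1 << bit_depth_c
    u_bar = (m * M) // cb_max          # 0 <= u_bar < M
    v_bar = (n * N) // cr_max          # 0 <= v_bar < N
    # Corresponding grid point in chroma units: the candidate boundary.
    return u_bar, v_bar, (u_bar * cb_max // M, v_bar * cr_max // N)

print(quantize_to_grid(135, 100))  # (16, 12, (128, 96))
```

Restricting search points to these grid positions lets the statistics collected per uniform region be reused directly when candidate partitions are evaluated.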
(66) The search point 502 of (ū, v̄) can be used as a basis for deriving the search pattern 504 of search points.
(67) Referring now to
(68) The squares marked as Y correspond to a luma sample location 602. The circles marked with a C correspond to a chroma sample location 604. For input video in the 4:2:0 chroma format, the spatial resolution of the chroma components is half that of the luma component in both the horizontal and vertical directions. Further, the chroma sample locations 604 are usually misaligned with the luma sample locations 602. In order to improve the precision of the color mapping process, it has been found that sample locations of different color components can be aligned before the cross component operations are applied.
(69) For example, when calculating the output for the luma component, chroma sample values will be adjusted to be aligned with the corresponding luma sample location 602 to which they apply. Similarly, when calculating the output for the chroma components, luma sample values will be adjusted to be aligned with the corresponding samples of the chroma sample location 604 to which they apply.
(70) When calculating the output for luma component, chroma sample values will be adjusted to be aligned with the corresponding luma sample location to which they apply as shown in the pseudo code example below:
y(C0)=(Y(Y0)+Y(Y4)+1)>>1
y(C1)=(Y(Y2)+Y(Y6)+1)>>1
y(C2)=(Y(Y8)+Y(Y12)+1)>>1
y(C3)=(Y(Y10)+Y(Y14)+1)>>1
cb(Y4)=(Cb(C0)×3+Cb(C2)+2)>>2
cb(Y5)=((Cb(C0)+Cb(C1))×3+(Cb(C2)+Cb(C3))+4)>>3
cb(Y8)=(Cb(C2)×3+Cb(C0)+2)>>2
cb(Y9)=((Cb(C2)+Cb(C3))×3+(Cb(C0)+Cb(C1))+4)>>3
cr(Y4)=(Cr(C0)×3+Cr(C2)+2)>>2
cr(Y5)=((Cr(C0)+Cr(C1))×3+(Cr(C2)+Cr(C3))+4)>>3
cr(Y8)=(Cr(C2)×3+Cr(C0)+2)>>2
cr(Y9)=((Cr(C2)+Cr(C3))×3+(Cr(C0)+Cr(C1))+4)>>3
(71) As shown above, it has been discovered that the formulas used in the calculation of the phase alignment of the luma sample location 602 and the chroma sample location 604 can be implemented with several addition operations and shift operations per sample, which reduces computational time and complexity compared with computations based on multiplication operations.
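The pseudo code above can be rendered as runnable code as follows. The dictionary layout for one 4×4 luma / 2×2 chroma neighborhood is an illustrative assumption; the arithmetic reproduces the shift-and-add formulas exactly, with ×3 kept as a multiply for readability (it is itself one shift plus one add).

```python
# Runnable rendering of the 4:2:0 phase-alignment pseudo code, using only
# additions and shifts. Sample labels follow the Y0..Y14 / C0..C3 naming
# of the pseudo code above.

def align_luma_to_chroma(Y):
    # Average of the two luma rows adjacent to each chroma position.
    return {
        "C0": (Y["Y0"] + Y["Y4"] + 1) >> 1,
        "C1": (Y["Y2"] + Y["Y6"] + 1) >> 1,
        "C2": (Y["Y8"] + Y["Y12"] + 1) >> 1,
        "C3": (Y["Y10"] + Y["Y14"] + 1) >> 1,
    }

def align_chroma_to_luma(C):
    # 3:1 and 3:3:1:1 weighted interpolation toward each luma position.
    return {
        "Y4": (C["C0"] * 3 + C["C2"] + 2) >> 2,
        "Y5": ((C["C0"] + C["C1"]) * 3 + (C["C2"] + C["C3"]) + 4) >> 3,
        "Y8": (C["C2"] * 3 + C["C0"] + 2) >> 2,
        "Y9": ((C["C2"] + C["C3"]) * 3 + (C["C0"] + C["C1"]) + 4) >> 3,
    }

Y = {"Y0": 100, "Y2": 102, "Y4": 104, "Y6": 106,
     "Y8": 108, "Y10": 110, "Y12": 112, "Y14": 114}
Cb = {"C0": 60, "C1": 62, "C2": 64, "C3": 66}
print(align_luma_to_chroma(Y)["C0"])   # (100 + 104 + 1) >> 1 = 102
print(align_chroma_to_luma(Cb)["Y4"])  # (60*3 + 64 + 2) >> 2 = 61
```

The same formulas apply to Cr by substituting the Cr samples for the Cb samples.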
(72) Referring now to
(73) A pixel (Y, Cb, Cr) is in a region R(l, m, n) of the L×M×N partitioning, where 0 ≤ l < L, 0 ≤ m < M, 0 ≤ n < N, if:
(74)
l×Ymax/L ≤ Y < (l+1)×Ymax/L, m×Cbmax/M ≤ Cb < (m+1)×Cbmax/M, and n×Crmax/N ≤ Cr < (n+1)×Crmax/N.
For example, the (Y, Cb, Cr) here is the phase aligned (Y, cb, cr) or (y, Cb, Cr) version of the Y, Cb, Cr from the first input color space unit 112. The color space of (Y, Cb, Cr) can be divided into L×M×N regions and each region can be represented as R(l, m, n).
(75) The parameters g00, g01, g02, and b0 of region R(l, m, n) are obtained by a linear regression which minimizes an L2 distance of luma between the color reference frame 121 and the downscaled video input 155 and can be represented by:
Ey′(l, m, n) = Σ(Y,cb,cr)∈R(l,m,n) (g00×Y + g01×cb + g02×cr + b0 − Y′)² (Equation A)
where Y′ is the collocated luma sample of the downscaled video input 155.
(76) The parameters g10, g11, g12, and b1 of region R(l, m, n) are obtained by a linear regression which minimizes an L2 distance of chroma blue between the color reference frame 121 and the downscaled video input 155 and can be represented by:
Ecb′(l, m, n) = Σ(y,Cb,Cr)∈R(l,m,n) (g10×y + g11×Cb + g12×Cr + b1 − Cb′)² (Equation B)
where Cb′ is the collocated chroma blue sample of the downscaled video input 155.
(77) The parameters g20, g21, g22, and b2 of region R(l, m, n) are obtained by a linear regression which minimizes an L2 distance of chroma red between the color reference frame 121 and the downscaled video input 155 and can be represented by:
Ecr′(l, m, n) = Σ(y,Cb,Cr)∈R(l,m,n) (g20×y + g21×Cb + g22×Cr + b2 − Cr′)² (Equation C)
where Cr′ is the collocated chroma red sample of the downscaled video input 155.
The prediction error E(l, m, n) of a region R(l, m, n) in the color space is given by the following equation:
E(l, m, n) = Ey′(l, m, n) + Ecb′(l, m, n) + Ecr′(l, m, n)
(78) The prediction error of partitioning the color space into L×M×N regions is the sum of the prediction error in each region:
(79)
E = Σ E(l, m, n), summed over all 0 ≤ l < L, 0 ≤ m < M, and 0 ≤ n < N (Equation G)
(80) In general, linear regression has a closed form solution using the statistics of the training data. In this case, the statistics for determining the parameters of R(l, m, n) are designated as S(l, m, n), which consists of the statistics of Equation A, Equation B, and Equation C, above.
(81) Further for example, when fully expanding Equation A, the following statistics are found to be necessary and sufficient to evaluate Equation A and obtain the corresponding optimal solution:
ΣY², ΣY×cb, ΣY×cr, ΣY,
Σcb², Σcb×cr, Σcb,
Σcr², Σcr, Σ1,
ΣY×Y′, Σcb×Y′, Σcr×Y′, ΣY′, summed over (Y, cb, cr) ∈ R(l, m, n) (Equation E)
(82) By fully expanding Equation B and Equation C, the following statistics are found to be necessary and sufficient to evaluate Equations B and C and obtain the corresponding optimal solutions:
Σy², Σy×Cb, Σy×Cr, Σy,
ΣCb², ΣCb×Cr, ΣCb,
ΣCr², ΣCr, Σ1,
Σy×Cb′, ΣCb×Cb′, ΣCr×Cb′, ΣCb′,
Σy×Cr′, ΣCb×Cr′, ΣCr×Cr′, ΣCr′, summed over (y, Cb, Cr) ∈ R(l, m, n) (Equation F)
(83) As shown by the example region marked in
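The closed-form regression step of paragraphs (75) through (82) can be sketched as follows. This is an assumed illustration: the statistics of Equation E are accumulated in one pass as normal-equation sums, and a plain Gaussian elimination stands in for whatever solver an encoder would actually use to recover (g00, g01, g02, b0).

```python
# Sketch of the closed-form linear regression for the luma parameters:
# accumulate the Equation E statistics for one region, then solve the
# 4x4 normal equations.

def accumulate_stats(samples):
    # samples: list of (Y, cb, cr, Y_prime) tuples for one region R(l, m, n).
    A = [[0.0] * 4 for _ in range(4)]   # sums of x * x^T with x = (Y, cb, cr, 1)
    t = [0.0] * 4                       # sums of x * Y'
    for Y, cb, cr, Yp in samples:
        x = (Y, cb, cr, 1.0)
        for i in range(4):
            for j in range(4):
                A[i][j] += x[i] * x[j]
            t[i] += x[i] * Yp
    return A, t

def solve(A, t):
    # Gaussian elimination with partial pivoting on the normal equations.
    n = len(t)
    M = [row[:] + [t[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x  # (g00, g01, g02, b0)

# Data generated by Y' = 2*Y + 0.5*cb - 0.25*cr + 3 is recovered.
data = [(Y, cb, cr, 2 * Y + 0.5 * cb - 0.25 * cr + 3)
        for Y in (10, 40, 90) for cb in (20, 60) for cr in (30, 70)]
g00, g01, g02, b0 = solve(*accumulate_stats(data))
print(round(g00, 6), round(g01, 6), round(g02, 6), round(b0, 6))
```

The chroma parameter sets (g10..g12, b1) and (g20..g22, b2) follow identically from the Equation F statistics with Cb′ or Cr′ as the regression target.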
(84) Referring now to
(85) The sub-units of the mapping parameter determination unit 152 can include a color unit 802, a collection unit 804, a calculation unit 805, and a selection unit 812. In another embodiment, the control unit 103 of
(86) The color unit 802 can divide the color gamut 201 of
(87) Each phase aligned pixel of a video frame can fall within or be represented by a location within the 3D lookup table or cube. The color unit 802 can divide the color gamut 201 using the uniform partitioning method shown in
(88) The base layer color space, taken from the first input color space unit 112 of
(89) Further, for a given example of the chroma partition coordinates 250,
(90) The collection unit 804 collects the pixel statistics 803 for each of the 8×32×32 uniform regions R(l, m, n) in 401. The pixel statistics 803 can include the statistics in Equation E and Equation F for the 8×32×32 uniform partition.
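One illustrative way to assign pixels to the 8×32×32 uniform regions, assuming 8-bit samples (the bit depth, helper names, and count-only statistics are assumptions made for brevity, not taken from the disclosure):

```python
from collections import defaultdict

def region_index(y, cb, cr, bits=8, L=8, M=32, N=32):
    """Map one (Y, Cb, Cr) pixel to the uniform region (l, m, n) it falls in;
    integer shifts implement the equal-width split of each axis."""
    return ((y * L) >> bits, (cb * M) >> bits, (cr * N) >> bits)

def collect_counts(pixels, bits=8):
    """Per-region pixel counts, the simplest entry of the statistic vector;
    the full vector would accumulate all the Equation E and F sums the same way."""
    counts = defaultdict(int)
    for y, cb, cr in pixels:
        counts[region_index(y, cb, cr, bits)] += 1
    return counts
```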
(91) The collection unit 804 is coupled to the color unit 802 and the calculation unit 805. The collection unit 804 can send the pixel statistics 803 to the calculation unit 805.
(92) The calculation unit 805 can receive the pixel statistics 803 and can obtain average and quantized values for Cb and Cr for deriving search points based on the pixel statistics 803. The calculation unit 805 can include an averages unit 806, a quantization unit 808, and a pattern unit 810.
(93) The averages unit 806 can obtain the average values for Cb and Cr (
(94) The quantization unit 808 quantizes the average values of Cb and Cr (
(95) The pattern unit 810 derives a list of search points for generating the search pattern 504 of
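A sketch of the average-quantize-pattern sequence performed across the averages unit 806, quantization unit 808, and pattern unit 810 might look as follows; the step size, search radius, and clipping bounds are illustrative assumptions:

```python
def quantize(avg, step=8):
    """Snap an average chroma value to the nearest uniform-partition boundary
    (step = 256 / 32 for an 8-bit axis split into 32 uniform regions)."""
    return int(round(avg / step)) * step

def search_pattern(avg_cb, avg_cr, step=8, radius=1, lo=8, hi=248):
    """Candidate (u, v) chroma partition coordinates around the quantized
    averages, clipped to stay inside the chrominance square."""
    u0, v0 = quantize(avg_cb, step), quantize(avg_cr, step)
    pts = []
    for du in range(-radius, radius + 1):
        for dv in range(-radius, radius + 1):
            u = min(max(u0 + du * step, lo), hi)
            v = min(max(v0 + dv * step, lo), hi)
            if (u, v) not in pts:
                pts.append((u, v))
    return pts
```

Centering the pattern on the quantized chroma averages keeps the candidate list short while still covering the partition boundaries most likely to minimize the prediction error.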
(96) The selection unit 812 can search for the most accurate chroma partition coordinates 250 as a search point from the search pattern 504 that minimizes an L2 distance between the color reference frame 121 and the downscaled video input 155 in Equation G for the L×2×2 partition designated by the chroma partition coordinates 250.
(97) In order to minimize the L2 distance of the 8×2×2 partition, the selection unit 812 includes a merge block 814 to compute the statistic vector S.sub.L×2×2(l, m, n) 815 according to Equations E and F for the 8×2×2 partition from the statistic vector S.sub.L×M×N(l, m, n) 803 collected for the L×M×N uniform partition by vector addition:
S.sub.L×2×2(l, m, n)=ΣS.sub.L×M×N(l, m′, n′), where the sum runs over the uniform regions R.sub.L×M×N(l, m′, n′) contained in R.sub.L×2×2(l, m, n).
(98) The selection unit 812 can include the merged prediction block 816. The merged prediction block 816 can determine merged prediction parameters 817 which minimize the merged prediction error Equation H of each L×2×2 region by the least square method and the corresponding merged prediction error 819.
(99) The selection unit 812 can include an addition block 818 coupled to the merged prediction block 816. The addition block 818 can add the merged prediction error 819 of every region in the L×2×2 non-uniform partition to generate a partition prediction error 821 of the L×2×2 regions according to Equation G. The addition block 818 can be coupled to a selection block 820.
(100) The selection unit 812 can include the selection block 820 for receiving the partition prediction error 821 and the corresponding prediction parameters 817 of the L×2×2 non-uniform partition. The selection block 820 can identify the search point within the search pattern 504 with the least square error. The selection unit 812 can output the identified partition and the associated prediction parameters 830.
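The merge and selection steps can be sketched as follows, assuming scalar per-region statistics for brevity; real statistic vectors per Equations E and F would merge the same way, by elementwise addition, and the error function would come from the merged least-squares fits. All names here are illustrative assumptions.

```python
import numpy as np

def merged_stats(uniform_stats, u, v, M=32, N=32, step=8):
    """Sum uniform-region statistics into the 2x2 chroma regions cut at the
    partition coordinate (u, v); pure vector addition, no pixel rescan."""
    mu, nu = u // step, v // step                     # grid indices of the cut
    merged = np.zeros((2, 2) + uniform_stats.shape[2:])
    for m in range(M):
        for n in range(N):
            merged[int(m >= mu), int(n >= nu)] += uniform_stats[m, n]
    return merged

def best_search_point(candidates, error_of):
    """Pick the candidate (u, v) whose merged partition has the least square
    error, mirroring the selection block's role."""
    return min(candidates, key=error_of)
```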
(101) Referring now to
(102) The search point 502 of (u, v) is used to divide the chrominance square 302 into four regions R.sub.L×2×2(·, m, n), where 0≦m<2 and 0≦n<2. The statistics of the search point 502 of (u, v) are the statistics of the four regions and are taken from the base frame parameters 117 of
(103) Each region R.sub.L×2×2(l, m, n) is a collection of regions from R.sub.L×M×N. The statistics of a region R.sub.L×2×2(l, m, n) are the sum of the statistics of the regions R.sub.L×M×N(l, m′, n′) in R.sub.L×2×2(l, m, n).
(104) It has been discovered that the video transmission unit 102 can reduce computation time in video encoding and produce highly accurate color prediction for the 8×2×2 partition, an improvement over other compression methods.
(105) In summary, the video transmission unit 102 of
(106) Referring now to
(107) The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, and effective, can be surprisingly and unobviously implemented by adapting known technologies, and is thus readily suited for efficiently and economically operating video transmission systems fully compatible with conventional encoding and decoding methods or processes and technologies.
(108) Another important aspect of the embodiments of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.
(109) These and other valuable aspects of the embodiments of the present invention consequently further the state of the technology to at least the next level.
(110) While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters heretofore set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.