Signal-processing apparatus including a second processor that, after receiving an instruction from a first processor, independently controls a second data processing unit without further instruction from the first processor
11563985 · 2023-01-24
Assignee
Inventors
- Tomonori Kataoka (Oonojyo, JP)
- Hideshi Nishida (Nishinomiya, JP)
- Kouzou Kimura (Toyonaka, JP)
- Nobuo Higaki (Kobe, JP)
- Tokuzo Kiyohara (Osaka, JP)
CPC classification
H04N19/42
ELECTRICITY
G06F9/3885
PHYSICS
H04N19/44
ELECTRICITY
International classification
G06F9/38
PHYSICS
H04N19/86
ELECTRICITY
H04N19/44
ELECTRICITY
H04N19/42
ELECTRICITY
Abstract
A signal-processing apparatus includes an instruction-parallel processor, a first data-parallel processor, a second data-parallel processor, and a motion detection unit, a de-blocking filtering unit and a variable-length coding/decoding unit which are dedicated hardware. With this structure, during signal processing of an image compression and decompression algorithm needing a large amount of processing, the load is distributed between software and hardware, so that the signal-processing apparatus can realize high processing capability and flexibility.
Claims
1. A signal-processing apparatus comprising: a first processor; a second processor controlled directly by the first processor; a data processing unit controlled directly by the second processor; a first bus directly coupled to the data processing unit; a shared memory directly coupled to the first bus and used by the data processing unit; a video input/output unit coupled to the first bus; a second bus directly coupled to the first processor; and a bridge unit directly coupled to the first bus and the second bus, wherein the data processing unit comprises at least: a first hardware block dedicated to deblocking filtering; and a second hardware block dedicated to motion estimation, and performs video encode processings on data received from the video input/output unit or performs video decode processings to generate data to be output through the video input/output unit.
2. The signal-processing apparatus according to claim 1, wherein the first bus and the second bus are free from direct connection.
3. The signal-processing apparatus according to claim 1, wherein the second bus is coupled to the first processor and at least one additional device other than the first bus.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(18) Hereinafter, embodiments of the present invention are described with reference to the accompanying drawings.
First Embodiment
(20) The SIMD processors adopted as the first data-parallel processor 101 and the second data-parallel processor 102 each include eight processing elements, and are capable of processing eight data streams in parallel with one instruction.
(21) The motion detection unit 103, the de-blocking filtering unit 104, the variable-length coding/decoding unit 105 and the input and output interface 106 are each dedicated hardware.
(22) Next, the operation of the present embodiment will be described in outline with image coding processing as an example.
(23) After an externally inputted video signal is A/D converted, the signal is stored in the first shared memory 121 from the input and output interface 106 via the first data bus 132.
(24) The motion detection unit 103 calculates the motion vector based on the image data of the previous frame stored in the first shared memory 121 and the image data of the present frame.
(25) The first data-parallel processor 101 calculates the predicted image data by performing motion compensation processing based on the image data of the previous frame stored in the first shared memory 121 and the motion vector calculated by the motion detection unit 103. Moreover, it calculates difference image data of the image data of the current frame with respect to the predicted image data.
(26) The second data-parallel processor 102 discrete-cosine-transforms the difference image data, and quantizes the obtained DCT coefficient. Moreover, the second data-parallel processor 102 inversely quantizes the quantized DCT coefficient, inversely discrete-cosine-transforms it, calculates the difference image data, and calculates reconstructed image data from the difference image data and the predicted image data processed by the first data-parallel processor 101.
(27) In the signal-processing apparatus of the present embodiment, while the first data-parallel processor 101 is performing the calculation of the pixel value of the motion compensation processing, the second data-parallel processor 102 performs the DCT processing. As described above, it is possible to cause two different data-parallel processors to perform different processing while maintaining the operating ratios thereof, whereby the performance is improved.
(28) The de-blocking filtering unit 104 performs de-blocking filtering processing on the reconstructed image data, removes block noise, and stores the result into the first shared memory 121.
(29) The variable-length coding/decoding unit 105 performs variable-length coding processing using an arithmetic code on the quantized DCT coefficient and the motion vector, and outputs the coded data as a bit stream.
(30) The instruction-parallel processor 100 performs the overall control of the above-described various processing through the first instruction bus 130. Moreover, the instruction-parallel processor 100 performs a coding mode determination as to whether to perform the generation of the predicted image by intra prediction coding or by inter prediction coding.
(31) The data transfer between the processors and the units is performed through the first data bus 132.
(32) High-efficiency image processing can be realized by performing sequential processing of the image compression/decompression by the instruction-parallel processor 100, performing routine processing of the image compression/decompression by the first data-parallel processor 101 and the second data-parallel processor 102, and performing heavy processing such as the motion detection processing, the de-blocking filtering processing and the variable-length coding processing by the dedicated hardware as described above.
(33) The demarcation in sharing the object of processing between the first data-parallel processor 101 and the second data-parallel processor 102 in the present embodiment is an example, and it may be different. In other words, according to the performance of the processor, the processing of the first data-parallel processor 101 and the second data-parallel processor 102 may be performed by one data-parallel processor.
(34) Further, the motion compensation processing performed by the first data-parallel processor 101 may be performed by the motion detection unit 103.
Second Embodiment
(36) The signal-processing apparatus of the present embodiment further comprises, compared to the signal-processing apparatus of the first embodiment, a control processor 107, a second shared memory 122, a second instruction bus 131, a second data bus 133, and a bridge unit 120 connecting the first data bus 132 and the second data bus 133.
(37) The instruction-parallel processor 100, the control processor 107 and the variable-length coding/decoding unit 105 are connected to the first instruction bus 130. The control processor 107, the first data-parallel processor 101, the second data-parallel processor 102, the motion detection unit 103 and the de-blocking filtering unit 104 are connected to the second instruction bus 131.
(38) The local memories 111 to 115, the first shared memory 121, the input and output interface 106 and the bridge unit 120 are connected to the first data bus. The local memory 110, the second shared memory 122 and the bridge unit 120 are connected to the second data bus.
(39) In the signal-processing apparatus of the present embodiment, the parallel processing of data is enhanced compared to that of the first embodiment. In other words, the control processor 107 introduced in the present embodiment controls, in response to an instruction from the instruction-parallel processor 100, the first data-parallel processor 101, the second data-parallel processor 102, the motion detection unit 103 and the de-blocking filtering unit 104 through the second instruction bus 131. Consequently, the signal-processing apparatus of the present embodiment is capable of more rapidly performing parallel processing by the data-parallel processors and the dedicated hardware.
(40) Further, the second shared memory 122 of the present embodiment stores data related to the instruction-parallel processor 100, and data accessed at a comparatively low frequency among the data handled by the components connected to the first data bus 132. The structure reduces the load on the first shared memory 121, so that the processing efficiency of the entire signal-processing apparatus is improved.
(41) The operation of the present embodiment will be described in detail in the third embodiment described below.
Third Embodiment
(43) The video encoder of the present embodiment is an encoder compliant with the MPEG-4 AVC standard. Each component is given a name adequately expressing its function in the video encoder corresponding to the MPEG-4 AVC.
(44) The video encoder of the present embodiment shown in
(45) The processing of a coding controller 301 and a mode switcher 303 are performed by the instruction-parallel processor 100 of
(46) The processing of a motion compensator 312 and a difference detector 302 are performed by the first data-parallel processor 101 of
(47) The processing of a 4×4 DCT transformer 304, a quantizer 305, an inverse quantizer 306, an inverse 4×4 DCT transformer 307 and a reconstructor 309 are performed by the second data-parallel processor 102 of
(48) A variable-length coder 308 corresponds to the variable-length coding/decoding unit 105 of
(49) Next, primary signal processing of the MPEG-4 AVC will be described with reference to the operation of the components of the present embodiment.
(50) First, encoding processing will be described with reference to
(51) According to existing coding standards such as MPEG-2 and H.263, a real-precision DCT is adopted for the 8×8 block size, and a mismatch occurs unless the DCT precision is defined. According to the MPEG-4 AVC, however, an integer-precision DCT is applied for the 4×4 block size, and consequently a mismatch due to the DCT precision does not occur.
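As a toy illustration of why an all-integer transform avoids precision mismatch, the 4×4 forward core transform of the MPEG-4 AVC can be sketched in Python. The matrix Cf below is the standard integer approximation of the DCT; the scaling that the standard folds into quantization is omitted here for simplicity.

```python
# Integer approximation of the 4x4 DCT used by MPEG-4 AVC (H.264).
CF = [[1,  1,  1,  1],
      [2,  1, -1, -2],
      [1, -1, -1,  1],
      [1, -2,  2, -1]]

def mat_mul(A, B):
    """Plain 4x4 integer matrix multiply."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def forward_4x4(X):
    """Y = Cf * X * Cf^T; every intermediate value is an exact integer,
    so an encoder and a decoder computing it can never disagree."""
    return mat_mul(mat_mul(CF, X), transpose(CF))
```

Because every operation is an exact integer addition, subtraction or multiplication, the inverse transform reproduces the encoder's result bit-exactly, which is the mismatch-free property described above.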
(52) The quantized DCT coefficient is entropy-coded by use of an arithmetic coder at the variable-length coder 308. Details thereof will be described later.
(53) Next, variable-length coding/decoding processing will be described.
(54) The outline of the MPEG-4 AVC is described in Document 3 “The overview of MPEG-4 AVC|H.264 and its standardization” (Teruaki SUZUKI, Information Processing Society of Japan, Audio Visual and Multimedia Information Processing 38-13, pp. 69-73, November, 2002). Description will be given based on Document 3.
(55) In the variable-length coding of syntax elements such as the number of macro-blocks, the motion vector difference and the conversion factor, the following two entropy coding methods are selectively used: CAVLC (Context Adaptive Variable Length Coding); and CABAC (Context Adaptive Binary Arithmetic Coding).
(56) In this description, the arithmetic coding method called CABAC used in the main profile will be explained. In arithmetic coding, a line segment with a length of “1” is divided according to the probabilities of occurrence of the symbols to be coded, and since the divided line segments and the symbols to be coded correspond one to one to each other, coding is performed with respect to a line segment. Since the binary number representative of the line segment is the code, the larger the line segment, that is, the higher the probability of occurrence of the symbol to be coded, the shorter the binary number by which the symbol can be expressed, and consequently the compression rate is increased. Therefore, when the object block is coded, the probability of occurrence is manipulated in accordance with the context of the peripheral blocks so that the compression rate is increased.
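The interval-subdivision idea can be sketched with a toy, non-normative binary arithmetic coder in Python. Here p_one is an assumed fixed probability of the symbol “1”, whereas CABAC adapts this probability per context.

```python
import math

def encode_interval(bits, p_one):
    """Repeatedly subdivide [0, 1): the '0' symbol takes the lower
    (1 - p_one) fraction, '1' the upper p_one fraction.  The final
    interval width determines the code length."""
    low, width = 0.0, 1.0
    for b in bits:
        split = width * (1.0 - p_one)   # portion assigned to '0'
        if b == 0:
            width = split
        else:
            low += split
            width -= split
    return low, width

def code_length(width):
    """Roughly -log2(width) bits suffice to identify a number
    inside the final interval."""
    return math.ceil(-math.log2(width))
```

A run of likely symbols leaves a wide interval: encode_interval([1, 1, 1], 0.9) leaves a width of 0.9 cubed, about 0.73, needing only one bit, while the same run under p_one = 0.5 leaves a width of 0.125 and needs three bits, which is the compression-rate effect described above.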
(58) The context modeling is a probability model when each symbol is coded. A context is defined for each syntax element, and arithmetic coding is performed by switching the probability table in accordance with the context.
(60) In the above-described arithmetic coding processing, the decoding processing of the variable-length-coded code is a sequential processing of analyzing occurrence probability information by a decoder and performing reconstruction based on the information. Moreover, since the manipulation of the probability of occurrence is performed by use of a table, performing these coding processing and decoding processing by use of a VLIW (Very Long Instruction Word)-compliant instruction-parallel processor (in the above-described second embodiment, corresponding to the instruction-parallel processor 100 shown in
(61) In
(62) Next, the motion compensation processing of ¼ pixel precision performed by the motion compensator 312 of
(63) Motion compensation is to construct a predicted image closer to the image to be coded, by use of information on the motion vector when a predicted image is constructed from an image referred to. Since the code amount decreases as the prediction error decreases, the MPEG-4 AVC adopts the motion compensation of ¼ pixel precision. The motion vector comprises two parameters representative of a translational movement in the unit of blocks (the distance moved in the horizontal direction and the distance moved in the vertical direction).
(64) The predicted image of the reference image pointed by the motion vector is obtained by the following manner:
(65) In
(66) The procedure of obtaining the values of these pixels is now described. First, the pixel b of ½ precision is obtained in the following manner: With the pixels E, F, G, H, I and J in the vicinity of the pixel b in the horizontal direction as variables, intermediate data b1 is generated by use of a 6-tap filter defined by (Equation 1).
b1=(E−5*F+20*G+20*H−5*I+J) [Equation 1]
(67) Then, the intermediate data b1 is rounded and normalized by (Equation 2) and clipped to 0 to 255, whereby the pixel b is obtained.
b=Clip((b1+16)/32) [Equation 2]
Here, Clip(X) is a function that clips the variable X inside the parentheses to the range of 0 to 255. That is, when X is less than 0, Clip(X)=0; when X is in the range of 0 to 255, Clip(X)=X; and when X is not less than 256, Clip(X)=255.
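The two steps above, (Equation 1) and (Equation 2), can be sketched in Python. The pixel names follow the description, and the integer division realizes the rounding and normalization.

```python
def clip(x):
    """Clip(X): constrain a value to the 0..255 pixel range."""
    return max(0, min(255, x))

def half_pel(e, f, g, h, i, j):
    """Half-precision pixel from six neighboring integer pixels:
    the 6-tap filter of (Equation 1), then the rounding,
    normalization and clipping of (Equation 2)."""
    b1 = e - 5 * f + 20 * g + 20 * h - 5 * i + j
    return clip((b1 + 16) // 32)
```

On a flat area all six inputs are equal and the filter is transparent: half_pel(100, 100, 100, 100, 100, 100) returns 100, since the tap weights sum to 32.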
(68) Likewise, the pixel h of ½ precision is obtained in the following manner: With the pixels A, C, G, M, R and T in the vicinity of the pixel h in the vertical direction as variables, intermediate data h1 is generated by use of a 6-tap filter defined by (Equation 3).
h1=(A−5*C+20*G+20*M−5*R+T) [Equation 3]
(69) The intermediate data h1 is rounded and normalized by (Equation 4) and clipped to 0 to 255, whereby the pixel h is obtained.
h=Clip((h1+16)/32) [Equation 4]
(70) The pixels a, c, d, f, i, k, n and q of ¼ precision are each obtained by a rounded average by use of two neighboring pixels as shown in (Equation 5).
a=(G+b+1)/2
c=(H+b+1)/2
d=(G+h+1)/2
f=(b+j+1)/2
i=(h+j+1)/2
k=(j+m+1)/2
n=(M+h+1)/2
q=(j+s+1)/2 [Equation 5]
(71) Likewise, the pixels e, g, p and r of ¼ precision are each obtained by a rounded average by use of two neighboring pixels as shown in (Equation 6).
e=(b+h+1)/2
g=(b+m+1)/2
p=(h+s+1)/2
r=(m+s+1)/2 [Equation 6]
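Each ¼-precision pixel in (Equation 5) and (Equation 6) is simply a rounded average of two already-computed neighboring pixels, which can be sketched in Python as:

```python
def quarter_pel(p, q):
    """Rounded average of two neighboring pixel values, as used for the
    quarter-precision pixels of (Equation 5) and (Equation 6)."""
    return (p + q + 1) // 2
```

For example, with G = 100 and the half-precision pixel b = 103, the pixel a = (100 + 103 + 1) // 2 = 102.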
(72) In the predicted image generation as described above, the motion vector can be set for each sub-macro-block. In the case of 4×4, where the sub-macro-blocks are smallest, it is necessary to interpolate the pixels at 16 fractional positions from the pixels at integer positions by use of a 6-tap filter. In the pixel interpolation, since there is no data dependence among pixels, the processing can be performed in parallel. Therefore, by using the SIMD data-parallel processor as shown in the present embodiment, the filtering processing can be efficiently performed.
(73) Next, the de-blocking filtering will be described.
(74) According to the MPEG-4 AVC, since the DCT processing is performed in the unit of 4×4 pixels, block distortion occurs at the pixel boundary. The de-blocking filtering processing smoothes the distortion by performing filtering on the block boundary. The filtering processing performed on the 4×4 boundaries of the image is an adaptive filtering processing in which the filter strength is adjusted to a value most suitable for each block boundary in accordance with the value of the Boundary Strength (BS). That is, the boundary strength BS is used for determining whether to perform filtering on the boundary or not and defining the maximum value of pixel value variations when filtering is performed.
(76) In the de-blocking filter 310 shown in
(77) The processing of the de-blocking filter 310 is now described with reference to
(79) Filtering processing when the boundary strength BS=4 will be described. In the first filtering processing on the boundary [1] of a 4×4 sub-macro-block, with eight pixels p3, p2, p1, p0, q0, q1, q2 and q3 sandwiching the boundary [1] as the inputs, six pixels p2, p1, p0, q0, q1 and q2 are rewritten to pixels P2, P1, P0, Q0, Q1 and Q2.
(80) The pixels P2, P1 and P0 are switched between filtering equations by the condition of (Equation 7), and are calculated by (Equation 8) or (Equation 9).
ap<β and |p0−q0|<α/4+2
ap=|p2−p0| [Equation 7]
α: coefficient 1 calculated from quantization parameter
β: coefficient 2 calculated from quantization parameter
(81) When the condition of (Equation 7) is satisfied, the pixels P0, P1 and P2 are obtained by (Equation 8).
P0=(p2+2*p1+2*p0+2*q0+q1+4)/8
P1=(p2+p1+p0+q0+2)/4
P2=(2*p3+3*p2+p1+p0+q0+4)/8 [Equation 8]
(82) When the condition of (Equation 7) is not satisfied, the pixels P0, P1 and P2 are obtained by (Equation 9).
P0=(2*p1+p0+q1+2)/4
P1=p1
P2=p2 [Equation 9]
(83) The pixels Q0, Q1 and Q2 are switched between filtering equations by the condition of (Equation 10), and are calculated by (Equation 11) or (Equation 12).
aq<β and |p0−q0|<α/4+2
aq=|q2−q0| [Equation 10]
α: coefficient 1 calculated from quantization parameter
β: coefficient 2 calculated from quantization parameter
(84) When the condition of (Equation 10) is satisfied, the pixels Q0, Q1 and Q2 are calculated by (Equation 11).
Q0=(p1+2*p0+2*q0+2*q1+q2+4)/8
Q1=(p0+q0+q1+q2+2)/4
Q2=(2*q3+3*q2+q1+q0+p0+4)/8 [Equation 11]
(85) When the condition of (Equation 10) is not satisfied, the pixels Q0, Q1 and Q2 are calculated by (Equation 12).
Q0=(2*q1+q0+p1+2)/4
Q1=q1
Q2=q2 [Equation 12]
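The P-side filtering of (Equation 7) through (Equation 9) can be sketched in Python as follows. Here α and β are assumed to be precomputed from the quantization parameter, and integer division implements the rounding of the equations.

```python
def filter_p_side(p3, p2, p1, p0, q0, q1, alpha, beta):
    """BS = 4 filtering of the P-side pixels: the condition of
    (Equation 7) selects between the strong taps of (Equation 8)
    and the weak taps of (Equation 9)."""
    ap = abs(p2 - p0)
    if ap < beta and abs(p0 - q0) < alpha // 4 + 2:
        # Strong filtering (Equation 8)
        P0 = (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) // 8
        P1 = (p2 + p1 + p0 + q0 + 2) // 4
        P2 = (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) // 8
    else:
        # Weak filtering (Equation 9)
        P0 = (2 * p1 + p0 + q1 + 2) // 4
        P1, P2 = p1, p2
    return P0, P1, P2
```

The Q side of (Equation 10) through (Equation 12) is the mirror image, with aq = |q2−q0| and the roles of the p and q pixels exchanged.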
(86) When the filtering processing is adaptively switched according to the quantization parameter and the pixel values as described above, the BS condition determination cannot be performed in parallel on the SIMD data-parallel processor, so that the calculators disposed in parallel cannot be used effectively. Instead, by performing the de-blocking filtering processing by dedicated hardware comprising the BS condition determination processor 602 and the filtering processor 605 as shown in
(87) The image having undergone the de-blocking filtering processing by the de-blocking filter 310 in the video encoder of the present embodiment shown in
(88) Next, the processing amount required when the video encoder shown in
(90) In
(91) In the encoding processing, the processing amount in the motion detection, the motion compensation, the variable-length coding and the de-blocking filtering is large. Concrete numerical values of these processing amounts among the methods are as follows:
(92) In method 1, the motion detection processing is “3048” megacycles, the variable-length coding processing is “1000” megacycles, the de-blocking filtering processing is “321” megacycles, the motion compensation processing is “314” megacycles, and the remaining processing is “217” megacycles. The total processing amount is “4900” megacycles.
(93) In method 2, the motion detection processing is “381” megacycles, the variable-length coding processing is “333” megacycles, the de-blocking filtering processing is “107” megacycles, the motion compensation processing is “39” megacycles, and the remaining processing is “52” megacycles. The total processing amount is “900” megacycles.
(94) In method 3, the motion detection processing is “381” megacycles, the variable-length coding processing is “67” megacycles, the de-blocking filtering processing is “80” megacycles, the motion compensation processing is “39” megacycles, and the remaining processing is “30” megacycles. The total processing amount is “607” megacycles.
(95) In the method 4, the motion detection processing is “203” megacycles, the variable-length coding processing is “67” megacycles, the de-blocking filtering processing is “21” megacycles, the motion compensation processing is “21” megacycles, and the remaining processing is “29” megacycles. The total processing amount is “352” megacycles.
(96) The motion detection processing is a process of selecting a position (motion vector) where the sum of the absolute values of the differences between the pixel values of the object macro-block and the reference macro-block is the smallest. In the case of the MPEG-4 AVC, the motion vector can be set in the unit of 4×4 sub-macro-blocks. Therefore, the calculation of the sum of the absolute values of the differences among 16 pixels can be processed in parallel. In the methods 2 and 3, the motion detection processing is performed by an 8-parallel SIMD parallel data processor, and compared to the method 1, a significant speedup is realized. In the method 4, since the motion detection processing is performed by 16-parallel dedicated hardware capable of calculating the sum of the absolute values of the differences, higher-speed processing than the SIMD parallel data processor is realized.
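The minimum-SAD search described above can be sketched as an exhaustive integer-precision search in Python. The flat frame layout, block size and search range below are illustrative assumptions, not details from the embodiment.

```python
def sad(cur, ref):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for a, b in zip(cur, ref))

def full_search(cur_block, ref_frame, w, bx, by, bs, r):
    """Exhaustive search: evaluate every candidate vector in a +/-r
    window and keep the one with the smallest SAD.
    cur_block: bs*bs pixels; ref_frame: flat frame of width w;
    (bx, by): top-left corner of the search center."""
    best = None
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            cand = [ref_frame[(by + dy + y) * w + (bx + dx + x)]
                    for y in range(bs) for x in range(bs)]
            cost = sad(cur_block, cand)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best
```

Because each candidate's SAD is independent of the others, the 16 absolute differences of a 4×4 sub-macro-block can be computed simultaneously, which is what the 8-parallel SIMD processors of methods 2 and 3 and the 16-parallel dedicated hardware of method 4 exploit.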
(97) The motion compensation processing is a processing of obtaining the reference image pointed by the motion vector, with ¼ pixel precision. In this processing, parallel processing is also possible because processing is performed in the unit of 4×4 sub-macro-blocks. Like in the case of the motion detection processing, the motion compensation processing is performed by the 8-parallel SIMD parallel data processor in the methods 2 and 3 and by the dedicated hardware in the method 4, thereby a significant speedup is realized.
(98) The variable-length coding processing, an arithmetic coding processing called CABAC, is a sequential processing that performs coding by changing the probability of occurrence of the object block in accordance with the context of the peripheral blocks. In method 2, the variable-length coding processing is performed by an MIMD parallel data processor capable of issuing four instructions, and the processing amount is, at most, ⅓ that of the single-instruction-issue processor of method 1. In methods 3 and 4, the VLC processing is performed by dedicated hardware, and since the determination processing and the table search processing are performed at high speed, the processing time can be reduced to 1/15 that of method 1.
(99) The de-blocking filtering processing is a parallel processing by the MIMD parallel data processor in method 2, and a parallel processing by the SIMD parallel data processor in method 3. Since the performance of the filtering processing and the performance of the BS determination processing are not improved in the MIMD type and the SIMD type, respectively, the processing time can be reduced only to ⅓ to ¼. On the other hand, in the method 4, the de-blocking filtering processing is performed by the dedicated hardware, and by dividing the BS determination processing and the filtering processing, and by pipeline operation, the processing time can be reduced to 1/15 that of the method 1.
(100) As is apparent from the above, by implementing the motion detection processing, the motion compensation processing, the variable-length coding processing and the de-blocking filtering as dedicated hardware like in the present embodiment, a significant speedup is realized.
Fourth Embodiment
(102) The video decoder of the present embodiment is a decoder compliant with the MPEG-4 AVC standard. Each component is given a name adequately expressing its function in the video decoder according to the MPEG-4 AVC.
(103) The video decoder of the present embodiment shown in
(104) The processing of a decoding controller 331 is performed by the instruction-parallel processor 100 of
(105) The processing of a motion vector decoder 336 and a motion compensator 337 are performed by the first data-parallel processor 101 of
(106) The processing of an inverse quantizer 333, an inverse 4×4 DCT transformer 334 and a reconstructor 335 are performed by the second data-parallel processor 102 of
(107) A variable-length decoder 332 corresponds to the variable-length coding/decoding unit 105 of
(108) The outline of the operation of the video decoder of the present embodiment is now described.
(109) An encoded video input 341 encoded by arithmetic encoding is inputted to the variable-length decoder 332 and decoded to obtain the quantized DCT coefficient and the motion vector difference. The obtained quantized DCT coefficient is inversely quantized by the inverse quantizer 333, and then, inversely discrete-cosine-transformed by the inverse 4×4 DCT transformer 334 to obtain the difference image data.
(110) On the other hand, the motion vector is obtained by the motion vector decoder 336 from the motion vector difference obtained by the variable-length decoder 332, and the predicted image is obtained by the motion compensator 337 from the reference image and the motion vector stored in the frame memory 339.
(111) A new image is reconstructed by the reconstructor 335 from the difference image data and the predicted image and outputted as a video output 342. The outputted video output 342 is, at the same time, de-blocking-filtering-processed by the de-blocking filter 338, and then, stored into the frame memory 339.
(112) The control of the inverse quantizer 333 and the inverse 4×4 DCT transformer 334 is performed by the decoding controller 331.
(113) The de-blocking filtering processing, the inverse quantization processing and the inverse DCT processing are similar to those in the third embodiment, and descriptions thereof are omitted.
(114) In the present embodiment, by performing the variable-length decoding processing and the de-blocking filtering processing with dedicated hardware, a significant speedup can be realized.
(115) Moreover, while the above description takes up an example in which the video decoder of the present embodiment is implemented by use of the signal-processing apparatus of the second embodiment of the present invention shown in
Fifth Embodiment
(117) In the audio encoder shown in
(118) In the audio decoder shown in
(119) Audio encoding and decoding can be processed by any processor because the required processing amount is small compared to that of video encoding and decoding according to the MPEG-4 AVC.
(120) When the audio encoder and the audio decoder of the present embodiment are implemented by use of the signal-processing apparatus of the first embodiment, the processing of the compressor 351 and the coder 352 shown in
Sixth Embodiment
(122) The AV reproduction system of the present embodiment has a reproducer 801, a demodulator/error corrector 802, an AV decoder 803, a memory 804, and D/A converters 805 and 807. The AV decoder 803 has a video decoder 803A and an audio decoder 803B.
(123) The video decoder 803A is the video decoder of the fourth embodiment of the present invention shown in
(124) The audio decoder 803B is an audio decoder of the fifth embodiment of the present invention shown in
(125) The reproducer 801 reproduces media on which coded AV signals are recorded, and outputs reproduction signals. The reproducer 801 may be any reproducer that is capable of reproducing media on which coded AV signals according to the MPEG-4 AVC standard are recorded such as a DVD video reproducer or an HD (hard disk) video reproducer.
(126) The demodulator/error corrector 802 demodulates the signal reproduced by the reproducer 801, error-corrects the demodulated signal, and outputs the error-corrected signal to the AV decoder 803.
(127) The video decoder 803A of the AV decoder 803 decodes the coded video signal and outputs the decoded signal, and the outputted signal is converted to an analog signal by the D/A converter 805 and outputted as a video output 806.
(128) The audio decoder 803B of the AV decoder 803 decodes the coded audio signal and outputs the decoded signal, and the outputted signal is converted to an analog signal by the D/A converter 807 and outputted as an audio output 808.
(129) In the memory 804, AV signals before decoding, during decoding and/or after decoding, and other data are stored.
(130) In the AV reproduction system of the present embodiment, part or all of the functions of the demodulator/error corrector 802 may be provided to the reproducer 801.
(131) The AV reproduction system of the present embodiment can be used for receiving MPEG-4 AVC-compliant AV signals transmitted over CATV, the Internet or satellite communications, and can also be used for demodulating and decoding them. In this case, the AV reproduction system can be configured to input the received signal to the demodulator/error corrector 802 and decode the signal by the above-described process. Further, the AV reproduction system of the present embodiment can be applied as a digital television by displaying the video output on a display.
Seventh Embodiment
(133) The AV recording system of the present embodiment has an AV encoder 825, an error correcting code adder/modulator 827, a recorder 828, a memory 826 and A/D converters 822 and 824. The AV encoder 825 has a video encoder 825A and an audio encoder 825B.
(134) The video encoder 825A is the video encoder of the third embodiment of the present invention shown in
(135) The audio encoder 825B is an audio encoder of the fifth embodiment of the present invention shown in
(136) The outline of the operation of the AV recording system of the present embodiment is now described.
(137) A video input 821 is A/D converted by the A/D converter 822, an audio input 823 is A/D converted by the A/D converter 824, and these are outputted to the A/V encoder 825.
(138) The video encoder 825A of the AV encoder 825 encodes the inputted video signal according to the MPEG-4 AVC specifications, and outputs the signal as an encoded video bit stream. Likewise, the audio encoder 825B encodes the inputted audio signal according to the MPEG-4 AVC specifications, and outputs the signal as an encoded audio bit stream.
(139) The error correcting code adder/modulator 827 adds an error correcting code to the encoded video bit stream and the encoded audio bit stream outputted by the AV encoder 825, modulates the bit streams, and outputs them to the recorder 828.
(140) The recorder 828 records the modulated AV signal onto a recording medium. The recording medium includes an optical medium such as a DVD, a magnetic recording medium such as an HD (hard disk) or a semiconductor memory.
(141) In the memory 826, AV signals before encoding, during encoding and/or after encoding by the AV encoder 825, and other data are stored.
(142) In the AV recording system of the present embodiment, part or all of the functions of the error correcting code adder/modulator 827 may be included in the recorder 828.
(143) The AV recording system of the present embodiment can be used as a video camera system in which a video camera is connected to an input and the signal therefrom is encoded and recorded according to the MPEG-4 AVC specifications.
Eighth Embodiment
(144)
(145) Functionally, the AV encoder/decoder 843 is equivalent to the combination of the video encoder of the third embodiment of the present invention, the video decoder of the fourth embodiment, and the audio encoder and audio decoder of the fifth embodiment, and is implemented by a single signal-processing apparatus of the first embodiment or of the second embodiment. Descriptions of its operation are omitted in this embodiment because they have already been given.
(146) The recorder/reproducer 841 records/reproduces modulated AV signals according to the MPEG-4 AVC specifications. The recording medium includes an optical medium such as a DVD, a magnetic recording medium such as an HD (hard disk) or a semiconductor memory. The recorder/reproducer 841 has a different recording/reproduction mechanism according to the recording medium being used.
(147) At the time of recording, the modem/error processor 842 adds an error correcting code to the video bit stream and the audio bit stream encoded by the AV encoder/decoder 843, modulates the bit streams, and transmits them to the recorder/reproducer 841. At the time of reproduction, the modem/error processor 842 demodulates the AV signal reproduced by the recorder/reproducer 841, error-corrects the demodulated signal, and then transmits the video bit stream and the audio bit stream to the AV encoder/decoder 843.
(148) At the time of reproduction, the AV interface 845 D/A converts the video signal and the audio signal decoded by the AV encoder/decoder 843, and outputs a video output 846 and an audio output 848. At the time of recording, the AV interface 845 A/D converts a video input 847 and an audio input 849, and transmits them to the AV encoder/decoder 843.
(149) The memory 844 stores AV signals before, during and/or after encoding, AV signals before, during and/or after decoding by the AV encoder/decoder 843, and other data.
(150) The controller 840 controls the recorder/reproducer 841, the modem/error processor 842, the AV encoder/decoder 843 and the AV interface 845, switching their functions between recording and reproduction, and controls data transfer among them.
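The controller's role of switching every unit between the recording path and the reproduction path can be sketched as below. The class and method names are hypothetical, not taken from the patent; the point is only that one controller reconfigures all four units consistently for the selected mode.

```python
class AVSystemController:
    """Minimal sketch of controller 840's mode switching.

    The unit interface (a single set_mode method) is an assumption made
    for illustration; the patent only says the controller switches the
    units' functions between recording and reproduction.
    """

    RECORD, REPRODUCE = "record", "reproduce"

    def __init__(self, recorder, modem, codec, av_interface):
        self.units = (recorder, modem, codec, av_interface)
        self.mode = None

    def set_mode(self, mode):
        if mode not in (self.RECORD, self.REPRODUCE):
            raise ValueError(f"unknown mode: {mode}")
        # Reconfigure every unit's data path for the selected mode,
        # so recording and reproduction never mix.
        for unit in self.units:
            unit.set_mode(mode)
        self.mode = mode


class _StubUnit:
    """Placeholder for recorder/reproducer, modem/error processor, etc."""
    def __init__(self):
        self.mode = None
    def set_mode(self, mode):
        self.mode = mode


recorder, modem, codec, av_if = (_StubUnit() for _ in range(4))
controller = AVSystemController(recorder, modem, codec, av_if)
controller.set_mode(AVSystemController.RECORD)
```

Centralizing the switch in one controller, as the embodiment does, keeps the four units from ever being configured for opposite directions at the same time.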
(151) In the AV recording/reproduction system of the present embodiment, part or all of the functions of the modem/error processor 842 may be included in the recorder/reproducer 841.
(152) As described above in detail, the signal-processing apparatus of the present invention and electronic apparatuses using the same are expected to be applied to a wide variety of electronic apparatuses that use the MPEG-4 AVC encoding standard. The range of applications extends from stationary home terminals to battery-driven mobile terminals, such as the DVD systems, video camera systems and picture-phone systems for mobile telephones that currently operate according to MPEG-2.
(153) In these systems, the performance required of an LSI implementing the MPEG-4 AVC standard differs according to how the system is used. Stationary systems handle large image sizes, so processing performance is paramount, whereas in mobile terminals, reducing power consumption is important to extend battery life. The signal-processing apparatus of the present invention and an electronic apparatus using the same are applicable to both. That is, by combining the instruction-parallel processor, the data-parallel processor and the dedicated hardware, both an improvement in processing performance and a reduction in power consumption are enabled.
(154) The signal-processing apparatus of the present invention comprises a plurality of SIMD processors (in the example of
(155) For example, in the signal-processing apparatus for mobile terminals requiring low power consumption, by providing two SIMD processors, the degree of parallelism can be made 16, so that a low-voltage operation and the reduction in operating frequency are enabled.
(156) Moreover, instead of combining them into a degree of parallelism of 16, the apparatus can use two SIMD processors, each comprising eight processing elements, and cause them to perform different processing.
(157) By dividing the processing between the processors and running them in parallel, such that the second SIMD processor performs DCT processing while the first SIMD processor performs the pixel-value calculation of the motion compensation, a plurality of processes can be executed while high operating ratios are maintained. Consequently, the calculation performance can be significantly improved.
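The functional split described above can be sketched as follows: one 8-element SIMD processor reconstructs pixel values (motion compensation) while the other runs an 8-point DCT on an independent block. This is a behavioral sketch only; the thread pool merely mimics the two processors running concurrently, and the sample data is made up for illustration.

```python
import math
from concurrent.futures import ThreadPoolExecutor


def motion_comp(residual, prediction):
    """First SIMD processor: pixel-value calculation of motion
    compensation, i.e. residual + prediction, clamped to 8-bit range,
    eight lanes at a time."""
    return [min(255, max(0, r + p)) for r, p in zip(residual, prediction)]


def dct_8(block):
    """Second SIMD processor: unscaled 8-point DCT-II on an
    independent block (eight lanes evaluate the eight coefficients)."""
    return [sum(block[n] * math.cos(math.pi * k * (2 * n + 1) / 16)
                for n in range(8))
            for k in range(8)]


# Run the two stages concurrently on independent macroblock data,
# mirroring the division of work between the two SIMD processors.
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(motion_comp, [10] * 8, [100] * 8)
    f2 = pool.submit(dct_8, [1.0] * 8)
    reconstructed, coeffs = f1.result(), f2.result()
```

Because the two stages touch different data, neither processor waits on the other, which is why the operating ratios stay high in this arrangement.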
(158) While applications conforming to the MPEG-4 AVC standard are described in the above-described embodiments, the present invention is not limited to these applications. The gist of the present invention is to realize an improvement in processing performance and a reduction in power consumption by combining the instruction-parallel processor, the data-parallel processor and the dedicated hardware, and various applications are possible without departing from the gist of the invention.
(159) According to the present invention, a signal-processing apparatus capable of high-performance, high-efficiency image processing for tasks requiring a large amount of data processing, such as MPEG-4 AVC encoding/decoding, and an electronic apparatus using the same, can be provided.
(160) Having described preferred embodiments of the invention with reference to the accompanying drawings, it is to be understood that the invention is not limited to those precise embodiments, and that various changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined in the appended claims.