Video decoding device, method, and program
11695961 · 2023-07-04
Assignee
Inventors
CPC classification (all under section ELECTRICITY): H04N19/196; H04N19/159; H04N19/68; H04N19/105; H04N19/167; H04N19/70; H04N19/174; H04N19/44
International classification (all under section ELECTRICITY): H04N19/159; H04N19/105; H04N19/107; H04N19/167; H04N19/174; H04N19/196; H04N19/44; H04N19/68
Abstract
A video decoding device includes a demultiplexing unit which demultiplexes a video bitstream including video data of an encoded slice, first Supplemental-Enhancement-Information having information indicating segments where a refresh has completed in a current picture, and second Supplemental-Enhancement-Information having information indicating a synchronization starting picture and a synchronization completed picture; an extracting unit which extracts the information indicating segments where a refresh has completed in a current picture from a message which is part of the demultiplexed Supplemental-Enhancement-Information; and a video decoding unit which decodes image data from the demultiplexed video bitstream by using at least inter prediction, wherein the synchronization starting picture is a leading picture within a refreshing period, and the synchronization completed picture is the end position of the refreshing period.
Claims
1. A video decoding device comprising: a demultiplexing unit which demultiplexes a video bitstream including video data of an encoded slice, first Supplemental-Enhancement-Information having information indicating segments where a refresh has completed in a current picture, and second Supplemental-Enhancement-Information having information indicating a synchronization starting picture and a synchronization completed picture; an extracting unit which extracts the information indicating segments where a refresh has completed in a current picture from a message which is part of the demultiplexed Supplemental-Enhancement-Information; and a video decoding unit which decodes image data from the demultiplexed video bitstream by using at least inter prediction, wherein the synchronization starting picture is a leading picture within a refreshing period, and the synchronization completed picture is the end position of the refreshing period.
2. A video decoding method comprising: demultiplexing a video bitstream including video data of an encoded slice, first Supplemental-Enhancement-Information having information indicating segments where a refresh has completed in a current picture, and second Supplemental-Enhancement-Information having information indicating a synchronization starting picture and a synchronization completed picture; extracting the information indicating segments where a refresh has completed in a current picture from a message which is part of the demultiplexed Supplemental-Enhancement-Information; and decoding image data from the demultiplexed video bitstream by using at least inter prediction, wherein the synchronization starting picture is a leading picture within a refreshing period, and the synchronization completed picture is the end position of the refreshing period.
3. A non-transitory computer readable information recording medium storing a video decoding program which, when executed by a processor, performs: demultiplexing a video bitstream including video data of an encoded slice, first Supplemental-Enhancement-Information having information indicating segments where a refresh has completed in a current picture, and second Supplemental-Enhancement-Information having information indicating a synchronization starting picture and a synchronization completed picture; extracting the information indicating segments where a refresh has completed in a current picture from a message which is part of the demultiplexed Supplemental-Enhancement-Information; and decoding image data from the demultiplexed video bitstream by using at least inter prediction, wherein the synchronization starting picture is a leading picture within a refreshing period, and the synchronization completed picture is the end position of the refreshing period.
Description
BRIEF DESCRIPTION OF DRAWINGS
DESCRIPTION OF EMBODIMENTS
(18) Exemplary embodiments of the present invention will be described below with reference to accompanying drawings.
First Exemplary Embodiment
(20) The video encoder 101 performs encoding on each of the pictures in an input video and supplies a bitstream of the video to the multiplexer 104.
(21) When an adaptive predictive coding is performed in the video encoder 101, the refresh controller 102 supplies a control signal to the video encoder 101 such that prediction restriction is kept between refreshing groups of segments so as to perform intended refresh. The refresh controller 102 supplies display-enabled area information of each picture obtained from the refreshing groups of segments to the display-enabled area encoder 103.
(22) The display-enabled area encoder 103 encodes the display-enabled area information supplied from the refresh controller 102 and supplies the encoded display-enabled area information to the multiplexer 104 as a display-enabled area information bitstream.
(23) The multiplexer 104 multiplexes a video bitstream obtained from the video encoder 101 and the display-enabled area information bitstream obtained from the display-enabled area encoder 103 and outputs the multiplexed bitstream as a bitstream.
(24) A video encoding process by the video encoding device of the first exemplary embodiment will be described below with reference to a flowchart of
(25) In step S10001, the refresh controller 102 supplies control information defining a refresh operation to the video encoder 101 and the display-enabled area encoder 103.
(26) In step S10002, the video encoder 101 encodes respective pictures of an input video and supplies a video bitstream thereof to the multiplexer 104.
(27) In step S10003, when a picture to be currently encoded is a picture between a synchronization starting picture and a synchronization completed picture, the process proceeds to step S10004. When the picture to be currently encoded is not a picture between the synchronization starting picture and the synchronization completed picture, processing of step S10004 is skipped and the process proceeds to step S10005.
(28) In step S10004, the display-enabled area encoder 103 encodes information defining a display-enabled area of each picture and outputs a bitstream thereof to the multiplexer 104.
(29) In step S10005, the multiplexer 104 multiplexes a video bitstream supplied from the video encoder 101 and a display-enabled area information bitstream supplied from the display-enabled area encoder 103 and outputs a bitstream.
(30) In step S10006, when the bitstream output in step S10005 corresponds to a final picture to be encoded, video encoding is completed. In a case where the bitstream does not correspond to the final picture, the process returns to S10001.
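Steps S10001 to S10006 above can be sketched as the following control-flow outline. The component objects (refresh controller 102, video encoder 101, display-enabled area encoder 103, multiplexer 104) are hypothetical stand-ins; their interfaces are assumptions for illustration, not part of the patent text.

```python
# Hedged sketch of the encoding loop of the first exemplary embodiment.
def encode_video(pictures, refresh_controller, video_encoder,
                 area_encoder, multiplexer):
    for picture in pictures:
        # S10001: control information defining the refresh operation
        control = refresh_controller.control_info(picture)
        # S10002: encode the picture into a video bitstream
        video_bits = video_encoder.encode(picture, control)
        area_bits = None
        # S10003/S10004: encode display-enabled area information only for
        # pictures between the synchronization starting picture and the
        # synchronization completed picture
        if control.in_refreshing_period:
            area_bits = area_encoder.encode(control.display_enabled_area)
        # S10005: multiplex the video and area bitstreams and output
        multiplexer.output(video_bits, area_bits)
    # S10006: the loop ends after the final picture has been output
```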
(31) According to the above configuration, in a case in which the video encoding device generates a bitstream by using gradual refresh, the video encoding device can generate the bitstream so that a video decoding device which receives and starts replaying the bitstream midway can start partial display before the synchronization completed picture, thereby reducing display delay. This is because the multiplexer 104 receives control information defining a display-enabled area in each picture from the display-enabled area encoder 103, multiplexes the control information on the bitstream, and transmits it to the video decoding device, so that the video decoding device can know which area can be displayed partially in each picture up to the synchronization completed picture.
EXAMPLE
(32) A specific example of the video encoding device of the above first exemplary embodiment will be described below.
(33) In the present example, with respect to a progressive video having a spatial resolution of 320×240 pixels per frame, the leftmost of the 64×240-pixel areas obtained by uniformly dividing a picture into 5 in the horizontal direction is set as an intra-coded segment, and refresh is performed by shifting the intra-coded segment over 5 pictures so that the segments do not overlap one another.
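As a sketch, the position of the intra-coded segment in this example can be computed as follows. The function name and picture indexing are illustrative assumptions, not taken from the patent text.

```python
# Hedged sketch: position of the 64x240-pixel intra-coded segment for each
# of the 5 pictures in the refreshing period, for a 320x240 progressive
# video refreshed from left to right without overlap.
PIC_WIDTH = 320
PIC_HEIGHT = 240
NUM_SEGMENTS = 5
SEG_WIDTH = PIC_WIDTH // NUM_SEGMENTS  # 64 pixels

def intra_segment_rect(picture_index):
    """Return (left, top, right, bottom), inclusive, of the intra-coded
    segment; picture_index 0 is the synchronization starting picture."""
    i = picture_index % NUM_SEGMENTS
    left = i * SEG_WIDTH
    return (left, 0, left + SEG_WIDTH - 1, PIC_HEIGHT - 1)
```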
(35) Information of the above-described display-enabled areas may be multiplexed on a bitstream as a part (referred to as a partial display-enabled area supplemental enhancement information message) of a supplemental enhancement information message, according to the description of “Specification of syntax functions, categories, and descriptors” of Non Patent Literature 1, for example.
(36) An example of multiplexing of display-enabled area information in a list of
(37) Assuming that the picture number of the picture to which the partial display-enabled area supplemental enhancement information message is associated is CurrPicOrderCnt, recovery_starting_poc_cnt indicates that the display-enabled area represented in the message is applied in a case where decoding is started from the picture, assumed as the synchronization starting picture, defined by the picture number RecvStartPicOrderCnt obtained by Formula 1 below.
RecvStartPicOrderCnt = CurrPicOrderCnt − recovery_starting_poc_cnt (Formula 1)
(38) partial_recovery_cnt_minus1, plus 1, represents the number of display-enabled rectangular areas which exist in the partial display-enabled area supplemental enhancement information message. The display-enabled area in the picture to which the message is associated is obtained as the union of the display-enabled rectangular areas.
(39) exact_match_flag[i] indicates whether a decoded image in a case where decoding is started from a picture defined by RecvStartPicOrderCnt is exactly matched with a decoded image in a case where a bitstream is received and decoded from the beginning thereof in the display-enabled rectangular area. In a case where a decoded value of exact_match_flag[i] is 1, it is indicated that exact matching of a decoded image is achieved with respect to all pixels in the display-enabled rectangular area. In a case where a decoded value of exact_match_flag[i] is 0, it is indicated that there is a possibility that exact matching of a decoded image will not be achieved with respect to all pixels in the display-enabled rectangular area.
(40) recovery_rect_left_offset[i], recovery_rect_right_offset[i], recovery_rect_top_offset[i], and recovery_rect_bottom_offset[i] are a group of parameters that designate the position of the display-enabled rectangular area on the screen. Assuming that both the horizontal and vertical coordinates of the upper left pixel of a decoded image are 0, and that the horizontal and vertical coordinates of the lower right pixel of the decoded image are PicWidthInSamples−1 and PicHeightInSamples−1, respectively, the upper left pixel horizontal coordinate CurrRecvRectLeft[i], the lower right pixel horizontal coordinate CurrRecvRectRight[i], the upper left pixel vertical coordinate CurrRecvRectTop[i], and the lower right pixel vertical coordinate CurrRecvRectBottom[i] of the rectangular area in the picture defined by CurrPicOrderCnt are calculated by Formula 2 below.
CurrRecvRectLeft[i] = recovery_rect_left_offset[i]
CurrRecvRectRight[i] = PicWidthInSamples − 1 − recovery_rect_right_offset[i]
CurrRecvRectTop[i] = recovery_rect_top_offset[i]
CurrRecvRectBottom[i] = PicHeightInSamples − 1 − recovery_rect_bottom_offset[i] (Formula 2)
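Formulas 1 and 2 can be sketched in code as follows. The function names are illustrative assumptions; only the arithmetic comes from the formulas above.

```python
# Hedged sketch of Formulas 1 and 2: recovering the synchronization starting
# picture number and a display-enabled rectangle from the offsets carried in
# the partial display-enabled area supplemental enhancement information message.
def recv_start_pic_order_cnt(curr_pic_order_cnt, recovery_starting_poc_cnt):
    # Formula 1
    return curr_pic_order_cnt - recovery_starting_poc_cnt

def display_enabled_rect(left_off, right_off, top_off, bottom_off,
                         pic_width_in_samples, pic_height_in_samples):
    # Formula 2: offsets are measured inward from the picture edges;
    # coordinates are inclusive, with (0, 0) at the upper left pixel.
    left = left_off
    right = pic_width_in_samples - 1 - right_off
    top = top_off
    bottom = pic_height_in_samples - 1 - bottom_off
    return (left, top, right, bottom)
```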
(41) Now, the description of the multiplexing example of the display-enabled area information of
(42) The present example is an example in which display-enabled area information is directly encoded and is multiplexed on a bitstream as a supplemental enhancement information message, with respect to pictures from the synchronization starting picture to the synchronization completed picture on a picture-by-picture basis. However, according to the present invention, for example, with respect to a first picture, the display-enabled area information is multiplexed as partial display-enabled area supplemental enhancement information message shown in
(43) A multiplexing example of the display-enabled area information in a list of
(44) partial_recovery_ref_poc_cnt indicates that, assuming that the picture number of a picture to which the partial display-enabled area update supplemental enhancement information message is associated is CurrPicOrderCnt, a display-enabled area of the picture is calculated with reference to another display-enabled area defined by the partial display-enabled area supplemental enhancement information message, or the partial display-enabled area update supplemental enhancement information message, associated with a picture defined by a picture number PartRecvRefPicOrderCnt calculated by Formula 3 below.
PartRecvRefPicOrderCnt = CurrPicOrderCnt − partial_recovery_ref_poc_cnt (Formula 3)
(45) exact_match_flag[i] indicates whether a decoded image in a case where decoding is started from the picture defined by RecvStartPicOrderCnt is exactly matched with a decoded image in a case where the bitstream is received and decoded from the beginning thereof in the display-enabled rectangular area. In a case where a decoded value of exact_match_flag[i] is 1, it is indicated that exact matching of a decoded image is achieved with respect to all pixels in the display-enabled rectangular area. In a case where a decoded value of exact_match_flag[i] is 0, it is indicated that there is a possibility that exact matching of a decoded image will not be achieved with respect to all pixels in the display-enabled rectangular area.
(46) recovery_rect_left_offset_decr[i], recovery_rect_right_offset_decr[i], recovery_rect_top_offset_decr[i], and recovery_rect_bottom_offset_decr[i] are a group of parameters for updating the position of the display-enabled rectangular area on the screen. Assuming that the upper left pixel horizontal coordinate, the lower right pixel horizontal coordinate, the upper left pixel vertical coordinate, and the lower right pixel vertical coordinate of the i-th display-enabled rectangular area, determined in the picture defined by PartRecvRefPicOrderCnt, are RefRecvRectLeft[i], RefRecvRectRight[i], RefRecvRectTop[i], and RefRecvRectBottom[i], the corresponding coordinates CurrRecvRectLeft[i], CurrRecvRectRight[i], CurrRecvRectTop[i], and CurrRecvRectBottom[i] of the rectangular area in the picture defined by CurrPicOrderCnt are calculated by Formula 4 below.
CurrRecvRectLeft[i] = RefRecvRectLeft[i] − recovery_rect_left_offset_decr[i]
CurrRecvRectRight[i] = RefRecvRectRight[i] + recovery_rect_right_offset_decr[i]
CurrRecvRectTop[i] = RefRecvRectTop[i] − recovery_rect_top_offset_decr[i]
CurrRecvRectBottom[i] = RefRecvRectBottom[i] + recovery_rect_bottom_offset_decr[i] (Formula 4)
(47) With respect to recovery_rect_left_offset_decr[i], recovery_rect_right_offset_decr[i], recovery_rect_top_offset_decr[i], and recovery_rect_bottom_offset_decr[i], when a syntax element does not exist in the bitstream, its value is inferred to be 0.
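Formulas 3 and 4, together with the rule that absent decrement values are treated as 0, can be sketched as follows. The function names are illustrative assumptions; the arithmetic follows the formulas above.

```python
# Hedged sketch of Formulas 3 and 4: locating the reference picture for a
# partial display-enabled area update message and growing the rectangle.
def part_recv_ref_pic_order_cnt(curr_poc, partial_recovery_ref_poc_cnt):
    # Formula 3
    return curr_poc - partial_recovery_ref_poc_cnt

def update_rect(ref_rect, left_decr=0, right_decr=0, top_decr=0, bottom_decr=0):
    # Formula 4: the rectangle expands outward (left/top coordinates
    # decrease, right/bottom coordinates increase). Decrements default to 0
    # when not present in the bitstream.
    left, top, right, bottom = ref_rect
    return (left - left_decr, top - top_decr,
            right + right_decr, bottom + bottom_decr)
```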
(48) Now, the description of the multiplexing example of the display-enabled area update information of
(49) Further, the present example is an example in which display-enabled area information is multiplexed on a bitstream as a supplemental enhancement information message independent of other supplemental enhancement information messages in
(50) A multiplexing example of the display-enabled area information illustrated in the list of
(51) partial_recovery_info_present_flag is a parameter indicating whether information of the partial display-enabled area exists in a relevant supplemental enhancement information message.
(52) Since other parameters displayed in the list of
(53) Now, the description of the multiplexing example of the display-enabled area information of
(54) Further, in the above description of the video encoding process in the video encoding device of the first exemplary embodiment of the present invention, an operation example of encoding the display-enabled area information and multiplexing the display-enabled area information on a bitstream, for each picture having a display-enabled area, is illustrated. However, according to the present invention, the display-enabled area information in the synchronization starting picture may be superposed on the recovery point supplemental enhancement information message and may be multiplexed on a bitstream until the synchronization completed picture as illustrated in the list of
(55) A multiplexing example of the display-enabled area information that is illustrated in
(56) partial_recovery_update_info_present_flag[i] is a parameter indicating whether update information of an i-th display-enabled area exists in a picture after the synchronization starting picture.
(57) Since other parameters illustrated in the list of
(58) Now, the description of the multiplexing example of the display-enabled area information of
(59) The present example is an example in which the display-enabled area information is multiplexed as a supplemental enhancement information message with respect to each picture. However, according to the present invention, the display-enabled area information may be multiplexed with, for example, a sequence parameter set used for encoding and decoding of a whole bitstream or the display-enabled area information may be multiplexed with, for example, a picture parameter set.
(60) In addition, in the present example, the display-enabled area information is encoded with respect to all pictures from the synchronization starting picture to the synchronization completed picture and is multiplexed on the bitstream. However, the present invention is not limited to encoding of the display-enabled area information for all pictures from the synchronization starting picture to the synchronization completed picture, and the display-enabled area information may be encoded selectively for certain pictures from the synchronization starting picture to the synchronization completed picture and may be multiplexed on a bitstream.
(61) In the present example, refresh is performed by shifting an intra-coded segment from left to right in the screen at uniform intervals without overlapping. However, the present invention is not limited to such refresh, and the refresh direction may be selected arbitrarily. For example, refresh may be performed in an arbitrary direction, such as from right to left, from top to bottom, from upper left to lower right, from the center of a picture outward to the left and right, or in a spiral from the center of a picture. In addition, the method of shifting an intra-coded segment for refresh is arbitrary. The size of the intra-coded segment may vary between pictures, and the same area may be set as the intra-coded segment two or more times during a refreshing period. In addition, as is clear from the description of gradual refresh herein, the present invention is not limited to refresh based on intra-coded segments, and refresh may be performed by an arbitrary method, such as limiting the prediction range of inter-picture predictive coding.
(62) Although the display-enabled area is encoded as offset values from the upper, lower, left, and right edges of a picture and multiplexed on a bitstream in the present example, the present invention is not limited to this configuration, and the display-enabled area may be encoded by using an arbitrary expression method.
(63) Although the display-enabled area information is multiplexed according to the description method of AVC described in Non Patent Literature 1, the present invention is not limited to use of AVC, and may be applied to other video encoding methods, or to video of an arbitrary number of dimensions instead of two-dimensional video.
(64) In the present example, although the display-enabled area is encoded as a set of rectangular areas on a two-dimensional image and multiplexed on a bitstream, the present invention is not limited to this encoding, and the display-enabled area may be encoded by using an arbitrary expression method instead of a set of rectangular areas, or may be encoded as an area of arbitrary dimension.
(65) The description of the video encoding device of the first exemplary embodiment of the present invention is ended now.
Second Exemplary Embodiment
(67) The demultiplexer 201 demultiplexes a bitstream, extracts a video bitstream and a display-enabled area information bitstream, and supplies the video bitstream to the video decoder 202, and supplies the display-enabled area information bitstream to the display-enabled area decoder 203.
(68) The video decoder 202 decodes the video bitstream and supplies an obtained reconstructed image to the video output controller 204.
(69) The display-enabled area decoder 203 decodes the display-enabled area information bitstream and supplies obtained display-enabled area information to the video output controller 204.
(70) The video output controller 204 controls the reconstructed image supplied from the video decoder 202 such that the portion outside the display-enabled area obtained from the display-enabled area decoder 203 is not displayed, and outputs the processing result as a decoded video.
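As an assumed illustration of this control (the patent does not specify how suppression is implemented; here pixels outside the display-enabled area are simply blanked), the masking could look like the following sketch. The function name and pixel representation are hypothetical.

```python
# Hedged sketch: blank every pixel outside the union of the display-enabled
# rectangles before the reconstructed image is output as a decoded video.
def mask_outside_display_area(image, rects):
    """image: list of rows of pixel values; rects: list of
    (left, top, right, bottom) inclusive display-enabled rectangles."""
    height = len(image)
    width = len(image[0]) if height else 0
    out = [[0] * width for _ in range(height)]  # 0 = suppressed pixel
    for left, top, right, bottom in rects:
        for y in range(top, bottom + 1):
            for x in range(left, right + 1):
                out[y][x] = image[y][x]  # keep pixels inside the union
    return out
```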
(71) A video decoding process by the video decoding device of the second exemplary embodiment will be described below with reference to a flowchart of
(72) In step S20001, the demultiplexer 201 demultiplexes a bitstream, extracts a video bitstream and a display-enabled area information bitstream, and supplies the video bitstream to the video decoder 202, and the display-enabled area information bitstream to the display-enabled area decoder 203.
(73) In step S20002, the video decoder 202 decodes the video bitstream and supplies an obtained reconstructed image to the video output controller 204.
(74) In step S20003, when display-enabled area information is associated with a picture, the process proceeds to step S20004. When display-enabled area information is not associated with the picture, processing of step S20004 is skipped and the process proceeds to step S20005.
(75) In step S20004, the display-enabled area decoder 203 decodes the display-enabled area information bitstream and supplies obtained display-enabled area information to the video output controller 204.
(76) In step S20005, the video output controller 204 controls the reconstructed image supplied from the video decoder 202 such that the portion outside the display-enabled area obtained from the display-enabled area decoder 203 is not displayed, and outputs the processing result as a decoded video.
(77) In step S20006, when the bitstream decoded in step S20002 corresponds to a final picture to be decoded, video decoding is ended. In a case where the bitstream does not correspond to the final picture, the process returns to S20001.
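Steps S20001 to S20006 above can be sketched as the following control-flow outline. The component objects (demultiplexer 201, video decoder 202, display-enabled area decoder 203, video output controller 204) are hypothetical stand-ins; their interfaces are assumptions for illustration.

```python
# Hedged sketch of the decoding loop of the second exemplary embodiment.
def decode_video(bitstream_units, demux, video_decoder, area_decoder,
                 output_controller):
    decoded = []
    for unit in bitstream_units:
        # S20001: demultiplex into video and display-enabled area bitstreams
        video_bits, area_bits = demux.split(unit)
        # S20002: decode the reconstructed image
        image = video_decoder.decode(video_bits)
        area = None
        # S20003/S20004: decode area info only when it is associated
        # with the current picture
        if area_bits is not None:
            area = area_decoder.decode(area_bits)
        # S20005: suppress output of the image outside the display-enabled area
        decoded.append(output_controller.apply(image, area))
    # S20006: the loop ends after the final picture has been decoded
    return decoded
```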
(78) According to the above configuration, when receiving and replaying a bitstream generated by using gradual refresh from the middle thereof, the video decoding device starts partial display without computing a display-enabled area from the prediction reference range, rather than waiting for decoding of a synchronization completed picture, thereby reducing display delay. This is because the video decoding device receives information defining a display-enabled area as a bitstream from the video encoding device, and the demultiplexer 201 supplies the information to the display-enabled area decoder 203, so that the video output controller 204 can control such that an image outside the display-enabled area is not output as a decoded video.
(79) Now, the description of the video decoding device of the second exemplary embodiment of the present invention is ended.
Third Exemplary Embodiment
(81) The video encoder 101 performs encoding on each of the pictures constituting an input video and provides a bitstream of the video to the multiplexer 104. In addition, the video encoder 101 supplies information related to an area in which refresh is completed to the display-enabled area encoder 103.
(82) The display-enabled area encoder 103 calculates and encodes display-enabled area information based on display-enabled area information supplied from the refresh controller 102 and refresh completed area information supplied from the video encoder 101, and supplies the display-enabled area information to the multiplexer 104 as a display-enabled area information bitstream.
(83) A video encoding process by the video encoding device of the third exemplary embodiment is represented by the flowchart of
(84) According to the above configuration, an effect is obtained in which a display-enabled area wider than that of the video encoding device of the first exemplary embodiment can be transferred to a video decoding device. This is because in a case where intra coding is eventually selected independently of a control signal supplied from the refresh controller 102 and thereby a display-enabled area is generated by completing refresh in the video encoder 101, the display-enabled area encoder 103 can encode a wider display-enabled area and supply the display-enabled area to the multiplexer 104.
(85) Now, the description of the video encoding device of the third exemplary embodiment of the present invention is ended.
Fourth Exemplary Embodiment
(87) The display-enabled area encoding controller 105 supplies a control signal for controlling encoding of display-enabled area information to the display-enabled area encoder 103.
(88) The display-enabled area encoder 103 encodes information, of which the encoding is indicated by the control signal supplied from the display-enabled area encoding controller 105, among display-enabled area information supplied from the refresh controller 102 and supplies the information to the multiplexer 104 as a display-enabled area information bitstream.
(89) A video encoding process by the video encoding device of the fourth exemplary embodiment will be described below with reference to a flowchart of
(90) In step S10001, the refresh controller 102 provides control information defining a refresh operation to the video encoder 101 and the display-enabled area encoder 103.
(91) In step S10002, the video encoder 101 encodes respective pictures of an input video and provides a video bitstream thereof to the multiplexer 104.
(92) In step S10003, in a case where a picture to be currently encoded is a picture between a synchronization starting picture and a synchronization completed picture, the process proceeds to step S10007. In the case where a picture to be currently encoded is not a picture from the synchronization starting picture to the synchronization completed picture, processing of step S10007 and step S10004 is skipped and the process proceeds to step S10005.
(93) In step S10007, when a current display-enabled area is an area to be encoded, the process proceeds to step S10004. Otherwise, processing of step S10004 is skipped and the process proceeds to step S10005.
(94) In step S10004, the display-enabled area encoder 103 encodes information defining a display-enabled area of each picture, and outputs a bitstream thereof to the multiplexer 104.
(95) In step S10005, the multiplexer 104 multiplexes a video bitstream supplied from the video encoder 101 and a display-enabled area information bitstream supplied from the display-enabled area encoder 103 and outputs the multiplexed bitstream as a bitstream.
(96) In step S10006, when the bitstream output in step S10005 corresponds to a final picture to be encoded, video encoding is ended. In a case where the bitstream does not correspond to the final picture, the process returns to S10001.
(97) According to the above configuration, an effect is obtained, in addition to the effect obtained by the video encoding device of the first exemplary embodiment, that a reduction in compression efficiency of the video data can be relatively suppressed when the video encoding device notifies the video decoding device of a partial area where display can be started before the synchronization completed picture. This is because the display-enabled area encoding controller 105 limits the display-enabled areas to be encoded, reducing the data amount required to transfer the display-enabled area information.
(98) Now, the description of the video encoding device of the fourth exemplary embodiment of the present invention is ended.
(99) As is obvious from the description of the exemplary embodiments above, the present invention may be realized by hardware or by a computer program.
(100) An information processing system illustrated in
(101) In the information processing system illustrated in
(102) Part or all of the aforementioned exemplary embodiments can be described as Supplementary notes mentioned below, but the structure of the present invention is not limited to the following structures.
(103) (Supplementary Note 1)
(104) A video encoding device comprising: video encoding means for encoding image data of an input moving image based on prediction and generating a video bitstream of encoded pictures; refresh control means for refreshing such that a partial area in the picture is assumed as a unit area to be refreshed and the unit area to be refreshed is moved on a picture-by-picture basis; display-enabled area encoding means for encoding a display-enabled area for each picture in refreshing and generating a display-enabled area information bitstream; and multiplexing means for multiplexing a video bitstream and the display-enabled area information bitstream.
(105) (Supplementary Note 2)
(106) The video encoding device according to Supplementary note 1, wherein the display-enabled area encoding means generates the display-enabled area information bitstream by using a difference between a display-enabled area in a picture to be encoded and a display-enabled area in a picture encoded in the past.
(107) (Supplementary Note 3)
(108) The video encoding device according to Supplementary note 1 or 2, further comprising display-enabled area encoding control means for controlling whether to encode the display-enabled area, wherein the display-enabled area encoding means selects and encodes a display-enabled area which is determined to be encoded by the display-enabled area encoding control means.
(109) (Supplementary Note 4)
(110) The video encoding device according to any one of Supplementary notes 1 to 3, wherein the refresh control means performs refresh by intra coding.
(111) (Supplementary Note 5)
(112) The video encoding device according to any one of Supplementary notes 1 to 4, wherein the refresh control means shifts a unit area to be refreshed within a prediction limitation range configured by a plurality of pictures, and the video encoding means excludes a predicted value by intra-picture prediction or inter-picture prediction which is beyond the prediction limitation range when performing encoding based on prediction.
(113) (Supplementary Note 6)
(114) A video decoding device comprising: demultiplexing means for demultiplexing a bitstream including video data and a bitstream including display-enabled area information in an image to be decoded; video decoding means for decoding the demultiplexed video bitstream based on prediction and generating image data; video output control means for limiting an output area of the image data based on the demultiplexed display-enabled area information; and display-enabled area decoding means for decoding the demultiplexed display-enabled area information bitstream according to a predetermined method and extracting at least a part of the display-enabled area information.
(115) (Supplementary Note 7)
(116) The video decoding device according to Supplementary note 6, wherein the display-enabled area information bitstream includes at least a difference between a display-enabled area in a picture to be decoded and a display-enabled area in a picture decoded in the past, and the display-enabled area decoding means extracts the difference between the display-enabled area in the picture to be decoded and the display-enabled area in the picture decoded in the past from the display-enabled area information bitstream and obtains a display-enabled area in the picture to be decoded.
(117) (Supplementary Note 8)
(118) A video encoding method comprising: encoding image data of an input moving image based on prediction and generating a video bitstream of encoded pictures; refreshing such that a partial area in the picture is assumed as a unit area to be refreshed and the unit area to be refreshed is moved on a picture-by-picture basis; encoding a display-enabled area for each picture and generating a display-enabled area information bitstream in refreshing; and multiplexing the video bitstream and the display-enabled area information bitstream.
(119) (Supplementary Note 9)
(120) The video encoding method according to Supplementary note 8, further comprising generating the display-enabled area information bitstream by using a difference between a display-enabled area in a picture to be encoded and a display-enabled area in a picture encoded in the past.
(121) (Supplementary Note 10)
(122) The video encoding method according to Supplementary note 8 or 9, further comprising controlling whether to encode the display-enabled area, and selecting and encoding a display-enabled area which is determined to be encoded by the control.
(123) (Supplementary Note 11)
(124) The video encoding method according to any one of Supplementary notes 8 to 10, further comprising performing the refresh by intra coding when picture refresh is performed.
(125) (Supplementary Note 12)
(126) The video encoding method according to any one of Supplementary notes 8 to 11, further comprising, when picture refresh is performed, shifting a unit area to be refreshed within a prediction limitation range configured by a plurality of pictures, and excluding a predicted value obtained by intra-picture prediction or inter-picture prediction which is out of the prediction limitation range when performing encoding based on prediction.
(127) (Supplementary Note 13)
(128) A video decoding method comprising: demultiplexing a bitstream including video data and a bitstream including display-enabled area information in an image to be decoded; decoding the demultiplexed video bitstream based on prediction and generating image data; limiting an output area of the image data based on the demultiplexed display-enabled area information; and decoding the demultiplexed display-enabled area information bitstream according to a predetermined method and extracting at least a part of the display-enabled area information.
(129) (Supplementary Note 14)
(130) The video decoding method according to Supplementary note 13, further comprising extracting a difference between a display-enabled area in a picture to be decoded and a display-enabled area in a picture decoded in the past from the display-enabled area information bitstream including at least the difference between the display-enabled area in the picture to be decoded and the display-enabled area in the picture decoded in the past, and obtaining a display-enabled area in the picture to be decoded.
(131) (Supplementary Note 15)
(132) A video encoding program which causes a computer to perform: a video encoding process of encoding image data of an input moving image based on prediction and generating a video bitstream of encoded pictures; a refresh control process of refreshing such that a partial area in the picture is assumed as a unit area to be refreshed and the unit area to be refreshed is moved on a picture-by-picture basis; a display-enabled area encoding process of encoding a display-enabled area for each picture and generating a display-enabled area information bitstream in refreshing; and a multiplexing process of multiplexing a video bitstream and the display-enabled area information bitstream.
(133) (Supplementary Note 16)
(134) The video encoding program according to Supplementary note 15, wherein in the display-enabled area encoding process, the computer is caused to perform generating the display-enabled area information bitstream by using a difference between a display-enabled area in a picture to be encoded and a display-enabled area in a picture encoded in the past.
(135) (Supplementary Note 17)
(136) The video encoding program according to Supplementary note 15 or 16, wherein the computer is caused to perform the display-enabled area encoding control process of controlling whether to encode the display-enabled area, and in the display-enabled area encoding process, to perform selecting and encoding a display-enabled area which is determined to be encoded in the display-enabled area encoding control process.
(137) (Supplementary Note 18)
(138) The video encoding program according to any one of Supplementary notes 15 to 17, wherein in the refresh control process, the computer is caused to perform refresh by intra coding.
(139) (Supplementary Note 19)
(140) The video encoding program according to any one of Supplementary notes 15 to 18, wherein in the refresh control process, the computer is caused to perform shifting a unit area to be refreshed within a prediction limitation range configured by a plurality of pictures, and in the video encoding process, to perform excluding a predicted value by intra-picture prediction or inter-picture prediction which is out of the prediction limitation range when encoding is performed based on prediction.
(141) (Supplementary Note 20)
(142) A video decoding program which causes a computer to perform: a demultiplexing process of demultiplexing a bitstream including video data and a bitstream including display-enabled area information in an image to be decoded; a video decoding process of decoding the demultiplexed video bitstream based on prediction and generating image data; a video output control process of limiting an output area of the image data based on the demultiplexed display-enabled area information; and a display-enabled area decoding process of decoding the demultiplexed display-enabled area information bitstream according to a predetermined method and extracting at least a part of the display-enabled area information.
(143) (Supplementary Note 21)
(144) The video decoding program according to Supplementary note 20, wherein in the display-enabled area decoding process, the computer is caused to perform extracting a difference between a display-enabled area in a picture to be decoded and a display-enabled area in a picture decoded in the past from the display-enabled area information bitstream including at least the difference between the display-enabled area in the picture to be decoded and the display-enabled area in the picture decoded in the past, and obtaining a display-enabled area in the picture to be decoded.
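For illustration only, the refresh control described in Supplementary notes 5 and 12 can be sketched as follows: a picture is divided into a number of unit areas, one unit area is refreshed (for example by intra coding) per picture, and the set of already-refreshed areas grows picture by picture until the whole picture has been refreshed. The segment model and all names below are illustrative assumptions, not definitions taken from this specification.

```python
def refresh_schedule(num_segments, num_pictures):
    """For each picture index, return the set of segments refreshed so far
    within the current refreshing period (a minimal illustrative model)."""
    schedule = []
    for pic in range(num_pictures):
        pos = pic % num_segments            # segment refreshed in this picture
        refreshed = set(range(pos + 1))     # segments refreshed since the period started
        schedule.append(refreshed)
    return schedule

# With 4 segments, picture 0 has refreshed only segment 0; by picture 3 all
# four segments have been refreshed, ending the refreshing period.
sched = refresh_schedule(4, 4)
```

In this model, the prediction limitation range corresponds to restricting prediction so that already-refreshed segments never reference the not-yet-refreshed ones.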
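The difference-based signalling of Supplementary notes 2, 7, 9, and 14 can likewise be sketched in minimal form: only the change between the display-enabled area of the current picture and that of a previously coded picture is transmitted, and the decoder accumulates the differences to recover the absolute area. The representation of an area as a single size value is a simplifying assumption for illustration.

```python
def encode_area_diffs(areas):
    """Encoder side: turn per-picture display-enabled area sizes into deltas
    relative to the previously encoded picture."""
    diffs, prev = [], 0
    for area in areas:
        diffs.append(area - prev)
        prev = area
    return diffs

def decode_area_diffs(diffs):
    """Decoder side: accumulate the deltas back into absolute area sizes."""
    areas, current = [], 0
    for d in diffs:
        current += d
        areas.append(current)
    return areas
```

Sending deltas is compact when the display-enabled area grows by a fixed amount per picture during a refresh, since every delta is then the same small value.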
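Finally, the decoding-side structure of Supplementary notes 6, 13, and 20 can be sketched as three steps: demultiplex the video data and the display-enabled area information, decode the pictures, and limit the output of each picture to its display-enabled area. The data model below (a picture as a list of segment rows, an area as a row count) is a deliberate simplification for illustration.

```python
def demultiplex(stream):
    """Split a multiplexed stream into video data and area information."""
    return stream["video"], stream["areas"]

def limit_output(picture_rows, enabled_rows):
    """Video output control: emit only the rows inside the display-enabled area."""
    return picture_rows[:enabled_rows]

def decode_and_output(stream):
    """Demultiplex, then apply output limiting to each decoded picture."""
    video, areas = demultiplex(stream)
    return [limit_output(pic, area) for pic, area in zip(video, areas)]
```

In this sketch a partially refreshed picture is output only up to its refreshed rows, which is the behavior the video output control means provides during a refreshing period.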
(145) While the present invention has been described with reference to the exemplary embodiments and examples, the present invention is not limited to the aforementioned exemplary embodiments and examples. Various changes understandable to those skilled in the art within the scope of the present invention can be made to the structures and details of the present invention.
(146) This application claims priority based on Japanese Patent Application No. 2012-141924 filed on Jun. 25, 2012, the disclosures of which are incorporated herein in their entirety.
REFERENCE SIGNS LIST
(147)
101 Video encoder
102 Refresh controller
103 Display-enabled area encoder
104 Multiplexer
105 Display-enabled area encoding controller
201 Demultiplexer
202 Video decoder
203 Display-enabled area decoder
204 Video output controller