Semiconductor integrated circuit, display device provided with same, and control method
10855946 · 2020-12-01
Assignee
Inventors
- Yoshinori Okajima (Osaka, JP)
- Masaki Toyokura (Hyogo, JP)
- Masayuki Taniyama (Osaka, JP)
- Masahiro Takeuchi (Osaka, JP)
- Takashi Akiyama (Osaka, JP)
CPC classification
H04N21/440245
ELECTRICITY
H04N13/122
ELECTRICITY
H04N21/4316
ELECTRICITY
H04N21/440263
ELECTRICITY
H04N21/440254
ELECTRICITY
H04N13/383
ELECTRICITY
G09G2340/0407
PHYSICS
H04N13/279
ELECTRICITY
H04N21/44218
ELECTRICITY
G09G2320/0261
PHYSICS
International classification
H04N13/383
ELECTRICITY
H04N21/4402
ELECTRICITY
H04N13/279
ELECTRICITY
H04N21/442
ELECTRICITY
Abstract
Disclosed herein is a semiconductor integrated circuit which controls the quality of an image and includes a viewer detector, a region specifier, and a controller. The viewer detector detects the number of viewer(s) watching the image and a gaze region being watched by the viewer within the image. If the number of viewers is plural, the region specifier specifies a local region of the image as a target region based on a plurality of gaze regions being watched by the viewers. The controller performs image quality control on the target region.
Claims
1. A semiconductor integrated circuit for use in a display device, the circuit comprising an output unit, an information input unit, a viewer detector, a region specifier, and a controller, wherein the output unit outputs output image information to the display device, the information input unit receives viewer information about one or more viewers, in front of the display device, watching an image on a single screen of the display device, the image being displayed based on the output image information, the viewer detector is configured to detect (1) how many viewers are watching the image on the single screen of the display device and (2) one or more gaze regions within the image being watched by the detected one or more viewers, based on the viewer information from the information input unit, the region specifier is configured to specify, in response to detecting a plurality of viewers, a target region in the image based on a plurality of gaze regions being watched by the plurality of viewers, the controller is configured to perform image quality control on the target region, and output the image including the target region subjected to the image quality control as the output image information to the output unit, the viewer detector detects a distance between the one or more viewers and the display device displaying the image, and the region specifier specifies the target region in consideration of the distance detected by the viewer detector as well.
2. The semiconductor integrated circuit of claim 1, wherein the region specifier specifies, as the target region, an overlapping region in which two or more of the gaze regions overlap with each other at least partially.
3. The semiconductor integrated circuit of claim 1, wherein the controller performs image quality improvement processing as the image quality control.
4. A semiconductor integrated circuit for use in a display device, the circuit comprising an output unit, an information input unit, a viewer detector, a region specifier, and a controller, wherein the output unit outputs output image information to the display device, the information input unit receives viewer information about one or more viewers, in front of the display device, watching an image on a single screen of the display device, the image being displayed based on the output image information, the viewer detector is configured to detect (1) how many viewers are watching the image on the single screen of the display device and (2) one or more gaze regions within the image being watched by the detected one or more viewers, based on the viewer information from the information input unit, the region specifier is configured to specify, in response to detecting a plurality of viewers, a target region in the image based on a plurality of gaze regions being watched by the plurality of viewers, the controller is configured to perform image quality control on the target region, and output the image including the target region subjected to the image quality control as the output image information to the output unit, and if at least one of the gaze regions includes character information, the region specifier specifies the target region by excluding the at least one gaze region including the character information.
5. The semiconductor integrated circuit of claim 1, wherein if any of the one or more gaze regions remains at a fixed location within the image for a predetermined amount of time, the region specifier specifies the gaze region as the target region.
6. The semiconductor integrated circuit of claim 1, wherein the viewer detector detects the one or more viewers and the one or more gaze regions based on information acquired from an imaging device capturing an image including one or more viewers.
7. The semiconductor integrated circuit of claim 1, further comprising an image generator configured to generate the image.
8. A display device comprising the semiconductor integrated circuit of claim 1.
9. The semiconductor integrated circuit of claim 1, wherein if the gaze regions do not overlap with one another, the region specifier specifies the gaze regions as target regions.
10. The semiconductor integrated circuit of claim 9, wherein the controller performs, as the image quality control, multiple different types of image quality control on the gaze regions specified as the target regions.
11. The semiconductor integrated circuit of claim 10, wherein the multiple different types of image quality control are determined based on the viewer information.
12. A method of controlling a quality of an image, the method comprising: a1) outputting output image information, by an output unit, to a display device; a2) receiving viewer information about one or more viewers, in front of the display device, watching the image on a single screen of the display device, the image being displayed based on the output image information; a3) detecting (1) how many viewers are watching the image on the single screen of the display device and (2) a plurality of gaze regions within the image being watched by the detected one or more viewers, based on the viewer information; b) specifying a target region in the image based on the plurality of gaze regions being watched by the plurality of viewers; c) performing image quality control on the target region; and d) outputting the image including the target region subjected to the image quality control as the output image information to the output unit, wherein a3) includes detecting a distance between the one or more viewers and the display device, and wherein b) includes specifying the target region in consideration of the distance thus detected as well.
13. The method of claim 12, wherein c) includes performing image quality improvement processing as the image quality control.
14. The method of claim 12, wherein b) includes specifying, if the gaze regions do not overlap with one another, the gaze regions as the target regions.
15. The semiconductor integrated circuit of claim 1, further comprising: an interface configured to output the image including the target region subjected to the image quality control to the display device.
16. The semiconductor integrated circuit of claim 1, wherein the display device is a single display device, and the one or more viewers see the image displayed on the single display device.
17. The method of claim 12, wherein c) includes emphasizing an edge of an image within the target region.
18. The method of claim 12, wherein c) includes increasing a display magnification of the image within the target region.
19. The method of claim 12, wherein c) includes transmitting information about the image displayed within the target region to a mobile terminal the viewers own.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
First Embodiment
(16) The information input unit 12 receives information about an image to be displayed on the display device. This information includes viewer information about viewer(s). The viewer information includes, e.g., the number of the viewers and a gaze region or gaze regions, and may be obtained from an imaging device such as a camera.
(17) The information input unit 12 may also receive viewer information from, e.g., a sensor, a pair of dedicated glasses, or a speech recognizer. Examples of the viewer information may include information about his or her brain wave, heart rate, blood pressure, age, and gender, information about his or her feeling or emotion (e.g., his or her facial expression), the distance from the display device to him or her, and the amount of time for which he or she is watching the gaze region (which time will be hereinafter referred to as a gaze time).
(18) The display information input unit 14 receives display information to be output to the display device. The display information may include, e.g., information about the image, compressed broadcast information, and character information transmitted from a network, or may also be information which can be displayed on the display device (e.g., a still picture).
(19) The image generator 16 generates, based on the display information, image information to be displayed on the display device and outputs it. That is to say, if the output of the image generator 16 is supplied directly to the display device, a normal image will be displayed on the display device.
(20) The image information may include character information or any other kind of non-image information.
(21) Also, for example, the display information input unit 14 may form an integral part of the image generator 16, and the image generator 16 may be provided outside the semiconductor integrated circuit 10 such that its output is supplied to the semiconductor integrated circuit 10.
(22) The viewer detector 18 may detect an arbitrary kind of information included in the viewer information. For example, the viewer detector 18 may detect, e.g., the number of the viewers and the gaze region indicating a particular region being watched by the viewer within the image. The gaze region may be a region having a predetermined range within the image and having its center defined by a point at which the viewer is fixing his or her eyes, for example.
(23) The viewer detector 18 may detect the gaze region by reference to, in addition to or instead of the viewer's eye direction, his or her face orientation or any other piece of information included in the viewer information.
(24) The information input unit 12 may form an integral part of the viewer detector 18.
(25) The region specifier 20 specifies, as a target region, a local region of the image based on the number of the viewers and the gaze region that have been detected by the viewer detector 18.
(26) The target region may be a local region of the image to which one or more viewers pay particular attention. For example, if there are a plurality of gaze regions (viewers), each of the gaze regions may possibly be a target region, and either a region in which some of the gaze regions overlap with each other at least partially or any one of the gaze regions may be specified as the target region.
(27) The region specifier 20 may take any other piece of information, such as the distance and/or a gaze time, included in the output of the viewer detector 18 into account in the processing step of specifying the target region. The output of the image generator 16 may also be supplied to the region specifier 20. In that case, for example, the region specifier 20 may specify the target region in consideration of image information including character information as well.
(28) The controller 22 performs, as image quality control of the target region within the image information output from the image generator 16, such processing that will make the target region more easily viewable for the viewer, for example. That is to say, the region subjected to the image quality control by the controller 22 is not the entire image on the screen, but only a local region that would attract multiple viewers' attention deeply. The reason is that viewers generally tend to view only a local region (e.g., a region including their object of interest) of an image more attentively. Accordingly, the controller 22 may perform, e.g., image quality improvement processing on the target region to allow the target region to have higher image quality than the rest of the image. Examples of the image quality improvement processing include improvement in definition using frequency information, and improvement in reproducibility of color information.
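To make the idea of region-local control concrete, here is a minimal Python sketch in which only pixels inside the target region are altered while the rest of the frame passes through unchanged. The function name, the (x0, y0, x1, y1) rectangle convention, and the simple contrast gain are illustrative assumptions, not the circuit's actual image quality improvement processing.

```python
def enhance_target_region(image, region, gain=1.5):
    """Apply a contrast gain only inside `region` = (x0, y0, x1, y1) of a
    grayscale image given as rows of 0-255 pixel values; return a copy."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in image]
    for y in range(y0, y1):
        for x in range(x0, x1):
            # A gain around mid-gray stands in for real processing such as
            # improved definition or color reproduction.
            boosted = 128 + (out[y][x] - 128) * gain
            out[y][x] = max(0, min(255, int(boosted)))
    return out

frame = [[100] * 8 for _ in range(8)]          # uniform low-quality frame
processed = enhance_target_region(frame, (2, 2, 5, 5))
assert processed[3][3] != frame[3][3]          # inside the target region
assert processed[0][0] == frame[0][0]          # non-gaze region untouched
```

Because only the target region is traversed, the processing cost scales with the region's area rather than with the full screen, which is the basis of the power and circuit-size savings discussed in this embodiment.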
(29) The output unit 24 outputs output image information subjected to the image quality control by the controller 22. The output unit 24 may form an integral part of the controller 22.
(30) The output image information is then actually displayed on the display device.
(31) Next, it will be described, with reference to
(33) In Step S101, viewer information and display information are input to the semiconductor integrated circuit 10.
(34) In Step S102, the number of viewers and a gaze region are detected based on the viewer information. Also, image information is generated based on the display information.
(35) In Step S103, the number of viewers is determined.
(36) If the number of viewers is less than one, i.e., if there are no viewers, then there are no gaze regions. Thus, no target regions are specified, and the process proceeds to Step S109. Accordingly, the image is not subjected to the image quality control, and thus the entire image keeps its original image quality (for example, low image quality).
(37) If the number of viewers is singular, his or her gaze region is specified as a target region in Step S104.
(38) On the other hand, if the number of viewers is plural, a determination is made in Step S105 whether or not any of their respective gaze regions has a region overlapping with another gaze region.
(39) If there is any overlapping region (i.e., if the answer to the question of Step S105 is YES), the overlapping region is specified as the target region in Step S106.
(40) On the other hand, if there are no overlapping regions (i.e., if the answer to the question of Step S105 is NO), the respective gaze regions are specified as the target regions in Step S107.
(41) When the target region(s) is/are specified, the target region(s) is/are subjected, in Step S108, to image quality control such as image quality improvement processing to generate output image information. If there are plural target regions, the image quality control is performed on each of those regions.
(42) Then, in Step S109, the output image information is provided from the semiconductor integrated circuit 10.
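The branch structure of Steps S103 through S107 can be sketched in Python as follows. Rectangles are (x0, y0, x1, y1) tuples, and reducing "overlap" to the common intersection of all gaze regions is a simplifying assumption on top of the flow described above.

```python
def specify_target_regions(gaze_regions):
    """Steps S103-S107: no viewers -> no target; one viewer -> his or her
    gaze region; several viewers -> their overlapping region if one
    exists, otherwise every gaze region individually."""
    if not gaze_regions:                       # S103: zero viewers
        return []
    if len(gaze_regions) == 1:                 # S104: single viewer
        return list(gaze_regions)
    ov = gaze_regions[0]                       # S105: overlap test
    for r in gaze_regions[1:]:
        ov = (max(ov[0], r[0]), max(ov[1], r[1]),
              min(ov[2], r[2]), min(ov[3], r[3]))
    if ov[0] < ov[2] and ov[1] < ov[3]:
        return [ov]                            # S106: overlapping region
    return list(gaze_regions)                  # S107: each gaze region

assert specify_target_regions([]) == []
assert specify_target_regions([(0, 0, 4, 4), (2, 2, 6, 6)]) == [(2, 2, 4, 4)]
assert specify_target_regions([(0, 0, 2, 2), (4, 4, 6, 6)]) == [(0, 0, 2, 2), (4, 4, 6, 6)]
```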
(43) As can be seen, according to the semiconductor integrated circuit 10 of this embodiment, even if two or more viewers are watching the same image, the generated image may be subjected to image quality control that makes the image viewable to each viewer's preference.
(44) Japanese Unexamined Patent Publication No. H01-141479 discloses a technique for weighting encoding processing of an image according to the viewers' eye directions. Japanese Unexamined Patent Publication No. 2013-055675 discloses a technique for performing stereoscopic processing on an image only when a single viewer is watching the image without performing such processing when two or more viewers are watching the image.
(45) Thus, a simple combination of the two techniques of Japanese Unexamined Patent Publications No. H01-141479 and No. 2013-055675 would result in failure to process the image if two or more viewers are watching it. In addition, neither of these techniques teaches how to specify a local region of the image when two or more viewers are watching it, or how to subject that region to image quality control such as image quality improvement.
(46) In contrast, according to this embodiment, the image quality control is always performed on the target region regardless of the number of viewers. This allows each viewer to watch the image to his or her preference.
(47) In addition, the image quality control such as image quality improvement processing may be performed on only a local region of the image (i.e., the target region), not the entire image on the screen, and thus, the area of the range to be processed may be narrowed. This may reduce power consumption. Also, such a reduction in the area of the range to be subjected to the image quality control may require lower processing performance than in a situation where the image quality control is performed on the entire image. This may reduce costs and a circuit size.
(48) Optionally, to further reduce power consumption and for other purposes, not all of the respective gaze regions have to be specified in Step S107 as the target regions. That is to say, only a region in which some of the gaze regions overlap with each other may be specified as the target region.
(49) In this embodiment, any other piece of information included in the output of the viewer detector 18 may also be taken into account during the processing to be performed until the target region is specified. Thus, some variations will be described below.
First Processing Example
(51) In Step S201, viewer information is input.
(52) In Step S202, the gaze region and the gaze time are detected based on the viewer information.
(53) In Step S203, a determination is made whether or not there is any gaze region being watched by the viewer for a longer time than the last time. That is to say, a determination is made whether or not there is any gaze region being watched by the viewer more attentively.
(54) If there are no gaze regions continuously watched for a longer time (i.e., if the answer to the question of Step S203 is NO), the range (area) of the gaze region is expanded in Step S204. That is to say, it cannot be said that the viewer is watching the gaze region attentively, and thus, the gaze region is not a candidate for the target region.
(55) If there is any gaze region continuously watched for a longer time (i.e., if the answer to the question of Step S203 is YES), the range (area) of the gaze region is decreased in Step S205. That is to say, the viewer is watching the gaze region more attentively, and thus, the gaze region may be regarded as a candidate for the target region.
(56) In Step S206, the target region is specified based on the results of Steps S204 and S205. For example, a gaze region having a predetermined area or less may be specified as a target region.
(57) After that, the same series of processing steps will be repeatedly performed. Optionally, the area of the target region may be increased or decreased according to the length of the gaze time after the target region has been specified from among the gaze regions. In other words, the gaze time may be included as a parameter for determining the target region.
(58) This further narrows down the target range of the image quality control, and thus results in a further reduction in power consumption and costs.
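One iteration of Steps S203 through S206 might be sketched as follows. The shrink/expand factor and the area threshold are hypothetical parameter values, not values given in the description.

```python
def update_gaze_region(area, watched_longer, factor=1.2, threshold=50.0):
    """S204/S205: expand the gaze region's area when the viewer is not
    watching it any longer than before, shrink it when he or she is;
    S206: a region at or below `threshold` qualifies as a target region."""
    area = area / factor if watched_longer else area * factor
    return area, area <= threshold

area, is_target = 100.0, False
for _ in range(5):                 # the viewer keeps watching attentively
    area, is_target = update_gaze_region(area, watched_longer=True)
assert is_target                   # the shrunken region became a target

_, still_target = update_gaze_region(100.0, watched_longer=False)
assert not still_target            # an inattentively watched region grows
```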
Second Processing Example
(60) In Step S301, viewer information is input.
(61) In Step S302, the gaze region is detected based on the viewer information.
(62) In Step S303, the magnitude of shift of the gaze region within the image is calculated. That is to say, this shows how much the gaze region has shifted within the image.
(63) In Step S304, a determination is made whether or not the magnitude of shift of the gaze region is equal to or more than a predetermined value.
(64) If the magnitude of shift of the gaze region is equal to or more than the predetermined value (i.e., if the answer to the question of Step S304 is YES), the gaze region is excluded from candidates for the target region in Step S305.
(65) On the other hand, if the magnitude of shift of the gaze region is less than the predetermined value (i.e., if the answer to the question of Step S304 is NO), the gaze region is regarded as a candidate for the target region in Step S306.
(66) In Step S307, the target region is specified based on the results of Steps S305 and S306.
(67) After that, the same series of processing steps will be repeatedly performed. As can be seen, even in a situation where the gaze region shifts along with the movement of the viewer's eyes, it can be said that the gaze region is shifting relatively slowly as long as his or her eye movement falls within a predetermined range. Thus, the gaze region may be regarded as remaining at a fixed location within the image for a predetermined amount of time, and therefore, may be specified as the target region.
(68) This may achieve the same or similar advantage as/to the first processing example.
(69) The semiconductor integrated circuit 10 may be provided with a memory or any other storage device to make the determination in Steps S203 and S304.
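Since Step S304 compares the gaze region's current location against a stored previous one, the determination can be sketched as a small Python predicate. The Euclidean shift metric and the threshold value are assumptions for illustration.

```python
import math

def stays_candidate(prev_center, cur_center, max_shift=10.0):
    """S304-S306: a gaze region whose center has shifted less than
    `max_shift` since the last check remains a candidate for the target
    region; a shift at or beyond it excludes the region (S305)."""
    dx = cur_center[0] - prev_center[0]
    dy = cur_center[1] - prev_center[1]
    return math.hypot(dx, dy) < max_shift

assert stays_candidate((100, 100), (103, 104))        # small drift: keep
assert not stays_candidate((100, 100), (160, 180))    # large jump: exclude
```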
Third Processing Example
(71) In Step S401, viewer information is input.
(72) In Step S402, a gaze region is detected based on the viewer information.
(73) In Step S403, a subtitle region including character information such as movie subtitles or weather forecast is detected. Steps S402 and S403 may be performed in reverse order.
(74) In Step S404, a determination is made whether or not there is any gaze region including any subtitle region. That is to say, a determination is made whether or not the viewer is watching any subtitle region.
(75) If there is any gaze region including a subtitle region (i.e., if the answer to the question of Step S404 is YES), the gaze region is excluded from candidates for the target region in Step S405.
(76) On the other hand, if there are no gaze regions including any subtitle regions (i.e., if the answer to the question of Step S404 is NO), the gaze region is regarded as a candidate for the target region in Step S406.
(77) In Step S407, the target region is specified based on the results of Steps S405 and S406.
(78) After that, the same series of processing steps will be repeatedly performed. As can be seen, if the gaze region and the subtitle region at least partially overlap with each other, excluding the gaze region from the target regions further narrows down the target range of the image quality control. That is because it can be said that the image quality improvement or any other image quality control is required less often for characters in an image than for persons and other subjects in the image.
(79) This may achieve the same or similar advantage as/to the first processing example.
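The subtitle exclusion of Steps S404 through S406 amounts to a rectangle-overlap filter, sketched below in Python. The rectangle convention and the function names are illustrative assumptions.

```python
def rects_overlap(a, b):
    """True if rectangles (x0, y0, x1, y1) overlap at least partially."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def exclude_subtitle_gazes(gaze_regions, subtitle_regions):
    """S404-S406: a gaze region overlapping any subtitle region is dropped
    (S405); the remaining ones stay candidates for the target region."""
    return [g for g in gaze_regions
            if not any(rects_overlap(g, s) for s in subtitle_regions)]

subtitles = [(0, 90, 160, 100)]                # a caption strip at the bottom
gazes = [(10, 10, 40, 40), (20, 85, 60, 95)]   # the second one reads the strip
assert exclude_subtitle_gazes(gazes, subtitles) == [(10, 10, 40, 40)]
```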
(80) In the first to third processing examples, other pieces of information included in the viewer information may also be used as parameters until the target region is specified.
(81) For example, the shape of the gaze region may be changed in accordance with the viewer posture information that has been input. Also, viewer's speech information or biological information about his or her brain wave, heart rate, or blood pressure may also be input to analyze the target watched by the viewer. For example, information about the viewer's feeling or emotions may be acquired based on his or her facial expressions, and based on the information thus acquired, the degree of importance of the image (in particular, the target that the viewer is watching) may be calculated to weight a plurality of target regions. In this case, the image quality may be controlled in accordance with those weights.
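The weighting idea above can be sketched as mapping each target region's importance score to an image-quality-control strength. The linear mapping, the score range, and all names are placeholder assumptions; how importance would actually be derived from facial expressions is outside this sketch.

```python
def quality_gain_per_region(target_regions, importance, base=1.0, span=0.5):
    """Map each target region's importance score in [0, 1] (derived, e.g.,
    from the viewer's facial expression) to an image-quality-control gain;
    the linear mapping is an illustrative placeholder."""
    return {r: base + span * importance[r] for r in target_regions}

gains = quality_gain_per_region(["R1", "R2"], {"R1": 1.0, "R2": 0.2})
assert gains["R1"] > gains["R2"]   # the more important region is boosted more
```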
(82) Optionally, any other piece of personal information about the viewer such as his or her age or gender may also be used as a parameter for specifying the target region.
(83) The target region may also be specified by combining the first to third processing examples with one another.
Second Embodiment
(85) The imaging device 32 may be, e.g., a camera that captures an image of a viewer watching the image on the display unit 36, and may output viewer information to the semiconductor integrated circuit 10.
(86) The distance measuring sensor 34 measures the distance between the display device 30 and the viewer, and outputs distance information to the semiconductor integrated circuit 10. Optionally, the imaging device 32 may be configured to have the function of the distance measuring sensor 34.
(87) The display unit 36 receives output image information from the semiconductor integrated circuit 10 to display an image with quality which has been controlled to the viewer's preference.
(88) The semiconductor integrated circuit 10 receives display information via an antenna which is not illustrated.
(89) The image displayed on the display unit 36 of the display device 30 having such a configuration also has a gaze region and a target region to be described in detail below. Note that the semiconductor integrated circuit 10, the imaging device 32, and the distance measuring sensor 34 are not illustrated in any of the drawings to be referred to in the following examples.
First Example
(91) For example, as illustrated in
(92) In this case, the windows W1 and W4 are gaze regions 37 (as indicated by the broken lines in
(93) In this state, if the viewers A and B continue watching the windows W1 and W4, respectively, the windows W1 and W4 are specified as the target regions.
(94) On the other hand, as illustrated in
Second Example
(96) For example, as illustrated in
(97) In this case, the regions 37A and 37B are respectively gaze regions 37A and 37B, and the other region on the display unit 36 is a non-gaze region 39.
(98) In this state, if the viewers A and B fix their eyes on the same locations in the gaze regions 37A and 37B on the display unit 36 for a predetermined amount of time, for example, the gaze regions 37A and 37B are specified as the target regions.
(99) On the other hand, if the gaze regions 37A and 37B have shifted to the locations illustrated in
(100) Alternatively, in
Third Example
(102) For example, as illustrated in
(103) The distance from the viewer A or B to the display device 30 may be measured by the distance measuring sensor 34 (see
(104) That is because it may be more difficult for a viewer located farther away from the display device 30 to sense improvement of the image quality than for a viewer located closer to the display device 30.
(105) As a result, as illustrated in
(106) Yet alternatively, the gaze region being watched by the viewer located within a predetermined distance from the display device 30 may be specified as the target region, instead of the gaze region being watched by the viewer located closer to the display device 30.
(107) In other words, the distance may be included in parameters for determining the target region.
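In code, preferring the nearer viewer's gaze region, with the optional distance cutoff of paragraph (106), might look like the following sketch. The data layout and the function name are assumptions for illustration.

```python
def pick_target_by_distance(gazes, max_distance=None):
    """Choose, among (gaze_region, distance) pairs, the gaze region of the
    viewer closest to the display; optionally require the viewer to sit
    within `max_distance`. Returns None when no viewer qualifies."""
    if max_distance is not None:
        gazes = [(g, d) for g, d in gazes if d <= max_distance]
    if not gazes:
        return None
    return min(gazes, key=lambda gd: gd[1])[0]

# Viewer A watches window "W1" from 2 m, viewer B watches "W4" from 5 m.
assert pick_target_by_distance([("W1", 2.0), ("W4", 5.0)]) == "W1"
assert pick_target_by_distance([("W1", 2.0)], max_distance=1.0) is None
```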
Fourth Example
(109) For example, as illustrated in
(110) In this case, as illustrated in
(111) As can be seen, in the first to fourth examples, subjecting the target region 38 to image quality improvement or any other type of image quality control provides at least one of the viewers A and B with an image, of which the quality has been controlled to his or her preference. In addition, the range to be subjected to the image quality control is narrower than the entire display unit 36. This leads to a reduction in power consumption and costs.
(112) In the first to fourth examples, a decimation process may be performed on the non-gaze region 39. Alternatively, any two or more of the first to fourth examples may be arbitrarily combined with each other.
(113) Optionally, settings of the image quality control of the target region may be adjusted with, e.g., a remote controller. If there are a plurality of target regions, the specifics of the image quality control of the respective target regions may be set independently of one another.
(114) The image quality control does not have to be image quality improvement but may also be any other type of control as long as it is beneficial for the viewer. For example, the image may have its quality controlled by having its luminance, lightness, saturation, and/or hue adjusted or by having the edges of a person or any other object in the target region enhanced such that the image will look clearer to the viewer's eyes.
(115) Moreover, the area of the target region may be expanded to display information about the person or any other object in the target region, or the image displayed within the target region may be zoomed in to make the target region more clearly visible to the viewer. Such relevant information may be displayed on a mobile terminal owned by the viewer.
(116) Optionally, the controller 22 may correct the output of the region specifier 20 based on the image information supplied from the image generator 16. For example, the boundary or edges of the person or any other object in the image information may be detected, and the image information included in the target region may be corrected based on the data. The controller 22 may receive tag information indicating details of the persons and other objects included in the image information. In this case, the image information included in the target region may be corrected based on this tag information.
(117) If the viewer is watching a plurality of gaze regions intermittently, the history of the cumulative gaze time of each of those gaze regions may be recorded, and a gaze region, of which the cumulative gaze time has reached a predetermined value, may be specified as the target region.
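Bookkeeping for such intermittent viewing could be sketched as follows. The class layout and the threshold value are hypothetical.

```python
from collections import defaultdict

class GazeHistory:
    """Accumulate watch time per gaze region across intermittent glances
    and promote a region to target region once its cumulative gaze time
    reaches `threshold` seconds (a hypothetical value)."""
    def __init__(self, threshold=3.0):
        self.threshold = threshold
        self.cumulative = defaultdict(float)

    def observe(self, region_id, seconds):
        """Record `seconds` of gaze on `region_id`; return True once the
        region's cumulative gaze time qualifies it as a target region."""
        self.cumulative[region_id] += seconds
        return self.cumulative[region_id] >= self.threshold

h = GazeHistory()
assert not h.observe("A", 1.0)     # glance at region A
assert not h.observe("B", 0.5)     # glance away at region B
assert h.observe("A", 2.0)         # A's cumulative time reaches 3.0 s
```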
(118) Moreover, the target region may also be specified in consideration of the order of viewing the image by plural viewers. Alternatively, the respective gaze regions, including a region in which some of the gaze regions overlap with one another, may be specified as the target regions.
(119) For example, unalterable information such as the viewer's age or eyesight or the owner of the display device 30 may be entered with, e.g., a remote controller. For example, the gaze region being watched by a viewer who is elderly or has weak eyesight or the gaze region being watched by the owner may be preferentially specified as the target region.
(120) In the second embodiment described above, the display device is configured as a digital TV set. However, the display device may also be any other device such as a personal computer or a projector as long as the device can display an image thereon.
(121) A semiconductor integrated circuit according to the present disclosure may perform, even if a plurality of viewers are watching the same image on a display device, image quality improvement processing or any other image quality control on a region being watched by each of those viewers particularly attentively. Thus, this circuit is useful for display devices such as large-screen TV sets and monitors that need to have their power consumption and costs reduced.