Image processing apparatus, image processing method and computer-readable storage medium with improved character display
09584699 · 2017-02-28
CPC classification: H04N1/58; H04N1/4058
Abstract
An image processing apparatus includes a first screen processing unit that performs first screen processing using a first screen angle on a line region of a character, a second screen processing unit that performs second screen processing using a second screen angle on at least an outline region out of the line region, the second screen angle being different from the first screen angle, and a composition unit that performs composition of a processing result of the first screen processing and a processing result of the second screen processing, on at least the outline region out of the line region.
Claims
1. An image processing apparatus comprising a hardware processor configured to: perform first screen dither processing on an outline region of a line region of a character using a first screen angle, resulting in a first screen dither processing result of pixel data; perform second screen dither processing on the outline region on which the first screen dither processing is performed, using a second screen angle, resulting in a second screen dither processing result of pixel data, wherein the second screen angle is different from the first screen angle; and composite the first screen dither processing result and the second screen dither processing result for the outline region of a same color plane image that has undergone the first screen dither processing and the second screen dither processing, wherein the processor is configured to output, for only the outline region out of the line region, a value obtained by compositing the first screen dither processing result and the second screen dither processing result, as an output gradation value, and the processor is configured to output, for an inner region enclosed by the outline region, the first screen dither processing result as a second output gradation value.
2. The image processing apparatus according to claim 1, wherein the processor is configured to composite the first screen dither processing result and the second screen dither processing result for the outline region by averaging the first screen dither processing result and the second screen dither processing result.
3. The image processing apparatus according to claim 2, wherein the processor is further configured to composite the first screen dither processing result and the second screen dither processing result for the outline region by halving the first screen dither processing result to produce a first screen processing halved result, halving the second screen dither processing result to produce a second screen processing halved result, and adding the first screen processing halved result and the second screen processing halved result.
4. The image processing apparatus according to claim 1, wherein the second screen angle is an angle orthogonal to the first screen angle.
5. The image processing apparatus according to claim 1, wherein the processor is configured to specify, for the character, the outline region and an inner region enclosed by the outline region.
6. The image processing apparatus according to claim 5, wherein the processor is further configured to perform the first screen dither processing on the inner region that has been specified, and to output only the first screen dither processing result for the inner region.
7. An image processing method comprising the steps of: a) performing first screen dither processing using a first screen angle on a line region of a character; b) performing second screen dither processing using a second screen angle on at least an outline region out of the line region, the second screen angle being different from the first screen angle; c) performing composition of pixel data resulting from the first screen dither processing and pixel data resulting from the second screen dither processing, on at least the outline region of a same color plane image out of the line region; d) outputting, for only the outline region out of the line region, a value obtained by the composition of the pixel data from the first screen dither processing and the pixel data from the second screen dither processing, as an output gradation value; and e) outputting, for an inner region enclosed by the outline region, the pixel data resulting from the first screen dither processing as a second output gradation value.
8. A non-transitory computer-readable storage medium storing a program for causing a computer to execute the steps of: a) performing first screen dither processing using a first screen angle on a line region of a character; b) performing second screen dither processing using a second screen angle on at least an outline region out of the line region, the second screen angle being different from the first screen angle; c) performing composition of pixel data resulting from the first screen dither processing and pixel data resulting from the second screen dither processing, on at least the outline region of a same color plane image out of the line region; d) outputting, for only the outline region out of the line region, a value obtained by the composition of the pixel data from the first screen dither processing and the pixel data from the second screen dither processing, as an output gradation value; and e) outputting, for an inner region enclosed by the outline region, the pixel data resulting from the first screen dither processing as a second output gradation value.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
(52) Hereinafter, embodiments of the present invention will be described with reference to the drawings.
1. First Embodiment
1-1. Configuration
(54) The MFP 1 is an apparatus (also referred to as a Multi-Functional Peripheral) that has a scanner function, a printer function, a copy function, a facsimile function and the like. Specifically, the MFP 1 includes an image reading unit 2, an image processing unit 3, a printout unit 4, a communication unit 5, an input/output unit 6, a storage unit 8, and a controller 9, and realizes the functionality of the aforementioned units by operating these units integrally. Note that the MFP 1 is also referred to as an image forming apparatus or the like.
(55) The image reading unit 2 is a processing unit that optically reads an original document placed at a predetermined position on the MFP 1 and generates an image of the original document (also referred to as a document image). The image reading unit 2 is also referred to as a scanner unit.
(56) The image processing unit 3 is a processing unit that performs various types of image processing on the scanned image generated by the image reading unit 2. The image processing unit 3 includes a character outline extraction unit 32, a first screen processing unit 33, a second screen processing unit 34, and a composition unit 35.
(57) The character outline extraction unit 32 extracts an outline region of a character as well as detecting a line region of the character, and sections the line region of the character into the outline region and an inner region enclosed by the outline region.
(58) The first screen processing unit 33 is a processing unit that performs screen processing (also referred to as dithering) using a first screen angle a. The second screen processing unit 34 is a processing unit that performs screen processing using a second screen angle b. The second screen angle b is an angle different from the first screen angle a. For example, the second screen angle b is an angle orthogonal to the first screen angle a.
(59) The composition unit 35 is a processing unit that composites the result of screen processing performed with the first screen angle a and the result of screen processing performed with the second screen angle b.
(60) As described later, the composition unit 35 generates an output image for the outline region of a character by averaging the result of screen processing performed with the first screen angle a and the result of screen processing performed with the second screen angle b and compositing the averaged results. On the other hand, the composition unit 35 generates an output image for the inner region of a character by using only the result of screen processing performed with the first screen angle a.
(61) In the present embodiment, the image processing unit 3 generates an output image by performing AM screen processing on an input image having intermediate gradation values, under the control of the controller 9.
(62) The printout unit 4 is an output unit that prints out a target image on various types of media such as paper, based on image data (output image) of that image.
(63) The communication unit 5 is a processing unit capable of facsimile communication via a public network or the like. The communication unit 5 is also capable of network communication via a communication network NW. Using such network communication enables the MFP 1 to exchange various types of data with the desired party. The MFP 1 is also capable of transmitting/receiving e-mails, using such network communication.
(64) The input/output unit 6 includes an operation input unit 61 that receives input to the MFP 1, and a display unit 62 that displays and outputs various types of information.
(65) The storage unit 8 is constituted as a storage device such as a hard disk drive (HDD). A document image or the like generated by the image reading unit 2 or the like is stored in the storage unit 8.
(66) The controller 9 is a control device that performs overall control of the MFP 1, and is constituted by a CPU and various types of semiconductor memories (such as a RAM and a ROM). The various types of functionality of the MFP 1 are realized by the various processing units operating under the control of the controller 9.
1-2. Overview of Image Processing
(67) Next is a description of an overview of screen processing performed on characters.
(69) In this image forming apparatus, gradation representation is realized by performing screen processing on each plane image on each page including such a character. In order to suppress interference of the plane images, different screen angles a are employed for different plane images.
(70) Incidentally, as mentioned above, there is the problem that jaggies are noticeable if the difference between the outline angle of a character and the screen angle a is relatively small in a certain plane image. Specifically, referring to an oblique portion of the character N that is sandwiched by the vertical lines on both sides (a linear region extending from the top left to the bottom right) as shown in
(71) Hereinafter, this problem will first be discussed in more detail. Note that although the case where the screen angle a is 45 degrees is shown in
(73) By way of example, a gradation value Din of each pixel in an input pixel group having an intermediate gradation value of 20 and forming a uniform region (see
(74) Specifically, firstly, the gradation value Din at each position in the input pixel group is compared with the reference value (threshold value) Dref at the corresponding position in the screen table TB1. Then, if the input gradation value Din at a certain position is less than or equal to the corresponding reference value Dref, the output gradation value Dout at that position is set to OFF (zero). On the other hand, if the input gradation value Din at a certain position is greater than the corresponding reference value Dref, the output gradation value Dout at that position is set to ON. Note that, in the multi-valued screen processing employed herein, the ON state of each pixel is further distinguished into multiple levels (here, 16 levels) corresponding to each value. To be more specific, the output gradation value Dout is set to the difference Ddif between the input gradation value Din and the reference value Dref. If the difference Ddif is greater than 16, the output gradation value Dout is set to the maximum value of 16.
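Although the patent itself contains no code, the thresholding rule of paragraph (74) can be sketched in Python as follows. This is an illustrative sketch only, not the claimed implementation; the function names and the idea of tiling the screen table over the image are our own assumptions.

```python
def screen_pixel(d_in, d_ref, d_max=16):
    """Multi-valued screen thresholding for one pixel: OFF (0) when the
    input gradation value Din does not exceed the reference value Dref,
    otherwise the difference Din - Dref, capped at the maximum level."""
    if d_in <= d_ref:
        return 0                      # pixel stays OFF
    return min(d_in - d_ref, d_max)   # ON, at one of 16 levels

def apply_screen(image, table, d_max=16):
    """Tile the screen table over the image (an assumption: the patent
    compares positions against 'corresponding' table positions) and
    threshold every pixel."""
    h, w = len(image), len(image[0])
    th, tw = len(table), len(table[0])
    return [[screen_pixel(image[y][x], table[y % th][x % tw], d_max)
             for x in range(w)] for y in range(h)]
```

For a uniform input of gradation 20, a table cell with reference 0 yields the capped maximum 16, a cell with reference 16 yields 4, and any cell with reference 20 or more yields 0, matching the ON/OFF behaviour described above.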
(75) For example, referring to a position (x1, y1), the input gradation value Din of 20 (see
(76) Furthermore, the input gradation value Din of 20 at a position (x1, y2) (see
(77) The input gradation value Din of 20 at a position (x2, y2) (see
(78) The output gradation values Dout at the other positions are also determined in a similar manner. Accordingly, the output gradation values Dout as shown in
(79) Similarly, the gradation value Din of each pixel in an input pixel group having an intermediate gradation value of 64 and forming a uniform region (not shown) is converted into the gradation value Dout of each pixel in an output pixel group as shown in
(80) As shown in
(84) As shown in
(85) In view of this, processing as described below is further performed in the present embodiment. This processing enables jaggies to be reduced.
(86) Specifically, processing for sectioning the line region LR of a character in the input image (see
(87) Furthermore, another screen processing SR2 using another screen angle b is also performed, in addition to screen processing SR1 based on the above screen angle a. This screen processing SR2 is performed on only the outline region RE. For pixels in the outline region RE, the processing result of the screen processing SR1 based on the screen angle a and the processing result of the screen processing SR2 based on the screen angle b are both used.
(91) Furthermore, for the outline region RE, the processing result of the screen processing SR2 and the processing result of the above-described screen processing SR1 are composited together. Both of the processing results are averaged before composition.
(92) Specifically, each output gradation value Dout calculated by the screen processing SR2 is changed to a half value as shown in
(93) Each output gradation value Dout (see
(94) Then, as for the outline region RE, composition is performed by adding (averaging) each output gradation value Dout calculated by the screen processing SR1 and each output gradation value Dout calculated by the screen processing SR2.
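The averaging composition of paragraphs (92) to (94) can be written as a one-line sketch. Integer (floor) halving is an assumption here; the patent does not specify how odd gradation values are rounded when halved.

```python
def composite_outline(dout_sr1, dout_sr2):
    """Composite two output gradation values for an outline pixel:
    halve the SR1 result, halve the SR2 result, then add them, i.e.
    average the two screen processing results (floor division assumed)."""
    return dout_sr1 // 2 + dout_sr2 // 2
```

A pixel that both screens set to the maximum value of 16 stays at 16 after composition, while a pixel that only one screen turned ON contributes half its value, which is what prevents the excessive gradation increase discussed below.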
(95) As shown in
(96) In this way, for the outline region RE, composition is performed by averaging the processing result of the screen processing SR1 and the processing result of the screen processing SR2. As a result, apparently similar gradation values to those obtained from only the processing result of the screen processing SR1 are realized in a 4×4 matrix that has undergone both the screen processing SR1 and the screen processing SR2. This avoids a substantially excessive increase in gradation value.
(97) Furthermore, a similar operation is performed at each position in the outline region RE. As a result, an apparent outline as indicated by the extra thick line EL in
(98) On the other hand, as for pixels in the inner region RN, only the processing result of the screen processing SR1 is used, out of the processing results of both the screen processing SR1 and the screen processing SR2. Accordingly, similar screen processing to that in
(99) Now, refer again to the example in
(100) As a result, the processing result as shown in
(102) As can be seen from a comparison of
(103) Note that in the present embodiment, for the inner region RN of the character, only the processing result of the screen processing SR1 using the screen angle a is reflected as also shown in
1-3. Exemplary Processing
(104) Next, screen processing performed by hardware of the image processing unit 3 will be described in detail with reference to
(105) The character outline extraction unit 32 firstly performs processing for sectioning the line region LR of a character in an input image (here, a character image) into the outline region RE and the inner region RN. Specifically, an edge region consisting of approximately the outermost one to several pixels of the line region LR of the character is specified as the outline region RE (see
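One plausible way to realize this sectioning is an edge test against the 4-neighbourhood; the patent does not prescribe an algorithm, so the following is a sketch under the assumption of a one-pixel-wide outline.

```python
def section_line_region(line_mask):
    """Split a binary line-region mask into an outline region (line pixels
    with at least one 4-neighbour outside the line region, including the
    image border) and the inner region enclosed by that outline."""
    h, w = len(line_mask), len(line_mask[0])
    outline = [[False] * w for _ in range(h)]
    inner = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not line_mask[y][x]:
                continue
            nbrs = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            on_edge = any(not (0 <= ny < h and 0 <= nx < w and line_mask[ny][nx])
                          for ny, nx in nbrs)
            if on_edge:
                outline[y][x] = True
            else:
                inner[y][x] = True
    return outline, inner
```

On a solid 4×4 block, this marks the 12 border pixels as outline and the 4 centre pixels as inner; a stroke thinner than three pixels yields an empty inner region, consistent with thin-line portions consisting only of outline.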
(106) Next, both the screen processing SR1 and the screen processing SR2 using the two screen angles a and b are performed in parallel. Specifically, the screen processing SR1 based on the screen angle a and the screen processing SR2 based on the screen angle b are performed in parallel.
(107) To be specific, the screen processing SR1 based on the screen angle a is performed by the first screen processing unit 33, based on the results of comparison processing between the input gradation value Din at each position in the input image and the reference value Dref at the corresponding position in the screen table TB1. Note that the reference value Dref at each corresponding position is acquired by reading it from the screen table TB1, based on address information on each position. For example, the input gradation values Din at respective positions (x1, y1), (x2, y1), (x3, y1), (x4, y1), (x1, y2) and so on in the input image are sequentially compared with the reference values Dref at the respective corresponding positions (x1, y1), (x2, y1), (x3, y1), (x4, y1), (x1, y2) and so on in the screen table TB1. Then, the output gradation values at the respective positions are determined based on the comparison results.
(108) Similarly, the screen processing SR2 based on the screen angle b is performed by the second screen processing unit 34, based on the results of comparison processing between the input gradation value Din at each position in the input image and the reference value Dref at the corresponding position in the screen table TB2. Note here that in the screen processing SR2 of the present example, the reference value Dref at each corresponding position is acquired by reading it from the screen table TB1, based on address information corresponding to a position after the original screen table TB1 is rotated by 90 degrees. For example, the input gradation values Din at respective positions (x1, y1), (x2, y1), (x3, y1), (x4, y1), (x1, y2) and so on in the input image are sequentially compared respectively with the reference values Dref at positions (x4, y1), (x4, y2), (x4, y3), (x4, y4), (x3, y1) and so on in the screen table TB1. Then, the output gradation values at the respective positions are determined based on the comparison results. Such an operation is equivalent to an operation using the reference values Dref at the corresponding positions in the screen table TB2.
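The rotated-address lookup of paragraph (108) amounts to a fixed index remapping. The sketch below derives the mapping from the example sequence given there, i.e. (x1, y1) → (x4, y1), (x2, y1) → (x4, y2), (x1, y2) → (x3, y1); in 0-indexed terms this is TB2[y][x] = TB1[x][N-1-y]. The function name and the modulo tiling are our own.

```python
def ref_rotated(tb1, x, y):
    """Reference value of the 90-degree-rotated table TB2 at image
    position (x, y), read directly out of TB1 (tb1[row][col], N x N)."""
    n = len(tb1)
    return tb1[x % n][(n - 1) - (y % n)]
```

With this addressing, the second screen processing unit needs no second physical table: one table serves both screen angles, which is presumably why the text describes TB2 as an address-rotated view of TB1.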
(109) Then, either or both of the processing results of the screen processing SR1 and the screen processing SR2 are used depending on whether each position is in the inner region RN or the outline region RE. In other words, one of two types of operations is performed depending on the attribute information of a pixel at each position in the input image.
(110) As for positions in the inner region RN, only the processing result of the screen processing SR1 is used, out of the processing results of both the screen processing SR1 and the screen processing SR2. To be specific, a selector 39 outputs the processing result obtained by the first screen processing unit 33 directly as pixel values in the output image. Accordingly, similar screen processing to that in
(111) Meanwhile, as for positions in the outline region RE, both of the processing results of the screen processing SR1 based on the screen angle a and the screen processing SR2 based on the screen angle b are used.
(112) To be more specific, the composition unit 35 performs composition by averaging the processing result of the screen processing SR1 and the processing result of the screen processing SR2 (see
(113) The operation as described above is performed on all of the pixels in the input image, and the output image is generated as a result.
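The per-pixel selection performed by the selector 39 and the composition unit 35 can be condensed into one function. This is a sketch of the data flow only; the attribute tags and integer halving are assumptions, and background pixels (outside the line region) are simply passed through the SR1 path here.

```python
def compose_output(attr, sr1, sr2):
    """Output stage: pixels tagged "outline" receive the averaged
    composition of both screen results (half of each, added); all other
    pixels keep the SR1 result unchanged, as the selector does for the
    inner region."""
    return [[v1 // 2 + v2 // 2 if a == "outline" else v1
             for a, v1, v2 in zip(ra, r1, r2)]
            for ra, r1, r2 in zip(attr, sr1, sr2)]
```

Running both screens over the whole image and deciding per pixel afterwards matches the parallel hardware arrangement described above, at the cost of computing SR2 values that the inner region then discards.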
(114) As described above, in the present embodiment, not only the first screen processing SR1 using the first screen angle a but also the second screen processing SR2 using the second screen angle b are performed on the outline region RE. Performing both the screen processing SR1 and the screen processing SR2 using the two screen angles a and b in this way enables jaggies to be suppressed as compared to the case where only single screen processing using a single screen angle is performed. In particular, since the processing for compositing both the screen processing SR1 and the screen processing SR2 using the two screen angles a and b is performed on respective portions of the outline region RE that have various angles, it is possible to suppress the appearance of jaggies in character outlines having various angles.
(115) In particular, the results of both the screen processing SR1 and the screen processing SR2 using the two screen angles a and b are averaged for the outline region. This makes it possible to avoid a substantially excessive increase in gradation value in the case of performing both the screen processing SR1 and the screen processing SR2 using the two screen angles a and b.
(116) Furthermore, since the second screen angle b is an angle orthogonal to the first screen angle a, it is possible to comprehensively and favorably reduce anisotropy in screen processing (dithering).
(117) Furthermore, as for pixels in the inner region RN, the processing result of the screen processing SR1 using the first screen angle a is utilized as is to generate an output image. That screen angle a is determined as appropriate for each plane image. Accordingly, as for the inner region RN, it is possible to favorably avoid interference of multiple plane images (the occurrence of moire or the like).
2. Second Embodiment
(118) A second embodiment is a variation of the first embodiment. Although the above first embodiment illustrates the case where the present invention is applied to the multi-valued screen processing, the second embodiment illustrates the case where the present invention is applied to binary screen processing. The following description focuses on differences from the first embodiment.
(120) For example, the gradation value Din of each pixel in an input pixel group (see
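In the binary variant, the multi-level output of the first embodiment collapses to an on/off decision per pixel; a minimal sketch under that reading (function name assumed):

```python
def binary_screen_pixel(d_in, d_ref):
    """Binary screen processing: one bit per pixel, ON (1) only when the
    input gradation value Din exceeds the reference value Dref."""
    return 1 if d_in > d_ref else 0
```

Gradation is then represented purely by how many pixels of a screen cell turn ON, which is why the averaging composition below must be adjusted so that the combined result does not turn ON more pixels than either screen alone would.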
(123) As shown in
(124) In view of this, processing as described below is further performed in this second embodiment.
(125) Specifically, processing for sectioning the line region LR of a character in the input image into the outline region RE and the inner region RN is firstly performed.
(126) Furthermore, another screen processing SR2 using another screen angle b is also performed, in addition to the above-described screen processing SR1 based on the screen angle a. This screen processing SR2 is performed on only the outline region RE. As for pixels in the outline region RE, the processing results of the screen processing SR1 based on the screen angle a and the screen processing SR2 based on the screen angle b are both used.
(128) In the present embodiment, the screen processing SR2 using the screen angle b is performed on the outline region RE (see
(130) Furthermore, the processing results of the screen processing SR2 and the above-described screen processing SR1 are composited for the outline region RE. Both of the processing results are averaged before composition.
(131) Referring to, for example, the cells in the upper-left 4×4 matrix, if every pixel set to the ON state by either of the two processing results were output directly as an ON-state pixel, more than four pixels would be set to the ON state, and a relatively greater gradation value than originally expected would be represented.
(132) For this reason, in the present embodiment, adjustment is made through the averaging processing such that, as shown in
(133) In this way, for the outline region RE, composition is performed by averaging the processing result of the screen processing SR1 and the processing result of the screen processing SR2. Accordingly, apparently similar gradation values to those obtained from only the processing result of the screen processing SR1 are obtained in a 4×4 matrix that has undergone both the screen processing SR1 and the screen processing SR2.
(134) Furthermore, a similar operation is performed at each position in the outline region RE. As a result, an apparent outline as indicated by the extra thick line EL in
(135) Meanwhile, for pixels in the inner region RN, only the processing result of the screen processing SR1 is used, out of the processing results of both the screen processing SR1 and the screen processing SR2. Accordingly, similar screen processing to that in
(136) Through the above-described operation, a similar effect to that of the first embodiment can also be achieved in the binary screen processing.
3. Third Embodiment
3-1. Overview
(137) A third embodiment describes a technique for detecting a tilt angle c of a line segment region in the line region LR of a character and performing screen processing on that line segment region using a screen angle d whose difference from the tilt angle c is closest to a predetermined angle (here, 45 degrees).
(139) As shown in
(140) The character outline extraction unit 32 extracts an outline region of a character as well as detecting a line region of the character, and sections the line region of the character into the outline region and an inner region enclosed in the outline region. Note that a thin portion (thin-line portion) of a character includes no inner region and is constituted by only the outline region, so that the character outline extraction unit 32 extracts a thin-line portion of a character as the outline region.
(141) The character angle detection unit 36 is a processing unit that detects the tilt angle c of the line region LR of a character (to be specific, the tilt angle of the outline region of the line region LR).
(142) The screen angle determination unit 37 is a processing unit that selects, from among multiple screen angles prepared in advance (e.g., five angles), a screen angle d whose difference from the tilt angle c of the line region LR of the character is closest to a predetermined angle (here, 45 degrees).
(143) The screen processing execution unit 38 is a processing unit that performs screen processing on a line segment region of the line region LR of a character, using the screen angle d selected from among the multiple screen angles.
3-2. Image Processing
(144) As in the first and second embodiments, this image forming apparatus realizes gradation representation by performing screen processing on each plane image on a page including a character. Note that different screen angles a are fundamentally employed for different plane images in order to avoid interference of the plane images.
(145) As mentioned above, in the case where a difference between the outline angle of a character and the screen angle a is relatively small, there is the problem that jaggies are noticeable. This problem can also be solved with the third embodiment.
(146) Also, in particular in the case where the outline of a character having an intermediate gradation value is thin, that thin line may disappear depending on the positional relationship between the screen matrix reference table (screen table) and the thin line.
(147) For example, assume the case where a vertical thin line having an input gradation value of 30 exists in only the third column from the left, as shown in
(148) If the vertical thin line having an input gradation value of 30 were present in the first column from the left, the input gradation value of 30 at a position PG1 in the second row from the top and the first column from the left would exceed the corresponding reference value Dref of 0. Thus, an ON-state pixel would be drawn at that position PG1. In this case, a situation where the thin line disappears is avoided, although the substantial gradation value of the thin line is reduced.
(149) However, in the case where a vertical thin line having an input gradation value of 30 is present in the third column from the left as mentioned above, the thin line disappears.
(150) Assume also, for example, the case where a horizontal thin line having an input gradation value of 30 is present in only the first row from the top as shown in
(151) Such a situation can also occur in the case of employing the screen table TB2 similar to that in
(152) Assume, for example, the case where the screen table TB2 similar to that in
(153) Similarly, assume the case where the screen table TB2 (see
(154) In this way, there is a relatively high possibility that a thin line will disappear in the case where a difference between the angle of the thin line (in other words, the angle of a line segment region of the character outline) and the screen angle is 0 degrees or 90 degrees.
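The thin-line disappearance described in paragraphs (147) to (154) is easy to reproduce. The 4×4 screen table below is entirely hypothetical (the patent's actual reference values are not given); it is chosen only so that the third column's references all exceed 30 while the first column contains a 0.

```python
# Hypothetical 4x4 screen table; values are illustrative, not from the patent.
TB = [[  0, 240,  48, 192],
      [128,  64, 176, 112],
      [ 32, 208,  80, 224],
      [160,  96, 144,  16]]

def binary_screen(image, table):
    """Binary screen processing with the table tiled over the image."""
    n = len(table)
    return [[1 if image[y][x] > table[y % n][x % n] else 0
             for x in range(len(image[0]))] for y in range(len(image))]

# A vertical thin line of value 30 in the third column meets only that
# column's references (48, 176, 80, 144), all greater than 30: every pixel
# thresholds to OFF and the line disappears.
line_col3 = [[30 if x == 2 else 0 for x in range(4)] for y in range(4)]

# In the first column (references 0, 128, 32, 160), 30 > 0 at the top,
# so one ON pixel survives and the line remains visible, if attenuated.
line_col1 = [[30 if x == 0 else 0 for x in range(4)] for y in range(4)]
```

Whether the line survives thus depends entirely on where it happens to fall relative to the table, which is the positional-relationship problem the third embodiment addresses.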
(155) In view of this, in the third embodiment, an angle whose difference from the tilt angle c of the thin line (in other words, the angle of the line segment region of the character outline) is close to a predetermined angle (here, 45 degrees) is employed as the screen angle d. Then, screen processing using the screen angle d is performed on the outline portion (outline region RE) of the character.
(157) If the vertical thin line having a gradation value of 30 (see
(158) Similarly, if the horizontal thin line having a gradation value of 30 (see
(159) Performing such processing makes it possible to suppress a situation where a thin line disappears as a result of screen processing. In particular, the possibility that a thin line will disappear can be minimized by performing screen processing using an angle that corresponds to a 45-degree tilt angle with respect to the direction of extension of the thin line in the input image.
(160) Further details of such processing will be discussed below.
(161) In the third embodiment as well, the processing for sectioning the line region LR (see
(162) On the other hand, for the outline region RE, each process described below is performed.
(163) First, the character outline extraction unit 32 sections the outline region RE in the line region LR of a character into multiple partial regions (also referred to as segments). Then, the character angle detection unit 36 detects the tilt angle c of the outline region RE in the line region LR of the character (the tilt angle c of an edge portion of the line region LR). Here, the tilt angle c of the outline region RE is detected for each partial region of the outline region RE.
(164) Specifically, a direction detection filter FT1 as shown in
(165) The filter FT1 has an M-by-M pixel size in which the center pixel and pixels existing in both upper-right and lower-left directions from that center pixel have a pixel value of 1, and the other pixels have a pixel value of 0. The filter FT1 has the characteristic of calculating a high value for a line that extends in a 45-degree diagonal direction.
(166) By applying such an image filter to the outline region RE in an input image and, for example, determining whether or not the calculation result is greater than or equal to a predetermined value, it is possible to determine whether the angle of the outline region (the angle of the outline of a character) is 45 degrees or not (in other words, to detect a 45-degree outline).
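A filter of the kind paragraph (165) describes, and the thresholded response test of paragraph (166), can be sketched as follows. The size M = 5, the unit weights, and the function names are assumptions; only the shape (1s at the centre and along the upper-right/lower-left diagonal) comes from the text.

```python
def make_45deg_filter(m=5):
    """Direction-detection filter in the spirit of FT1: value 1 at the
    centre pixel and along the upper-right/lower-left diagonal through
    it, 0 elsewhere (m assumed odd)."""
    return [[1 if x + y == m - 1 else 0 for x in range(m)] for y in range(m)]

def filter_response(mask, fil, cy, cx):
    """Correlate the filter with a binary outline mask centred at
    (cy, cx); a high response indicates a 45-degree outline segment."""
    m = len(fil)
    r = m // 2
    total = 0
    for dy in range(m):
        for dx in range(m):
            y, x = cy + dy - r, cx + dx - r
            if 0 <= y < len(mask) and 0 <= x < len(mask[0]):
                total += mask[y][x] * fil[dy][dx]
    return total
```

On a 45-degree outline the response reaches the full filter weight (5 here), while a vertical outline crossing the centre yields only 1, so a simple threshold separates the two cases.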
(167) Note that other angles may also be detected using other image processing filters for detecting respective angles.
(168) Next, the screen angle determination unit 37 selects, from among the multiple screen angles prepared in advance, a screen angle d whose difference from the tilt angle c of the line region LR of the character is closest to the predetermined angle (45 degrees). This processing for selecting the screen angle d is performed for each partial region of the outline region RE. Assume here that multiple (five) screen tables SCR1 to SCR5 (see
(169) Specifically, for each of the multiple screen tables SCRi, a difference value between the tilt angle c of the character and the corresponding screen angle i is calculated. Each difference value is calculated as a value in the range of 0 to 90 degrees. From among these difference values, the screen angle j of the screen table SCRj whose difference value is closest to a predetermined value e (here, 45 degrees) is determined as the screen angle d to be used. For example, if the difference value for the screen table SCR3 is closest to 45 degrees among the multiple difference values, the angle 3 corresponding to the screen table SCR3 is determined as the screen angle d. More specifically, for vertical and horizontal thin lines, the angle 3 corresponding to the screen table SCR3 is determined as the screen angle d, and for 45-degree thin lines, the angle 1 corresponding to the screen table SCR1 (or the angle 5 corresponding to the screen table SCR5) is determined as the screen angle d.
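The selection rule of paragraphs (168) and (169) can be sketched as follows. The concrete screen-angle values are not given in this excerpt; the set [0, 22.5, 45, 67.5, 90] is an assumed example chosen so that, as in the description, a horizontal line (tilt 0) selects the middle table and a 45-degree line selects the first (or last) table.

```python
def select_screen_angle(tilt_c: float, screen_angles: list, e: float = 45.0) -> float:
    """Return the screen angle whose angular difference from the outline
    tilt angle c is closest to the predetermined value e (45 degrees)."""
    def diff(angle: float) -> float:
        # Fold the angular difference into the range 0..90 degrees.
        d = abs(angle - tilt_c) % 180.0
        return d if d <= 90.0 else 180.0 - d
    return min(screen_angles, key=lambda a: abs(diff(a) - e))
```

With the assumed angle set, `select_screen_angle(0.0, [0, 22.5, 45, 67.5, 90])` yields 45 (the middle table), and a tilt of 45 degrees yields 0 (the first table), mirroring the behavior described for vertical/horizontal and 45-degree thin lines.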
(170) Then, screen processing is performed using the screen table SCR corresponding to the selected screen angle d, and as a result, an output image is generated. To be specific, screen processing using the screen angle d determined for each of the multiple partial regions is performed on each of these partial regions, and as a result, an output image is generated.
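Screen processing with a selected screen table, as in paragraph (170), can be sketched as a generic ordered dither: the threshold table is tiled over the region and each pixel is binarized against its corresponding threshold. The actual contents of the tables SCR1 to SCR5 are not given in this excerpt, so the 2-by-2 table used below is purely illustrative.

```python
import numpy as np

def screen_dither(gray: np.ndarray, table: np.ndarray) -> np.ndarray:
    """Ordered dithering of one plane image: tile the screen (threshold)
    table over the image and binarize each pixel against its threshold."""
    h, w = gray.shape
    th, tw = table.shape
    # Tile the table to cover the image, then crop to the image size.
    tiled = np.tile(table, (h // th + 1, w // tw + 1))[:h, :w]
    return (gray >= tiled).astype(np.uint8) * 255
```

In the third embodiment this would be applied per partial region with the table SCRj selected for that region, while the inner region RN keeps the result of the screen processing SR1 as described below.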
(171) With the operation as described above, screen processing for the outline region RE is performed using the screen angle d whose difference from the tilt angle c of a character is close to 45 degrees. This makes it possible to resolve the problem that jaggies are noticeable in the case where the difference between the outline angle of a character and the screen angle a is relatively small (e.g., the case where the difference between the two angles is approximately 20 degrees or less). In particular, screen processing is performed for each partial region of the outline of a character, using the screen angle d whose difference from the tilt angle of that partial region is close to 45 degrees. Accordingly, jaggies appearing in character outlines having various angles can be suppressed.
(172) Furthermore, screen processing is performed using the screen angle d whose difference from the tilt angle c of a character is close to 45 degrees. Accordingly, it is possible to minimize the possibility that a thin line will disappear.
(173) Meanwhile, for pixels in the inner region RN, the processing result of the screen processing SR1 using the predetermined screen angle a is utilized as is to generate an output image. The screen angle a is determined as appropriate for each plane image. Accordingly, for the inner region RN, it is possible to favorably avoid interference of multiple plane images (the occurrence of moire or the like).
(174) Note that although the angle e is 45 degrees in the third embodiment, the present invention is not limited thereto, and a predetermined value in the range of 40 to 50 degrees or a predetermined value in the range of 30 to 60 degrees, for example, may be employed as the angle e.
3. Variations
(175) While embodiments of the present invention have been described above, the present invention is not limited to those embodiments.
(176) For example, although the above embodiments illustrate the case where part of the processing is implemented by the hardware of the image processing unit 3, the present invention is not limited thereto, and processing similar to the above may be implemented solely by a program (software) executed by the controller 9. To be more specific, the controller 9 of the MFP 1 may realize functionality similar to that of the image processing unit 3 in the above embodiments by reading out a predetermined program PG from various types of non-transitory (or portable) computer-readable storage media 91 (such as a USB memory) on which such a program has been recorded, and then executing that program PG using a CPU or the like. Note that the above program may be supplied not only via a storage medium but also by downloading it via the Internet.
(177) The present invention may be embodied in various other forms without departing from the spirit or essential characteristics thereof. The embodiments disclosed in this application are to be considered in all respects as illustrative and not limiting. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all modifications or changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.