Image processing apparatus, image processing method and image processing program
10264157 · 2019-04-16
CPC classification
H04N2201/0094
ELECTRICITY
H04N1/00204
ELECTRICITY
G06K17/0016
PHYSICS
H04N1/32133
ELECTRICITY
International classification
H04N1/04
ELECTRICITY
G06K17/00
PHYSICS
H04N1/32
ELECTRICITY
Abstract
An image processing apparatus includes: an extraction unit configured to extract, from a read image read from printed matter on which a document is printed, a written image additionally written on the printed matter; and a generation unit configured to generate additionally written document data by writing written image information about the written image into document data in which the document is described in a mark-up language.
Claims
1. An image processing apparatus comprising: a hardware processor configured to: extract a written image written on a printed document by reading the printed document on which the written image is written; and generate a subsequent document by writing the extracted written image into document data that was used to create the printed document, wherein the subsequent document is created by linking the extracted written image to a tag, the tag including a character string written in an adjacent image next to the extracted written image, wherein the document data is described in a mark-up language.
2. The image processing apparatus according to claim 1, wherein a predetermined tag is described in the document data, when a name of the predetermined tag is represented in the adjacent image next to the extracted written image, the hardware processor generates the subsequent document by writing the extracted written image so that the extracted written image corresponds to the predetermined tag.
3. The image processing apparatus according to claim 1, wherein the hardware processor generates the subsequent document by writing a character string written in the extracted written image as the extracted written image.
4. The image processing apparatus according to claim 1, wherein the hardware processor generates the subsequent document by writing a file name of an image file of the extracted written image as the extracted written image.
5. The image processing apparatus according to claim 1, further comprising: a print unit configured to print the printed document and an identifier used to identify the printed document on a sheet; and a storage unit configured to store the document data after linking the document data to the identifier; wherein the hardware processor generates the subsequent document by writing the extracted written image into the document data linked to the identifier extracted from the printed document.
6. The image processing apparatus according to claim 5, wherein the hardware processor generates the subsequent document by writing information about an image from which the printed document and the identifier are removed as the extracted written image.
7. An image processing method comprising: extracting a written image written on a printed document by reading the printed document on which the written image is written; and generating a subsequent document by writing the extracted written image into document data that was used to create the printed document, wherein the subsequent document is created by linking the extracted written image to a tag, the tag including a character string written in an adjacent image next to the extracted written image, wherein the document data is described in a mark-up language.
8. A non-transitory recording medium storing a computer readable image processing program for causing a computer (hardware processor) to execute a process, the process comprising: an extracting step of extracting a written image written on a printed document by reading the printed document on which the written image is written; and a generating step of generating a subsequent document by writing the extracted written image into document data that was used to create the printed document, wherein the subsequent document is created by linking the extracted written image to a tag, the tag including a character string written in an adjacent image next to the extracted written image, wherein the document data is described in a mark-up language.
9. The non-transitory recording medium storing a computer readable image processing program according to claim 8, wherein a predetermined tag is previously described in the document data, and when a name of the predetermined tag is written in the adjacent image next to the written image, the extracted written image is written so that the extracted written image corresponds to the predetermined tag, and thus the subsequent document is generated in the generating step.
10. The non-transitory recording medium storing a computer readable image processing program according to claim 8, wherein a character string written in the written image is written as the extracted written image, and thus the subsequent document is generated in the generating step.
11. The non-transitory recording medium storing a computer readable image processing program according to claim 8, wherein a file name of an image file of the written image is written as the extracted written image, and thus the subsequent document is generated in the generating step.
12. The non-transitory recording medium storing a computer readable image processing program according to claim 8, wherein the process further comprises: a printing step of printing the printed document and an identifier used to identify the printed document on a sheet; and a storing step of storing the document data in a memory after linking the document data to the identifier; and the document data linked to the identifier extracted from the printed document is read from the memory and the extracted written image is written to the document data, and thus the subsequent document is generated in the generating step.
13. The non-transitory recording medium storing a computer readable image processing program according to claim 12, wherein information about an image from which the document and the identifier are removed is written as the extracted written image, and thus the subsequent document is generated in the generating step.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The above and other objects, advantages and features of the present invention will become more fully understood from the detailed description given hereinbelow and the appended drawings which are given by way of illustration only, and thus are not intended as a definition of the limits of the present invention, and wherein:
DESCRIPTION OF THE PREFERRED EMBODIMENTS
(16) Hereinafter, an embodiment of the present invention will be described with reference to the drawings. However, the scope of the invention is not limited to the illustrated examples.
(19) The image forming apparatus 1 can communicate with the terminal device 2 via the communication line 3. For example, a Local Area Network (LAN) line, the Internet, a public line, or a dedicated line is used as the communication line 3.
(20) The image forming apparatus 1 is an apparatus in which functions to provide services including copying, PC printing, faxing, scanning, and box are consolidated. Such an apparatus is sometimes referred to as a Multi-Function Peripheral (MFP).
(21) The PC printing service is a service for printing an image on a sheet in accordance with the image data received from the terminal device 2. The PC printing service is sometimes referred to as network printing or network print.
(22) The box service is a service in which a storage region, referred to for example as a box or a personal box, is provided to each user so that the user can save and manage image data and the like in the user's own storage region. The box corresponds to a folder or a directory of a personal computer.
(23) The terminal device 2 is a client used to remotely use the service by the image forming apparatus 1. The terminal device 2 stores an application program that creates a document, and generates a file in which the document is described in a mark-up language (a source file). For example, Extensible Markup Language (XML) or Hypertext Markup Language (HTML) is used as the mark-up language.
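As an illustration of such a source file, the following sketch builds a minimal XML document of the kind the terminal device 2 might generate; the element names ("contract", "line") and the content are hypothetical assumptions, not taken from the embodiment.

```python
import xml.etree.ElementTree as ET

# Hypothetical source file 60 for a contract form, described in XML.
# The element names ("contract", "line") are illustrative assumptions.
SOURCE_FILE_60 = """<contract>
  <line>Contract of Service</line>
  <line>Name: </line>
  <line>Address: </line>
</contract>"""

root = ET.fromstring(SOURCE_FILE_60)
# Collect the item-name labels that would be printed on the sheet.
labels = [line.text.strip() for line in root.iter("line")]
print(labels)  # ['Contract of Service', 'Name:', 'Address:']
```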
(24) For example, a personal computer, a smartphone, or a tablet computer can be used as the terminal device 2.
(25) The image forming apparatus 1 includes a CPU 10a, a RAM 10b, a VRAM 10c, a ROM 10d, a large-capacity storage device 10e, a touch panel display 10f, an operation key panel 10g, a NIC 10h, a modem 10i, a scan unit 10j, a print unit 10k, and a finisher 10m.
(26) The touch panel display 10f displays, for example, a screen on which a message to the user is displayed, a screen on which the user inputs a command or information, and a screen on which a result of the process performed by the CPU 10a is displayed. The touch panel display 10f transmits a signal indicating the touched position to the CPU 10a.
(27) The VRAM 10c is used to store the data of the screen to be displayed on the touch panel display 10f.
(28) The operation key panel 10g is a so-called hardware keyboard, and includes, for example, a numeric keypad, a start key, a stop key, and a function key.
(29) The NIC 10h communicates with another device in a protocol such as Transmission Control Protocol/Internet Protocol (TCP/IP).
(30) The modem 10i exchanges image data with a facsimile terminal in a protocol such as G3.
(31) The scan unit 10j generates image data by reading the image on the sheet set on a glass platen.
(32) The print unit 10k prints not only the image read by the scan unit 10j but also the image that the NIC 10h or modem 10i receives from another device on a sheet.
(33) The finisher 10m staples the sheets on which images have been printed by the print unit 10k, namely, the printed matter, or punches holes in the sheets.
(34) The ROM 10d or the large-capacity storage device 10e stores programs to provide the exemplary service described above, and additionally stores the document structuring program 10P.
(35) The document structuring program 10P generates printed matter by printing, on a sheet, the document represented in a source file described in a mark-up language. Additionally, the program can reflect an image written on the printed matter by the user on the source file.
(36) The programs are loaded on the RAM 10b as necessary and executed by the CPU 10a. For example, a hard disk drive or a Solid State Drive (SSD) is used as the large-capacity storage device 10e.
(37) Executing the document structuring program 10P causes the image forming apparatus 1 to implement the functions of, for example, the source file acquisition unit 101, the source file storage unit 102, the printed matter generation unit 103, the written image extraction unit 104, the item name acquisition unit 105, and the source file generation unit 106.
(38) The process that each of the source file acquisition unit 101 to the source file generation unit 106 performs will be described hereinafter.
(40) The source file acquisition unit 101 obtains the source file in which a document is described. The source file 60 describing the document 50 is used as an example in the following description.
(41) In this example, the source file 60 is described in XML.
(42) The source file acquisition unit 101 acquires the source file 60 as described below. The user creates the document 50 using an application program of the terminal device 2 and generates the source file 60. Meanwhile, the user puts a file name on the source file 60. Then, the user transmits the source file 60 to the image forming apparatus 1. Then, the source file acquisition unit 101 receives the source file 60.
(43) When the source file acquisition unit 101 acquires the source file 60, the source file storage unit 102 stores the source file 60.
(44) When the user designates the file name of the source file 60 and instructs the printed matter generation unit 103 to print the document 50, the printed matter generation unit 103 controls the print unit 10k to print the document 50 on a sheet in accordance with the source file 60. Meanwhile, a two-dimensional bar code 51 representing the file name of the source file 60 is added to the document 50.
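The linkage between the stored source file and the printed identifier can be sketched as follows. The storage unit is modeled as a dictionary keyed by file name, and the two-dimensional bar code 51 is assumed to encode that same file name; the function names, file name, and contents are illustrative assumptions, and a real apparatus would decode the identifier from the scanned bar code rather than receive it directly.

```python
# Minimal model of the source file storage unit 102: source files are
# stored keyed by file name, and the bar code on the printed sheet
# encodes that file name, so decoding it yields the lookup key.
source_file_storage = {}

def store_source_file(file_name, source_text):
    """Store the source file after linking it to its identifier."""
    source_file_storage[file_name] = source_text

def retrieve_by_identifier(identifier):
    """Read back the source file linked to the decoded identifier."""
    return source_file_storage[identifier]

store_source_file("contract01.xml", "<contract>...</contract>")
print(retrieve_by_identifier("contract01.xml"))  # <contract>...</contract>
```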
(45) This provides the printed matter 4 on which the document 50 and the two-dimensional bar code 51 are printed.
(46) The user writes the user's name and address as the contractor on the printed matter 4 with a pen.
(47) Then, the user sets the printed matter 4 on the scan unit 10j of the image forming apparatus 1 so that the printed side of the printed matter 4 is read. Hereinafter, the image read from the printed side will be referred to as a read image 53.
(48) This reading inputs the image data of the read image 53 to the image forming apparatus 1. Hereinafter, the image data will be referred to as the image data 61.
(49) When the image data 61 is acquired, the written image extraction unit 104 extracts the written image 52 from the read image 53 in the following process.
(50) The written image extraction unit 104 detects the two-dimensional bar code 51 from the read image 53 (#701).
(51) However, the extracted image still includes the two-dimensional bar code 51.
(52) Note that the printed matter generation unit 103 preferably processes the read image 53 by converting the read image 53 into a binary image, removing noise from the read image 53, or correcting the inclination of the read image 53 before the extraction process described above is performed.
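The binarization mentioned in this note can be sketched as a simple fixed-threshold conversion; the threshold value of 128 and the list-of-rows image representation are illustrative assumptions.

```python
# A fixed-threshold binarization: pixels at or above the threshold
# become white (1), the rest black (0). The image is a list of rows
# of gray levels in the range 0-255.
def binarize(gray_rows, threshold=128):
    return [[1 if px >= threshold else 0 for px in row] for row in gray_rows]

gray = [[12, 200, 130],
        [255, 90, 40]]
print(binarize(gray))  # [[0, 1, 1], [1, 0, 0]]
```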
(53) The item name acquisition unit 105 acquires the text data 62 indicating the name of an item (the item name) corresponding to the written image 52 as described below.
(54) The item name acquisition unit 105 extracts the image of a character string on a predetermined side next to the written image 52 from the read image 53.
(55) For example, when the side is left, the item name acquisition unit 105 extracts the image of the character string 'Name:' as the image of the character string at the left of the written image 521. Similarly, the item name acquisition unit 105 extracts the image of the character string 'Address:' as the image of the character string at the left of the written image 522.
(56) Then, the item name acquisition unit 105 converts the extracted images into text data, namely, the text data 62.
(57) Note that the process for extracting the images of the character strings and converting the images into the text data can be performed with a publicly known Optical Character Recognition (OCR) technique.
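The search for the character string on a predetermined side next to a written image can be sketched geometrically as follows; the bounding boxes and recognized strings are illustrative assumptions standing in for real OCR results.

```python
# Each box is (x, y, width, height). A text box qualifies as the item
# name of a written image if it ends to the left of the written image
# and lies on roughly the same line; the nearest such box wins.
def left_neighbor(written_box, text_boxes):
    wx, wy, _, wh = written_box
    candidates = [
        (x, y, w, h) for (x, y, w, h) in text_boxes
        if x + w <= wx and abs(y - wy) < wh  # left of, same line
    ]
    return max(candidates, key=lambda b: b[0] + b[2], default=None)

# Hypothetical OCR results: "Name:" sits to the left of written image 521.
text_boxes = {(10, 100, 60, 20): "Name:", (10, 150, 80, 20): "Address:"}
box = left_neighbor((80, 102, 200, 20), list(text_boxes))
print(text_boxes[box])  # Name:
```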
(58) The source file generation unit 106 generates a new source file 63 by reflecting the written image 52 on the source file 60. The process for the generation is performed as follows.
(59) The source file generation unit 106 reads the source file 60 from the source file storage unit 102 (#711).
(60) Then, the source file generation unit 106 writes a code <character string_1>text_2</character string_1> including tags <> and </> just after the determined position in the source file 60 (#714).
(61) Note that the character string_1 is the character string of the item name of the written image 52 from which predetermined characters (for example, ':' and ';') are deleted. The text_2 is the character string recognized in step #712.
(62) For example, when the written image 52 is the written image 521, the source file generation unit 106 searches for the character string 'Name:' and determines the position. Then, the source file generation unit 106 writes the code <Name>John William</Name> just after the position. When the written image 52 is the written image 522, the source file generation unit 106 searches for the character string 'Address:' and determines the position. Then, the source file generation unit 106 writes the code <Address>123 ABC street, NY</Address> just after the position.
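The search-and-write step in this example can be sketched as follows, assuming a plain-text representation of the source file; the helper name write_code and the example source text are hypothetical.

```python
# Find the item name in the source text, strip the trailing ':' or ';'
# to obtain the tag name (character string_1), and insert
# <tag>text</tag> (carrying the text_2) just after the found position.
def write_code(source, item_name, text):
    tag = item_name.rstrip(":;")          # "Name:" -> "Name"
    pos = source.index(item_name) + len(item_name)
    return source[:pos] + f"<{tag}>{text}</{tag}>" + source[pos:]

source_60 = "<contract><line>Name: </line><line>Address: </line></contract>"
source_63 = write_code(source_60, "Name:", "John William")
source_63 = write_code(source_63, "Address:", "123 ABC street, NY")
print(source_63)
```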
(63) As a result of the process, the new source file 63 is generated.
(64) Alternatively, the source file generation unit 106 may generate an image file 64 of the written image 52, save the image file 64 in a predetermined directory, and write a character string indicating the location in which the image file 64 is saved and its file name as the text_2.
(65) In such a case, the source file generation unit 106 generates the image file 64 of the written image 52 in step #712 instead of converting the written image 52 into the text data.
(66) For example, when the image file 64 of the written image 521 is saved in a directory dir01 and the file name of the image file 64 is image001.gif, the code <Name>dir01/image001.gif</Name> is written. The code of the written image 522 is generated and written in a similar manner.
(67) As a result, the source file 63 in which the file names are written is generated.
(68) The source file 63 is saved in a predetermined directory, and used to search for, display, or print the document 50. The source file 63 is preferably transmitted to another device.
(70) Next, the flow of the whole process for reflecting the image hand-written by the user (the written image 52) on the source file 60 will be described with reference to the flowchart.
(71) By the document structuring program 10P, the process is performed as follows.
(72) When receiving the source file 60 from the terminal device 2 (Yes in #11), the image forming apparatus 1 saves the source file 60 in the source file storage unit 102 (#12).
(73) Alternatively, when the file name of the source file 60 is designated and a command for printing the source file 60 is given (Yes in #13), the image forming apparatus 1 generates the two-dimensional bar code 51 representing the file name (#14), and prints the document represented by the source file 60, namely, the document 50, together with the generated two-dimensional bar code 51 on the sheet (#15). This generates the printed matter 4.
(74) The user writes the user's name and address on the printed matter 4 with a pen. Then, the user sets the printed matter 4 on the scan unit 10j so that the scan unit 10j scans the printed matter 4. This reads the read image 53 and generates the image data 61.
(75) When the printed matter 4 is scanned and the image data 61 is generated with the scan unit 10j (Yes in #16), the image forming apparatus 1 extracts the written image 52 from the image represented by the image data 61, namely, the read image 53 (#17). The process for the extraction has been described above.
(76) Furthermore, the image forming apparatus 1 acquires the data representing the item name of each image in the written image 52 (the text data 62) (#18).
(77) Then, the image forming apparatus 1 writes the code about the written image 52 into the source file 60 (#19). The process for the writing has been described above.
(78) The image forming apparatus 1 performs the processes in step #12, in steps #14 and #15, and in steps #17 to #19 as appropriate until the document structuring program 10P is terminated.
(79) According to the present embodiment, the information about the written image 52 is written into the source file 60. Using the source file 60 (namely, the source file 63) in which the information is written enables the user to use an existing search system without any change to search for the document 50 based on the written image 52. Additionally, describing the source files 60 and 63 in a language preferable to structure texts, such as XML, enables the user to more flexibly search for the file than the search for the data in a mere text format.
(80) According to the present embodiment, even when an underline or frame surrounding a region in which the user is to hand-write characters is not printed on the printed matter 4, the written image 52 can be extracted, linked to a tag, and recorded.
(82) According to the present embodiment, the source file generation unit 106 generates the tags <Name> and </Name>, namely, the tags for the written image 52. However, the tags may be described in the source file 60 in advance.
(83) In such a case, the source file generation unit 106 can generate the source file 63 as described below.
(84) The source file generation unit 106 extracts the written image 52 from the read image 53 and recognizes the character string written in the written image 52 (#721, #722).
(85) Furthermore, the source file generation unit 106 searches the source file 60 for the two tags indicating the character string of the item name corresponding to the written image 52 from which predetermined characters (for example, ':' and ';') are deleted (#723). This search determines the positions of the two tags. Note that the latter tag of the two tags includes a slash.
(86) Then, the source file generation unit 106 writes the character string recognized in step #722 between the two tags.
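This modification can be sketched as follows, assuming the pair of tags is already present in the source file; the helper name fill_between_tags and the example source text are hypothetical.

```python
# The recognized character string is written between a pre-existing
# pair of tags; the latter tag of the pair includes a slash.
def fill_between_tags(source, tag, text):
    open_tag, close_tag = f"<{tag}>", f"</{tag}>"
    start = source.index(open_tag) + len(open_tag)
    end = source.index(close_tag)
    return source[:start] + text + source[end:]

source_60 = "<contract><Name></Name><Address></Address></contract>"
source_63 = fill_between_tags(source_60, "Name", "John William")
source_63 = fill_between_tags(source_63, "Address", "123 ABC street, NY")
print(source_63)
```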
(87) The process described above brings about the same result as the process in the present embodiment. In other words, the same source file 63 is generated.
(88) In the exemplary embodiment, instead of the recognized character string, the image file 64 of the written image 52 is generated so that the place in which the image file 64 is saved and the name of the image file 64 can be written. In such a case, the source file 63 in which the save location and file name are written is generated.
(89) In the present embodiment, a QR code (registered trademark) is used as the two-dimensional bar code 51. However, a two-dimensional bar code in another format can be used. As long as the document 50 can be distinguished, a bar code or a character string can be used instead of the two-dimensional bar code 51.
(90) The whole configurations of the document management system 100 and the image forming apparatus 1, the configuration of each unit, the contents of each process, the order of the processes, and the data structures can be changed as appropriate within the gist of the present invention.
(91) Although the present invention has been described and illustrated in detail, it is clearly understood that the same is by way of illustration and example only and is not to be taken by way of limitation, the scope of the present invention being interpreted by the terms of the appended claims.