Text recognition system with feature recognition and method of operation thereof
09582727 · 2017-02-28
Assignee
Inventors
- Golnaz Abdollahian (San Jose, CA, US)
- Alexander Berestov (San Jose, CA, US)
- Hiromasa Naganuma (Ichikawa, JP)
- Hiroshige Okamoto (Yokohama, JP)
CPC classification
- G06V30/196 (Physics)
- G06V30/224 (Physics)
International classification
Abstract
A text recognition system and method of operation thereof including: a storage unit for storing a text unit; and a processing unit, connected to the storage unit, the processing unit including: a communication interface for receiving the text unit, a feature detection module for determining an isolated feature of the text unit, an angle detection module for determining angle features of the text unit, a feature vector module for generating a feature vector for the text unit based on the isolated feature and the angle features, and a text recognition module for determining recognized text using the feature vector for display on a display interface.
Claims
1. A method of operation of a text recognition system comprising: receiving a text unit; determining an isolated feature of the text unit; determining boundary points of the text unit; determining angle features of the text unit based on the boundary points; generating a feature vector for the text unit based on the isolated feature and the angle features; and determining recognized text using the feature vector for display on a display interface, wherein determining the angle features includes: selecting three of the boundary points which are adjacent to one another; drawing lines between each of the boundary points and its nearest neighbor; and determining an angle between the lines.
2. The method as claimed in claim 1 wherein determining the angle features of the text unit includes: determining a first angle feature; and determining a second angle feature.
3. The method as claimed in claim 1 wherein receiving the text unit includes receiving the text unit having a background region and a text region.
4. A method of operation of a text recognition system comprising: receiving a text unit having a background region and a text region; determining an isolated feature of the text unit; determining boundary points of the text unit; determining angle features based on the boundary points; generating a feature vector for the text unit based on the isolated feature and the angle features; and determining recognized text using the feature vector for display on a display interface, wherein determining the angle features includes: selecting two of the boundary points which are adjacent to one another; determining a text horizontal; drawing a line between the boundary points; and determining an angle between the text horizontal and the line.
5. The method as claimed in claim 4 wherein determining the boundary points of the text unit includes determining a boundary between the background region and the text region.
6. The method as claimed in claim 4 wherein generating the feature vector includes generating a spatial pyramid feature vector.
7. A text recognition system comprising: a storage unit for storing a text unit; and a processing unit, connected to the storage unit, the processing unit including: a communication interface for receiving the text unit, a feature detection module, coupled to the communication interface, for determining an isolated feature of the text unit, an angle detection module, coupled to the communication interface, for determining angle features of the text unit, a boundary determination module, coupled to the angle detection module, for determining boundary points of the text unit, a feature vector module, coupled to the feature detection module and the angle detection module, for generating a feature vector for the text unit based on the isolated feature and the angle features, and a text recognition module, coupled to the feature vector module, for determining recognized text using the feature vector for display on a display interface, wherein the angle detection module is for: selecting three of the boundary points which are adjacent to one another; drawing lines between each of the boundary points and its nearest neighbor; and determining an angle between the lines.
8. The system as claimed in claim 7 further comprising an imaging device connected to the processing unit or the storage unit.
9. The system as claimed in claim 7 further comprising a light source for providing light for an imaging device.
10. The system as claimed in claim 7 wherein the processing unit includes the boundary determination module for detecting a background region and a text region of the text unit.
11. The system as claimed in claim 7 further comprising: an imaging device connected to the processing unit or the storage unit; a light source for providing light for the imaging device; and wherein the processing unit includes: the boundary determination module, coupled to the angle detection module, for detecting a background region and a text region of the text unit.
12. The system as claimed in claim 11 wherein the boundary determination module is for determining a boundary between the background region and the text region.
13. The system as claimed in claim 11 wherein the feature vector module is for generating a spatial pyramid feature vector.
14. A text recognition system comprising: a storage unit for storing a text unit; and a processing unit, connected to the storage unit, the processing unit including: a communication interface for receiving the text unit, a feature detection module, coupled to the communication interface, for determining an isolated feature of the text unit, an angle detection module, coupled to the communication interface, for determining angle features of the text unit, a boundary determination module, coupled to the angle detection module, for determining boundary points of the text unit, a feature vector module, coupled to the feature detection module and the angle detection module, for generating a feature vector for the text unit based on the isolated feature and the angle features, and a text recognition module, coupled to the feature vector module, for determining recognized text using the feature vector for display on a display interface, wherein the angle detection module is for: selecting two of the boundary points which are adjacent to one another; determining a text horizontal; drawing a line between the boundary points; and determining an angle between the text horizontal and the line.
15. The system as claimed in claim 14 further comprising an imaging device connected to the processing unit or the storage unit.
16. The system as claimed in claim 14 further comprising a light source for providing light for an imaging device.
17. The system as claimed in claim 14 wherein the processing unit includes the boundary determination module for detecting a background region and a text region of the text unit.
18. The system as claimed in claim 14 further comprising: an imaging device connected to the processing unit or the storage unit; a light source for providing light for the imaging device; and wherein the processing unit includes: the boundary determination module, coupled to the angle detection module, for detecting a background region and a text region of the text unit.
19. The system as claimed in claim 18 wherein the boundary determination module is for determining a boundary between the background region and the text region.
20. The system as claimed in claim 18 wherein the feature vector module is for generating a spatial pyramid feature vector.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
BEST MODE FOR CARRYING OUT THE INVENTION
(13) The following embodiments are described in sufficient detail to enable those skilled in the art to make and use the invention. It is to be understood that other embodiments would be evident based on the present disclosure, and that system, process, or mechanical changes may be made without departing from the scope of the present invention.
(14) In the following description, numerous specific details are given to provide a thorough understanding of the invention. However, it will be apparent that the invention may be practiced without these specific details. In order to avoid obscuring the present invention, some well-known circuits, system configurations, and process steps are not disclosed in detail.
(15) The drawings showing embodiments of the system are semi-diagrammatic and not to scale and, particularly, some of the dimensions are for the clarity of presentation and are shown exaggerated in the drawing FIGs. Similarly, although the views in the drawings for ease of description generally show similar orientations, this depiction in the FIGs. is arbitrary for the most part. Generally, the invention can be operated in any orientation.
(16) Where multiple embodiments are disclosed and described having some features in common, for clarity and ease of illustration, description, and comprehension thereof, similar and like features one to another will ordinarily be described with similar reference numerals.
(17) For expository purposes, the term horizontal as used herein is defined as a plane parallel to the proper reading plane, regardless of its orientation. The term vertical refers to a direction perpendicular to the horizontal as just defined. Terms, such as above, below, bottom, top, side (as in sidewall), higher, lower, upper, over, and under, are defined with respect to the horizontal plane, as shown in the figures.
(18) Referring now to
(19) The image capture device 102 can be a camera, scanner, or other device capable of capturing still frames. The image capture device 102 is connected to the processing unit 104, which is connected to the display interface 106 and a storage unit 108. The display interface 106 can display identified text which has been imaged with the image capture device 102. Also connected to the processing unit 104 is a light source 110 for illuminating objects in view of the image capture device 102. The processing unit 104 is shown as connected to the light source 110 for illustrative purposes, but it is understood that the light source 110 can also be separate from the processing unit 104. Furthermore, it is understood that the light source 110 can be ambient natural or artificial light.
(20) The processing unit 104 can be any of a variety of semiconductor devices such as a desktop or laptop computer, a specialized device, embedded system, or simply a computer chip integrated with the image capture device 102 and/or the display interface 106. The display interface 106 can utilize a variety of display technologies such as LCD, LED-LCD, plasma, holographic, OLED, front and rear projection, CRT, or other display technologies.
(21) The processing unit 104 can contain many modules capable of performing various functions. For example, the processing unit 104 can have a communication interface coupled to a feature detection module and an angle detection module, a boundary determination module coupled to the angle detection module, a feature vector module coupled to the angle detection module and the feature detection module, and a text recognition module coupled to the feature vector module. The processing unit can run some or all of the modules simultaneously.
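The module chain described above can be sketched as a simple pipeline. This is a hypothetical illustration only; every function body below is a placeholder standing in for the corresponding module, not the actual implementation.

```python
# Hypothetical sketch of the processing-unit pipeline; all function
# bodies are placeholders, not the patent's actual implementation.

def detect_isolated_features(text_unit):
    # Feature detection module: count self-contained features (placeholder).
    return [float(text_unit.count("hole"))]

def determine_boundary_points(text_unit):
    # Boundary determination module: return boundary points (placeholder).
    return [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]

def detect_angle_features(points):
    # Angle detection module: one angle value per boundary point (placeholder).
    return [90.0 for _ in points]

def recognize(vector):
    # Text recognition module: match against trained vectors (placeholder).
    return "text" if vector else ""

def run_pipeline(text_unit):
    isolated = detect_isolated_features(text_unit)
    boundary = determine_boundary_points(text_unit)
    angles = detect_angle_features(boundary)
    vector = isolated + angles          # feature vector module: concatenate
    return recognize(vector)
```

In a real system the stages could run concurrently, as the description notes; the linear chain here only shows the data flow between modules.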
(22) For example, the image capture device 102 can be used in conjunction with the light source 110 in order to capture an image for text extraction and identification by the text recognition system 100. The image captured by the image capture device 102 and the light source 110 can be stored in the storage unit 108. The processing unit 104 can process the image and identify text for display of the identified text isolated from the image on the display interface 106. The image capture device 102, the processing unit 104, and the display interface 106 can be connected in various ways to operate the text recognition system 100. For example, the text recognition system 100 can be integrated into a handheld camera, phone, tablet, or operated as a camera or scanner attached to a desktop computer or laptop. Also for example, the image capture device 102 can be remote from the processing unit 104, and can be connected through a wired or wireless networking protocol.
(23) Referring now to
(24) A single character is shown in the text unit 250 for illustrative purposes, but it is understood that the text recognition system 100 can operate on larger textual units. For example, the text unit 250 can include individual characters, entire words, phrases, or full sentences. A double border is shown around the text unit 250 for clarity purposes only and is not meant to limit the invention in any way. The communication interface of the processing unit 104 can receive the text unit 250 from the storage unit 108, for example.
(25) Referring now to
(26) A single hole is shown in
(27) Referring now to
(28) The boundary points 420 can be determined in various ways. For example, first some of the text features 316 that can be considered self-contained such as dots or holes can be excluded. Continuing the example, the boundary points 420 can be spaced along the boundary between the background region 214 and the text region 212 to provide coverage of the shape of the character, word, or phrase in question, but leaving enough space to avoid imaging defects (this can be seen in the not-quite-straight lines of
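The spacing step above can be sketched as follows. This is a minimal illustration, assuming the boundary has already been extracted as an ordered list of (x, y) pixel coordinates; the function name and the fixed-spacing strategy are assumptions, not the patent's method.

```python
import math

def sample_boundary_points(contour, spacing):
    """Pick boundary points roughly `spacing` apart along an ordered
    contour (a list of (x, y) points), so that the points cover the
    character's shape without tracking every small imaging defect."""
    points = [contour[0]]
    travelled = 0.0
    for prev, cur in zip(contour, contour[1:]):
        travelled += math.dist(prev, cur)
        if travelled >= spacing:
            points.append(cur)
            travelled = 0.0
    return points
```

Larger spacing smooths over jagged, defect-laden stretches of the boundary; smaller spacing captures finer shape detail, a trade-off the description returns to later.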
(29) Referring now to
(30) An angle 522, for example, is defined as the angle found when a line is drawn between three of the boundary points 420 in sequence. More specifically, in this example, a line is drawn from one of the boundary points 420 which has been designated as point 1 (shown in
(31) An angle 524, for example, is defined as the angle between a line drawn between two of the boundary points 420 in sequence and a line drawn through one of the points which follows a text horizontal 526. The text horizontal 526 is included as a part of the text unit 250. The text horizontal 526 (shown in
(32) Arrows shown are for illustrative purposes only, and there is no directionality implied with the lines used to determine the angle 522 or the angle 524. Groups of the boundary points 420 overlap so that every sequential combination of the boundary points 420 can be covered. More detail on this follows in the description for
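The two angle measurements can be sketched as follows: the three-point angle (the angle 522) is the angle at the middle of three sequential boundary points, and the two-point angle (the angle 524) is measured against the text horizontal, here assumed to coincide with the x-axis. Function names are illustrative.

```python
import math

def triplet_angle(p1, p2, p3):
    """Angle at the middle point p2 between the segments p2->p1 and
    p2->p3, in degrees (the three-point angle feature)."""
    a = (p1[0] - p2[0], p1[1] - p2[1])
    b = (p3[0] - p2[0], p3[1] - p2[1])
    dot = a[0] * b[0] + a[1] * b[1]
    return math.degrees(math.acos(dot / (math.hypot(*a) * math.hypot(*b))))

def horizontal_angle(p1, p2):
    """Angle between the segment p1->p2 and the text horizontal
    (taken as the x-axis), in degrees (the two-point angle feature)."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))
```

Because the lines carry no directionality, only the magnitude of each angle matters; sliding these measurements along overlapping groups of boundary points yields one angle value per sequential combination.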
(33) Referring now to
(34) The numbering for the x-axis on both plots can be based on the lowest numbered of the boundary points 420 of
(35) Referring now to
(36) Referring now to
(37) This example uses the isolated feature 318 and two angle features to determine the feature vector 828, but it is understood that this is for illustrative purposes only. The feature vector 828 can use other kinds of the text features 316 to further refine the feature vector 828.
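One plausible way to combine the isolated feature with the two angle features is to concatenate a count with coarse histograms of the angle values. This sketch is an assumption for illustration; the bin counts, ranges, and layout are not taken from the patent.

```python
def build_feature_vector(isolated_count, triplet_angles, horizontal_angles, bins=8):
    """Concatenate an isolated-feature count with coarse histograms of
    the two angle features. Bin counts and layout are illustrative."""
    def histogram(angles, lo, hi):
        counts = [0] * bins
        width = (hi - lo) / bins
        for a in angles:
            counts[min(int((a - lo) / width), bins - 1)] += 1
        return counts
    return ([isolated_count]
            + histogram(triplet_angles, 0.0, 180.0)       # three-point angles
            + histogram(horizontal_angles, -180.0, 180.0))  # angles vs. horizontal
```

Further kinds of text features would simply extend the concatenation, which is how the vector scales from characters to words and phrases.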
(38) The feature vector 828 can be used by the text recognition module to determine what character, word, or phrase has been detected within the text unit 250, for example. The determination of the content of the text unit 250 can be done by matching the feature vector 828 with a dictionary of previously trained feature vectors. The feature vector 828 can be different for each possible character, word, or phrase of the text unit 250. Once a good match has been found, recognized text can be output for display on the display interface 106 of
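The dictionary lookup described above amounts to a nearest-neighbor search over previously trained vectors. A minimal sketch, assuming Euclidean distance and a plain text-to-vector mapping (both assumptions; the patent does not specify the matching metric):

```python
import math

def match_text(feature_vector, dictionary):
    """Return the dictionary entry whose trained feature vector is
    closest (by Euclidean distance) to the query vector. `dictionary`
    maps candidate text to its trained vector; names are illustrative."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(dictionary, key=lambda text: distance(feature_vector, dictionary[text]))
```

For example, a query vector lying near the trained vector for one word and far from all others would return that word as the recognized text.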
(39) It has been discovered that the use of the feature vector 828 in the text recognition system 100 allows for greater detection and recognition quality among a greater variation of text. The feature vector 828 does not require segmentation, nor does it require all letters in a word to be separated from each other. The feature vector 828 allows a robust and quantitative analysis to be done on entire words in any language without segmentation, even if all letters in a word are connected (such as in cursive writing or in a script such as Arabic), because the feature vector 828 can be generated to assure that the feature vector 828 for different words will be easily distinguishable.
(40) It has also been discovered that the approach taken in the text recognition system 100 of combining many different types of the text features 316 to determine the feature vector 828 allows for simple scaling to encompass words, entire phrases, and beyond. When greater precision is necessary, additional types of the text features 316 aside from the isolated features 318, the angle feature 722, and the angle feature 724 can be added to the feature vector 828, increasing precision. Alternatively, the resolution of the text features 316 can be increased by decreasing the spacing between the boundary points 420 of
(41) It has also been found that the approach taken by the text recognition system 100 of combining many different types of the text features 316 to determine the feature vector 828 allows for faster detection of recognized text. Because no segmentation is required, and the text recognition system 100 can operate on words and even entire phrases, significantly less processing power is required to identify text.
(42) Referring now to
(43) Referring now to
(44) Referring now to
(45) Referring now to
(46) The resulting method, process, apparatus, device, product, and/or system is straightforward, cost-effective, uncomplicated, highly versatile, accurate, sensitive, and effective, and can be implemented by adapting known components for ready, efficient, and economical manufacturing, application, and utilization.
(47) Another important aspect of the present invention is that it valuably supports and services the historical trend of reducing costs, simplifying systems, and increasing performance.
(48) These and other valuable aspects of the present invention consequently further the state of the technology to at least the next level.
(49) While the invention has been described in conjunction with a specific best mode, it is to be understood that many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the foregoing description. Accordingly, it is intended to embrace all such alternatives, modifications, and variations that fall within the scope of the included claims. All matters hitherto set forth herein or shown in the accompanying drawings are to be interpreted in an illustrative and non-limiting sense.