Patent classifications
G06K9/34
METHOD AND DEVICE FOR INPUTTING HANDWRITING CHARACTER
A method and an electronic device for inputting a handwriting character are provided. The electronic device comprises a touch screen, a memory, and a processor configured to perform the method. The method comprises the steps of: receiving a handwriting input on the touch screen; detecting a position of an initial point of the handwriting input; determining an input area for the handwriting input among a plurality of input areas of the touch screen based on the position of the initial point; determining an operation of the handwriting input based on the position of the initial point and performing the determined operation; and, upon completion of the handwriting input, recognizing the input as a character and displaying the recognized character in the determined input area on the touch screen.
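The input-area determination in the abstract reduces to a point-in-rectangle lookup. A minimal sketch, assuming the touch screen is divided into axis-aligned rectangular input areas (the function and area layout are illustrative, not from the patent):

```python
def find_input_area(areas, point):
    """Return the index of the input area containing the initial touch point.

    `areas` is a list of (x, y, width, height) rectangles covering the touch
    screen; `point` is the (x, y) position of the initial point of the
    handwriting input.  Returns None if the point lies outside every area.
    """
    px, py = point
    for i, (x, y, w, h) in enumerate(areas):
        if x <= px < x + w and y <= py < y + h:
            return i
    return None

# Two side-by-side input areas on a hypothetical 800x600 touch screen.
areas = [(0, 0, 400, 600), (400, 0, 400, 600)]
```

A stroke whose initial point falls at (100, 50) would be routed to the left area, and the recognized character would later be displayed there.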
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
According to an embodiment, an information processing apparatus includes an attribute determiner and a setter. The apparatus acquires first sets, each indicating a combination of observation information, which indicates a result of observing an area surrounding a moving body, and position information. The attribute determiner is configured to determine, based on the observation information, an attribute of each of the areas into which the area surrounding the moving body is divided, and to generate second sets, each indicating a combination of attribute information indicating the attribute of an area and the position information. The setter is configured to set, based on the second sets, a reliability of the attribute of the area of a target second set from the correlation between that attribute and the attributes of the corresponding areas of the other second sets.
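One simple reading of the setter is agreement-based: the reliability of an area's attribute grows with the fraction of other second sets whose corresponding area carries the same attribute. A sketch under that assumption (the attribute labels and function name are hypothetical):

```python
def attribute_reliability(target_attrs, other_sets, area_idx):
    """Reliability of the attribute of one area in the target second set.

    A simple agreement-based stand-in for the correlation in the abstract:
    the fraction of other second sets whose corresponding area carries the
    same attribute (e.g. 'road', 'obstacle').
    """
    target = target_attrs[area_idx]
    votes = [s[area_idx] for s in other_sets]
    return sum(v == target for v in votes) / len(votes)

# Target second set and four other second sets, three areas each.
target = ['road', 'obstacle', 'road']
others = [['road', 'road', 'road'],
          ['road', 'obstacle', 'road'],
          ['obstacle', 'obstacle', 'road'],
          ['road', 'obstacle', 'road']]
```

Here the first area's 'road' attribute is confirmed by three of four other sets (reliability 0.75), while the third area is confirmed by all four (reliability 1.0).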
DOCUMENT OPTICAL CHARACTER RECOGNITION
Vehicles and other items often have corresponding documentation, such as registration cards, that includes a significant amount of informative textual information that can be used in identifying the item. Traditional OCR may be unsuccessful when dealing with non-cooperative images. Accordingly, features such as dewarping, text alignment, and line identification and removal may aid in OCR of non-cooperative images. Dewarping involves determining curvature of a document depicted in an image and processing the image to dewarp the image of the document to make it more accurately conform to the ideal of a cooperative image. Text alignment involves determining an actual alignment of depicted text, even when the depicted text is not aligned with depicted visual cues. Line identification and removal involves identifying portions of the image that depict lines and removing those lines prior to OCR processing of the image.
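The line identification and removal step can be illustrated on a binarized image: rows whose ink density is near total are likely ruled lines and can be cleared before OCR. A minimal sketch, assuming a horizontal-line-only detector with a hypothetical density threshold (a real system would also handle vertical and slanted lines):

```python
def remove_horizontal_lines(image, threshold=0.9):
    """Blank out rows whose ink density suggests a ruled line.

    `image` is a list of rows of 0/1 pixels (1 = ink).  Rows in which the
    fraction of ink pixels meets `threshold` are treated as depicted lines
    and cleared, leaving only text pixels for OCR.
    """
    width = len(image[0])
    cleaned = []
    for row in image:
        if sum(row) / width >= threshold:
            cleaned.append([0] * width)   # identified line: remove it
        else:
            cleaned.append(list(row))
    return cleaned

page = [
    [0, 1, 0, 1, 0, 0],   # text pixels
    [1, 1, 1, 1, 1, 1],   # a ruled line under the text
    [0, 0, 1, 0, 1, 0],
]
```

Removing the fully inked second row leaves the sparse text rows untouched for recognition.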
System for automated text and halftone segmentation
A method and system for segmenting text from non-text portions of a digital image using the size, solidity, and run-length characteristics of connected components within the image data. For a connected component comprising a rectangular group of pixels enclosing a set of connected pixels having the same binary state, the size characteristic may be based on a ratio of height to width of the connected component and the total number of pixels within the connected component, the solidity characteristic may be based on a ratio of pixels within the connected component to a total number of pixels within a convex hull of the set of connected pixels, and the run-length characteristic may be based on a number of transitions within the connected component.
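The three characteristics can be sketched for a single connected component. This is an illustrative approximation, not the patented method: in particular, solidity here uses a cheap row-span fill (filling each row between its leftmost and rightmost pixel) as a stand-in for the true convex-hull pixel count:

```python
def component_features(pixels):
    """Size, solidity, and run-length features of one connected component.

    `pixels` is a set of (row, col) coordinates of the connected pixels of
    the same binary state.
    """
    rows = [r for r, _ in pixels]
    cols = [c for _, c in pixels]
    r0, r1 = min(rows), max(rows)
    c0, c1 = min(cols), max(cols)
    height, width = r1 - r0 + 1, c1 - c0 + 1

    aspect_ratio = height / width      # size: height-to-width ratio
    area = len(pixels)                 # size: total pixel count

    # Row-span fill approximating the convex hull of the pixel set.
    hull = 0
    for r in range(r0, r1 + 1):
        row_cols = [c for rr, c in pixels if rr == r]
        if row_cols:
            hull += max(row_cols) - min(row_cols) + 1
    solidity = area / hull

    # Run-length feature: on/off transitions along each row.
    transitions = 0
    for r in range(r0, r1 + 1):
        prev = 0
        for c in range(c0, c1 + 1):
            cur = 1 if (r, c) in pixels else 0
            transitions += cur != prev
            prev = cur
        transitions += prev            # trailing off-transition at row end
    return aspect_ratio, area, solidity, transitions
```

A solid 2x3 block yields solidity 1.0 and two transitions per row; halftone components tend to score low solidity and many transitions, which is what drives the text/non-text split.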
Object detection in images using distance maps
There is described herein a method and system for detecting, in a segmented image, the presence and position of objects with a dimension greater than or equal to a minimum dimension. The objects exhibit a property whereby a distance map of the object at a first scale and a distance map of the object at a second scale greater than the first scale differ by a constant value over a domain of the distance map of the object at the first scale. A distance map of a model object is compared to a distance map of a target object using a similarity score that is invariant to an offset.
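A distance map and an offset-invariant comparison can be sketched in a few lines. The sketch assumes a Manhattan (city-block) distance transform via the classic two-pass algorithm, and removes each map's mean before comparing so that maps differing only by a constant offset score a perfect match; the function names are illustrative:

```python
def distance_map(mask):
    """Manhattan distance of each foreground pixel to the nearest background
    pixel, via the classic forward/backward two-pass transform."""
    h, w = len(mask), len(mask[0])
    INF = h + w
    d = [[INF if mask[r][c] else 0 for c in range(w)] for r in range(h)]
    for r in range(h):                       # forward pass
        for c in range(w):
            if r > 0: d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0: d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    for r in range(h - 1, -1, -1):           # backward pass
        for c in range(w - 1, -1, -1):
            if r < h - 1: d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < w - 1: d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d

def offset_invariant_score(a, b):
    """Sum of absolute differences after removing each map's mean, so two
    maps that differ only by a constant offset score 0 (best match)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    return sum(abs((x - ma) - (y - mb)) for x, y in zip(fa, fb))

mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
```

Shifting every distance value by a constant, as happens between the two scales in the abstract, leaves the score at (numerically near) zero, which is the invariance the similarity score exploits.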
Systems and methods of extracting text from a digital image
A method of extracting text from a digital image is provided. The method includes receiving a digital image at an image processor, where the digital image includes a textual object and a graphical object. A mask is generated based on the digital image. The mask includes a pattern having a first pattern area associated with the textual object and a second pattern area associated with the graphical object. The mask is applied to the digital image, creating a transformed digital image. The transformed digital image includes a portion of the digital image associated with the textual object. Character recognition is performed on the portion of the transformed digital image associated with the textual object to create a recognized text output.
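The mask application step can be sketched as an elementwise keep-or-blank operation: pixels under the first (textual) pattern area survive, pixels under the second (graphical) area are zeroed. A minimal sketch with illustrative data:

```python
def apply_mask(image, mask):
    """Keep pixels where the mask marks the textual pattern area (1) and
    blank the rest, producing the transformed image passed on to OCR."""
    return [[px if m else 0 for px, m in zip(irow, mrow)]
            for irow, mrow in zip(image, mask)]

image = [[5, 6, 0, 9],     # left half: textual object, right half: graphic
         [7, 8, 0, 9]]
mask = [[1, 1, 0, 0],      # 1 = textual pattern area, 0 = graphical area
        [1, 1, 0, 0]]
```

The transformed image retains only the textual portion, so character recognition never sees the graphical object's pixels.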
LOCATION BASED OPTICAL CHARACTER RECOGNITION (OCR)
A method of adapting an optical character recognition (OCR) process, comprising: capturing an image of a text-presenting object by an imaging sensor of a mobile device; sending the image and a location of the mobile device to a main server; selecting a reference model from a plurality of reference models according to the location of the mobile device, wherein each of the plurality of reference models contains character recognition information associated with a location; and recognizing characters in the image using the selected reference model.
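Model selection by location can be sketched as a nearest-neighbour lookup over the reference models' associated locations. The model records and the squared-difference distance on latitude/longitude are illustrative assumptions (a real server would use a proper geodesic distance):

```python
def select_reference_model(models, location):
    """Pick the reference model whose associated location is closest to the
    reported location of the mobile device."""
    lat, lon = location
    return min(models,
               key=lambda m: (m["lat"] - lat) ** 2 + (m["lon"] - lon) ** 2)

# Hypothetical reference models, each tied to a location and script.
models = [
    {"name": "paris-fr", "lat": 48.86, "lon": 2.35, "charset": "Latin"},
    {"name": "tokyo-jp", "lat": 35.68, "lon": 139.69, "charset": "Kanji/Kana"},
]
```

A device reporting a position near Paris would have its image recognized with the Latin-script model, without the user choosing a language.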
Method for decomposing complex objects into simpler components
Method for decomposing a complexly shaped object in a data set, such as a geobody (31) in a seismic data volume, into component objects more representative of the true connectivity state of the system represented by the data set. The geobody is decomposed using a basis set of eigenvectors (33) of a connectivity matrix (32) describing the state of connectivity between voxels in the geobody. Lineal subspaces of the geobody in eigenvector space are associated with likely component objects (34), either by a human interpreter (342) cross-plotting (341) two or more eigenvectors, or in an automated manner in which a computer algorithm (344) detects the lineal subspaces and the clusters within them.
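The connectivity matrix and its dominant eigenvector can be sketched for a toy 2-D "geobody". This is an illustrative reduction of the patented workflow: it uses 4-neighbour connectivity, adds a self-connection per voxel (which keeps power iteration from oscillating on bipartite adjacency structure), and recovers only the leading eigenvector rather than the full basis set:

```python
def connectivity_matrix(voxels):
    """Binary matrix marking 4-neighbour connectivity between the voxels of
    a geobody given as (x, y) coordinates, with a self-connection on the
    diagonal to stabilise the power iteration below."""
    index = {v: i for i, v in enumerate(voxels)}
    n = len(voxels)
    C = [[0.0] * n for _ in range(n)]
    for i, (x, y) in enumerate(voxels):
        C[i][i] = 1.0
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in index:
                C[i][index[nb]] = 1.0
    return C

def leading_eigenvector(C, iters=200):
    """Power iteration: the dominant eigenvector of the connectivity matrix,
    whose large-magnitude entries flag the most strongly connected voxels."""
    n = len(C)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# A three-voxel chain: the middle voxel is the most connected.
C = connectivity_matrix([(0, 0), (1, 0), (2, 0)])
```

For the chain, the eigenvector weights the middle voxel by sqrt(2)/2 and the two ends equally by 1/2, so clusters in eigenvector space separate strongly connected cores from weakly attached appendages.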
Method and apparatus for fingerprint identification
The present disclosure relates to the field of biometric identification technologies and provides a method and an apparatus for fingerprint identification. The method includes: extracting minutiae of an input fingerprint image by using a statistical method; and performing fingerprint matching according to the extracted minutiae to obtain a fingerprint identification result. According to the method provided in embodiments of the present disclosure, the direction of a minutia is calculated by using statistical information, and a descriptor with statistical significance is added for the minutia; during the matching process, calculation of the similarity of minutiae by using the descriptor, and region matching by using information of the direction field and the gradient field of the overlapping region, are added. Instability and weak specificity in the expression of fingerprint characteristics in a conventional algorithm are thereby avoided, and the accuracy of fingerprint identification is improved.
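One way the descriptor and direction could feed a similarity score is sketched below. The combination (cosine similarity of descriptors, discounted by the angular difference of minutia directions) and all record fields are illustrative assumptions, not the disclosed algorithm:

```python
import math

def minutia_similarity(m1, m2, direction_weight=0.5):
    """Similarity between two minutiae: cosine similarity of their
    statistical descriptors, discounted by the difference of directions."""
    d1, d2 = m1["descriptor"], m2["descriptor"]
    dot = sum(a * b for a, b in zip(d1, d2))
    n1 = math.sqrt(sum(a * a for a in d1))
    n2 = math.sqrt(sum(b * b for b in d2))
    cos = dot / (n1 * n2)
    # Angular difference folded into [0, pi], normalised to [0, 1].
    diff = abs(m1["direction"] - m2["direction"]) % (2 * math.pi)
    diff = min(diff, 2 * math.pi - diff) / math.pi
    return cos * (1 - direction_weight * diff)

# Identical descriptors, one pair aligned and one pair opposed in direction.
m1 = {"descriptor": [1.0, 2.0, 3.0], "direction": 0.5}
m2 = {"descriptor": [1.0, 2.0, 3.0], "direction": 0.5 + math.pi}
```

Matched minutiae with consistent directions score near 1.0, while a flipped direction halves the score here, giving the matcher the direction sensitivity the abstract describes.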
METHOD AND APPARATUS FOR UPDATING A BACKGROUND MODEL USED FOR BACKGROUND SUBTRACTION OF AN IMAGE
There is provided a method and an apparatus for updating a background model used for background subtraction of an image. The method comprises: receiving an image (220), and classifying a region (226) in the image as foreground by performing background subtraction using a background model (240). The background model comprises a collection of background samples (248, 248b, 248c, 248d) for each pixel (228) in the image. The collection of background samples is arranged in a list of background images (242a, 242b, 242c, 242d). The method further comprises: replacing image contents in the region (226) of the image which is classified as foreground by image contents of a corresponding region (246) in a background image in the list, and adding the image to the list of background images by rearranging a collection of pointers (244a, 244b, 244c, 244d), each of which points to one of the background images in the list, such that one of the pointers instead points to the image.
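The pointer rearrangement amounts to a ring buffer: the patched image overwrites the oldest slot and only indices move, never pixel data. A minimal sketch, assuming the foreground pixels are patched from the oldest background image and that `images`/`pointers` are plain lists (the data layout is illustrative, not from the patent):

```python
def update_background_model(model, image, foreground_mask):
    """Patch foreground pixels from a background image in the list, then add
    the image to the list by rearranging pointers (a ring-buffer rotation)
    instead of copying image data between slots."""
    bg = model["images"][model["pointers"][0]]   # oldest background image
    patched = [[bg[r][c] if foreground_mask[r][c] else image[r][c]
                for c in range(len(image[0]))]
               for r in range(len(image))]
    oldest = model["pointers"].pop(0)   # retire the oldest pointer...
    model["images"][oldest] = patched   # ...reuse its image slot...
    model["pointers"].append(oldest)    # ...and mark it as the newest entry
    return model

model = {"images": [[[1, 1], [1, 1]], [[2, 2], [2, 2]]],
         "pointers": [0, 1]}           # slot 0 oldest, slot 1 newest
image = [[9, 9], [9, 9]]
mask = [[1, 0], [0, 0]]                # only pixel (0, 0) is foreground
```

After the update, the foreground pixel carries the old background value (so a passing object never pollutes the model), and the pointers have rotated so the patched image is the newest background sample.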