Patent classifications
G06V30/155
SCORING METHOD AND SYSTEM
A method to score a round of golf using a golf scoring system. The method includes capturing a photograph of a physical scorecard including handwritten text using a camera of a user device, wherein the physical scorecard includes handwritten characters disposed at least partially within rectilinear boxes. The method also includes accessing the photograph in an application of the user device. The method also includes identifying the rectilinear boxes by using a color contrast between the rectilinear boxes of the physical scorecard and a background color of the physical scorecard. The method also includes removing the rectilinear boxes. The method also includes after removing the rectilinear boxes, extracting at least the handwritten characters from the physical scorecard. The method also includes after extracting at least the handwritten characters, calculating a score of the round of golf using the extracted handwritten characters on the application.
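The box-removal step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a tiny grayscale image held as a 2D list (0 = handwriting ink, 128 = box-line color, 255 = background), and the run-length heuristic and thresholds are made up for the example.

```python
def remove_boxes(img, line_val=128, min_run=5):
    """Blank long horizontal/vertical runs of line-colored pixels,
    leaving handwritten ink intact (toy sketch of grid removal)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    # Horizontal lines: a long run of line-colored pixels within one row.
    for y in range(h):
        run = 0
        for x in range(w + 1):
            if x < w and img[y][x] == line_val:
                run += 1
            else:
                if run >= min_run:
                    for k in range(x - run, x):
                        out[y][k] = 255  # erase to background
                run = 0
    # Vertical lines: the same idea, column-wise.
    for x in range(w):
        run = 0
        for y in range(h + 1):
            if y < h and img[y][x] == line_val:
                run += 1
            else:
                if run >= min_run:
                    for k in range(y - run, y):
                        out[k][x] = 255
                run = 0
    return out
```

After this pass only the handwritten characters remain for extraction and score calculation.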
COMPUTER-IMPLEMENTED METHOD FOR EXTRACTING CONTENT FROM A PHYSICAL WRITING SURFACE
A computer-implemented method (300) for extracting content (302) from a physical writing surface (304), the method (300) comprising the steps of:
(a) receiving a reference frame (306) including image data relating to at least a portion of the physical writing surface (304), the image data including a set of data points;
(b) determining an extraction region (308), the extraction region (308) including a subset of the set of data points from which content (302) is to be extracted;
(c) extracting content (302) from the extraction region (308) and writing the content (302) to a display frame (394);
(d) receiving a subsequent frame (406) including subsequent image data relating to at least a portion of the physical writing surface (304), the subsequent image data including a subsequent set of data points;
(e) determining a subsequent extraction region (408), the subsequent extraction region (408) including a subset of the subsequent set of data points from which content (402) is to be extracted; and
(f) extracting subsequent content (402) from the subsequent extraction region (408) and writing the subsequent content (402) to the display frame (394).
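Steps (a) through (f) can be sketched as a loop over incoming frames, each contributing to one shared display frame. The region rule here (bounding box of non-background data points) and the pixel values are assumptions for illustration, not the claimed method.

```python
def extraction_region(frame, bg=255):
    """Step (b)/(e): bounding box (x0, y0, x1, y1) of non-background
    data points, or None if the frame is blank."""
    xs = [x for row in frame for x, v in enumerate(row) if v != bg]
    ys = [y for y, row in enumerate(frame) for v in row if v != bg]
    if not xs:
        return None
    return min(xs), min(ys), max(xs) + 1, max(ys) + 1

def extract_to_display(display, frame, region, bg=255):
    """Step (c)/(f): copy content from the extraction region into the
    display frame, accumulating across frames."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            if frame[y][x] != bg:
                display[y][x] = frame[y][x]
    return display

def process_stream(frames, width, height):
    """Steps (a)/(d): receive the reference frame and each subsequent frame."""
    display = [[255] * width for _ in range(height)]
    for frame in frames:
        region = extraction_region(frame)
        if region:
            extract_to_display(display, frame, region)
    return display
```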
Utilizing a machine learning model to predict metrics for an application development process
A device receives historical application creation data that includes data associated with creation of a plurality of applications, and processes the historical application creation data, with one or more data processing techniques, to generate processed historical application creation data. The device trains a machine learning model, with the processed historical application creation data, to generate a trained machine learning model, and receives new application data associated with a new application to be created. The device processes the new application data, with the trained machine learning model, to generate one or more predictions associated with the new application, and performs one or more actions based on the one or more predictions associated with the new application.
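A rough sketch of that pipeline follows. The abstract does not specify the processing techniques or the model, so min-max scaling stands in for the former and a 1-nearest-neighbour lookup for the latter; all field names (`features`, `effort_days`) are illustrative.

```python
def preprocess(records):
    """Toy 'data processing technique': scale every feature to [0, 1]
    across the historical application creation records."""
    cols = list(zip(*(r["features"] for r in records)))
    lo = [min(c) for c in cols]
    span = [(max(c) - min(c)) or 1 for c in cols]
    for r in records:
        r["scaled"] = [(v - a) / s for v, a, s in zip(r["features"], lo, span)]
    return records, lo, span

def predict(model, lo, span, new_features):
    """Toy 'trained model': predict a metric for a new application from
    its nearest historical neighbour in scaled feature space."""
    scaled = [(v - a) / s for v, a, s in zip(new_features, lo, span)]
    nearest = min(model, key=lambda r: sum((p - q) ** 2
                                           for p, q in zip(r["scaled"], scaled)))
    return nearest["effort_days"]
```

The predicted metric would then drive the "one or more actions" the abstract leaves open (e.g. staffing or scheduling decisions).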
Image processing apparatus, image processing method, and non-transitory computer-readable storage medium
An image processing apparatus includes a determination unit configured to determine a region of the image on which to perform character recognition processing, a decision unit configured to decide, based on a number of black pixels in contact with the region determined by the determination unit, whether to perform the character recognition processing on an expanded region obtained by expanding the region determined by the determination unit rather than on the region determined by the determination unit, and a character recognition unit configured to perform the character recognition processing on that region of the image decided by the decision unit.
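The expand-or-not decision can be sketched like this, assuming a binary image as a 2D list (0 = black, 255 = white) and a rectangular region; the contact count, threshold, and padding are illustrative stand-ins for the patent's unstated parameters.

```python
def boundary_black_count(img, rect):
    """Count black pixels in the one-pixel ring just outside the region,
    i.e. pixels 'in contact with' the determined region."""
    x0, y0, x1, y1 = rect
    h, w = len(img), len(img[0])
    count = 0
    for x in range(x0, x1):                 # rows just above and below
        for y in (y0 - 1, y1):
            if 0 <= y < h and img[y][x] == 0:
                count += 1
    for y in range(y0, y1):                 # columns just left and right
        for x in (x0 - 1, x1):
            if 0 <= x < w and img[y][x] == 0:
                count += 1
    return count

def choose_ocr_region(img, rect, threshold=1, pad=1):
    """Decision step: expand the region when enough black pixels touch
    its border (hypothetical rule), else keep it as determined."""
    if boundary_black_count(img, rect) >= threshold:
        x0, y0, x1, y1 = rect
        return (max(x0 - pad, 0), max(y0 - pad, 0),
                min(x1 + pad, len(img[0])), min(y1 + pad, len(img)))
    return rect
```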
Information processing apparatus and non-transitory computer readable medium storing program
An information processing apparatus includes a character recognition section that performs character recognition of an input image to output a character recognition result, a receiving section that receives an input of a character recognition result by a person on the input image, a detection section that detects a strikethrough from the input image, a matching section that matches the character recognition result output by the character recognition section with the character recognition result by the person, which is received by the receiving section, and a control section that performs control for causing the matching section to perform matching so as to obtain a final character recognition result based on a result of the matching, in a case where the detection section detects the strikethrough.
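A toy version of the detection and matching steps, under stated assumptions: a strikethrough is flagged when some row of a binary glyph (0 = ink) is mostly ink, and on a mismatch the human entry wins. Both heuristics are one plausible reading, not the patent's specified behavior.

```python
def detect_strikethrough(glyph, frac=0.8):
    """Toy detection section: a row that is mostly ink suggests a
    horizontal strikethrough through the character."""
    w = len(glyph[0])
    return any(sum(1 for v in row if v == 0) >= frac * w for row in glyph)

def final_result(ocr_text, human_text, struck):
    """Toy matching/control sections: matching is performed only when a
    strikethrough was detected; deferring to the human entry on mismatch
    is an assumed policy."""
    if struck and ocr_text != human_text:
        return human_text
    return ocr_text
```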
Optical character recognition support system
A computer-implemented method for increasing the recognition rate of an optical character recognition (OCR) system is provided. The method includes preprocessing by receiving an image and extracting all vertical lines from the image. The method includes adding vertical lines at character areas of the image, extracting all horizontal lines from the image, and creating an unlined image by removing all the vertical/horizontal lines from the image. The method further includes determining a border of a vertical direction of the unlined image based on the total of pixels of the rows in each column, and adding vertical/horizontal auxiliary lines between characters of the unlined image. The method also includes postprocessing by receiving garbled words of the OCR output, removing noise after morphological analysis, replacing garbled letters with correct ones based on frequent edit operations, and outputting the correct word, with the results of image distance calculations weighted based on machine learning.
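The column-projection step (totalling pixels per column of the unlined image to place auxiliary lines between characters) can be sketched as below; the representation (0 = ink, 255 = background) and the "empty column between inked columns" rule are assumptions for illustration.

```python
def column_ink_profile(unlined, ink=0):
    """Total ink pixels in each column of the unlined image."""
    return [sum(1 for row in unlined if row[x] == ink)
            for x in range(len(unlined[0]))]

def auxiliary_line_columns(profile):
    """Empty columns between the first and last inked columns: candidate
    positions for vertical auxiliary lines between characters."""
    inked = [x for x, v in enumerate(profile) if v]
    if not inked:
        return []
    return [x for x in range(inked[0], inked[-1] + 1) if profile[x] == 0]
```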
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
An information processing apparatus includes a processor configured to acquire, from a read image, a predetermined item, and a value corresponding to the item, the read image being obtained by reading a document and being subjected, prior to acquisition of the item and the value, to preprocessing and character recognition. Further, the processor is configured to, in response to not successfully acquiring at least one of the item and the value, change a setting on the preprocessing or a setting on the character recognition in accordance with the acquisition or non-acquisition state of the item and the value, and then perform the preprocessing or the character recognition.
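The retry behavior can be sketched as a small loop: when the item or the value is not acquired, a different setting is adjusted depending on which one failed, and recognition runs again. The setting names and the adjustment policy here are illustrative, not taken from the patent.

```python
def acquire(read_image, recognise, max_attempts=3):
    """Retry preprocessing/recognition settings until both the item and
    its value are acquired, or attempts run out (toy policy)."""
    cfg = {"threshold": 128, "segmentation": "block"}
    for _ in range(max_attempts):
        result = recognise(read_image, dict(cfg))
        if result.get("item") and result.get("value"):
            return result
        if not result.get("item"):        # item missing: change segmentation
            cfg["segmentation"] = "line"
        if not result.get("value"):       # value missing: lower the threshold
            cfg["threshold"] -= 20
    return None
```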
AUTOMATIC IMAGE FEATURE REMOVAL
Apparatus and methods are described including receiving, via a computer processor, at least one image of a portion of a subject's body. One or more features that are present within the image of the portion of the subject's body, and that were artificially added to the image subsequent to acquisition of the image, are identified. In response thereto, an output is generated on an output device.
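One toy way to flag artificially added features: raw acquisition data is noisy, so pixels sitting at exactly a saturated overlay value are unlikely to come from the acquisition itself. This heuristic is an assumption for illustration only, not the patent's identification method.

```python
def find_added_features(img, overlay_val=255):
    """Flag pixel coordinates whose value exactly matches the assumed
    burned-in overlay value (toy post-acquisition feature detector)."""
    return sorted((x, y) for y, row in enumerate(img)
                  for x, v in enumerate(row) if v == overlay_val)
```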
APPARATUS FOR PROCESSING IMAGE, STORAGE MEDIUM, AND IMAGE PROCESSING METHOD
In the present disclosure, a candidate area is determined based on a pixel having a specific color included in an input image, and an area is determined to be a processing target from the candidate area based on a pixel having a predetermined color different from the specific color included in the candidate area. Further, a second binary image in which a pixel corresponding to the pixel having the specific color is converted into a white pixel is generated by converting, in a first binary image obtained by the input image being binarized, a pixel that is included in the area determined to be the processing target and corresponds to the pixel having the specific color, into a white pixel.
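The second-binary-image step can be sketched as follows, assuming the first binary image as a 2D list (0 = black, 255 = white), a parallel map of source colors, and a rectangular processing-target area; the candidate-area determination is omitted and the area is passed in directly.

```python
def second_binary(first_binary, colour_map, target_area, specific):
    """Copy of the first binary image in which pixels inside the target
    area whose source colour was the specific colour become white."""
    out = [row[:] for row in first_binary]
    x0, y0, x1, y1 = target_area
    for y in range(y0, y1):
        for x in range(x0, x1):
            if colour_map[y][x] == specific:
                out[y][x] = 255  # convert to a white pixel
    return out
```

Pixels of the specific color outside the target area are left untouched, matching the abstract's restriction to the determined processing-target area.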