
Detecting a label from an image

Determining a label from an image is disclosed, including: obtaining an image; determining a first portion of the image associated with a special mark; determining a second portion of the image associated with a label based at least in part on the first portion of the image associated with the special mark; and applying character recognition to the second portion of the image associated with the label to determine a value associated with the label.
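
The abstract describes a three-step flow: locate a special mark, derive the label region from the mark's position, then run character recognition on that region. A minimal sketch of that flow follows, assuming the label sits at a fixed offset from the mark; the offset, the region sizes, and `recognize_characters()` (a stand-in for a real OCR engine) are all hypothetical, not taken from the patent.

```python
def find_special_mark(image):
    """Return the bounding box (row, col, h, w) of the first '#' marker."""
    for r, row in enumerate(image):
        for c, ch in enumerate(row):
            if ch == "#":
                return (r, c, 1, 1)
    return None

def label_region_from_mark(mark_box, offset=(0, 2), size=(1, 5)):
    """Assume the label lies at a fixed offset to the right of the mark."""
    r, c, _, _ = mark_box
    dr, dc = offset
    h, w = size
    return (r + dr, c + dc, h, w)

def recognize_characters(image, box):
    """Stand-in for OCR: read the characters inside the box."""
    r, c, h, w = box
    return "".join(image[rr][c:c + w] for rr in range(r, r + h)).strip()

def detect_label(image):
    """Mark -> label region -> character recognition, per the abstract."""
    mark = find_special_mark(image)
    if mark is None:
        return None
    return recognize_characters(image, label_region_from_mark(mark))
```

For example, `detect_label(["....# AB123"])` locates the `#` mark, derives the label region beside it, and reads back `"AB123"`.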

INTERACTIVE VIRTUAL AQUARIUM SIMULATION SYSTEM AND ASSOCIATED METHODS
20170270712 · 2017-09-21

An interactive virtual aquarium simulation system includes a two-dimensional (2D) fish image having a unique identifier associated therewith, with the unique identifier corresponding to predefined fish movements. A scanner scans the 2D fish image and converts it to a digital image. A three-dimensional (3D) mapping processor is coupled to the scanner to generate a 3D fish image based on the digital image. A virtual simulation processor is coupled to the 3D mapping processor to generate simulation video of a virtual aquarium including a plurality of fish and the 3D fish image. The 3D fish image swims within the virtual aquarium based on the predefined fish movements. The simulation video of the virtual aquarium with the 3D fish image is provided to a display.
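
The pipeline (scan, 3D mapping, movement lookup, simulation) can be sketched end to end; every function name here is an illustrative stub, not part of the patent:

```python
def simulate_aquarium(fish_drawing, scan, map_to_3d, lookup_movements):
    """Hypothetical pipeline from a scanned 2D drawing to simulation data."""
    digital = scan(fish_drawing)                  # scanner -> digital image
    fish_3d = map_to_3d(digital)                  # 3D mapping processor
    moves = lookup_movements(fish_drawing["id"])  # identifier -> movements
    return {"fish": fish_3d, "movements": moves}  # fed to the display
```

A caller would supply the scanner, mapper, and movement catalogue as callables, e.g. `lookup_movements=lambda uid: ["dart", "hover"]`.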

Chinese, Japanese, or Korean language detection

Disclosed are systems, computer-readable mediums, and methods for determining that text contains Chinese, Japanese, or Korean characters. One method includes determining a language hypothesis for each text fragment in a plurality of text fragments identified from connected components in a document image. The method further includes selecting a first subset of text fragments from the plurality of text fragments based on ratings for the language hypothesis of each text fragment in the plurality of text fragments. The method further includes verifying, by a processor, the language hypothesis of one or more text fragments in the first subset of text fragments based on optical character recognition of the one or more text fragments. The method further includes determining, by the processor, that Chinese, Japanese, or Korean (CJK) characters are present in the document image based on the verification of the language hypothesis of each of the one or more text fragments.
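
The method rates a language hypothesis per fragment, selects a top-rated subset, and verifies that subset with OCR. A sketch under stated assumptions: the rating here is simply the fraction of code points in the CJK Unified Ideographs block, `top_k` and `threshold` are illustrative, and `ocr_verify` is a caller-supplied stand-in for the OCR step.

```python
def cjk_ratio(fragment):
    """Hypothetical rating: fraction of code points in the CJK Unified block."""
    if not fragment:
        return 0.0
    return sum(0x4E00 <= ord(ch) <= 0x9FFF for ch in fragment) / len(fragment)

def detect_cjk(fragments, ocr_verify, top_k=3, threshold=0.5):
    # 1. rate a CJK hypothesis for every fragment
    rated = sorted(fragments, key=cjk_ratio, reverse=True)
    # 2. select the top-rated subset
    subset = [f for f in rated[:top_k] if cjk_ratio(f) >= threshold]
    # 3. verify the hypothesis on the subset via (stubbed) OCR
    return any(ocr_verify(f) for f in subset)
```

For instance, `detect_cjk(["hello", "漢字"], verify)` rates "漢字" highest and confirms it, while a purely Latin input yields an empty verified subset.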

Drawing apparatus, drawing method, and recording medium for use in displaying a character defined in a predetermined outline font format
09767376 · 2017-09-19

A drawing apparatus that displays a character rendered in an outline method includes a number-of-commands identification unit configured to identify a number of drawing commands required for the character based on outline data that corresponds to a shape of the character, a level determination unit configured to determine a level of an antialiasing process to be performed on the character based on the number of the drawing commands found by the number-of-commands identification unit, and a drawing unit configured to execute the antialiasing process of the level determined for the character by the level determination unit, when the character is drawn based on the outline data of the character.
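
The level-determination step maps a drawing-command count to an antialiasing level. The abstract does not disclose the mapping, so the policy below is purely illustrative: it assumes glyphs with many outline commands get a lower multisampling level to bound rendering cost, and both the thresholds and the direction of the mapping are assumptions.

```python
def antialias_level(num_commands, thresholds=(50, 200)):
    """Hypothetical policy: more outline commands -> lower AA level.

    Levels are illustrative multisampling factors (4x, 2x, 1x).
    """
    low, high = thresholds
    if num_commands < low:
        return 4  # simple glyph: afford heavy antialiasing
    if num_commands < high:
        return 2
    return 1      # complex glyph: keep antialiasing cheap
```

The drawing unit would then render the glyph's outline data at the level this function returns.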

Delivery processing apparatus and method for recognizing information provided on delivery target item

A delivery processing apparatus has a recognition unit and a determination unit. The recognition unit executes recognition processing to recognize information provided on a delivery target item, based on an image obtained by shooting the delivery target item. The determination unit determines whether or not to extend the recognition processing based on two factors: the degree of progress that the recognition unit has made in the period from when the recognition processing started until a predetermined time elapsed, and an extension rate, which is the ratio of the number of times the recognition unit has extended the recognition processing to the number of times it has performed the recognition processing.
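
The determination unit's two-factor decision can be sketched as a small predicate. The thresholds, and the assumption that extension is granted only when progress is substantial and the historical extension rate stays under a cap, are illustrative; the abstract does not specify the decision rule.

```python
def should_extend(progress, extensions, runs,
                  min_progress=0.5, max_rate=0.2):
    """Hypothetical extension decision for recognition processing.

    progress   -- fraction of recognition completed when time expired (0..1)
    extensions -- number of times processing has previously been extended
    runs       -- number of times recognition processing has been performed
    """
    rate = extensions / runs if runs else 0.0  # the patent's "extension rate"
    return progress >= min_progress and rate < max_rate
```

Under these assumed thresholds, a nearly finished job with a low historical extension rate is extended; a stalled job, or a system that already extends too often, is not.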

Global geographic information retrieval, validation, and normalization

According to one embodiment, a computer-implemented method includes: capturing an image of a document using a camera of a mobile device; performing optical character recognition (OCR) on the image of the document; extracting an identifier of the document from the image based at least in part on the OCR; comparing the identifier with content from one or more reference data sources, wherein the content from the one or more reference data sources comprises global address information; and determining whether the identifier is valid based at least in part on the comparison. The method may optionally include normalizing the extracted identifier, retrieving additional geographic information, correcting OCR errors, etc. based on comparing extracted information with reference content. Corresponding systems and computer program products are also disclosed.
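
The compare-and-validate step (after OCR has produced an identifier) reduces to normalizing the extracted string and checking it against reference content. A minimal sketch, with the normalization rule (uppercase, alphanumerics only) chosen for illustration rather than taken from the patent:

```python
def normalize(identifier):
    """Illustrative normalization: uppercase and drop non-alphanumerics."""
    return "".join(ch for ch in identifier.upper() if ch.isalnum())

def validate_identifier(ocr_text, reference_entries):
    """Return True if the OCR-extracted identifier matches reference data."""
    return normalize(ocr_text) in {normalize(r) for r in reference_entries}
```

Normalizing both sides lets a noisy OCR result like `"ab-123"` still match a reference entry `"AB123"`; the same comparison could drive the optional OCR-error-correction step the abstract mentions.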

Non-transitory computer readable medium, information processing apparatus, and information processing method setting character recognition accuracy
09766840 · 2017-09-19

A non-transitory computer readable medium stores a program causing a computer to execute a process for information processing. The process includes determining a risk of information leakage by a user who has requested image processing, and controlling character recognition processing performed on the image subjected to the image processing, such that the recognition accuracy of the character recognition processing increases as the determined risk of information leakage increases.
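
The core relationship is a monotone mapping from leak risk to target recognition accuracy. The linear form and the `base`/`ceiling` values below are assumptions for illustration; the abstract only requires that accuracy increase with risk.

```python
def recognition_accuracy(risk, base=0.90, ceiling=0.999):
    """Map a leak-risk score in [0, 1] to a target OCR accuracy.

    Monotone increasing, per the abstract; the linear shape is assumed.
    """
    risk = min(max(risk, 0.0), 1.0)  # clamp out-of-range risk scores
    return base + (ceiling - base) * risk
```

The character recognition component would then pick a processing mode (e.g. more expensive models or extra passes) that meets the returned accuracy target.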

CARTRIDGE COMPRISING AN AUTO-DESTRUCT FEATURE

A cartridge including a carrier that stores an authenticity verification code readable by a scan device other than a host device, and an auto-destruct feature that renders the authenticity verification code unreadable at installation.

Mobile document detection and orientation based on reference object characteristics

In various embodiments, methods, systems, and computer program products for detecting, estimating, calculating, etc. characteristics of a document based on reference objects depicted on the document are disclosed. In one approach, a computer-implemented method for processing a digital image depicting a document includes analyzing the digital image to determine one or more of a presence and a location of one or more reference objects; determining one or more geometric characteristics of at least one of the reference objects; defining one or more region(s) of interest based at least in part on one or more of the determined geometric characteristics; and detecting a presence or an absence of an edge of the document within each defined region of interest. Additional embodiments leverage the type of document depicted in the image, multiple frames of image data, and/or calculate or extrapolate document edges rather than locating edges in the image.
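
The approach defines regions of interest from a reference object's geometry and then tests each region for a document edge. A sketch under stated assumptions: the ROI is a fixed margin around the reference object's bounding box, and the edge test is a simple adjacent-pixel intensity jump along a scan line; both choices, and the threshold, are illustrative.

```python
def roi_from_reference(box, margin=5):
    """Grow a reference object's bounding box (row, col, h, w) by a margin."""
    r, c, h, w = box
    return (r - margin, c - margin, h + 2 * margin, w + 2 * margin)

def has_edge(scanline, threshold=50):
    """Detect an edge as any adjacent-pixel intensity jump above threshold."""
    return any(abs(b - a) > threshold for a, b in zip(scanline, scanline[1:]))
```

Here `roi_from_reference((10, 10, 4, 6), margin=2)` yields `(8, 8, 8, 10)`, and a scan line crossing a paper-to-background boundary, such as `[10, 12, 200, 202]`, registers an edge while a flat one does not.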

ELECTRONIC DEVICE
20170255352 · 2017-09-07

At least one processor extracts one or more characters included in an image presented on a display, without requiring a user operation, and stores character information for the extracted characters in a memory. The at least one processor then transfers the extracted characters to a location specified by the user, based on the character information stored in the memory.