Patent classifications
G06K9/72
Address recognition apparatus, sorting apparatus, integrated address recognition apparatus and address recognition method
An address recognition apparatus has an address recognition section and a non-addressee determination section. The address recognition section acquires, based on an image of an object, address information described on the object. The non-addressee determination section determines, based on a result of comparing information relating to first address information (the address information acquired by the address recognition section at a desired timing) with information relating to second address information (the address information acquired by the address recognition section before the first address information), whether or not the first address information is a non-destination address, that is, not an address of an addressee.
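The comparison against previously recognized addresses can be sketched as follows. This is a minimal illustration, assuming a simple recurrence heuristic: an address that has already been recognized many times in the batch (e.g., a return address printed on every envelope) is flagged as a likely non-addressee. The threshold and normalization are invented for the example, not taken from the patent.

```python
from collections import Counter

def is_non_addressee(current_address: str, prior_addresses: list, threshold: int = 3) -> bool:
    """Flag the current (first) address as a likely non-destination address
    when it matches earlier (second) recognitions at least `threshold` times.
    The recurrence heuristic and threshold are illustrative assumptions."""
    counts = Counter(a.strip().lower() for a in prior_addresses)
    return counts[current_address.strip().lower()] >= threshold

# A sender's return address recurs across the batch; real destination
# addresses are mostly unique.
history = ["1 Sender Rd"] * 5 + ["9 Oak Ave", "3 Elm St"]
is_non_addressee("1 Sender Rd", history)  # recurring -> True
is_non_addressee("9 Oak Ave", history)    # seen once -> False
```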
CONTEXT-BASED AUTONOMOUS PERCEPTION
A method of performing context-based autonomous perception is provided. The method includes acquiring perception sensor data as an image by an autonomous perception system that includes a processing system coupled to a perception sensor system. Feature extraction is performed on the image by the autonomous perception system, identifying one or more features in the image. Contextual information associated with one or more conditions present upon acquiring the perception sensor data is determined. One or more labeled reference images are retrieved from at least one of: a contextually-indexed database, based on the contextual information; a feature-indexed database, based on at least one of the extracted features; and a combined contextually- and feature-indexed database. The image is parsed, and one or more semantic labels are transferred from the one or more labeled reference images to form a semantically labeled version of the image.
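The retrieval-and-label-transfer step above can be sketched as follows. All index keys, image names, and labels here are toy values invented for illustration; the real system would match parsed image regions to reference regions with a similarity metric rather than the majority vote used here.

```python
# Toy contextually-indexed and feature-indexed databases (invented values).
ctx_index = {("night", "rain"): ["ref_a"], ("day", "clear"): ["ref_b"]}
feat_index = {"lane_marking": ["ref_b", "ref_c"]}
ref_labels = {"ref_a": "wet road", "ref_b": "dry road", "ref_c": "dry road"}

def retrieve_references(context, features):
    """Union of reference images retrieved by acquisition context and by
    features extracted from the image."""
    refs = set(ctx_index.get(context, []))
    for f in features:
        refs |= set(feat_index.get(f, []))
    return refs

def label_image(context, features):
    """Transfer the majority label among retrieved references to the image
    (a stand-in for per-region semantic label transfer)."""
    labels = [ref_labels[r] for r in retrieve_references(context, features)]
    return max(set(labels), key=labels.count) if labels else None
```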
ELECTRONIC INFORMATION BOARD APPARATUS, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
An electronic information board apparatus includes: a guide generating unit configured to display a handwriting region on a screen; a coordinate detecting unit configured to detect coordinates of an indication body moving in the handwriting region on the screen; an image drawing unit configured to generate a stroke image based on the coordinates and display the generated stroke image in the handwriting region on a first layer of the screen; a character recognizing unit configured to execute character recognition based on an image hand-written inside the handwriting region and output text data; and a display superimposing unit configured to display the text data acquired from the character recognizing unit at approximately the same position as the hand-written image on the screen, on a second layer of the screen different from the first layer.
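The two-layer arrangement can be modeled minimally as below. The class and its layer numbering are hypothetical; `recognize` is an injected stand-in for a real handwriting recognizer, and the "position" of a stroke is approximated by its bounding-box corner.

```python
class BoardScreen:
    """Toy model of the layered display: layer 1 holds stroke images,
    layer 2 holds recognized text placed at roughly the stroke position."""

    def __init__(self, recognize):
        self.layers = {1: [], 2: []}   # first layer: strokes; second: text
        self.recognize = recognize     # stand-in for the character recognizer

    def draw_stroke(self, points):
        # Approximate the stroke position by its top-left bounding corner.
        pos = (min(x for x, _ in points), min(y for _, y in points))
        self.layers[1].append({"points": points, "pos": pos})
        # Recognized text is superimposed on a separate layer at ~same position.
        self.layers[2].append({"text": self.recognize(points), "pos": pos})
```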
Character recognition device, image display device, image retrieval device, character recognition method, and computer program product
According to an embodiment, a device includes a detector, a first recognizer, an estimator, a second recognizer, and an output unit. The detector is configured to detect a visible text area including a visible character from an image. The first recognizer is configured to perform character pattern recognition on the visible text area and calculate a recognition cost according to a likelihood of a character pattern. The estimator is configured to estimate a partially-hidden text area into which a hidden text area estimated to have a hidden character and the visible text area are integrated. The second recognizer is configured to calculate an integrated cost into which the calculated recognition cost and a linguistic cost, corresponding to a linguistic likelihood of a text that fits in the entire partially-hidden text area, are integrated. The output unit is configured to output a text selected or ranked based on the integrated cost.
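The cost integration and ranking can be sketched as below. The additive combination with weight `alpha`, and all candidate strings and cost values, are illustrative assumptions; the abstract only says the two costs are integrated and that lower-cost texts are preferred.

```python
def rank_candidates(candidates, recognition_cost, language_cost, alpha=1.0):
    """Rank candidate texts for a partially-hidden text area by integrated
    cost = pattern-recognition cost + alpha * linguistic cost (lower is
    better). The additive form and alpha are illustrative choices."""
    return sorted(candidates, key=lambda t: recognition_cost(t) + alpha * language_cost(t))

# Toy costs: "restaurxq" fits the visible pixels slightly better, but is
# linguistically implausible, so the integrated cost prefers "restaurant".
rec_cost = {"restaurant": 1.0, "restaurxq": 0.8}
lng_cost = {"restaurant": 0.2, "restaurxq": 5.0}
ranked = rank_candidates(["restaurxq", "restaurant"],
                         lambda t: rec_cost[t], lambda t: lng_cost[t])
```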
SYSTEMS AND METHODS FOR AUTHENTICATION BASED ON HUMAN TEETH PATTERN
An automated system and method authenticate one or more users by capturing one or more images of a set of teeth, obtaining a selected image from the one or more captured images, and extracting a portion of the selected image to obtain an extracted image. Each extracted image is converted into a grayscale image and stored in a database along with the username and the user keyword of the one or more users. A unique signature matrix and a pattern vector are generated by processing the grayscale image and stored in the database along with the username. At authentication time, one or more images comprising a set of teeth of at least one user are captured, the unique signature matrix obtained from them is compared with the set of unique signature matrices previously stored in the database, and at least one action is triggered based on the comparison.
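The enroll-and-compare flow can be sketched as below. The signature (row and column means of the grayscale matrix), the Euclidean distance, and the tolerance are all placeholder choices; the patent does not specify how the signature matrix is computed or compared.

```python
def signature(gray):
    """Toy signature vector: row means followed by column means of a
    grayscale image given as a list of rows. A stand-in for the real
    (unspecified) signature-matrix computation."""
    rows = [sum(r) / len(r) for r in gray]
    cols = [sum(c) / len(c) for c in zip(*gray)]
    return rows + cols

def authenticate(probe_gray, enrolled, tol=1.0):
    """Return the enrolled username whose stored signature is closest to
    the probe signature, or None if nothing is within `tol`
    (Euclidean distance; metric and tolerance are illustrative)."""
    probe = signature(probe_gray)
    best, best_d = None, tol
    for user, sig in enrolled.items():
        d = sum((a - b) ** 2 for a, b in zip(probe, sig)) ** 0.5
        if d < best_d:
            best, best_d = user, d
    return best
```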
Environment recognition system
Provided is a system capable of further reducing risks such as contact between a moving body, such as a vehicle, and a traffic participant present around the moving body. According to an environment recognition system (1) of the present invention, a database (10) stores a plurality of reference symbol strings, each describing the state of an environmental element constituting one of a plurality of scenes assumed to occur around the moving body. A first arithmetic processing element (11) detects a scene around the moving body and generates a symbol string describing the state of the environmental element constituting the detected scene. A second arithmetic processing element (12) evaluates the similarity between the symbol string and each of the plurality of reference symbol strings stored in the database (10).
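The similarity evaluation can be sketched as below. Normalized longest-common-subsequence length is one reasonable string-similarity choice, but it is an assumption: the abstract does not fix the metric, and the symbol alphabet used here is invented.

```python
def scene_similarity(sym, ref):
    """Similarity between two scene symbol strings as normalized
    longest-common-subsequence length (illustrative metric)."""
    m, n = len(sym), len(ref)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if sym[i] == ref[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / max(m, n)

def best_reference(sym, references):
    """Stored reference scene most similar to the detected scene."""
    return max(references, key=lambda r: scene_similarity(sym, r))

# Invented alphabet: P = pedestrian ahead, V = vehicle, C = crossing.
best_reference("PVC", ["QQQ", "PVC"])  # exact match wins
```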
Method and apparatus for searching an image, and computer-readable recording medium for executing the method
The present disclosure relates to a method and apparatus for searching an image, and to a computer-readable recording medium for executing the method. The apparatus obtains features of an input image, and obtains words that correspond to the respective features together with an adjacent word that is adjacent to those words. When a word is assigned to a first word cell among a plurality of word cells included in a visual feature space, an adjacent word is assigned to at least one second word cell adjacent to the first word cell; the plurality of word cells are assigned to different words, and at least one word within a predetermined distance from a given word is designated as its adjacent word. The apparatus is further configured to search for an image identical or similar to the input image based on information associated with a first group of images corresponding to the word and information associated with a second group of images corresponding to the adjacent word, the information on both groups of images being stored in a database.
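The word-cell-plus-adjacent-cell lookup can be sketched as below. The vocabulary, adjacency table, and image names are toy values; in practice adjacency would come from distances between cells in the visual feature space rather than an explicit table.

```python
# Toy visual vocabulary: each word cell maps to the images indexed under it.
index = {"w1": {"imgA"}, "w2": {"imgB"}, "w3": {"imgC"}}
# Explicit adjacency stands in for "within a predetermined distance".
adjacent = {"w1": ["w2"], "w2": ["w1", "w3"], "w3": ["w2"]}

def search(words):
    """Candidate images indexed under each query word or under any cell
    adjacent to it, so near-miss feature quantizations still match."""
    hits = set()
    for w in words:
        hits |= index.get(w, set())
        for a in adjacent.get(w, []):
            hits |= index.get(a, set())
    return hits
```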
Smart optical input/output (I/O) extension for context-dependent workflows
Systems, methods, and computer program products for smart, automated capture of textual information using optical sensors of a mobile device are disclosed. The capture and provision are context-aware: the system determines the context of the optical input and invokes a contextually-appropriate workflow based thereon. The techniques also provide the capability to normalize, correct, and/or validate the captured optical input and provide the corrected, normalized, or validated information to the contextually-appropriate workflow. Other information needed by the workflow and available to the mobile device's optical sensors may also be captured and provided, in a single automatic process. As a result, the overall process of capturing information from optical input using a mobile device, invoking an appropriate workflow, and providing captured information to the workflow is significantly simplified and improved in terms of accuracy of data transfer/entry, speed and efficiency of workflows, and user experience.
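The context-to-workflow dispatch can be sketched as below. The workflow names, the context keys, and the whitespace-stripping normalization are all invented for illustration; the abstract only specifies that captured input is normalized/corrected/validated and routed to a contextually-appropriate workflow.

```python
def invoke_workflow(captured_text, context, workflows, normalize=str.strip):
    """Normalize captured optical input and route it to the workflow
    registered for the detected context. `normalize` stands in for the
    fuller normalize/correct/validate step."""
    handler = workflows.get(context)
    if handler is None:
        raise KeyError(f"no workflow registered for context {context!r}")
    return handler(normalize(captured_text))

# Hypothetical workflow registry.
workflows = {
    "invoice": lambda text: ("file_invoice", text),
    "business_card": lambda text: ("add_contact", text),
}
```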
Gesture recognition using gesture elements
Aspects of the present disclosure provide a gesture recognition method and an apparatus for capturing gestures. The apparatus categorizes the raw data of a gesture into gesture elements, and utilizes the contextual dependency between the gesture elements to perform gesture recognition with a high degree of accuracy and a small data size. A gesture may be formed by a sequence of one or more gesture elements.
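The element-sequence idea can be sketched as below. Quantizing motion samples into four direction elements and matching the collapsed sequence against templates is a simple illustrative choice; the element alphabet, template table, and gesture name are invented.

```python
def to_elements(samples, quantize):
    """Collapse raw motion samples into a compact sequence of gesture
    elements, dropping consecutive repeats (the data-size reduction)."""
    elems = []
    for s in samples:
        e = quantize(s)
        if not elems or elems[-1] != e:
            elems.append(e)
    return elems

def recognize(samples, templates, quantize):
    """Match the element sequence against stored gesture templates;
    returns None when no template matches."""
    return templates.get(tuple(to_elements(samples, quantize)))

# Toy quantizer: unit motion vectors -> direction elements R/U/L/D.
quantize = lambda v: {(1, 0): "R", (0, 1): "U", (-1, 0): "L", (0, -1): "D"}[v]
templates = {("R", "U"): "swipe_right_then_up"}  # hypothetical gesture
```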
Capturing contextual information on a device
An approach is disclosed that captures, at a digital camera of a first information handling system, a digital image of a display of a second information handling system. The approach analyzes the captured digital image, the analysis resulting in identification of a network location that corresponds to the captured image. Data from the identified network location is then retrieved via a network connection from the first information handling system, and this data is displayed on a display accessible to the first information handling system.
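The capture-analyze-retrieve pipeline can be sketched as below. `decode_location` and `fetch` are injected stand-ins for a real image analyzer (e.g., QR/OCR decoding) and a real network client; both, and the example URL, are hypothetical.

```python
def fetch_contextual_data(image_bytes, decode_location, fetch):
    """Pipeline from the abstract: analyze a captured image of another
    device's display to identify a network location, then retrieve data
    from that location. Returns None when no location is found."""
    location = decode_location(image_bytes)  # stand-in for image analysis
    if location is None:
        return None
    return fetch(location)                   # stand-in for network retrieval
```

Injecting the analyzer and client keeps the sketch testable without camera or network hardware.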