Patent classifications
G06V10/23
PROGRAMMABLE OVERLAY FOR NEGOTIABLE INSTRUMENT ELECTRONIC IMAGE PROCESSING
Embodiments of the invention include systems, methods, and computer program products for generating an overlay of a highlight or mask of a negotiable instrument on a representative's display for keying the instrument. The system overlays and changes the display of a representative's computer screen in real time for improved keying of instruments by generating highlighting or masking of specified portions of the instrument. The invention generates a grid, including an X and Y axis, on the instrument and identifies the parameter coordinates for the various indicia on the instrument. The invention may contain code for highlighting or masking various indicia on the instrument using the parameter coordinates for mapping. A programmed overlay may be performed on an image of the instrument in real time as it is queued onto a representative's display. Upon completion of the representative's keying of the instrument, the overlay is removed for storage of the image of the instrument.
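A minimal sketch of the coordinate-mapped overlay described above, in Python with Pillow; the field names and grid coordinates are hypothetical, and this illustrates the highlight/mask idea rather than the patented implementation:

```python
from PIL import Image, ImageDraw

# Hypothetical parameter coordinates for indicia on the instrument,
# expressed on the X/Y grid as (left, top, right, bottom) boxes.
FIELD_COORDS = {
    "amount": (620, 180, 880, 230),
    "payee": (120, 120, 600, 160),
}

def overlay_fields(image_path, highlight=("amount",), mask=("payee",)):
    """Return a copy of the instrument image with highlights and masks drawn.

    The original image is left untouched, so it can be stored without the
    overlay once the representative finishes keying.
    """
    base = Image.open(image_path).convert("RGBA")
    layer = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(layer)
    for name in highlight:
        draw.rectangle(FIELD_COORDS[name], fill=(255, 255, 0, 96))  # translucent highlight
    for name in mask:
        draw.rectangle(FIELD_COORDS[name], fill=(0, 0, 0, 255))     # opaque mask
    return Image.alpha_composite(base, layer)
```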
INFORMATION PROCESSING DEVICE, COMPUTER READABLE MEDIUM AND INFORMATION PROCESSING METHOD
An information processing device includes a processor configured to collate images of plural positions on a surface of an object to be inspected with images of plural positions, which correspond to the plural positions on the surface of the object to be inspected, on a surface of at least one reference object, and to present the number of successful collations among the plural positions and identification information on a reference object that matches the object to be inspected at a position where the collation is successful.
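One plausible reading of the collation step, as a Python sketch with NumPy; the patch structure, similarity score, and threshold are assumptions for illustration:

```python
import numpy as np

def collate_positions(inspected_patches, references, threshold=0.9):
    """Collate patches from plural positions against reference objects.

    inspected_patches: one 2-D array per position on the inspected object.
    references: maps a reference-object ID to patches at the corresponding
    positions. Returns the number of successful collations and, for each
    successful position, the ID of the matching reference object.
    """
    matches = []
    for pos, patch in enumerate(inspected_patches):
        a = patch.astype(float).ravel()
        a = a - a.mean()
        best_id, best_score = None, -1.0
        for ref_id, ref_patches in references.items():
            b = ref_patches[pos].astype(float).ravel()
            b = b - b.mean()
            # Normalized correlation as a simple collation score.
            score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
            if score > best_score:
                best_id, best_score = ref_id, score
        if best_score >= threshold:
            matches.append((pos, best_id))
    return len(matches), matches
```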
INFORMATION EXTRACTION FROM IMAGES USING NEURAL NETWORK TECHNIQUES AND ANCHOR WORDS
Scene text information extraction of desired text information from an image can be performed and managed. An information management component (IMC) can determine an anchor word based on analysis of an image. To facilitate determining desired text information in the image, the IMC can re-orient the image to zero or substantially zero degrees if it determines that the orientation is skewed. The IMC can utilize a neural network to determine and apply bounding boxes to text strings in the image. Using a rules-based approach or machine learning techniques, employing a trained machine learning component, the IMC can utilize the anchor word along with inline grouping of textual information in the image, deep text recognition analysis, or bounding box prediction to determine or predict the desired text information in the image. The IMC can facilitate presenting the desired text information, anchor word, or other information obtained from the image in an editable format.
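As a rough sketch of the anchor-word and inline-grouping idea, here is a rules-based picker over OCR output; the tuple layout and tolerances are assumptions, not the IMC's actual interface:

```python
def find_value_for_anchor(ocr_boxes, anchor="Total", max_dy=10):
    """Return the text string on the same line as, and to the right of,
    the anchor word. ocr_boxes is a list of (text, x, y, w, h) tuples such
    as a bounding-box network might emit on a de-skewed image.
    """
    anchors = [b for b in ocr_boxes if b[0].lower() == anchor.lower()]
    if not anchors:
        return None
    _, ax, ay, aw, ah = anchors[0]
    # Inline grouping: same vertical band, positioned after the anchor.
    candidates = [b for b in ocr_boxes
                  if abs(b[2] - ay) <= max_dy and b[1] > ax + aw]
    if not candidates:
        return None
    return min(candidates, key=lambda b: b[1])[0]  # nearest box to the anchor
```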
IMAGE SENSOR FOR OPTICAL CODE RECOGNITION
A CMOS image sensor for a code reader in an optical code recognition system incorporates a digital processing circuit that applies a calculation process to the capture image data as said data is acquired by the sequential readout circuit of the sensor, in order to calculate from the capture image data a macro-image corresponding to location information of code(s) in the capture image, and to transmit this macro-image in the image frame following the capture image data, in the footer of the frame.
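A toy Python/NumPy rendering of the macro-image idea: summarize each tile of the captured frame with a flag for likely code content, then append the map after the capture data as a stand-in for the frame footer. Tile size and thresholds are illustrative assumptions:

```python
import numpy as np

def macro_image(frame, block=32, edge_thresh=40, density=0.2):
    """Compute a coarse code-location map from a grayscale capture.

    Tiles dense in strong horizontal gradients (typical of printed codes)
    become bright macro-pixels; the rest stay dark.
    """
    gx = np.abs(np.diff(frame.astype(np.int16), axis=1))  # horizontal gradients
    h, w = frame.shape
    macro = np.zeros((h // block, w // block), dtype=np.uint8)
    for i in range(macro.shape[0]):
        for j in range(macro.shape[1]):
            tile = gx[i * block:(i + 1) * block, j * block:(j + 1) * block]
            macro[i, j] = 255 if (tile > edge_thresh).mean() > density else 0
    return macro

def frame_with_footer(frame, macro):
    """Emit the capture data followed by the macro-image, mimicking
    transmission of the macro-image in the footer of the frame."""
    return np.concatenate([frame.ravel(), macro.ravel()])
```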
IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS
A processor determines a cause of an image defect based on a test image that is obtained through an image reading process performed on an output sheet output from an image forming device. The processor extracts a vertical stripe part extending along a sub scanning direction in the test image. Furthermore, the processor determines which of two predetermined types of cause candidates is the cause of the vertical stripe part, based on a distribution of a pixel value sequence along a main scanning direction that crosses the sub scanning direction, in a target part including the vertical stripe part in the test image.
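A small sketch of the main-scanning-direction analysis, assuming a grayscale test image as a NumPy array; the smoothing window, threshold, and polarity-to-cause mapping are illustrative assumptions rather than the patented criteria:

```python
import numpy as np

def find_vertical_stripes(test_image, window=31, dev_thresh=12):
    """Locate vertical-stripe defects and guess between two cause candidates.

    Averaging each column along the sub scanning direction yields a pixel
    value sequence along the main scanning direction; columns that deviate
    strongly from a smoothed baseline are stripe candidates, and the sign
    of the deviation picks one of the two cause candidates.
    """
    profile = test_image.astype(float).mean(axis=0)  # main-scan profile
    baseline = np.convolve(profile, np.ones(window) / window, mode="same")
    deviation = profile - baseline
    stripes = []
    for col in np.flatnonzero(np.abs(deviation) > dev_thresh):
        cause = "cause candidate A" if deviation[col] < 0 else "cause candidate B"
        stripes.append((int(col), cause))
    return stripes
```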
Systems and methods for augmenting a displayed document
In an aspect, a method includes: receiving, via a communications module and from a computing device, a signal comprising image data representing a first document; automatically analyzing text in the first document based on stored classification data to identify a first parameter from the text in the first document; comparing the first parameter to a second parameter, the second parameter being obtained from a data store and being associated with a second document; determining annotation data based on the comparison, the annotation data determined based on the first parameter and the second parameter; and providing a signal that includes an instruction to cause the annotation data to be overlaid on a display of the computing device, the instruction including marker data identifying a location associated with the first document and influencing a location of the annotation in the display.
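To make the comparison-and-annotation flow concrete, here is a hypothetical sketch; the regex, field names, and payload shape are invented for illustration and are not from the patent:

```python
import re

def build_annotation(document_text, stored_params):
    """Compare a parameter extracted from the displayed document against a
    stored parameter and return annotation data plus marker data.
    """
    match = re.search(r"Invoice\s+#?(\w+).*?Total[:\s]+\$?([\d.]+)",
                      document_text, re.S)
    if not match:
        return None
    doc_id, first_param = match.group(1), float(match.group(2))
    second_param = stored_params.get(doc_id)  # parameter from the data store
    if second_param is None:
        return None
    text = ("amounts match" if first_param == second_param
            else f"mismatch: stored {second_param}, shown {first_param}")
    return {
        "annotation": text,
        # Marker data identifying where in the document to overlay it.
        "marker": {"document": doc_id, "anchor_text": "Total"},
    }
```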
METHOD AND APPARATUS WITH OBJECT TRACKING USING DYNAMIC FIELD OF VIEW
A method with object tracking includes: determining a first target tracking state by tracking a target from a first image frame with a first field of view (FoV); determining a second FoV based on the first FoV and the first target tracking state; and generating a second target tracking result by tracking the target from a second image frame with the second FoV.
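A plausible FoV-update rule as a short sketch; the confidence and size heuristics and the numeric limits are assumptions, not the claimed method:

```python
def next_fov(first_fov, tracking_state):
    """Choose the second FoV (in degrees) from the first FoV and the first
    target tracking state: widen when the target is lost, narrow when it
    is confidently tracked but small in the frame.
    """
    confidence = tracking_state["confidence"]       # 0..1 tracker score
    target_area = tracking_state["bbox_area_frac"]  # fraction of the frame
    if confidence < 0.3:
        return min(first_fov * 1.5, 120.0)  # zoom out to reacquire the target
    if target_area < 0.05:
        return max(first_fov * 0.8, 10.0)   # zoom in to magnify a small target
    return first_fov
```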
Action-Object Recognition in Cluttered Video Scenes Using Text
A mechanism is provided to implement an action-object interaction detection mechanism for recognizing actions in cluttered video scenes. An object bounding box is computed around an object of interest identified in a corresponding label, in an initial frame where the object of interest appears. The object bounding box is propagated from the initial frame to a subsequent frame. For the initial frame and the subsequent frame, the object bounding boxes are refined and the frames are cropped based on the associated refined object bounding boxes. The set of cropped frames is processed to determine a probability that an action that is to be verified from the corresponding label is being performed. Responsive to determining that the probability equals or exceeds a verification threshold, a confirmation is provided that the action-object interaction video performs the action that is to be verified.
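The propagate/refine/crop loop might look like the following sketch, where `propagate` and `refine` stand in for a tracker and a detector and the padding margin is an assumption:

```python
def crop_frames(frames, initial_box, propagate, refine, pad=0.1):
    """Propagate an object bounding box from the initial frame onward,
    refine it per frame, and crop each frame around the refined box.
    Boxes are (x, y, w, h); frames are NumPy-style arrays.
    """
    crops, box = [], initial_box
    for i, frame in enumerate(frames):
        if i > 0:
            box = propagate(frames[i - 1], frame, box)  # carry box forward
        box = refine(frame, box)                        # tighten around object
        x, y, w, h = box
        dx, dy = int(w * pad), int(h * pad)             # small context margin
        crops.append(frame[max(0, y - dy): y + h + dy,
                           max(0, x - dx): x + w + dx])
    return crops  # fed to the action classifier for verification
```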
METHODS, SYSTEMS, ARTICLES OF MANUFACTURE, AND APPARATUS TO RECALIBRATE CONFIDENCES FOR IMAGE CLASSIFICATION
Methods, systems, articles of manufacture, and apparatus to recalibrate confidences for image classification are disclosed. An example apparatus to classify an image includes an image crop detector to detect a first image crop from the image, the first image crop corresponding to a first object, a grouping controller to select a second image crop corresponding to a second object at a location of the first object, a prediction generator to, in response to executing a trained model, determine a label corresponding to the first object and a confidence level associated with the label, and a confidence recalibrator to recalibrate the confidence level based on a probability of the first object having a first attribute based on the second object having a second attribute, the confidence level recalibrated to increase an accuracy of the image classification.
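One simple way to realize the recalibration step is a contextual blend; the linear rule and weight below are illustrative assumptions, not the disclosed formula:

```python
def recalibrate(confidence, p_attr_given_context, prior_weight=0.3):
    """Recalibrate a label confidence using the probability that the first
    object has its attribute given the second object's attribute (e.g. a
    'ketchup' crop is more plausible on a shelf of condiments).
    """
    return (1 - prior_weight) * confidence + prior_weight * p_attr_given_context

# Example: a 0.55 raw confidence with strong contextual support (0.9)
# is lifted to 0.655.
assert abs(recalibrate(0.55, 0.9) - 0.655) < 1e-9
```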
MONITORING DEVICE, MONITORING SYSTEM, METHOD, COMPUTER PROGRAM AND MACHINE-READABLE STORAGE MEDIUM
The invention relates to a monitoring device (10) for recognizing persons in a monitoring region (2), the monitoring region (2) being video-monitored by means of at least one camera (6) and the camera (6) being designed to provide monitoring images (7) to the monitoring device (10) as video data, the monitoring device comprising: —a feature determination apparatus (13), the feature determination apparatus (13) being designed to determine a feature vector (19) for each object in at least one of the monitoring images (7); —a person recognition apparatus (16), the person recognition apparatus (16) being designed to detect in the monitoring images (7) a person to be recognized (11), on the basis of the determined feature vector and/or the determined feature vectors (19) of the feature determination apparatus (13) and/or a combined feature vector (18); —an association apparatus (14), the association apparatus (14) being designed to determine a feature vector (19) for each person to be recognized (11) and each associated environment object of the person to be recognized (11), the association apparatus (14) being designed to determine the combined feature vector (18) on the basis of the feature vector (19) of the person to be recognized (11) and the feature vector or the feature vectors (20) of the associated environment objects.