Patent classifications
G06K9/34
MOTION-BASED IMAGE SEGMENTATION SYSTEMS AND METHODS
Methods and systems are disclosed for segmenting an image. First and second frames of image data are generated at different times. A first portion of the first frame is compared to image data of the second frame, and a second portion of the second frame is selected based on the comparison. A displacement vector between the first portion and the second portion is calculated, where the displacement vector represents relative movement over time between the image data represented by the first portion and the image data represented by the second portion. An image is output with an indicator, and the location of the indicator on the image is determined by using the calculated displacement vector. The indicator can serve to distinguish between items in an imaging view.
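The comparison and displacement-vector steps described above can be sketched as a simple block-matching search. This is a minimal illustration, not the patented method: the matching criterion (sum of absolute differences), the exhaustive search window, and all function names are assumptions for the sketch.

```python
# Illustrative block matching between two frames captured at different
# times.  Frames are 2D lists of intensities; the SAD criterion and the
# search window are assumed details, not taken from the abstract.

def sad(frame, top, left, patch):
    """Sum of absolute differences between a patch and a frame region."""
    h, w = len(patch), len(patch[0])
    return sum(abs(frame[top + i][left + j] - patch[i][j])
               for i in range(h) for j in range(w))

def displacement_vector(frame1, frame2, top, left, size, search=4):
    """Take the size x size portion of frame1 at (top, left), find the
    best-matching portion of frame2 within +/- search pixels, and return
    the (row, column) displacement between the two portions."""
    patch = [row[left:left + size] for row in frame1[top:top + size]]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if 0 <= ty <= len(frame2) - size and 0 <= tx <= len(frame2[0]) - size:
                cost = sad(frame2, ty, tx, patch)
                if best is None or cost < best[0]:
                    best = (cost, (dy, dx))
    return best[1]
```

The returned vector could then place an indicator (e.g., a moving-object marker) at the displaced location in the output image.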
SYSTEM AND METHOD FOR DETECTING FORGERIES
A document forgery detection method comprising using at least one processor for: providing at least one histogram of gray level values occurring in at least a portion of at least one channel of an image assumed to represent a document including text, the histogram having been generated by image processing of that portion of the image, the image having been sent by a remote end user to an online service over a computer network; evaluating monotony of at least a portion of the at least one histogram; and determining whether the image is authentic or forged based on at least one output of the evaluating.
VISION-AIDED AERIAL NAVIGATION
An aerial vehicle is navigated using vision-aided navigation that classifies regions of acquired still image frames as featureless or feature-rich, thereby avoiding the expenditure of time and computational resources on attempts to extract and match false features from the featureless regions. The classification may be performed by computing a texture metric, for example by testing the widths of peaks of the autocorrelation function of a region against a threshold, which may be an adaptive threshold, or by using a model trained with a machine learning method applied to a training dataset comprising training images of featureless and feature-rich regions. Such a machine learning method can use a support vector machine. The resultant matched feature observations can be data-fused with other sensor data to correct a navigation solution based on GPS and/or IMU data.
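The autocorrelation-peak-width test can be illustrated in one dimension. This is a sketch under assumptions: the half-maximum crossing as the peak-width measure, a fixed (non-adaptive) threshold, and a single image row standing in for a 2D region are all simplifications.

```python
def autocorr(signal):
    """Normalized autocorrelation of a zero-mean copy of the signal."""
    n = len(signal)
    mean = sum(signal) / n
    s = [x - mean for x in signal]
    denom = sum(x * x for x in s) or 1.0
    return [sum(s[i] * s[i + lag] for i in range(n - lag)) / denom
            for lag in range(n)]

def peak_width(ac, level=0.5):
    """Lag at which the central autocorrelation peak first drops below
    `level`.  A wide peak indicates a smooth, featureless region; a
    narrow peak indicates feature-rich texture."""
    for lag, v in enumerate(ac):
        if v < level:
            return lag
    return len(ac)

def is_featureless(region_row, width_threshold=3):
    """Classify a region (here, one row) by testing the peak width
    against a threshold.  The threshold value is assumed; the patent
    allows it to be adaptive."""
    return peak_width(autocorr(region_row)) >= width_threshold
```

A slowly varying ramp (smooth terrain) yields a wide peak and is classified featureless; a rapidly alternating signal (strong texture) decorrelates after one lag.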
A SUPPLEMENTARY DEVICE FOR ATTACHMENT TO A DRUG INJECTION DEVICE FOR MONITORING INJECTION DOSES HAVING OCR IMAGING SYSTEM WITH GLARE REDUCTION
The present disclosure relates to a supplementary device for attachment to an injection device including an imaging arrangement configured to capture an image of a moveable number sleeve of the injection device, a plurality of light sources, and a processor arrangement configured to control operation of the imaging arrangement and the plurality of light sources and to receive image data from the imaging arrangement. In some instances, the processor arrangement is configured to activate the plurality of light sources sequentially and to combine multiple images captured by the imaging arrangement under different illumination conditions into a single image.
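One plausible way to combine the sequentially illuminated captures for glare reduction is a per-pixel minimum: specular glare saturates different pixels under each light source, so the minimum suppresses it. The abstract only states that the images are combined; the minimum rule here is an assumption for illustration.

```python
def combine_min(images):
    """Combine same-scene images captured under different light sources
    by taking the per-pixel minimum intensity.  Glare highlights move
    with the active light source, so each glare spot is replaced by the
    unglared value from another capture.  (Assumed combination rule.)"""
    return [[min(img[r][c] for img in images)
             for c in range(len(images[0][0]))]
            for r in range(len(images[0]))]
```

The combined, glare-free image would then be passed to the OCR stage that reads the number sleeve.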
Scanning camera-based video surveillance system
A video surveillance system may include at least one sensing unit capable of being operated in a scanning mode and a video processing unit coupled to the sensing unit, the video processing unit to receive and process image data from the sensing unit and to detect scene events and target activity.
Image processing apparatus, method, and storage medium
A binary image of an input image is generated, and a character region within the binary image and a region surrounding each character are acquired as character segmentation rectangle information. A thinning process is executed on a region within the binary image which is identified based on the character segmentation rectangle information to acquire a thinned image. An edge detected image of the region identified based on the character segmentation rectangle information is acquired. Whether each character identified based on the character segmentation rectangle information is a character to be separated from the background by the binarization is determined based on the result of a logical AND of the thinned image and the edge detected image.
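The final decision step can be sketched once the thinned image and the edge image are available as binary maps. A cleanly binarized character's skeleton runs through stroke interiors and rarely lands on edge pixels, so a large AND overlap suggests noise rather than a true character. The overlap-ratio decision rule and its threshold are illustrative assumptions; the abstract only says the determination is based on the logical AND.

```python
def logical_and_count(thinned, edges):
    """Count pixels set in both binary images (the logical AND)."""
    return sum(t & e for tr, er in zip(thinned, edges) for t, e in zip(tr, er))

def is_true_character(thinned, edges, max_overlap_ratio=0.2):
    """Accept the character when only a small fraction of its skeleton
    pixels coincide with detected edges.  Threshold is an assumed value."""
    skeleton_pixels = sum(sum(row) for row in thinned) or 1
    return logical_and_count(thinned, edges) / skeleton_pixels <= max_overlap_ratio
```

For a vertical stroke, the skeleton sits between the two detected edges (zero overlap, accepted); a skeleton that coincides with the edges entirely is rejected.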
CHARACTER INFORMATION RECOGNITION METHOD BASED ON IMAGE PROCESSING
The present invention relates to a character information recognition method based on image processing. The method comprises: collecting images to obtain a target character image; sequentially comparing the target character image with character template images in a character template library to find the maximum coincidence area between the character in the target character image and the character templates in the character template images; and, when the coincidence area meets a preset condition, determining that the target character to be recognized is the character in the corresponding character template image. The character templates are designed to include not only a coincidence-permitted region but also a coincidence-restricted region. Because the coincidence-restricted region is set, direct comparison and matching against the character templates can be carried out more accurately, thereby improving the recognition speed.
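The permitted/restricted template idea can be sketched with binary images: ink falling in the permitted region raises the score, while ink in the restricted region is penalized, which quickly separates easily confused characters (such as "I" and "-"). The scoring weights and the preset condition (a minimum score) are assumptions for the sketch.

```python
def match_score(image, permitted, restricted):
    """Score a binary character image against a template consisting of a
    coincidence-permitted region and a coincidence-restricted region.
    +1 per ink pixel in the permitted region, -2 per ink pixel in the
    restricted region (illustrative weights)."""
    score = 0
    for img_row, ok_row, bad_row in zip(image, permitted, restricted):
        for px, ok, bad in zip(img_row, ok_row, bad_row):
            if px:
                score += 1 if ok else 0
                score -= 2 if bad else 0
    return score

def recognize(image, templates, min_score):
    """Compare against every template; return the best-scoring character
    if it meets the preset condition, else None."""
    best_name, best = None, min_score - 1
    for name, (permitted, restricted) in templates.items():
        s = match_score(image, permitted, restricted)
        if s > best:
            best_name, best = name, s
    return best_name
```

With 3x3 templates, a vertical bar scores high against "I" (all ink in the permitted column) but is heavily penalized by the restricted region of "-".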
SYSTEM AND METHOD FOR INTELLIGENT RECEIPT PROCESSING
A system and method for document management, such as for receipts, includes a device having a processor and associated memory, a wireless data interface, and a digital imager. The device, acting in connection with a touchscreen display, is configured to selectively generate image data corresponding to captured images of associated receipts. Price data is extracted from multiple areas of the image data, and an image of the receipts is generated on the touchscreen display. The processor determines the position of the price data on the image and highlights at least one user-selectable portion of the image on the touchscreen display. Aggregate costs are calculated and displayed in accordance with user selection.
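The price-extraction and aggregation steps can be sketched on OCR'd receipt text. Here the line index stands in for the on-image position used to place selectable highlights; the regular expression and function names are assumptions for illustration, not the patented pipeline.

```python
import re

# Matches amounts like "$3.50" or "0.28" (assumed price format).
PRICE = re.compile(r"\$?(\d+\.\d{2})\b")

def extract_prices(ocr_lines):
    """Pull candidate price values, with their line positions, out of
    OCR'd receipt text.  Position is a stand-in for the on-image
    location of the price data."""
    found = []
    for lineno, text in enumerate(ocr_lines):
        for m in PRICE.finditer(text):
            found.append((lineno, float(m.group(1))))
    return found

def aggregate(prices, selected_lines):
    """Sum only the entries the user selected on the touchscreen."""
    return sum(v for lineno, v in prices if lineno in selected_lines)
```

A user tapping the coffee and tax lines, but not the printed total, would see the aggregate of just those two entries.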
FORM RECOGNITION METHOD, FORM RECOGNITION DEVICE, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM
A method includes: extracting a first line segment pair including a combination of line segments selected from a plurality of line segments included in an image of a form to be recognized; calculating a first feature amount which represents a relationship between the line segments in the extracted first line segment pair; extracting a candidate for a form identifier of the form to be recognized based at least on the calculated first feature amount and a second feature amount of line segments in a second line segment pair correlated with a form identifier which is registered in advance; extracting corresponding line segment pairs which include a line segment correlated with the candidate for the form identifier and a line segment of the form to be recognized; and specifying the form identifier of the form to be recognized based at least on the degree of overlap of the corresponding line segment pairs.
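A feature amount representing the relationship between two line segments can be sketched with two scale-invariant quantities: the relative angle and the length ratio. The abstract does not fix a specific feature amount, so this choice (and the L1 comparison of features) is an illustrative assumption.

```python
import math

def _length(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1)

def pair_feature(seg_a, seg_b):
    """Feature amount for a line segment pair: (relative angle, length
    ratio).  Both are invariant to scaling, so the same registered form
    matches at different resolutions.  Segments are ((x1, y1), (x2, y2))."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1)
    rel_angle = abs(angle(seg_a) - angle(seg_b)) % math.pi
    ratio = min(_length(seg_a), _length(seg_b)) / max(_length(seg_a), _length(seg_b))
    return (rel_angle, ratio)

def feature_distance(f1, f2):
    """L1 distance between two pair features, used to shortlist
    candidate form identifiers (assumed comparison rule)."""
    return abs(f1[0] - f2[0]) + abs(f1[1] - f2[1])
```

A pair from the form to be recognized whose feature lies close to a registered pair's feature nominates that form identifier as a candidate.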
Method and apparatus for image processing
Identifying objects in images is a difficult problem, particularly in cases where an original image is noisy or has areas with narrow color or grayscale gradients. A technique employing a convolutional network has been developed to identify objects in such images in an automated and rapid manner. One example embodiment trains a convolutional network including multiple layers of filters. The filters, arranged in successive layers, are trained by learning and produce images having at least the same resolution as an original image. The filters are trained as a function of the original image or a desired image labeling; the image labels of objects identified in the original image are reported and may be used for segmentation. The technique can be applied to images of neural circuitry or electron microscopy, for example. The same technique can also be applied to correction of photographs or videos.
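The resolution-preserving filter layers can be illustrated with a single zero-padded 2D convolution, which keeps the output the same size as the input. A real embodiment would stack many learned filters with nonlinearities; this sketch applies one hand-set kernel and makes no claim about the patented architecture.

```python
def conv2d_same(image, kernel):
    """Apply one square filter with zero padding so the output keeps the
    input resolution -- the property the abstract notes for the
    network's successive filter layers.  `image` is a 2D list; `kernel`
    is an odd-sized 2D list of weights."""
    h, w = len(image), len(image[0])
    k = len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            acc = 0.0
            for i in range(len(kernel)):
                for j in range(len(kernel)):
                    rr, cc = r + i - k, c + j - k
                    if 0 <= rr < h and 0 <= cc < w:
                        acc += kernel[i][j] * image[rr][cc]
            out[r][c] = acc
    return out
```

In a trained network, per-pixel outputs of the final layer would serve directly as the reported image labels for segmentation, since no resolution is lost along the way.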