Patent classifications
G06V10/242
Propensity model based optimization
Apparatuses, systems, methods, and computer program products are presented for a propensity model based optimization. An apparatus comprises a processor and a memory that stores code executable by the processor to receive an electronic submission for a pass/fail interface, identify information from the electronic submission to suggest to a user for entering into an input field for the pass/fail interface prior to submitting the electronic submission to the pass/fail interface to reduce a likelihood that the electronic submission will be rejected at the pass/fail interface, determine the likelihood that the electronic submission will be accepted by the pass/fail interface, and submit the electronic submission to the pass/fail interface in response to the likelihood satisfying a threshold.
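The threshold gate in this abstract can be sketched as follows; the logistic form, the feature weights, and the 0.8 threshold are illustrative assumptions, not the patent's actual model.

```python
import math

def should_submit(features, weights, bias=0.0, threshold=0.8):
    """Estimate the acceptance likelihood of a submission with a
    logistic propensity score (hypothetical weights) and gate
    submission to the pass/fail interface on a threshold."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    likelihood = 1.0 / (1.0 + math.exp(-score))
    return likelihood >= threshold, likelihood
```

A strong submission (score 3.0, likelihood about 0.95) passes the gate, while a neutral one (score 0, likelihood 0.5) is held back for the user to improve.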
System for detecting surface type of object and artificial neural network-based method for detecting surface type of object
An artificial neural network-based method for detecting a surface type of an object includes: receiving a plurality of object images, wherein a plurality of spectra of the plurality of object images are different from one another and each of the object images has one of the spectra; transforming each object image into a matrix, wherein the matrix has a channel value that represents the spectrum of the corresponding object image; and executing a deep learning program by using the matrices to build a predictive model for identifying a target surface type of the object. Accordingly, the speed of identifying the target surface type of the object is increased, further improving the product yield of the object.
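The transformation of per-spectrum object images into a single multi-channel matrix can be sketched as below; the use of NumPy and the channel-last layout are assumptions about one reasonable representation.

```python
import numpy as np

def stack_spectral_images(images):
    """Stack single-spectrum images (each H x W) into one H x W x C
    matrix whose channel index encodes the spectrum of the
    corresponding object image, ready for a deep learning model."""
    arrs = [np.asarray(im, dtype=np.float32) for im in images]
    if len({a.shape for a in arrs}) != 1:
        raise ValueError("all object images must share the same size")
    return np.stack(arrs, axis=-1)
```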
METHOD FOR PREPARING A REPRESENTATION OF A GEOGRAPHICAL POLYGON
A computer-implemented method including receiving a first representation of a geographical polygon defining a parcel of land, which first representation includes latitude and longitude coordinates that represent at least the corners of the geographical polygon; based on the first representation, determining various features of the geographical polygon; and preparing a second representation of the geographical polygon, which second representation includes the geometric centre of the geographical polygon, first two alphanumeric characters, second two alphanumeric characters, a fifth alphanumeric character, and a sixth alphanumeric character.
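The geometric centre of the polygon can be computed from its corner coordinates with the shoelace centroid formula, as sketched below; treating latitude/longitude as planar coordinates is an assumption that only holds for small parcels.

```python
def polygon_centroid(coords):
    """Geometric centre of a simple polygon given its corner
    (x, y) pairs, via the shoelace centroid formula."""
    area2 = cx = cy = 0.0
    n = len(coords)
    for i in range(n):
        x0, y0 = coords[i]
        x1, y1 = coords[(i + 1) % n]
        cross = x0 * y1 - x1 * y0  # twice the signed triangle area
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3.0 * area2), cy / (3.0 * area2)
```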
DOCUMENT PROCESSING DEVICE, METHOD OF PROCESSING AN IMAGE THEREOF AND COMPUTER PROGRAM PRODUCT
A document processing device, a method of processing an image thereof, and a computer program product are disclosed. A set of display area information is preset on the document processing device. When the document processing device receives an input image, a set of reference information corresponding to the input image is obtained. The document processing device adjusts the input image according to the set of display area information and the set of reference information to generate an improved preview image. Because the device directly and automatically generates a preview image that is convenient to view, the user does not need to consider the direction in which the document is put into the document processing device, thereby improving work efficiency, convenience of use, and the user experience.
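One simple form of this adjustment is rotating the scan when its orientation disagrees with the preset display area, as in the sketch below; the 90-degree-only correction and the orientation test are assumptions about a minimal case.

```python
import numpy as np

def fit_to_display(scan, display_hw):
    """Rotate a scanned page by 90 degrees when its orientation
    (portrait vs landscape) disagrees with the preset display area,
    so the preview matches the viewer regardless of feed direction."""
    dh, dw = display_hw
    s = np.asarray(scan)
    scan_landscape = s.shape[1] > s.shape[0]
    display_landscape = dw > dh
    return np.rot90(s) if scan_landscape != display_landscape else s
```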
INSPECTION APPARATUS, CONTROL METHOD, AND INSPECTION METHOD
An inspection apparatus selects at least one character area in a first preview image obtained by reading and previewing a print product, sets a direction for a character in the selected character area, registers the set direction and the character in the selected character area in association with each other, selects at least one character inspection area in a second preview image obtained by reading and previewing a print product as an inspection target, sets a direction for a character in the selected character inspection area, rotates the character inspection area so that the set direction matches the direction registered for the character in the selected character area, performs character recognition for the character in the rotated character inspection area, and inspects the character inspection area based on a result of the character recognition and a result of recognizing the character in the selected character area.
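The rotation step, matching an inspection area's direction to the registered direction, can be sketched with 90-degree array rotations; the 0/90/180/270 direction code is an assumed convention.

```python
import numpy as np

def align_to_registered_direction(area, area_dir, registered_dir):
    """Rotate a character-inspection area (2-D array) in 90-degree
    steps so its direction code matches the direction registered
    for the reference character area."""
    k = ((registered_dir - area_dir) // 90) % 4
    return np.rot90(area, k)
```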
FACE RECOGNITION NETWORK MODEL WITH FACE ALIGNMENT BASED ON KNOWLEDGE DISTILLATION
A method for training a deep learning network for face recognition includes: utilizing a face landmark detector to perform face alignment processing on at least one captured image, thereby outputting at least one aligned image; inputting the at least one aligned image to a teacher model to obtain a first output vector; inputting the at least one captured image to a student model corresponding to the teacher model to obtain a second output vector; and adjusting parameter settings of the student model according to the first output vector and the second output vector.
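The adjustment step compares the two output vectors; a cosine-distance distillation loss, sketched below, is one plausible objective — the loss form is an assumption, not the patent's stated formula.

```python
import numpy as np

def distillation_loss(student_vec, teacher_vec):
    """Cosine-distance loss between the student's embedding of the
    raw capture and the teacher's embedding of the aligned image;
    minimizing it pulls the student toward the teacher's output."""
    s = np.asarray(student_vec, dtype=np.float64)
    t = np.asarray(teacher_vec, dtype=np.float64)
    cos = float(s @ t / (np.linalg.norm(s) * np.linalg.norm(t)))
    return 1.0 - cos
```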
EXTRINSIC CAMERA CALIBRATION USING CALIBRATION OBJECT
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for extrinsic camera calibration using a calibration object. One of the methods includes: determining physical locations of interest points of a calibration object in a calibration object centered coordinate system; determining pixel locations of the interest points in an image of the calibration object captured by a camera; determining, using the pixel locations and the physical locations, a transformation from the calibration object centered coordinate system to a camera centered coordinate system; and determining, using the transformation, a camera tilt angle and a camera mount height of the camera for use in analyzing images captured by the camera.
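Once the transformation (a rotation R and translation t mapping object coordinates to camera coordinates) is known, the tilt angle and mount height fall out of the extrinsics, as sketched below; the assumption that the calibration-object frame's z-axis points up from the ground is mine, not stated in the abstract.

```python
import numpy as np

def tilt_and_height(R, t):
    """Recover camera tilt (degrees below horizontal) and mount
    height from extrinsics (R, t) mapping calibration-object
    coordinates to camera coordinates; assumes the object frame's
    z-axis points up."""
    R = np.asarray(R, dtype=np.float64)
    t = np.asarray(t, dtype=np.float64)
    cam_pos = -R.T @ t                        # camera centre in object frame
    axis = R.T @ np.array([0.0, 0.0, 1.0])    # optical axis in object frame
    tilt = np.degrees(np.arcsin(-axis[2]))    # downward tilt is positive
    return tilt, cam_pos[2]
```

For a camera 10 units up looking straight down (a 180-degree rotation about x), this yields a 90-degree tilt and a height of 10.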
Dictionary learning device, dictionary learning method, and program storage medium
A reference data extraction unit extracts, from a photographic image from an imaging device that captures an image of an object to be recognized, an image of a reference image region serving as a reference and containing a detection subject in the object. An expanded data extraction unit extracts from the photographic image an image of an expanded-image region, which is an image region that includes the reference image region and is larger than the reference image region. A reduced data extraction unit extracts from the photographic image an image of a reduced-image region, which is an image region that includes the detection subject and is smaller than the reference image region, with the result that a portion of the object is outside of the region. A learning unit uses the extracted images of the image regions to learn a dictionary.
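The three extraction units can be sketched as box arithmetic on a reference box; the fixed symmetric margin and the (x, y, w, h) convention are simplifying assumptions.

```python
def extract_training_regions(image_h, image_w, box, margin):
    """Given a reference box (x, y, w, h) around the detection
    subject, return the reference, expanded (grown by `margin`,
    clipped to the image) and reduced (shrunk by `margin`) regions
    as (x, y, w, h) tuples."""
    x, y, w, h = box
    ref = (x, y, w, h)
    ex0, ey0 = max(0, x - margin), max(0, y - margin)
    expanded = (ex0, ey0,
                min(image_w, x + w + margin) - ex0,
                min(image_h, y + h + margin) - ey0)
    reduced = (x + margin, y + margin,
               max(0, w - 2 * margin), max(0, h - 2 * margin))
    return ref, expanded, reduced
```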
Data extraction from form images
An image processing system accesses an image of a completed form document. The image of the form document includes one or more features, such as form text, at particular locations within the image. The image processing system accesses a template of the form document and computes a rotation and zoom of the image of the form document relative to the template of the form document based on the locations of the features within the image of the form document relative to the locations of the corresponding features within the template of the form document. The image processing system performs a rotation operation and a zoom operation on the image of the form document, and extracts data entered into fields of the modified image of the form document. The extracted data can be then accessed or stored for subsequent use.
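The rotation-and-zoom computation can be sketched from two corresponding feature points in the template and the form image, comparing the vector between them; the two-point minimal estimate (rather than a least-squares fit over many features) is a simplifying assumption.

```python
import cmath
import math

def rotation_and_zoom(template_pts, image_pts):
    """Estimate the rotation (degrees) and zoom of a form image
    relative to its template from two corresponding feature points,
    by comparing the complex vectors between each point pair."""
    (tx0, ty0), (tx1, ty1) = template_pts
    (ix0, iy0), (ix1, iy1) = image_pts
    vt = complex(tx1 - tx0, ty1 - ty0)  # template feature vector
    vi = complex(ix1 - ix0, iy1 - iy0)  # image feature vector
    ratio = vi / vt
    return math.degrees(cmath.phase(ratio)), abs(ratio)
```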
Meter text detection and recognition
Techniques for meter text detection and recognition are described herein. In an example, an application receives a first image, depicting information displayed by a meter, from an imaging device. One or more qualities of the first image may be assessed, such as focus or lighting. A setting of the imaging device may be adjusted. The adjusting may be based at least in part on the assessed quality of the first image and one or more characteristics of an optical character recognition (OCR) algorithm. Accordingly, the settings of the imaging device are tuned to the needs of the OCR algorithm. A second image may be captured, depicting information displayed by the meter, using the imaging device adjusted according to the adjusted setting. The OCR algorithm may be applied to the second image to obtain an alphanumeric value associated with the second image. The alphanumeric value is obtained from the OCR algorithm.
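The focus assessment step can be sketched with a variance-of-Laplacian sharpness score, a common proxy for focus quality; the 4-neighbour Laplacian and the idea of thresholding the score before re-capturing are assumptions, not the patent's stated measure.

```python
import numpy as np

def focus_measure(gray):
    """Variance-of-Laplacian sharpness score for a grayscale image:
    low values suggest a blurred capture that should be re-taken
    with adjusted imaging-device settings before running OCR."""
    g = np.asarray(gray, dtype=np.float64)
    lap = (-4.0 * g[1:-1, 1:-1]          # 4-neighbour Laplacian
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

A flat image scores zero; a sharp high-contrast pattern scores much higher, so the score can drive the adjust-and-recapture loop described above.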