G06V30/1607

DECONVOLUTION OF DIGITAL IMAGES

A method for deconvolution of digital images includes obtaining a degraded image from a digital sensor and, with a processor accepting output from the digital sensor, recognizing a distorted element within the image. The distorted element is compared with a true shape of the element to produce a degrading function. The degrading function is deconvolved from at least a portion of the image to improve image quality of the image. A method of indirectly decoding a barcode includes obtaining an image of a barcode using an optical sensor in a mobile computing device, the image comprising barcode marks and a textual character. The textual character is optically recognized and an image degrading characteristic is identified from the textual character. Compensating for the image degrading characteristic renders previously undecodable barcode marks decodable. A system for deconvolution of digital images is also included.
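
The compare-then-deconvolve idea can be sketched in the frequency domain. The sketch below is illustrative, not the claimed method: it assumes a Wiener-style filter, a toy bar-shaped glyph standing in for the recognized element, and hypothetical regularization constants `eps` and `k`.

```python
import numpy as np

def estimate_psf(true_patch, degraded_patch, eps=1e-3):
    """Estimate the degrading function in the frequency domain by comparing
    the distorted element with its known true shape."""
    T = np.fft.fft2(true_patch)
    D = np.fft.fft2(degraded_patch)
    # Regularized division D / T, so near-zero spectral terms do not blow up.
    return D * np.conj(T) / (np.abs(T) ** 2 + eps)

def wiener_deconvolve(image, H, k=1e-2):
    """Deconvolve the estimated degrading function from an image region."""
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener-style inverse filter
    return np.real(np.fft.ifft2(G * W))

# Toy demonstration: blur a sharp bar-shaped "glyph" with a known kernel,
# estimate the degrading function from the true/degraded pair, then restore.
true_glyph = np.zeros((32, 32))
true_glyph[8:24, 14:18] = 1.0                    # the element's true shape
kernel = np.zeros((32, 32))
kernel[0, :3] = 1.0 / 3.0                        # horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(true_glyph) * np.fft.fft2(kernel)))

H = estimate_psf(true_glyph, blurred)
restored = wiener_deconvolve(blurred, H)
```

The restored glyph is closer to the true shape than the blurred one; in the patent's setting the same filter would then be applied to neighbouring barcode marks rather than to the glyph itself.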

VEHICLE LICENSE PLATE RECOGNITION METHOD, DEVICE, TERMINAL AND COMPUTER-READABLE STORAGE MEDIUM

A vehicle license plate recognition method includes: performing vehicle license plate recognition on an obtained fisheye image to obtain a vehicle license plate region; in response to the vehicle license plate region being in a non-reference direction, enlarging the vehicle license plate region based on all pixels of the vehicle license plate region to obtain a deformed image of the vehicle license plate, the resolution of the deformed image being higher than the resolution of the vehicle license plate region; performing reference direction correction on the deformed image to obtain a to-be-detected image; and recognizing the to-be-detected image to obtain an output character corresponding to the vehicle license plate region.
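
The enlarge-then-correct steps can be illustrated with plain array operations. `enlarge_region` and `correct_to_reference` are hypothetical names; nearest-neighbour upsampling and quarter-turn rotation stand in for whatever interpolation and direction correction the method actually uses.

```python
import numpy as np

def enlarge_region(region, factor=2):
    """Upsample the plate crop so the deformed image has a higher resolution
    than the original region (nearest-neighbour, for illustration)."""
    return np.repeat(np.repeat(region, factor, axis=0), factor, axis=1)

def correct_to_reference(image, quarter_turns=1):
    """Rotate the enlarged plate image back to the reference direction."""
    return np.rot90(image, k=quarter_turns)

plate = np.arange(12).reshape(3, 4)      # stand-in for the cropped plate region
deformed = enlarge_region(plate)         # 6 x 8: resolution is now higher
to_detect = correct_to_reference(deformed)
```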

Training of neural networks in which deformation processing of training data is adjusted so that deformed character images are not too similar to character images of another class
12300010 · 2025-05-13

In a scene where a pseudo character image is generated by performing deformation processing for a character image, a character image that impedes training is suppressed from being generated. Based on a condition relating to a parameter that is used for the deformation processing and associated with a first class, a parameter of the deformation processing is determined and the deformation processing is performed for a character image belonging to the first class using the determined parameter. Then, whether or not the deformed character image generated by the deformation processing is similar to a character image belonging to a class different from the first class is determined and in a case where similarity is determined, the condition associated with the first class is updated.
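
The determine-deform-compare-update loop might look like the following sketch, where horizontal shifting is a stand-in deformation and cosine similarity a stand-in similarity test; the `condition` dictionary, the threshold, and the glyphs are all illustrative.

```python
import numpy as np

def deform(img, shift):
    """Toy deformation: shift the glyph horizontally by `shift` pixels."""
    return np.roll(img, shift, axis=1)

def too_similar(a, b, thresh=0.9):
    """Cosine similarity between two flattened character images."""
    a, b = a.ravel().astype(float), b.ravel().astype(float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9) > thresh

# Condition associated with the first class: admissible deformation strengths.
condition = {"max_shift": 3}

glyph_class1 = np.zeros((8, 8)); glyph_class1[:, 2] = 1.0   # e.g. the digit "1"
glyph_class2 = np.zeros((8, 8)); glyph_class2[:, 5] = 1.0   # a different class

for shift in (3, 2, 1):
    if shift > condition["max_shift"]:
        continue                     # parameter now falls outside the condition
    if too_similar(deform(glyph_class1, shift), glyph_class2):
        # The pseudo sample collides with another class: update the condition
        # so this deformation strength is no longer used for the first class.
        condition["max_shift"] = shift - 1
```

Here a shift of 3 turns the class-1 glyph into the class-2 glyph, so the condition is tightened to a maximum shift of 2 and milder deformations remain allowed.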

PROCESSING IMAGES OF DEFORMED INDICIA-BEARING SURFACES
20250166403 · 2025-05-22

An example method of processing images of deformed indicia-bearing surfaces includes: detecting, within a document image, a plurality of image fragments, wherein each image fragment of the plurality of image fragments contains a respective sequence of alphabet symbols; grouping the plurality of image fragments by lines of text to be reconstructed in the document image; generating a map of isolines associated with the document image, wherein an isoline identifies a set of points that lie on a straight line of an undistorted image corresponding to the document image; generating a reverse transformation matrix that defines a set of transformations to be applied to the document image in order to remove image distortions caused by deformations of an indicia-bearing surface; and generating an undistorted document image by applying the reverse transformation matrix to the document image.
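
The final step, applying a reverse transformation matrix, can be illustrated with a 3x3 projective matrix; the specific distortion matrix below is a made-up example, and the isoline detection that would produce it is not shown.

```python
import numpy as np

def apply_homography(H, points):
    """Apply a 3x3 transformation matrix to an array of (x, y) points."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Suppose the isoline analysis attributed this (made-up) projective warp to
# the deformed surface; the reverse transformation matrix is its inverse.
distortion = np.array([[1.0, 0.1, 5.0],
                       [0.0, 1.0, 2.0],
                       [0.001, 0.0, 1.0]])
reverse = np.linalg.inv(distortion)

isoline = np.array([[0.0, 10.0], [50.0, 10.0], [100.0, 10.0]])  # straight line
warped = apply_homography(distortion, isoline)       # points as seen in the image
restored = apply_homography(reverse, warped)         # straightened back out
```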

METHOD OF EXTRACTING INFORMATION FROM AN IMAGE OF A DOCUMENT

The present disclosure provides a method of extracting information from an image of a document in which the document image is properly aligned for processing, the regions containing the desired information are detected and extracted from the document image, a text machine-learning model is applied so that handwritten text in multiple languages may be extracted and stored, and a user may review, edit, and translate the extracted information to create a standardized digital format of the information contained in the document.
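
The overall workflow can be sketched as a pipeline of stages. Every function name and field value below is a hypothetical placeholder; in the disclosed method the alignment, detection, and recognition stages would be model-driven rather than stubs.

```python
def align(image):
    """Stand-in for deskewing/aligning the scanned document image."""
    return image

def extract_fields(image):
    """Stand-in for region detection plus handwritten-text recognition;
    the field names and values here are placeholders."""
    return {"name": "JANE DOE", "total": "42.00"}

def review(fields, user_edits):
    """Apply the user's review and edits to produce the standardized record."""
    return {**fields, **user_edits}

record = review(extract_fields(align("scan.png")), {"total": "42.50"})
```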

Text-based information extraction from images

A method for extracting text information from images includes obtaining an extraction request associated with live data comprising an image; generating, using a prediction model, rotational variant features and rotational invariant features associated with the live data; generating, using the prediction model, text embeddings associated with the rotational variant features using overlapping kernel-based embedding on the live data; generating, using the prediction model, attention values for each pixel in the live data using context attention; applying a trained language model to the text embeddings, attention values, and the live data to generate predictions; and performing extraction actions based on the predictions.
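
Two of the ingredients, overlapping kernel-based embeddings and a per-position attention value, can be illustrated on a 1-D pixel row. The window width, stride, and scoring rule below are assumptions, not the patented design.

```python
import numpy as np

def overlapping_embeddings(pixels, kernel=3, stride=1):
    """Overlapping kernel-based embedding: each vector covers a window of
    neighbouring pixels, and consecutive windows share pixels."""
    return np.stack([pixels[i:i + kernel]
                     for i in range(0, len(pixels) - kernel + 1, stride)])

def context_attention(features):
    """Softmax attention value per position from a simple context score."""
    scores = features.sum(axis=-1)
    e = np.exp(scores - scores.max())
    return e / e.sum()

pixel_row = np.array([0.0, 1.0, 3.0, 1.0, 0.0])
emb = overlapping_embeddings(pixel_row)      # three overlapping windows
attn = context_attention(emb)                # peaks at the brightest window
```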

Method of rectifying text image, training method, electronic device, and medium

A method of rectifying a text image, a training method, an electronic device, and a medium, which relate to the field of artificial intelligence technology, in particular to the fields of computer vision, deep learning technology, intelligent transportation and high-precision maps. An exemplary implementation includes: performing, based on a gating strategy, a plurality of first layer-wise processing operations on a text image to be rectified, so as to obtain respective feature maps of a plurality of layer levels, wherein each of the feature maps includes a text structural feature related to the text image to be rectified, and the gating strategy is configured to increase attention to the text structural feature; and performing a plurality of second layer-wise processing operations on the respective feature maps of the plurality of layer levels, so as to obtain a rectified text image corresponding to the text image to be rectified.
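
A gate that boosts attention to structural (high-contrast) features before each downsampling step could be sketched as follows; the contrast-based gate and the factor-of-two downsampling are illustrative stand-ins for the learned layer-wise processing.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_layer(feat):
    """One first-stage layer: a gate boosts positions with strong local
    contrast (a stand-in for text structural features), then the feature
    map is downsampled for the next layer level."""
    contrast = np.abs(np.diff(feat, axis=1, prepend=feat[:, :1]))
    gate = sigmoid(4.0 * (contrast - contrast.mean()))
    return (feat * gate)[:, ::2]

image_row = np.array([[0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]])
levels = []
feat = image_row
for _ in range(2):                    # a plurality of layer levels
    feat = gated_layer(feat)
    levels.append(feat)               # feature map of each level is kept
```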

Installation information acquisition method, correction method, program, and installation information acquisition system

A projector in an installation information acquisition method is installed in a real space, has a changeable projection direction, and projects a projection image based on a virtual image. The virtual image is an image in a case where an image arranged at a display position in a virtual space is viewed from a virtual installation position. The method includes first acquisition processing for acquiring positional information of three or more first adjustment points in the virtual space, projection processing for projecting, by the projector, an index image onto the real space, second acquisition processing for acquiring angle information of the projection direction with respect to a reference direction in a state where the index image matches three or more second adjustment points respectively corresponding to the three or more first adjustment points, and third acquisition processing for acquiring installation information based on the positional information and the angle information.
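
The angle information and the final combination step can be illustrated with simple vector geometry; the point coordinates, direction vectors, and record layout below are hypothetical.

```python
import numpy as np

def direction_angle(projection_dir, reference_dir):
    """Angle of the projection direction relative to the reference direction
    (the angle information acquired in the second acquisition processing)."""
    a = projection_dir / np.linalg.norm(projection_dir)
    b = reference_dir / np.linalg.norm(reference_dir)
    return np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))

def installation_info(virtual_points, angle_deg):
    """Combine the positional information of the adjustment points with the
    angle information into an installation record (third acquisition step)."""
    return {"aim_point": np.mean(virtual_points, axis=0), "angle_deg": angle_deg}

first_points = np.array([[0.0, 0.0, 2.0],     # three first adjustment points
                         [1.0, 0.0, 2.0],     # in the virtual space
                         [0.0, 1.0, 2.0]])
angle = direction_angle(np.array([1.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
info = installation_info(first_points, angle)
```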

Image quality assessment for text recognition in images with projectively distorted text fields

A projective transformation is calculated from a restored rectangle, representing a restored text field, to a source quadrangle, representing a projectively distorted text field in a source image. An approximation of a curve of a minimal scaling coefficient level on a plane corresponding to the restored rectangle is constructed, based on calculations of a discriminant of the curve. When the approximation intersects a representation of the restored rectangle, a restoration of the source image is determined to have insufficient image quality for reliable text recognition. When the approximation does not intersect the representation of the restored rectangle, a minimal scaling coefficient is calculated at a point inside the restored rectangle, and a determination of whether or not the restoration of the source image has sufficient image quality is made based on the minimal scaling coefficient.
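
The minimal scaling coefficient of a projective transformation can be evaluated from the Jacobian of the homography. The sample homography, the grid sampling (rather than the patent's discriminant-based curve analysis), and the 0.5 quality threshold below are illustrative assumptions.

```python
import numpy as np

def local_scale(H, x, y):
    """Local scaling coefficient of homography H at (x, y): the square root
    of the absolute Jacobian determinant of the projective mapping, which
    works out to |det(H)| / |h31*x + h32*y + h33|**3."""
    w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    return np.sqrt(np.abs(np.linalg.det(H)) / np.abs(w) ** 3)

# A made-up homography from the restored rectangle to the source quadrangle.
H = np.array([[0.8, 0.05, 3.0],
              [0.0, 0.9, 1.0],
              [0.002, 0.001, 1.0]])

# Sample the restored rectangle and take the minimal scaling coefficient.
xs, ys = np.meshgrid(np.linspace(0, 100, 21), np.linspace(0, 20, 5))
min_scale = local_scale(H, xs, ys).min()

# A minimal coefficient below some threshold (0.5 here, purely illustrative)
# would mean the source resolves the text too coarsely for reliable OCR.
sufficient = bool(min_scale > 0.5)
```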