Patent classifications
G06T2207/30176
DOCUMENT AUTHENTICITY VERIFICATION IN REAL-TIME
A method for determining authenticity of a document in real-time is disclosed. The method, performed by a processor, includes receiving image data of a document. The image data corresponds to at least two images of the document taken simultaneously using at least two cameras. The method includes analyzing the image data to determine a plurality of measurements corresponding to the document along three dimensions. The method includes determining a thickness at a plurality of location points on the document based on the plurality of measurements, and determining authenticity of the document in real-time based on the determined thickness of the document at the plurality of location points.
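The claimed check reduces to comparing per-point thickness against an expected value. A minimal sketch, assuming per-point depth values have already been triangulated from the two simultaneous camera views; the `expected` and `tolerance` values are illustrative, not taken from the disclosure:

```python
def estimate_thickness(depth_front, depth_back):
    """Per-point thickness as the absolute difference between the
    depths of corresponding surface points (assumed pre-triangulated
    from the two camera views)."""
    return [abs(a - b) for a, b in zip(depth_front, depth_back)]

def is_authentic(thicknesses, expected=0.10, tolerance=0.02):
    """Declare the document authentic only if every sampled location
    falls within tolerance of the expected thickness (values are
    illustrative, e.g. millimetres)."""
    return all(abs(t - expected) <= tolerance for t in thicknesses)
```

A forged document with a taped-on patch would show an out-of-tolerance thickness at the patched location points and fail the check.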
Method for generating web code for UI based on a generative adversarial network and a convolutional neural network
Provided is a method for generating web codes for a user interface (UI) based on a generative adversarial network (GAN) and a convolutional neural network (CNN). The method includes steps described below. A mapping relationship between display effects of a HyperText Markup Language (HTML) element and source codes of the HTML element is constructed. A location of an HTML element in an image I is recognized. Complete HTML codes of the image I are generated. The similarity between manually-written HTML codes and the generated complete HTML codes and the similarity between the image I and an image I1 generated by the generated complete HTML codes are obtained. After training, an image-to-HTML-code generation model M is obtained. A to-be-processed UI image is input into the model M so as to obtain corresponding HTML codes. According to the method of the present disclosure, an image-to-HTML-code generation model M can be obtained.
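The two similarity terms driving training can be sketched as follows. The token-overlap code metric, the pixel-match image metric, and the weighting are stand-in assumptions for illustration, not the disclosed GAN/CNN losses:

```python
def code_similarity(generated_html, reference_html):
    """Toy token-overlap (Jaccard) similarity between generated HTML
    and manually-written HTML."""
    a, b = set(generated_html.split()), set(reference_html.split())
    return len(a & b) / len(a | b) if a | b else 1.0

def image_similarity(img_i, img_i1):
    """Fraction of matching pixels between image I and the image I1
    rendered from the generated HTML (both as flat pixel lists)."""
    matches = sum(1 for p, q in zip(img_i, img_i1) if p == q)
    return matches / len(img_i)

def training_score(gen_html, ref_html, img_i, img_i1, w=0.5):
    """Assumed weighted combination of the two similarities used as
    the training objective."""
    return w * code_similarity(gen_html, ref_html) + \
           (1 - w) * image_similarity(img_i, img_i1)
```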
Computerized Technical Authentication and Grading System for Collectible Objects
The disclosure described herein is directed to a computerized system and method of grading and authenticating collectibles utilizing digital imaging devices and processes to provide an objective, standardized, consistent high-resolution grading of collectible objects, such as but not limited to sport and non-sport trading cards. The disclosure eliminates the subjectivity present in the human grading process and overcomes the inherent limitations of the human eye.
Machine-learning for enhanced machine reading of non-ideal capture conditions
Implementations of the present disclosure include receiving a training image, providing a hash pattern that is representative of the training image, applying a plurality of filters to the training image to provide a respective plurality of filtered training images, identifying a filter to be associated with the hash pattern based on the plurality of filtered training images, and storing a mapping of the filter to the hash pattern within a set of mappings in a data store.
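The training loop described above, reduced to a sketch: a toy average-hash stands in for the hash pattern, and the filter whose output scores best under an assumed readability metric is mapped to that pattern:

```python
def hash_pattern(image):
    """Toy average-hash over a flat grayscale image: 1 where a pixel
    is above the mean, 0 otherwise."""
    mean = sum(image) / len(image)
    return tuple(1 if p > mean else 0 for p in image)

def best_filter(image, filters, score):
    """Apply each candidate filter and keep the one whose filtered
    output scores highest under the (assumed) readability metric."""
    return max(filters, key=lambda name: score(filters[name](image)))

def train_mapping(images, filters, score, store):
    """Store the hash-pattern-to-filter mapping for each training
    image in the given data store (here, a dict)."""
    for img in images:
        store[hash_pattern(img)] = best_filter(img, filters, score)
    return store
```

At inference time, an incoming capture would be hashed and the stored filter for the nearest pattern applied before machine reading.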
FREQUENCY-ADAPTIVE DESCREENING METHOD AND DEVICE FOR PERFORMING SAME
A frequency adaptive descreening method includes obtaining a scan image of an original document, dividing a region of the scan image by analyzing frequency characteristics of the obtained scan image, estimating a resolution with respect to each of regions resulting from dividing the region according to the analyzed frequency characteristics, and adaptively performing filtering on the regions resulting from dividing the region by using the estimated resolution.
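A one-dimensional sketch of the idea, under the assumption that the halftone screen shows up as a dominant peak-to-peak period in the signal and that a box filter of matching width stands in for the adaptive filtering:

```python
def dominant_period(signal):
    """Crude frequency analysis: distance between the first two local
    peaks, or None if the region shows no periodic structure."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]]
    if len(peaks) < 2:
        return None
    return peaks[1] - peaks[0]

def smooth(signal, width):
    """Box filter whose width adapts to the estimated screen period."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def descreen(signal):
    """Filter only regions with detectable screen frequency; leave
    non-periodic (e.g. text or photo) regions untouched."""
    period = dominant_period(signal)
    return smooth(signal, period) if period else signal
```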
Image processing system for verification of rendered data
An image processing system for verifying that embedded digital content satisfies a predetermined criterion associated with display of the content. The image processing system includes a content embedding engine that embeds content in a resource provided by a content provider and that configures the resource for rendering; a rendering engine that renders the content embedded in the resource; an application interface engine that interfaces with the rendering engine and that generates a visualization of the resource and of the embedded content rendered in the resource; and an image processing engine that processes one or more pixels of the generated visualization of the resource and of the embedded content to verify that a specified visual element satisfies the predetermined criterion, and that transmits verification data comprising an indication of whether the predetermined criterion is satisfied.
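The pixel-level verification step can be sketched as a scan of the rendered visualization; the "content is visible" criterion and the row/column region encoding are illustrative assumptions:

```python
def verify_visibility(pixels, rows, cols, background=0):
    """Check an assumed criterion: the embedded content's region of
    the rendered visualization contains at least one non-background
    pixel. Returns verification data with the pass/fail indication."""
    visible = any(pixels[y][x] != background for y in rows for x in cols)
    return {"criterion": "content-visible", "satisfied": visible}
```

A real system would render the resource headlessly and check richer criteria (size, position, occlusion) over the same kind of pixel scan.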
ENHANCING DOCUMENTS PORTRAYED IN DIGITAL IMAGES
The present disclosure is directed toward systems and methods that efficiently and effectively generate an enhanced document image of a displayed document in an image frame captured from a live image feed. For example, systems and methods described herein apply a document enhancement process to a displayed document in an image frame that results in an enhanced document image that is cropped, rectified, un-shadowed, and with dark text against a mostly white background. Additionally, systems and methods described herein determine whether a stored digital content item includes a displayed document. In response to determining that a stored digital content item does include a displayed document, systems and methods described herein generate an enhanced document image of the displayed document included in the stored digital content item.
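A minimal sketch of the crop-and-binarize portion of such an enhancement process. Rectification and shadow removal would require geometry and illumination estimation, which this toy global threshold does not attempt:

```python
def enhance(gray, crop=None):
    """Crop a grayscale frame to assumed document bounds, then push
    the page toward dark text on a mostly white background via a
    global mean threshold (a stand-in for real binarization)."""
    if crop:
        (r0, r1), (c0, c1) = crop
        gray = [row[c0:c1] for row in gray[r0:r1]]
    flat = [p for row in gray for p in row]
    thresh = sum(flat) / len(flat)
    return [[0 if p < thresh else 255 for p in row] for row in gray]
```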
Image processing apparatus and non-transitory computer readable medium
An image processing apparatus includes a first image generator and a second image generator. The first image generator generates a first image, including a predetermined ruled-line image and an inscription image, from a second sheet in a sheet group. The sheet group is obtained by stacking multiple sheets including a single first sheet and the second sheet. The first sheet has inscription information inscribed thereon. The second sheet has the inscription image corresponding to the inscription information transferred thereon and includes the ruled-line image. The second image generator generates a second image in which a surplus image is removed from the first image generated by the first image generator in accordance with a learning model that has learned to remove the surplus image different from the ruled-line image and the inscription image.
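The second-image generation step can be sketched with a stand-in pixel classifier in place of the learned model; the threshold rule below is an illustrative assumption, not the trained model:

```python
def remove_surplus(first_image, keep_model):
    """Generate the second image by zeroing out pixels the (stand-in)
    model classifies as surplus, keeping only ruled-line and
    inscription pixels."""
    return [[p if keep_model(p) else 0 for p in row]
            for row in first_image]

# Stand-in for the learned model: keep strong strokes, drop faint
# transfer smudges (a pure illustration of the interface).
keep_strong = lambda p: p >= 128
```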
DESIGN OPTIMIZATION AND USE OF CODEBOOKS FOR DOCUMENT ANALYSIS
A method of generating and optimizing a codebook for document analysis comprises: receiving a first set of document images; extracting a plurality of keypoint regions from each document image of the first set of document images; calculating local descriptors for each keypoint region of the extracted keypoint regions; clustering the local descriptors such that each center of a cluster of local descriptors corresponds to a respective visual word; generating a codebook containing a set of visual words; and optimizing the codebook by maximizing mutual information (MI) between a target field of a second set of document images and at least one visual word of the set of visual words.
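The MI-based optimization step can be sketched directly from joint (target-field value, visual-word occurrence) observations; the retention threshold is an assumption:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """MI (in nats) between target-field values and a visual word's
    occurrence, computed from joint observation pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def optimize_codebook(codebook, pairs_by_word, min_mi=0.0):
    """Keep only visual words whose MI with the target field exceeds
    the (assumed) threshold; uninformative words are pruned."""
    return [w for w in codebook
            if mutual_information(pairs_by_word[w]) > min_mi]
```

A word that always co-occurs with the target field (e.g. a printed label next to a date field) has high MI and survives; a word scattered uniformly over the page has MI near zero and is pruned.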
System, device, and method for determining color ambiguity of an image or video
Systems, devices, and methods for determining color ambiguity of images or videos. A system includes a color ambiguity score generator, which analyzes an image and determines a color ambiguity score that quantitatively indicates a level of color ambiguity that the image is estimated to cause when viewed by a user having color vision deficiency. A local color ambiguity score is generated to quantitatively indicate a level of local color ambiguity between (i) an in-image object and (ii) an in-image foreground of that in-image object. A global color ambiguity score is generated to quantitatively indicate a level of global color ambiguity between (I) a first in-image object within the image and (II) a second in-image object within that image. The color ambiguity score generator generates the color ambiguity score by utilizing a formula that uses both (A) the local color ambiguity score and (B) the global color ambiguity score.
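The local/global combination can be sketched as follows; the red-green-difference ambiguity metric and the equal weighting are illustrative assumptions, not the disclosed formula:

```python
def ambiguity(c1, c2):
    """Toy pairwise ambiguity between two RGB colors: closeness of
    their red-green differences, the axis most affected in common
    color vision deficiency (a simplifying assumption)."""
    rg1 = c1[0] - c1[1]
    rg2 = c2[0] - c2[1]
    return max(0.0, 1.0 - abs(rg1 - rg2) / 255.0)

def color_ambiguity_score(obj, obj_foreground, other_obj, w_local=0.5):
    """Combine the local score (object vs. its in-image foreground)
    and the global score (object vs. another in-image object) with an
    assumed equal weighting."""
    local = ambiguity(obj, obj_foreground)
    global_ = ambiguity(obj, other_obj)
    return w_local * local + (1 - w_local) * global_
```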