Patent classifications
G06V30/1801
Systems and methods for generating typographical images or videos
This disclosure involves automatically generating a typographical image from an image and a text document. Aspects of the present disclosure include detecting a region of interest in the image and generating an object template from the detected region of interest. The object template defines the areas of the image in which words of the text document are inserted. A text rendering protocol is executed to iteratively insert the words of the text document into the available locations of the object template. The typographical image is generated by rendering each word of the text document onto the available location assigned to that word.
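A minimal sketch of the iterative text rendering protocol described above. The slot representation, the greedy assignment order, and the per-character width estimate are all assumptions; the abstract does not specify how template locations are chosen for each word.

```python
def assign_words_to_template(words, slots):
    """Assign each word, in order, to the next open template slot wide
    enough to hold it.

    `slots` is a hypothetical list of (x, y, width) triples describing the
    available locations of the object template; a crude width-per-character
    estimate stands in for real glyph metrics.
    """
    char_width = 10  # assumed average glyph width in pixels
    placements = []
    open_slots = list(slots)
    for word in words:
        needed = len(word) * char_width
        for i, (x, y, w) in enumerate(open_slots):
            if w >= needed:
                placements.append((word, x, y))
                open_slots.pop(i)  # slot is now occupied
                break
    return placements
```

A real renderer would then draw each word at its assigned location; here the placements alone illustrate the iterative insertion.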
OBJECT MANAGEMENT SYSTEM
An object management system includes: an acquisition means for acquiring an image in which a surface of a registration target object, having a circle and a handwritten character drawn on it, is captured; a generation means for detecting an ellipse corresponding to the circle in the image and generating a registration image by applying a projective transformation to the image such that the ellipse becomes a circle; and a registration means for writing the registration image into a storage means as data for determining the sameness of the registration target object.
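The core of the generation means is a transformation that maps the detected ellipse back to a circle. The sketch below uses an affine map (a special case of the projective transformation the abstract names) built from assumed ellipse parameters (center, semi-axes, rotation); a full implementation would first detect the ellipse from the image.

```python
import numpy as np

def ellipse_to_circle_affine(cx, cy, a, b, theta):
    """Affine map sending an ellipse (center (cx, cy), semi-axes a >= b,
    rotation theta) to a circle of radius a about the same center.

    Strategy: rotate into the ellipse's principal frame, stretch the minor
    axis up to length a, rotate back. Returns the 2x2 matrix A and
    translation t of the map p -> A @ p + t.
    """
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    S = np.diag([1.0, a / b])          # stretch minor axis to match major
    A = R @ S @ R.T
    center = np.array([cx, cy])
    t = center - A @ center            # keep the center fixed
    return A, t

def apply_affine(A, t, pts):
    """Apply the map to an (N, 2) array of points."""
    return pts @ A.T + t
```

The registration image produced this way is pose-normalized, so stored images of the same object surface can be compared directly.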
System and method for collaborative ink management
A system, method and computer program product for managing collaboration on documents having digital ink on a network of computing devices is disclosed. Each computing device has a processor and at least one system application for processing handwriting input under control of the processor. The system application displays, on a display associated with one of the computing devices, a document having digital ink based on a journal of the document, defines the journal to have journal entries associated with at least handwriting input to the document represented by the digital ink, and communicates the journal entries of the journal with one or more of the other networked computing devices displaying the document. Because the journal entries are communicated, the handwriting input associated with them may be entered into the document via the input interface of any of the computing devices displaying the document.
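A sketch of the journal structure described above. The entry fields, the `(device_id, seq)` deduplication key, and the ordering rule are assumptions; the abstract only states that journal entries representing handwriting input are kept and communicated between devices.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class JournalEntry:
    device_id: str                 # which networked device produced the ink
    seq: int                       # per-journal sequence number (assumed)
    stroke: List[Tuple[int, int]]  # digital-ink points for this input

@dataclass
class InkJournal:
    entries: List[JournalEntry] = field(default_factory=list)

    def append(self, entry: JournalEntry) -> None:
        """Record local handwriting input as a journal entry."""
        self.entries.append(entry)

    def merge(self, remote_entries: List[JournalEntry]) -> None:
        """Fold in entries communicated from another device, skipping
        entries this journal has already seen."""
        seen = {(e.device_id, e.seq) for e in self.entries}
        for e in remote_entries:
            if (e.device_id, e.seq) not in seen:
                self.entries.append(e)
        self.entries.sort(key=lambda e: e.seq)
```

Rendering the document then reduces to replaying the journal's strokes in order, which is why any device that receives the entries can display the same ink.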
SPATIAL PARKING PLACE DETECTION METHOD AND DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT
The present disclosure provides a spatial parking place detection method and device, a storage medium and a program product, which relate to the field of data processing and, in particular, to the fields of computer vision, autonomous parking and autonomous driving. A specific implementation includes: acquiring ultrasonic data around a vehicle collected by an ultrasonic sensor on the vehicle, and image data around the vehicle collected by an image collection apparatus; determining a first spatial parking place around the vehicle according to the ultrasonic data, and determining a second spatial parking place around the vehicle according to the image data; fusing the first spatial parking place and the second spatial parking place that are located at an identical position to determine a spatial parking place at that position; and checking the availability of each detected spatial parking place to determine the available spatial parking places of the vehicle.
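The fusion step can be sketched as follows. Representing each detected place by its center point and treating "identical position" as a center distance below a tolerance are assumptions; the disclosure does not fix the matching criterion or the fusion rule (here, simple averaging).

```python
def fuse_parking_places(ultrasonic, vision, max_dist=1.0):
    """Fuse ultrasonic and vision detections given as (x, y) centers.

    Detections within `max_dist` of each other are treated as the same
    spatial parking place and averaged; unmatched detections from either
    sensor are kept as-is.
    """
    fused, used = [], set()
    for ux, uy in ultrasonic:
        match = None
        for j, (vx, vy) in enumerate(vision):
            if j not in used and ((ux - vx) ** 2 + (uy - vy) ** 2) ** 0.5 <= max_dist:
                match = j
                break
        if match is None:
            fused.append((ux, uy))          # ultrasonic-only detection
        else:
            vx, vy = vision[match]
            used.add(match)
            fused.append(((ux + vx) / 2, (uy + vy) / 2))  # fused position
    fused.extend(p for j, p in enumerate(vision) if j not in used)
    return fused
```

An availability check (e.g. against occupancy evidence from either sensor) would then filter this fused list down to the usable places.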
PROGRESS DETERMINATION SYSTEM, PROGRESS DETERMINATION METHOD, AND STORAGE MEDIUM
According to one embodiment, a progress determination system includes a first acquisition part and a second acquisition part. The first acquisition part acquires area data relating to area values of a plurality of colors from an image of an article, the article relating to a task. The second acquisition part acquires a classification result from a classifier by inputting the area data to the classifier. The classification result indicates a progress amount.
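The first acquisition part's area data can be sketched as per-color pixel fractions. The fixed palette and per-channel tolerance are assumptions; the abstract only says that area values of a plurality of colors are extracted from the article's image and fed to a classifier.

```python
import numpy as np

def color_area_data(image, palette, tol=30):
    """Return, for each palette color, the fraction of image pixels whose
    channels are all within `tol` of that color.

    `image` is an (H, W, 3) uint8 array; `palette` is a list of RGB
    triples for the colors whose areas indicate task progress.
    """
    h, w, _ = image.shape
    areas = []
    for color in palette:
        mask = np.all(np.abs(image.astype(int) - color) <= tol, axis=-1)
        areas.append(mask.sum() / (h * w))
    return areas
```

The resulting vector is what the second acquisition part would pass to the classifier to obtain the progress amount.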
OVERLAP-AWARE OPTICAL CHARACTER RECOGNITION
Solutions for more efficient and effective optical character recognition with respect to an input text image are disclosed. In one example, a method includes processing an input text image using a deep character overlap detection machine learning model in order to generate a character map for the input text image, an overlap map for the input text image, and an affinity map for the input text image; generating an overlap-aware word boundary recognition output based at least in part on the character map, the overlap map, and the affinity map, wherein the overlap-aware word boundary recognition output describes one or more inferred word regions of the input text image; and performing one or more prediction-based actions based at least in part on the overlap-aware word boundary recognition output.
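A much-simplified sketch of how the three maps might combine into word regions, reduced to one dimension along a text line. The thresholds and the rule (character/affinity scores join columns into a run, a high overlap score splits it) are assumptions; the actual model and combination logic are not specified by the abstract.

```python
import numpy as np

def word_regions(char_map, affinity_map, overlap_map,
                 char_thr=0.5, aff_thr=0.5, ovl_thr=0.5):
    """1-D sketch: a column belongs to a word run when its character or
    affinity score clears its threshold; runs are split where the overlap
    score is high, so overlapping characters separate into distinct
    inferred word regions. Returns [start, end) index pairs.
    """
    in_word = (char_map > char_thr) | (affinity_map > aff_thr)
    split = overlap_map > ovl_thr
    regions, start = [], None
    for i, flag in enumerate(in_word):
        if flag and not split[i]:
            if start is None:
                start = i
        else:
            if start is not None:
                regions.append((start, i))
                start = None
    if start is not None:
        regions.append((start, len(in_word)))
    return regions
```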
SYSTEMS AND METHODS FOR THE EFFICIENT DETECTION OF IMPROPERLY REDACTED ELECTRONIC DOCUMENTS
A method is provided for identifying improperly redacted information in documents. The documents are analyzed to detect redacted areas and text elements and to identify an intersection between a redacted area and a text element. When an area of the intersection is greater than an intersection threshold, the document is identified as containing improperly redacted information.
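The intersection test at the heart of the method can be sketched with axis-aligned boxes. The box representation and the unit of the threshold (square units of overlap) are assumptions; the abstract does not say how redacted areas and text elements are parameterized.

```python
def intersection_area(a, b):
    """Area of overlap between axis-aligned rectangles (x1, y1, x2, y2)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def improperly_redacted(redactions, text_boxes, threshold=0.0):
    """Flag a document when any detected text element overlaps any
    redacted area by more than `threshold`; surviving text under a
    redaction box suggests the underlying text was never removed."""
    return any(intersection_area(r, t) > threshold
               for r in redactions for t in text_boxes)
```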
SYSTEM AND METHOD TO DETERMINE THE AUTHENTICITY OF A SEAL
In one aspect, a computerized method for an anti-counterfeiting solution using a machine learning (ML) model includes the step of providing a pre-defined set of feature detection rules, a pre-defined set of edge detection rules, a pre-defined threshold percentage, an original seal, an original fingerprint of the original seal, and a pre-trained fingerprint identification model. The pre-trained fingerprint identification model is trained by a specified ML algorithm using one or more digital images of the original seal. With a digital camera of a scanning device, the method scans a seal whose authenticity is to be determined. The seal is used to secure a transportation container. The method uses the pre-defined set of feature detection rules to detect and extract a feature image at a specified position on the seal. The method breaks the extracted feature image of the seal down into k·n sub-images by forming a grid of k rows by n columns over the extracted feature image. The method implements the pre-defined set of edge detection rules to extract an edge structure of at least one object in each of the k·n sub-images. The method then generates a set of unique fingerprints: for each extracted edge structure, a unique fingerprint corresponding to a unique number or feature is generated. For the set of unique fingerprints, the method generates a match percentage using the pre-trained fingerprint identification model. The match percentage corresponds to the matching proportion between each unique fingerprint generated for the seal being verified and the original fingerprint of the original seal on which the pre-trained fingerprint identification model was trained.
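The grid-decomposition step, breaking the extracted feature image into k·n sub-images, can be sketched directly. Dropping pixels that do not divide evenly is an assumption; the method does not say how ragged edges are handled.

```python
import numpy as np

def split_into_grid(image, k, n):
    """Break an H x W feature image into k*n sub-images arranged as a grid
    of k rows by n columns, listed row-major. Trailing rows/columns that
    do not divide evenly are dropped (an assumption)."""
    h, w = image.shape[:2]
    sh, sw = h // k, w // n
    return [image[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            for r in range(k) for c in range(n)]
```

Each sub-image would then go through the edge detection rules to yield one edge structure, and hence one fingerprint, per grid cell.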
CHARACTER RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method, apparatus, electronic device, and storage medium for character recognition are provided. The method may perform image processing on an acquired original image to obtain a region to be recognized. The region may include a character. The method may determine an area ratio of the region to be recognized on the original image. The method may determine an angle between the region to be recognized and a preset direction. The method may determine a character density of the region to be recognized. The method may perform character recognition on the character in the region to be recognized in response to determining that the area ratio is greater than a ratio threshold, the angle is less than an angle threshold, and the character density is less than a density threshold.
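The three-way gate that decides whether to run character recognition can be sketched as a single predicate. The threshold values below are illustrative assumptions; the method only specifies the direction of each comparison.

```python
def should_recognize(area_ratio, angle_deg, char_density,
                     ratio_thr=0.05, angle_thr=15.0, density_thr=0.8):
    """Recognize the region's characters only when the region is large
    enough relative to the original image, roughly aligned with the
    preset direction, and not too densely packed with characters.
    Threshold defaults are assumed for illustration."""
    return (area_ratio > ratio_thr
            and angle_deg < angle_thr
            and char_density < density_thr)
```

Regions failing any test are skipped, which is how the method avoids spending recognition effort on small, skewed, or cluttered regions.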
COMPUTER-IMPLEMENTED METHOD FOR EXTRACTING CONTENT FROM A PHYSICAL WRITING SURFACE
A computer-implemented method (300) for extracting content (302) from a physical writing surface (304), the method (300) comprising the steps of:
(a) receiving a reference frame (306) including image data relating to at least a portion of the physical writing surface (304), the image data including a set of data points;
(b) determining an extraction region (308), the extraction region (308) including a subset of the set of data points from which content (302) is to be extracted;
(c) extracting content (302) from the extraction region (308) and writing the content (302) to a display frame (394);
(d) receiving a subsequent frame (406) including subsequent image data relating to at least a portion of the physical writing surface (304), the subsequent image data including a subsequent set of data points;
(e) determining a subsequent extraction region (408), the subsequent extraction region (408) including a subset of the subsequent set of data points from which content (402) is to be extracted; and
(f) extracting subsequent content (402) from the subsequent extraction region (408) and writing the subsequent content (402) to the display frame (394).
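Steps (a) to (f) above reduce to the same loop applied to the reference frame and each subsequent frame. The sketch below abstracts region determination and content extraction behind hypothetical callables, and stands in a list for the display frame (394); the claim does not specify either operation's internals.

```python
def extract_to_display(frames, find_region, extract):
    """Process the reference frame and subsequent frames per steps (a)-(f):
    for each frame, determine its extraction region (steps (b)/(e)) and
    write the extracted content to a shared display frame (steps (c)/(f)).

    `find_region` and `extract` are hypothetical callables standing in for
    the method's region determination and content extraction.
    """
    display_frame = []
    for frame in frames:
        region = find_region(frame)
        display_frame.append(extract(frame, region))
    return display_frame
```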