Patent classifications
G06V30/19007
LEARNING DEVICE ESTIMATING APPARATUS, LEARNING DEVICE ESTIMATING METHOD, RISK EVALUATION APPARATUS, RISK EVALUATION METHOD, AND PROGRAM
A learning device estimating apparatus targets a learning device as an attack target and comprises a recording part, an inquiring part, a capturing part, and a learning part. A predetermined plurality of pieces of observation data are recorded in the recording part. The inquiring part queries the attack-target learning device with each piece of observation data recorded in the recording part to acquire label data, and records the acquired label data in the recording part in association with the observation data. The capturing part inputs the observation data, and the label data associated with that observation data, to the learning part. The learning part is characterized by using an activation function that outputs a predetermined ambiguous value in the process of determining a classification prediction result, and performs learning using the inputted observation data and label data.
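The abstract does not specify the activation function, only that it can emit a predetermined ambiguous value instead of a class prediction. A minimal sketch, assuming the ambiguous value is returned when no class clearly dominates (the margin threshold and the `AMBIGUOUS` sentinel are illustrative assumptions):

```python
AMBIGUOUS = -1  # predetermined ambiguous value (assumed sentinel)

def ambiguous_argmax(logits, margin=0.1):
    """Return the index of the largest logit, or AMBIGUOUS when the
    top two logits are within `margin` of each other (assumed rule)."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if logits[best] - logits[runner_up] < margin:
        return AMBIGUOUS
    return best

print(ambiguous_argmax([0.9, 0.1, 0.05]))   # clear winner -> 0
print(ambiguous_argmax([0.51, 0.49, 0.0]))  # too close -> AMBIGUOUS
```

Returning an ambiguous value for borderline inputs limits the label information an attacker can harvest through queries, which is the risk-evaluation angle of the invention.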
Multi-dimensional table reproduction from image
Embodiments facilitate selection and assignment of a known user model, based upon input comprising table images of original data. A table engine receives the image and performs pre-processing (e.g., rasterization, Optical Character Recognition, coordinate representation) thereupon to identify image entities. After filtering original numerical data, a similarity (e.g., a distance) is calculated between an image entity and a dimension member of the known user model. Based upon this similarity, the table engine selects and assigns the known user model to the incoming table images, generating a file representing table columns and rows. This file is received at the UI of an analytics platform, which in turn populates the model with data of the user (rather than the original data) via an API. Embodiments may be particularly valuable in allowing a user to rapidly generate multi-dimensional tables comprising their own data, based upon raw table images received from an external party.
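The model-assignment step can be sketched as a string-similarity comparison between OCR'd image entities and each known model's dimension members. The model structure, member names, and use of edit-distance ratio below are illustrative assumptions, not details from the abstract:

```python
from difflib import SequenceMatcher

def similarity(entity, member):
    """Assumed similarity measure: normalized edit-distance ratio."""
    return SequenceMatcher(None, entity.lower(), member.lower()).ratio()

def assign_model(image_entities, known_models):
    """Pick the known model whose dimension members best match the
    entities recognized in the table image."""
    def score(model):
        return sum(max(similarity(e, m) for m in model["members"])
                   for e in image_entities)
    return max(known_models, key=score)

models = [
    {"name": "Sales", "members": ["Region", "Product", "Revenue"]},
    {"name": "HR", "members": ["Employee", "Department", "Salary"]},
]
entities = ["Regon", "Prodct", "Revenue"]  # noisy OCR output
print(assign_model(entities, models)["name"])  # -> Sales
```

A fuzzy measure rather than exact matching is what lets the engine tolerate OCR errors in the raw table images.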
SYSTEMS AND METHODS FOR PROVIDING EXTRACTION ON INDUSTRIAL DIAGRAMS AND GRAPHICS
A method to facilitate extraction of display objects for industrial diagrams is disclosed herein. The method comprises: receiving user input indicating a first display object within an industrial diagram; extracting the first display object to generate a first graphic extraction template; identifying one or more regions within the first graphic extraction template; masking text information within the one or more regions; linking each of the one or more regions with at least a portion of an object name of the first display object; extracting all the display objects, from the industrial diagram, that are of the type of the first display object using the first graphic extraction template to generate a first set of extracted graphic objects; and for each of the first set of extracted graphic objects, matching text information within each of the one or more regions with at least a portion of an object name.
Systems and methods for context-aware text extraction
Systems and methods are provided to perform context-aware text extraction.
PROCESSING FORMS USING ARTIFICIAL INTELLIGENCE MODELS
An application server may receive an input document including a set of input text fields and an input key phrase querying a value for a key-value pair that corresponds to one or more of the set of input text fields. The application server may extract, using an optical character recognition model, a set of character strings and a set of two-dimensional locations of the set of character strings on a layout of the input document. After extraction, the application server may input the extracted set of character strings and the set of two-dimensional locations into a machine learned model that is trained to compute a probability that a character string corresponds to the value for the key-value pair. The application server may then identify the value for the key-value pair corresponding to the input key phrase and may output the identified value.
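The selection step can be sketched as choosing the OCR'd string the model scores highest. The trained model is stubbed out here with a fixed probability list, and all names and sample values are illustrative assumptions:

```python
def pick_value(strings, locations, probabilities):
    """Return the character string (and its 2-D location) that the
    trained model scores as most likely to be the queried value."""
    best = max(range(len(strings)), key=lambda i: probabilities[i])
    return strings[best], locations[best]

# Hypothetical OCR output for a document queried with "invoice number":
strings = ["Invoice #", "4711", "Total", "$99.00"]
locations = [(10, 10), (120, 10), (10, 40), (120, 40)]
probs = [0.01, 0.97, 0.02, 0.15]  # stand-in for the model's probabilities
value, where = pick_value(strings, locations, probs)
print(value, where)  # 4711 (120, 10)
```

Feeding the model both the strings and their 2-D locations is what lets it learn layout cues, e.g. that a value usually sits to the right of or below its key phrase.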
Image pattern matching to robotic process automations
Disclosed herein is a computing system. The computing system includes a memory and a processor. The memory stores processor executable instructions for a workflow recommendation assistant engine. The processor is coupled to the memory. The processor executes the workflow recommendation assistant engine to cause the computing device to analyze images of a user interface corresponding to user activity, execute a pattern matching of the images with respect to existing automations, and provide a prompt indicating that an existing automation matches the user activity.
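The abstract does not specify how images are pattern-matched against existing automations; one common approach is to compare compact image signatures by Hamming distance. The bit-signature scheme, the threshold, and the automation names below are all assumptions for illustration:

```python
def hamming(a, b):
    """Number of positions at which two bit signatures differ."""
    return sum(x != y for x, y in zip(a, b))

def find_matching_automation(screen_sig, automations, max_dist=2):
    """Return the name of the first existing automation whose stored
    image signature is within `max_dist` bits of the screenshot's
    (threshold is an assumed value)."""
    for name, sig in automations.items():
        if hamming(screen_sig, sig) <= max_dist:
            return name
    return None

automations = {
    "Invoice entry": [1, 0, 1, 1, 0, 0, 1, 0],
    "Report export": [0, 1, 0, 0, 1, 1, 0, 1],
}
screen = [1, 0, 1, 1, 0, 1, 1, 0]  # one bit differs from "Invoice entry"
print(find_matching_automation(screen, automations))  # Invoice entry
```

When a match is found, the engine can prompt the user that an existing automation already covers the activity captured in the screenshots.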
IMAGE BASED POS SYSTEM FOR THE FAST FOOD INDUSTRY
An image based POS system for the fast food industry, wherein the visualized order information is processed through OCR, an HDMI port is used for video processing, data is retrieved from a separate database (a server), and the products served by the premises are matched against one or more OCR images.
SYSTEM AND METHOD FOR TRACKING WINE IN A WINE-CELLAR AND MONITORING INVENTORY
A wine bottle tracking system (100, 500) is described herein, comprising: a plurality of wine bottle storage locations (216) within a wine storage area (214); one or more wine bottles (218) stored in any of the wine bottle storage locations; at least one optical detector (202, 502, 504) adapted to generate image information that includes image data (212) of a first wine bottle as it is being stored into, or removed from, the wine bottle storage location, and wherein the at least one optical detector is further adapted to output the image information as a camera output signal (220); at least one processor (206) communicatively coupled to the at least one optical detector; and a memory (208) operatively connected with the at least one processor, wherein the memory stores computer-executable instructions (102, 104, 118) that, when executed by the at least one processor, cause the at least one processor to execute a method (300, 400, 600) that comprises: receiving, via the at least one optical detector, the camera output signal within a wine tracker application executing on the at least one processor; analyzing the camera output signal; and extracting wine information pertaining to the wine stored in the first wine bottle on the basis of the analysis of the camera output signal by the at least one processor and the wine tracker application. The systems and methods described herein can further recognize wine bottles and obtain information related to the wine in the wine bottles, and can further predict consumer behavior and/or consumption of wine.
USING MODEL UNCERTAINTY FOR CONTEXTUAL DECISION MAKING IN OPTICAL CHARACTER RECOGNITION
A system recognizes text in an input image. The system provides the input image to one or more optical character recognition (OCR) models to obtain predicted texts. The system applies a set of transformations to the input image and determines a set of candidate text predictions by performing text recognition on each transformed image. The system generates a regular expression based on the predicted characters of the candidate text predictions and the confidence score corresponding to each predicted character. The system matches the regular expression against text values in a database. The system selects one or more text values from the database based on the matching and returns the one or more text values as results of recognition of text of the input image.
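The regular-expression step can be sketched as keeping high-confidence characters literally and replacing low-confidence ones with wildcards before matching against known database values. The 0.8 confidence threshold and the sample data are assumptions, not values from the abstract:

```python
import re

def build_pattern(chars, confidences, threshold=0.8):
    """Keep confident characters literally; replace uncertain ones
    with '.' wildcards (threshold is an assumed value)."""
    return "".join(re.escape(c) if conf >= threshold else "."
                   for c, conf in zip(chars, confidences))

def lookup(chars, confidences, database):
    """Match the uncertainty-aware pattern against known text values."""
    pattern = re.compile("^" + build_pattern(chars, confidences) + "$")
    return [v for v in database if pattern.match(v)]

db = ["AB123", "AB128", "XY123"]
# OCR read "AB12B" but has low confidence in the last character:
print(lookup("AB12B", [0.99, 0.95, 0.9, 0.92, 0.3], db))  # ['AB123', 'AB128']
```

This turns per-character model uncertainty into a constrained database lookup, so a misread character can still resolve to valid stored values.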
METHOD AND SYSTEM FOR DETECTING AND EXTRACTING PRICE REGION FROM DIGITAL FLYERS AND PROMOTIONS
This disclosure relates generally to a method and system for detecting and extracting a price region from digital flyers and promotions. In the retail business, extracting price information from digital flyers is challenging due to the complex nature of flyers, which have a large variety of formats, color schemes, font styles, and variable text information. The method of the present disclosure detects a text region comprising price information from a set of digital flyers and promotions received as input images. Further, each text region is converted into a two-color text comprising a set of white pixels and a set of black pixels. Further, the underlying price is detected in the price region of the two-color text, and the price is extracted from the price region of each input image. Additionally, the price region detection function detects the price region accurately and extracts price values even when they have irregular font sizes.
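The two-color conversion step amounts to binarizing each detected text region. A minimal sketch, assuming a fixed grayscale threshold (the 128 cutoff is an assumed value; the disclosure does not give one):

```python
WHITE, BLACK = 255, 0

def to_two_color(region, threshold=128):
    """Binarize a grayscale region (list of rows of 0-255 intensities)
    into pure white and pure black pixels."""
    return [[WHITE if px >= threshold else BLACK for px in row]
            for row in region]

region = [[200, 30, 220],
          [25, 210, 40]]
print(to_two_color(region))  # [[255, 0, 255], [0, 255, 0]]
```

Reducing the region to two colors removes the flyers' varied color schemes, leaving only text shape for the subsequent price detection.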