Patent classifications
G06V30/24
Artificial intelligence for robust drug dilution detection
Techniques are provided for detecting diluted drugs using machine learning. Measurements and images corresponding to a product are obtained, wherein the product is formulated as a liquid, and wherein the measurements and images capture physical, spectral, optical, and/or chemical properties of the product. The measurements and images are provided to a machine learning model, wherein the machine learning model is trained using data generated from interactive learning modules (e.g., a generative adversarial network). The machine learning model detects whether the product is a real or counterfeit product. In addition, these techniques may be used by practitioners (e.g., medical personnel dispensing a prescribed dosage of a drug with a specific dilution level) to detect prescription errors at the point of administration.
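A minimal sketch of the detection step described above. The abstract's model is trained on GAN-augmented data; here a simple tolerance check against hypothetical reference properties (the `REFERENCE` values and property names are invented for illustration) stands in for that trained classifier.

```python
# Hypothetical reference properties for the genuine liquid product.
REFERENCE = {"absorbance": 0.82, "refractive_index": 1.334, "density": 1.05}

def detect_dilution(sample, tolerance=0.05):
    """Flag a liquid sample whose measured properties deviate beyond tolerance.

    Stand-in for the trained machine learning model in the abstract.
    """
    for prop, expected in REFERENCE.items():
        if abs(sample[prop] - expected) / expected > tolerance:
            return "suspect"
    return "genuine"

# A diluted sample: absorbance drops well below the reference value.
diluted = {"absorbance": 0.55, "refractive_index": 1.333, "density": 1.01}
verdict = detect_dilution(diluted)  # "suspect"
```

In practice the decision boundary would be learned from spectral/optical measurements rather than fixed per-property tolerances, but the input/output shape is the same.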
SYSTEMS, METHODS, AND APPARATUSES FOR IMAGE-TO-TEXT CONVERSION AND DATA STRUCTURING
A computer system for image-to-text conversion and data structuring may include one or more processors, one or more computer-readable memories, one or more computer-readable storage devices, and program instructions stored on at least one of the one or more storage devices for execution by at least one of the one or more processors via at least one of the one or more memories. The stored program instructions may include receiving a screenshot of a source; storing the screenshot on the one or more computer-readable storage devices; converting the screenshot, via OCR, into at least one string of computer-readable text; building a dataset; or flagging at least one string of computer-readable text, based upon one or more configured parameters. The dataset may include each of the at least one string of computer-readable text, sorted into at least one bucket. The at least one bucket may correspond to a variable type.
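The dataset-building and flagging steps can be sketched as follows. The OCR step (screenshot to strings) is assumed to have run upstream; the bucket names, regex patterns, and flagging parameters below are invented for illustration.

```python
import re

# Hypothetical variable-type buckets, each defined by a pattern.
# Order matters: each string lands in the first bucket that matches.
BUCKET_PATTERNS = {
    "date":   re.compile(r"^\d{4}-\d{2}-\d{2}$"),
    "number": re.compile(r"^-?\d+(\.\d+)?$"),
    "text":   re.compile(r".*"),  # fallback bucket
}

def build_dataset(strings):
    """Sort each OCR string into the first bucket whose pattern matches."""
    dataset = {name: [] for name in BUCKET_PATTERNS}
    for s in strings:
        for name, pattern in BUCKET_PATTERNS.items():
            if pattern.match(s):
                dataset[name].append(s)
                break
    return dataset

def flag(dataset, params):
    """Flag strings matching configured parameters (here: substrings)."""
    return [s for bucket in dataset.values() for s in bucket
            if any(p in s for p in params)]

strings = ["2024-01-15", "42.5", "Total due", "OVERDUE notice"]
ds = build_dataset(strings)
flags = flag(ds, params=["OVERDUE"])  # ["OVERDUE notice"]
```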
Recognition and indication of discrete patterns within a scene or image
A method of image analysis is provided for recognition of a pattern in an image. The method includes receiving a plurality of images acquired by a camera, where the plurality of images include a plurality of optical patterns in an arrangement. The method also includes matching the arrangement to a pattern template, wherein the pattern template is a predefined arrangement of optical patterns. The method also includes identifying an optical pattern of the plurality of optical patterns as a selected optical pattern based on a position of the selected optical pattern in the arrangement. The method also includes decoding the selected optical pattern to generate an object identifier and storing the object identifier in a memory device.
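The template-matching, selection, and decoding steps can be sketched as below. Optical patterns are modeled as (grid position, payload) pairs, and `decode()` is a stand-in for a real symbology decoder; the template layout and payloads are invented for illustration.

```python
# Predefined arrangement: a 2x2 grid, with the pattern of interest at (1, 0).
PATTERN_TEMPLATE = {"rows": 2, "cols": 2, "target": (1, 0)}

def matches_template(arrangement, template):
    """Check that detected pattern positions fill the template's grid."""
    positions = {pos for pos, _ in arrangement}
    expected = {(r, c) for r in range(template["rows"])
                       for c in range(template["cols"])}
    return positions == expected

def decode(payload):
    """Stand-in for decoding an optical pattern into an object identifier."""
    return payload.strip().upper()

def select_and_decode(arrangement, template):
    """Select the pattern at the template's target position and decode it."""
    if not matches_template(arrangement, template):
        return None
    for pos, payload in arrangement:
        if pos == template["target"]:
            return decode(payload)
    return None

arrangement = [((0, 0), "lot-a"), ((0, 1), "lot-b"),
               ((1, 0), " obj-17 "), ((1, 1), "lot-d")]
object_id = select_and_decode(arrangement, PATTERN_TEMPLATE)  # "OBJ-17"
```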
Systems and methods for identifying data processing activities based on data discovery results
Aspects of the present invention provide methods, apparatuses, systems, computing devices, computing entities, and/or the like for identifying data processing activities associated with various data assets based on data discovery results. In accordance with various aspects, a method is provided comprising: identifying and scanning data assets to detect a subset of the data assets, wherein each asset of the subset is associated with a particular data element used for target data; generating a prediction for each pair of data assets of the subset on the target data flowing between the pair; identifying a data flow for the target data based on the prediction generated for each pair; and identifying a data processing activity associated with handling the target data based on a correlation identified for the particular data element, the subset, and/or the data flow with a known data element, subset, and/or data flow for the data processing activity.
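The pairwise prediction and flow-identification steps can be sketched as below. `predict_flow()` stands in for the trained model described in the abstract; the asset names and the collector/store heuristic are invented for illustration.

```python
from itertools import permutations

def predict_flow(src, dst):
    """Stand-in predictor: assume data flows from collection points to stores."""
    return src["role"] == "collector" and dst["role"] == "store"

def identify_data_flow(assets):
    """Predict flow for each ordered pair of assets; keep predicted edges."""
    edges = []
    for src, dst in permutations(assets, 2):
        if predict_flow(src, dst):
            edges.append((src["name"], dst["name"]))
    return edges

# Hypothetical subset of assets that all hold the target data element.
assets = [
    {"name": "signup-form", "role": "collector"},
    {"name": "crm-db",      "role": "store"},
    {"name": "warehouse",   "role": "store"},
]
flow = identify_data_flow(assets)  # edges from signup-form to both stores
```

The resulting edge list is the "data flow" against which a known data processing activity would then be correlated.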
Dynamically representing a changing environment over a communications channel
In accordance with certain implementations of the present approach, a reduced, element-by-element, data set is transmitted between a robot having a sensor suite and a control system remote from the robot that is configured to display a representation of the environment local to the robot. Such a scheme may be useful in allowing a human operator remote from the robot to perform an inspection using the robot while the robot is on-site with an asset and the operator is off-site. In accordance with the present approach, an accurate representation of the environment in which the robot is situated is provided for the operator to interact with.
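The reduced, element-by-element transmission can be sketched as a delta update: rather than streaming the full environment model every cycle, the robot sends only the elements that changed, and the remote control system patches its local copy. The element names and values below are invented for illustration.

```python
def diff_elements(previous, current):
    """Return only the elements added or changed since the last transmission."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

def apply_update(model, update):
    """Patch the remote representation with the transmitted elements."""
    model.update(update)
    return model

# Robot-side environment model at two successive cycles.
prev = {"pipe_A": "intact", "valve_3": "closed"}
curr = {"pipe_A": "intact", "valve_3": "open", "gauge_7": 42}

update = diff_elements(prev, curr)          # only the two changed elements
remote = apply_update(dict(prev), update)   # remote view now matches curr
```

Only `update` crosses the communications channel, which is what keeps the representation current without retransmitting unchanged elements.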
Identifying versions of a form
Disclosed are a method and apparatus for identifying versions of a form. In an example, clients of a medical company fill out many forms, and many of these forms have multiple versions. The medical company operates in 10 states, and each state has a different version of a client intake form, as well as of an insurance identification form. In order to automatically extract information from a particular filled out form, it may be helpful to identify a particular form template, as well as the version of the form template, of which the filled out form is an instance. A computer system evaluates images of filled out forms, and identifies various form templates and versions of form templates based on the images.
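Version identification can be sketched as matching a form's layout signature against known template versions. Here the signature is simply the set of detected field labels (a real system would use image features); the template names and labels are invented for illustration.

```python
# Hypothetical (template, version) -> expected field labels.
TEMPLATE_VERSIONS = {
    ("intake", "v1"): {"Name", "DOB", "Insurer"},
    ("intake", "v2"): {"Name", "DOB", "Insurer", "Email"},
}

def identify_version(detected_labels):
    """Return the template version whose labels best match the form's."""
    best, best_score = None, None
    for key, labels in TEMPLATE_VERSIONS.items():
        # Reward shared labels, penalize labels present in only one set.
        score = len(labels & detected_labels) - len(labels ^ detected_labels)
        if best_score is None or score > best_score:
            best, best_score = key, score
    return best

form_labels = {"Name", "DOB", "Insurer", "Email"}
template, version = identify_version(form_labels)  # ("intake", "v2")
```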
UNIFIED FRAMEWORK FOR ANALYSIS AND RECOGNITION OF IDENTITY DOCUMENTS
Unified framework for analysis and recognition of identity documents. In an embodiment, an image is received. A document is located in the image and an attempt is made to identify one or more of a plurality of templates that match the document. When template(s) that match the document are identified, for each of the template(s) and for each of one or more zones in the template, a sub-image of the zone is extracted from the image. For each extracted sub-image, one or more objects are extracted from the sub-image. For each extracted object, object recognition is performed. This may be done over one iteration (e.g., for a scanned image or photograph) or a plurality of iterations (e.g., for a video). Document recognition is performed based on the one or more templates and the results of the object recognition, and a final document-recognition result is output.
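The zone-extraction loop can be sketched as below: for a matched template, each declared zone is cropped out of the document image and recognition runs on the sub-image. Images are plain 2-D character arrays and `recognize()` stands in for a real OCR engine; the template name and zone boxes are invented for illustration.

```python
# Hypothetical template: named zones as (top, left, bottom, right) boxes.
TEMPLATES = {
    "id_card_v1": {"zones": {"name": (0, 0, 1, 4), "number": (1, 0, 2, 4)}},
}

def crop(image, box):
    """Extract the sub-image for one zone."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

def recognize(sub_image):
    """Stand-in OCR: join the characters in the sub-image."""
    return "".join("".join(row) for row in sub_image)

def recognize_document(image, template_name):
    """Run per-zone extraction and recognition for one matched template."""
    zones = TEMPLATES[template_name]["zones"]
    return {field: recognize(crop(image, box)) for field, box in zones.items()}

image = [list("JANE"), list("1234")]
result = recognize_document(image, "id_card_v1")
# {"name": "JANE", "number": "1234"}
```

Over a video, this loop would repeat per frame and the per-zone results would be merged into the final document-recognition result.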
Method and apparatus for video super resolution using convolutional neural network with two-stage motion compensation
A method and an apparatus are provided. The method includes receiving a video with a first plurality of frames having a first resolution; generating a plurality of warped frames from the first plurality of frames based on a first type of motion compensation; generating a second plurality of frames having a second resolution, wherein the second resolution is of higher resolution than the first resolution, wherein each of the second plurality of frames having the second resolution is derived from a subset of the plurality of warped frames using a convolutional network; and generating a third plurality of frames having the second resolution based on a second type of motion compensation, wherein each of the third plurality of frames having the second resolution is derived by fusing a subset of the second plurality of frames.
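The two-stage pipeline can be sketched as a skeleton: warp low-resolution frames (first motion compensation), upscale each warped window (the abstract's convolutional network, replaced here by a nearest-neighbor upsampler), then fuse neighboring high-resolution frames (second motion compensation, replaced by a pixel-wise average). All three stand-ins and the toy frames are assumptions for illustration.

```python
def warp(frame, flow=None):
    """Stand-in for flow-based warping toward a reference frame."""
    return frame

def upscale(window, scale=2):
    """Stand-in for the CNN: nearest-neighbor upsample the center frame."""
    center = window[len(window) // 2]
    out = []
    for row in center:
        wide = [v for v in row for _ in range(scale)]
        out.extend([wide] * scale)
    return out

def fuse(frames):
    """Stand-in for the second compensation stage: pixel-wise average."""
    n = len(frames)
    return [[sum(f[i][j] for f in frames) / n
             for j in range(len(frames[0][0]))]
            for i in range(len(frames[0]))]

video = [[[1, 2], [3, 4]]] * 3            # three identical 2x2 frames
warped = [warp(f) for f in video]          # stage 1: motion compensation
hi_res = upscale(warped)                   # stage 2: CNN upscaling, 4x4 output
final = fuse([hi_res, hi_res])             # stage 3: fuse high-res frames
```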
Generating digital images utilizing high-resolution sparse attention and semantic layout manipulation neural networks
This disclosure describes one or more implementations of a digital image semantic layout manipulation system that generates refined digital images resembling the style of one or more input images while following the structure of an edited semantic layout. For example, in various implementations, the digital image semantic layout manipulation system builds and utilizes a sparse attention warped image neural network to generate high-resolution warped images and a digital image layout neural network to enhance and refine the high-resolution warped digital image into a realistic and accurate refined digital image.