Patent classifications
G06V30/40
ALGORITHMIC SUGGESTIONS BASED ON A UNIVERSAL DATA SCAFFOLD
User information is protected by providing a protective layer between a provider and a user device. A server receives, from a third party such as a provider of goods or services that wants to push a suggestion to the user device, a suggestion to present to the user device. The suggestion includes a request for user information. The server then determines a likelihood that the request for user information is a necessary component of the suggestion. When the likelihood is low, the request is removed from the suggestion. When the likelihood is high, the server creates executable computer code that includes the request. The executable computer code can be transmitted to the user device, which presents the suggestion without disclosing the user's information to the server.
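The server-side filtering step described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the field names, the per-request `necessity` scores (which the abstract says come from a likelihood determination), and the 0.5 threshold are all invented for the example.

```python
def filter_suggestion(suggestion, necessity_threshold=0.5):
    """Drop user-information requests unlikely to be necessary.

    Requests that survive are packaged for client-side execution, so
    the server never sees the user's actual information.
    """
    kept, removed = [], []
    for request in suggestion["info_requests"]:
        # 'necessity' stands in for the server's likelihood estimate.
        if request["necessity"] >= necessity_threshold:
            kept.append(request)
        else:
            removed.append(request)
    return {"text": suggestion["text"], "info_requests": kept}, removed


suggestion = {
    "text": "Coffee shops near you",
    "info_requests": [
        {"field": "location", "necessity": 0.9},
        {"field": "contacts", "necessity": 0.1},
    ],
}
filtered, dropped = filter_suggestion(suggestion)
```

Under this sketch, the low-likelihood `contacts` request is stripped before the suggestion ever reaches the device.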
WEARABLE DEVICE FOR PROVIDING MULTI-MODALITY AND OPERATION METHOD THEREOF
Provided are a wearable device for providing a multi-modality, and an operation method of the wearable device. The operation method of the wearable device includes: obtaining source data including at least one of image data, text data, or sound data; determining whether the image data, the text data, and the sound data are included in the source data; based on determining that at least one of the image data, the text data, or the sound data is not included in the source data, generating the image data, the text data, or the sound data that is not included in the source data by using a generator of a generative adversarial network (GAN) that receives the source data as an input; generating a pulse-width modulation (PWM) signal based on the sound data; and outputting the multi-modality based on the image data, the text data, the sound data, and the PWM signal.
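The modality-completion flow can be sketched structurally as below. The `generator` argument stands in for the GAN generator described in the abstract; the stub here is a hypothetical placeholder, and the modality names are assumptions for illustration.

```python
MODALITIES = ("image", "text", "sound")


def complete_modalities(source, generator):
    """Fill in any modality absent from the source data.

    In the described device, the generator call would be a GAN
    generator conditioned on the available source data.
    """
    out = dict(source)
    for modality in MODALITIES:
        if modality not in out:
            out[modality] = generator(modality, source)
    return out


def stub_generator(modality, source):
    # Placeholder for the GAN; returns a tagged dummy value.
    return f"<generated {modality}>"


data = complete_modalities({"text": "hello"}, stub_generator)
```

After this step, all three modalities are present, so the subsequent PWM generation and multi-modality output always have complete inputs.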
Methods and systems for data retrieval from an image
Various embodiments illustrated herein disclose a method that includes receiving a plurality of images from an image capturing unit. Thereafter, an image evaluation process is executed on each of a plurality of sections in each of the plurality of images. The image evaluation process includes performing optical character recognition (OCR) on each of the plurality of sections to generate text corresponding to the respective sections, querying a linguistic database to identify one or more errors in the generated text, and calculating a statistical score based on the identified errors. Further, the method includes modifying one or more image characteristics of each of the plurality of images and repeating the execution of the image evaluation process on the modified plurality of images until at least the calculated statistical score is less than a pre-defined statistical score threshold.
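The evaluate-and-retry loop described above can be sketched as follows. The `ocr`, `count_errors`, and `enhance` callables are hypothetical stand-ins for the OCR engine, the linguistic-database lookup, and the image-characteristic adjustment; the score here is simply the total error count, and the iteration cap is an added safety assumption.

```python
def evaluate(images, ocr, count_errors, enhance, threshold, max_rounds=10):
    """Re-process images until the error score drops below threshold."""
    texts = []
    for _ in range(max_rounds):
        # OCR every image, then score the output via the error check.
        texts = [ocr(img) for img in images]
        score = sum(count_errors(text) for text in texts)
        if score < threshold:
            break
        # Adjust image characteristics (e.g. contrast) and try again.
        images = [enhance(img) for img in images]
    return texts
```

A toy run: if images are represented by an integer "quality", OCR yields longer (better) text for higher quality, and enhancement bumps quality by one, the loop converges once the text is long enough to score below the threshold.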
Deep feature extraction and training tools and associated methods
Deep feature extraction and training tools and processes may facilitate extraction and understanding of deep features utilized by deep learning models. For example, imaging data may be tessellated and masked to generate a plurality of masked images. The masked images may be processed by a deep learning model to generate a plurality of masked outputs. The masked outputs may be aggregated for each cell of the tessellated image and compared to an original output for the imaging data from the deep learning model. Individual cells and associated image regions having masked outputs that correspond to the original output may comprise deep features utilized by the deep learning model.
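The mask-and-compare procedure can be sketched as below, per the abstract's logic that cells whose masked outputs correspond to the original output comprise deep features. Treating the tessellated image as a flat list of cells, the scalar model output, the tolerance, and the toy model are all illustrative assumptions.

```python
def find_deep_features(cells, model, tol=1e-6):
    """Return indices of cells whose masked output matches the original.

    Each cell is masked in turn (set to None here), the model is re-run,
    and the masked output is compared against the unmasked output.
    """
    original = model(cells)
    features = []
    for idx in range(len(cells)):
        masked = [None if i == idx else c for i, c in enumerate(cells)]
        if abs(model(masked) - original) <= tol:
            features.append(idx)
    return features


def toy_model(cells):
    # A toy "deep learning model" that only reads the middle cell.
    return cells[1] if cells[1] is not None else 0


features = find_deep_features([1, 2, 3], toy_model)
```

With the toy model, masking cells 0 or 2 leaves the output unchanged, so those indices are reported; masking cell 1 changes the output, so it is not.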
NOTARIZATION MOBILE APPLICATION SYSTEM AND METHOD
A notarization system for use in notarizing a document, the system comprising: a handheld notarization device having a printer; and a notarization application configured to be accessible only by a primary user and to operate on a mobile device having a camera, the notarization application being programmed to digitally read identification information from an identification card of a secondary user received via the camera, so as to verify the identity of the secondary user and authenticate the identification card, digitally scan and generate a digital line drawing of a fingerprint of the secondary user received via the camera, and digitally generate a notarization endorsement and cause printing of the notarization endorsement onto a tamper-resistant sticker via the printer, wherein, when the notarization application is being accessed by the primary user, and upon signing of the document by the secondary user, the notarization application causes the printing of the notarization endorsement.
INTELLIGENT IMAGE SEGMENTATION PRIOR TO OPTICAL CHARACTER RECOGNITION (OCR)
A medical device monitoring system and method extract information from screen images from medical device controllers, with a single OCR process invocation per screen image, despite critical information appearing in different screen locations, depending on which medical device controller's screen image is processed. For example, different software versions of the medical device controllers might display the same type of information in different screen locations. Copies of the critical screen information, one copy from each different screen location, are made in a mosaic image, and then the mosaic image is OCR processed to produce text results. Text is selectively extracted from the OCR text results, depending on contents of a selector field on the screen image, such as a software version number or a heart pump model identifier.
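The mosaic technique can be sketched as follows. Regions are modeled as dictionary keys rather than pixel rectangles, `ocr_all` stands in for the single OCR invocation over the mosaic, and the selector-to-slot mapping (`layout_for`) with version strings `v1`/`v2` is invented for the example.

```python
def extract_value(screen, candidate_regions, ocr_all, layout_for):
    """Single-OCR extraction: mosaic all candidate regions, then select one."""
    # Copy the critical information from each possible screen location
    # into one mosaic, with the selector field riding along.
    mosaic = [screen[region] for region in candidate_regions]
    mosaic.append(screen["version"])
    # One OCR invocation over the whole mosaic.
    texts = ocr_all(mosaic)
    version = texts[-1]
    # The selector value (e.g. software version) determines which
    # mosaic slot holds the real data for this controller.
    return texts[layout_for[version]]


screen = {"top": "FLOW 4.2", "bottom": "ALARM OFF", "version": "v2"}
value = extract_value(
    screen,
    candidate_regions=["top", "bottom"],
    ocr_all=lambda patches: list(patches),  # identity stand-in for OCR
    layout_for={"v1": 0, "v2": 1},
)
```

The point of the design is in step two: however many candidate locations exist across controller versions, OCR runs exactly once per screen image.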
Artificial intelligence (AI) based document processor
An Artificial Intelligence (AI) based document processing system receives a request including one or more of a message and documents related to a process to be automatically executed. A process identifier is extracted and used to retrieve guidelines for the automatic execution of the document processing task. Machine Learning (ML) models, each corresponding to a guideline, are used to extract data responsive to the guidelines. Based on whether the responsive data meets an approval threshold, and once the automatic document processing task is executed, one or more of a recommendation to accept or reject the request and a corresponding letter can be automatically generated.
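The per-guideline evaluation can be sketched as below. The one-model-per-guideline mapping follows the abstract, but the per-guideline scores, their averaging, and the threshold value are assumptions made for illustration.

```python
def process_request(message, guidelines, models, approval_threshold):
    """Run one ML model per guideline and recommend accept or reject."""
    # Each guideline has its own model extracting responsive data;
    # here each stub model returns a compliance score in [0, 1].
    results = {g: models[g](message) for g in guidelines}
    score = sum(results.values()) / len(results)
    decision = "accept" if score >= approval_threshold else "reject"
    return decision, results


stub_models = {
    "signature_present": lambda msg: 1.0,   # hypothetical guideline checks
    "amount_within_limit": lambda msg: 0.0,
}
decision, results = process_request(
    "claim #123", list(stub_models), stub_models, approval_threshold=0.4
)
```

The returned `results` could then feed the automatically generated acceptance or rejection letter mentioned in the abstract.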