Patent classifications
G06V30/1456
Approach for cloud EMR communication via a content parsing engine and a storage service
An approach is provided for sending captured Superbill image data, together with output data generated from the results of parsing that image data, to an external system which manages Superbill data, via a cloud system and a storage service. The cloud system creates parsing rule data for parsing a captured Superbill image in accordance with user operation at a client device. In response to receiving a notification indicating that captured Superbill image data has been stored in the storage service, the cloud system obtains the image data from the storage service. The cloud system parses the obtained image data based on the created parsing rule and generates output data from the parsing results. The cloud system then sends the generated output data and the obtained image data to the external system via one or more computer networks.
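The notification-driven flow above can be sketched as follows. This is a minimal illustration only: the names (`make_parsing_rule`, `on_image_stored`) and the dict-based stand-in for image data are assumptions, not from the patent.

```python
# Hypothetical sketch of the notification-driven parsing flow.
# The "image" here is a toy: a mapping from region id to recognised text.

def make_parsing_rule(field_regions):
    """A parsing rule maps output field names to regions of the image."""
    return dict(field_regions)

def on_image_stored(storage, image_id, rule):
    """Called when the storage service notifies that an image was stored:
    fetch the image, parse it by the rule, and build the output data."""
    image = storage[image_id]
    fields = {name: image.get(region) for name, region in rule.items()}
    return {"image_id": image_id, "fields": fields}, image
```

Both return values would then be forwarded to the external system that manages the Superbill data.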
SYSTEMS AND METHODS FOR OBSCURING RESTRICTED TEXT AND/OR IMAGES IN A VIDEO CONFERENCING SESSION
Systems and methods for obscuring images and/or text during a screen sharing operation in a video conferencing session are described herein. In some embodiments, a client device detects a screen sharing operation. As part of the screen sharing operation, the client device captures an image of a display. The client device recognizes images and/or text in the image of the display and determines whether any of the images and/or text are restricted. If the images and/or text are determined to be restricted, the client device obscures the images and/or text prior to encoding of the image of the display for transmission.
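The obscuring step can be sketched as below, assuming the frame is a 2-D list of pixel values and the recogniser yields `(text, x0, y0, x1, y1)` boxes; the `RESTRICTED_TERMS` policy list is an illustrative assumption.

```python
# Minimal sketch: blank out detected boxes whose recognised text is
# restricted, so the obscured frame is what reaches the video encoder.

RESTRICTED_TERMS = {"password", "ssn"}  # assumed policy, not from the patent

def obscure_restricted(frame, detections):
    for text, x0, y0, x1, y1 in detections:
        if text.lower() in RESTRICTED_TERMS:
            for y in range(y0, y1):
                for x in range(x0, x1):
                    frame[y][x] = 0  # solid fill; a blur would also work
    return frame
```

The key design point in the abstract is ordering: obscuring happens before encoding, so the restricted content never leaves the client device.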
Method and terminal for recognizing text
Provided is a method of recognizing text in a terminal, the method including: when a user interface (UI) for inputting text displayed on the terminal is executed, generating first tag information about the kind of language set in the UI and the location of the cursor at the time point when text input started; when a language switch request asking the terminal to switch the kind of language set in the UI is received, generating second tag information about the kind of switched language and the location of the cursor at the time point of receiving the language switch request; when the text input is finished, storing a screen image of the terminal; and recognizing the text input to the terminal based on at least one piece of tag information and the screen image.
Apparatus for setting file name and the like for scan image, control method thereof, and storage medium
By using a character recognition result of a scan image, a user can set supplementary information such as a file name for the scan image with simple operation. There is provided an apparatus for performing a predetermined process on a scan image obtained by scanning a document, including: a display control unit configured to display a UI screen for performing the predetermined process, the UI screen displaying a character area in the scan image in a selectable manner to a user; and a setting unit configured to perform OCR processing on a character area selected by a user via the UI screen and set supplementary information for the predetermined process by using a character string extracted in the OCR processing, wherein, in a case where a user selects a plurality of character areas, the setting unit determines whether a delimiter should be inserted between the extracted character strings based on a positional relation between the plurality of selected character areas and, if it is determined that a delimiter should be inserted, inserts a delimiter between the extracted character strings.
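The delimiter decision can be sketched as follows, assuming each selected character area is `(text, x, y, width, height)`; the threshold value and same-line test are illustrative assumptions, not the patent's method.

```python
# Sketch: insert a delimiter between extracted strings only when two
# consecutively selected areas lie on different lines or are separated
# horizontally by more than a small gap.

def join_selected_areas(areas, gap_threshold=10, delimiter="_"):
    parts = [areas[0][0]]
    for (_, px, py, pw, ph), (text, x, y, _, _) in zip(areas, areas[1:]):
        different_line = abs(y - py) > ph          # crude same-line test
        wide_gap = x - (px + pw) > gap_threshold
        parts.append(delimiter + text if (different_line or wide_gap) else text)
    return "".join(parts)
```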
Multi Receipt Detection
An information processing method and apparatus are provided for: obtaining a captured image; detecting character regions in the captured image; performing association processing between expense type information and expense amount information, each specified from one or more receipts that are identified in the captured image using the character region detection result; and outputting an expense report obtained based on the association processing between the merchant information and the expense amount information of each of the one or more receipts.
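The association step can be sketched as below. The tuple layout (receipt id, kind, value) and field names are assumptions for illustration; in practice the receipt id would come from grouping detected character regions by receipt.

```python
# Toy sketch of the association processing: each region already carries
# the receipt it was detected on and the kind of value it holds.

def build_expense_report(regions):
    receipts = {}
    for receipt_id, kind, value in regions:
        receipts.setdefault(receipt_id, {})[kind] = value
    return [
        {"receipt": rid,
         "merchant": info.get("merchant"),
         "amount": info.get("amount")}
        for rid, info in sorted(receipts.items())
    ]
```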
Information processing device and method performing character recognition on document image data masked or not based on text image count
An information processing device performs processing on document image data including first image data to undergo character recognition processing and second image data not to undergo character recognition processing. The information processing device includes a detecting section which detects the first image data, an extracting section which extracts the first image data, and a processing section. The processing section includes a counting section which counts the first images, a determining section which determines whether the number of first images exceeds a threshold, a first performing section which performs first processing when the threshold is exceeded, and a second performing section which performs second processing when it is not. In the first processing, the second images are masked with the background color of the document image and character recognition is then performed on the document image as a whole. In the second processing, character recognition is performed on the first images individually.
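The threshold decision can be sketched as follows, with the document modelled as a list of items ('text' items undergo recognition, 'graphic' items do not) and a toy "OCR" that simply reads stored content; this modelling is an assumption for illustration.

```python
# Sketch of the count-based branch: many text images -> mask graphics
# and recognise the page once; few text images -> recognise each alone.

def process_document(doc, threshold):
    text_items = [item for item in doc if item["kind"] == "text"]
    if len(text_items) > threshold:
        # first processing: mask non-text items with the background
        # colour (here modelled as empty content), then OCR the page
        masked = [item if item["kind"] == "text"
                  else {"kind": "graphic", "content": ""}
                  for item in doc]
        return " ".join(i["content"] for i in masked if i["content"])
    # second processing: recognise each text image individually
    return [item["content"] for item in text_items]
```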
Presenting captured data
For presenting data captured from a first user interface while the user is looking at a second user interface, methods, apparatus, and systems are disclosed. One apparatus includes a processor and a memory that stores code executable by the processor. Here, the processor detects a switch from a first user interface to a second user interface and captures data from the first user interface. Additionally, the processor presents the captured data in the second user interface.
HANDWRITING INPUT APPARATUS, HANDWRITING INPUT METHOD, PROGRAM, AND INPUT SYSTEM
A handwriting input apparatus that displays stroke data handwritten based on a position of an input unit contacting a touch panel includes circuitry configured to implement: a handwriting recognition control unit for recognizing stroke data and converting it into text data; an authentication control unit for authenticating a user based on the stroke data; and a display unit for displaying, together with the text data, a display component for receiving a signature when the authentication control unit determines that the user has been successfully authenticated.
Method and apparatus for recognizing characters
A method and an apparatus for recognizing characters using an image are provided. A camera is activated in response to a character recognition request, and a preview mode is set for displaying images photographed through the camera in real time. The camera's auto focus is controlled, and an image with a predetermined level of clarity is obtained for character recognition from the images captured in the preview mode. The image is processed for character recognition so as to extract recognition result data, and a final recognized character row is derived that excludes non-character data from the recognition result data. A first word is formed from at least one character of the final recognized character row, up to a predetermined maximum number of characters, and a dictionary database storing dictionary information on various languages is searched using the first word, so as to provide the user with the corresponding word.
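The last two steps (excluding non-character data, then searching a dictionary with the first word) can be sketched as below. The cleaning rule and the dictionary shape are illustrative assumptions.

```python
# Sketch: derive the final character row, form a bounded first word,
# and return dictionary entries that the first word prefixes.

def final_character_row(recognition_result):
    """Exclude non-character data (toy rule: keep letters, digits, spaces)."""
    return "".join(c for c in recognition_result if c.isalnum() or c.isspace())

def lookup_first_word(row, dictionary, max_chars=10):
    """Form a first word from up to max_chars leading characters of the
    row and return matching dictionary entries."""
    words = row.split()
    first_word = words[0][:max_chars].lower() if words else ""
    return sorted(w for w in dictionary if w.startswith(first_word))
```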