
INTERFACE INFORMATION PROCESSING METHOD AND APPARATUS, STORAGE MEDIUM, AND DEVICE

An interface information processing method and apparatus, a storage medium, and a device are provided. The method includes: displaying, based on a trigger operation on a floating translation component in a first display interface, a trigger progress in the floating translation component, the first display interface including a character of a first language type, the trigger progress being associated with trigger duration, and the trigger duration being a duration of the trigger operation on the floating translation component; and switching the first display interface to a second display interface based on the trigger progress in the floating translation component satisfying a full-screen translation start progress, the second display interface including a character of a second language type, and the character of the second language type being obtained by translating the character of the first language type.
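The long-press flow above can be sketched in a few lines. The threshold value, the press duration, and the function names are illustrative assumptions, not values taken from the patent.

```python
# Sketch of the trigger-progress logic for the floating translation
# component: progress grows with press duration, and the interface
# switches once the full-screen translation start progress is reached.

FULL_SCREEN_START_PROGRESS = 1.0  # progress that starts full-screen translation
REQUIRED_DURATION_MS = 800.0      # assumed long-press duration

def trigger_progress(held_ms: float) -> float:
    """Map how long the component has been pressed to a 0..1 progress."""
    return min(held_ms / REQUIRED_DURATION_MS, 1.0)

def should_switch_to_translated_view(held_ms: float) -> bool:
    """Switch from the first to the second (translated) display interface
    once the trigger progress satisfies the start progress."""
    return trigger_progress(held_ms) >= FULL_SCREEN_START_PROGRESS
```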

GRADING SUPPORT DEVICE, GRADING SUPPORT SYSTEM, AND GRADING SUPPORT METHOD
20220130270 · 2022-04-28 ·

To reduce the burden on the grader while minimizing variations in grading, a grading support device 30 comprises an acquisition unit 31, an extraction unit 32, and a grading unit 33. The acquisition unit 31 acquires an answer file that includes information associated with the answer to a question. The extraction unit 32 extracts the answer from the acquired answer file. The grading unit 33 grades the extracted answer using learning results obtained through machine learning of the relationship between answers and grading results.
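The acquisition, extraction, and grading pipeline can be sketched as follows. The similarity-based grader below is only a stand-in for the machine-learned relationship between answers and grading results; the JSON format and all names are assumptions.

```python
# Sketch of the acquisition -> extraction -> grading flow.

import json
from difflib import SequenceMatcher

def acquire(answer_file: str) -> dict:
    """Acquisition unit: load the answer file (here, JSON text)."""
    return json.loads(answer_file)

def extract(record: dict) -> str:
    """Extraction unit: pull the answer text out of the record."""
    return record["answer"]

def grade(answer, graded_examples):
    """Grading unit: score the answer against previously graded
    (answer, grade) pairs, a stand-in for the learned model."""
    best = max(graded_examples,
               key=lambda ex: SequenceMatcher(None, answer, ex[0]).ratio())
    return best[1]

examples = [("photosynthesis converts light to chemical energy", 5),
            ("plants are green", 1)]
record = acquire('{"question": "q1", "answer": "light is converted to chemical energy"}')
score = grade(extract(record), examples)
```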

Question correction method, device, electronic equipment and storage medium for oral calculation questions

The present disclosure provides a question correction method and device for oral calculation questions. A feature vector is computed for each question to be searched from the token content of its stem, and these feature vectors are used to search the question bank for the target test paper that matches the test paper being searched. For a question in the form of an oral calculation question, a second search is performed within the target test paper using the question's feature vector, with minimum edit distance as the search criterion. If the question type of the matched target question is also an oral calculation question, the question to be searched is determined to be an oral calculation question to be corrected; a preset oral calculation engine then evaluates it and outputs the calculation result as the answer to that question. By applying the solution provided by the present disclosure, the accuracy of correcting oral calculation questions can be improved.
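The second-stage search and the calculation step can be sketched as below. Levenshtein distance implements the minimum-edit-distance criterion; the tiny restricted evaluator is an assumed stand-in for the preset oral calculation engine, and all names are illustrative.

```python
# Sketch: match a question stem by minimum edit distance, then
# evaluate it as an oral calculation question.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance between two question stems."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match_question(stem: str, target_paper: list) -> str:
    """Second search: pick the target question with minimum edit distance."""
    return min(target_paper, key=lambda q: edit_distance(stem, q))

def oral_calc_engine(stem: str) -> int:
    """Evaluate a simple arithmetic stem such as '12+7'."""
    allowed = set("0123456789+-* ")
    if not set(stem) <= allowed:
        raise ValueError("not a supported oral calculation question")
    return eval(stem)  # input restricted to arithmetic characters above
```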

CONTENT EXTRACTION BASED ON GRAPH MODELING
20220129688 · 2022-04-28 ·

Methods and systems are presented for extracting categorizable information from an image using a graph that models data within the image. Upon receiving an image, a data extraction system identifies characters in the image. The data extraction system then generates bounding boxes that enclose adjacent characters that are related to each other in the image. The data extraction system also creates connections between the bounding boxes based on locations of the bounding boxes. A graph is generated based on the bounding boxes and the connections such that the graph can accurately represent the data in the image. The graph is provided to a graph neural network that is configured to analyze the graph and produce an output. The data extraction system may categorize the data in the image based on the output.
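The graph-construction step can be sketched as follows. The patent says only that connections are created based on box locations; the distance-threshold rule below is one assumed concrete choice, and the names are illustrative.

```python
# Sketch: turn character bounding boxes into a graph suitable for a
# graph neural network. Nodes are boxes; edges link nearby boxes.

from math import hypot

def center(box):
    """(x0, y0, x1, y1) -> center point of the bounding box."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def build_graph(boxes, max_dist=50.0):
    """Connect every pair of boxes whose centers lie within max_dist."""
    edges = []
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            (xi, yi), (xj, yj) = center(boxes[i]), center(boxes[j])
            if hypot(xi - xj, yi - yj) <= max_dist:
                edges.append((i, j))
    return {"nodes": list(range(len(boxes))), "edges": edges}
```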

Text feature guided visual based document classifier
11720605 · 2023-08-08 ·

A visual-based classification model influenced by text features as a result of the outputs of a text-based classification model is disclosed. A system receives one or more documents to be classified based on one or more visual features and provides the one or more documents to a student classification model, which is a visual-based classification model. The system also classifies, by the student classification model, the one or more documents into one or more document types based on one or more visual features. The one or more visual features are generated by the student classification model that is trained based on important text identified by a teacher classification model for the one or more document types, with the teacher classification model being a text-based classification model. Generating training data and training the student classification model based on the training data are also described.
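The teacher-to-student transfer can be sketched with a standard distillation loss: the text-based teacher's soft document-type distribution supervises the visual student. The toy logits and the exact loss shape are assumptions, not taken from the patent.

```python
# Sketch: cross-entropy of the student's predicted distribution
# against the teacher's soft targets (the transferred knowledge).

from math import exp, log

def softmax(logits):
    m = max(logits)  # subtract max for numerical stability
    exps = [exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits):
    """Lower when the visual student agrees with the text teacher."""
    p_t = softmax(teacher_logits)
    p_s = softmax(student_logits)
    return -sum(t * log(s) for t, s in zip(p_t, p_s))
```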

INFORMATION OBTAINING METHOD, INFORMATION PROVISION DEVICE, INFORMATION OBTAINING DEVICE, AND STORAGE MEDIUM
20230306220 · 2023-09-28 ·

An information obtaining method includes obtaining an image of a design printed on a printing target. The design includes an identifier corresponding to specific information. The information obtaining method further includes identifying the identifier included in the design based on the image; and obtaining the specific information corresponding to the identifier based on the identifier.

Identifying regions of visible media data that belong to a trigger content type
11769465 · 2023-09-26 ·

A computing system includes a storage device and processing circuitry. The processing circuitry is configured to obtain an image frame that comprises a plurality of pixels forming a pixel array. Additionally, the processing circuitry is configured to determine that a region of the image frame belongs to a trigger content type. Based on that determination, the processing circuitry is configured to modify the region to adjust the luminance of its pixels based in part on an ambient light level in a viewing area of a user, and to output, for display by a display device in the viewing area, a version of the image frame that contains the modified region.
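The region-modification step can be sketched as below. The mapping from ambient light level to a dimming factor is an illustrative assumption; the patent only requires that the luminance adjustment depend in part on ambient light.

```python
# Sketch: dim the pixels of a detected trigger-content region,
# with stronger dimming in darker viewing areas.

def dim_factor(ambient_lux: float, max_lux: float = 500.0) -> float:
    """Darker rooms -> smaller factor -> stronger dimming."""
    return 0.3 + 0.7 * min(ambient_lux / max_lux, 1.0)

def adjust_region(frame, region, ambient_lux):
    """frame: 2-D list of 0-255 luma values; region: (r0, c0, r1, c1)."""
    f = dim_factor(ambient_lux)
    r0, c0, r1, c1 = region
    for r in range(r0, r1):
        for c in range(c0, c1):
            frame[r][c] = int(frame[r][c] * f)
    return frame
```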

Automated license plate recognition system and related method

Systems, methods, devices and computer readable media for determining a geographical location of a license plate are described herein. A first image of a license plate is acquired by a first image acquisition device of a camera unit and a second image of the license plate is acquired by a second image acquisition device of the camera unit. A three-dimensional position of the license plate relative to the camera unit is determined based on stereoscopic image processing of the first image and the second image. A geographical location of the camera unit is obtained. A geographical location of the license plate is determined from the three-dimensional position of the license plate relative to the camera unit and the geographical location of the camera unit. Other systems, methods, devices and computer readable media for detecting a license plate and identifying a license plate are described herein.
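The stereoscopic step can be sketched with standard triangulation: disparity between the two images gives depth, and the camera's geolocation is offset by the plate's relative position. The focal length, baseline, and flat-earth lat/lon offset are simplifying assumptions.

```python
# Sketch: license-plate position from a stereo pair, then a rough
# geographic location relative to the camera unit.

def plate_position(x_left, x_right, y, focal_px=1000.0, baseline_m=0.3):
    """Triangulate (X, Y, Z) in metres, camera-centred coordinates."""
    disparity = x_left - x_right            # pixel shift between images
    z = focal_px * baseline_m / disparity   # depth from disparity
    x = x_left * z / focal_px
    y_m = y * z / focal_px
    return (x, y_m, z)

def plate_geolocation(cam_lat, cam_lon, x_m, z_m):
    """Rough lat/lon offset, assuming the camera faces due north."""
    dlat = z_m / 111_320.0   # metres per degree of latitude
    dlon = x_m / 111_320.0   # ignores latitude scaling for brevity
    return (cam_lat + dlat, cam_lon + dlon)
```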

AUTOMATICALLY PREDICTING TEXT IN IMAGES
20220019834 · 2022-01-20 ·

Systems and methods for detecting and predicting text within images. An image is passed to a feature-extraction module. Each image typically contains at least one text object, and each text object contains at least one character. Based on the image, the feature-extraction module generates at least one feature map indicating the text object(s) in the image. The feature map(s) are then passed to a decoder module. In some implementations, the decoder module applies a weighted mask to the feature map(s). Based on the feature map(s), the decoder module predicts a sequence of characters in the text object(s). In some embodiments, that prediction is based on previously known data. The decoder module is directed by a query that indicates at least one desired characteristic of the text object(s). An output module then refines the predicted content. At least one neural network may be used.

SIGNATURE-BASED UNIQUE IDENTIFIER
20210357715 · 2021-11-18 ·

The technology described herein generates a unique identifier for a visual media that comprises pre-printed visual indications on the visual media and a user's handwritten signature. The location of the signature on the visual media can be determined by including pre-printed fiducial marks on the visual media. The fiducial marks act as landmarks that allow the size and location of the signature to be determined in absolute terms. The unique identifier is then stored in computer memory on a user-experience server. The user-experience server can associate the unique identifier with a digital asset, such as an image or video, designated by the user. When the unique identifier is provided to the user-experience server a second time, the digital asset can be retrieved and output to the computing device that provided the unique identifier.
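The fiducial-based normalization can be sketched as follows: two marks a known physical distance apart fix the pixel-to-millimetre scale, so the signature's geometry can be expressed in absolute terms regardless of scan resolution. Hashing that normalized geometry into the identifier is an assumed concrete choice; all names are illustrative.

```python
# Sketch: derive a unique identifier from a signature's absolute
# size and location, recovered via fiducial marks.

import hashlib

def px_per_mm(fiducial_a, fiducial_b, known_gap_mm):
    """Two fiducial marks a known distance apart give the image scale."""
    return abs(fiducial_b[0] - fiducial_a[0]) / known_gap_mm

def signature_identifier(sig_box_px, fiducial_a, fiducial_b, known_gap_mm=100.0):
    """Express the signature bounding box in millimetres, then hash it."""
    scale = px_per_mm(fiducial_a, fiducial_b, known_gap_mm)
    mm = tuple(round(v / scale, 2) for v in sig_box_px)
    return hashlib.sha256(repr(mm).encode()).hexdigest()
```

Because the box is converted to millimetres before hashing, scans of the same media at different resolutions yield the same identifier.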