G06V30/1831

METHOD AND SYSTEM THAT DETERMINE THE SUITABILITY OF A DOCUMENT IMAGE FOR OPTICAL CHARACTER RECOGNITION AND OTHER IMAGE PROCESSING

The current document is directed to a computationally efficient method and system for assessing the suitability of a text-containing digital image for various types of computational image processing, including optical-character recognition. The disclosed methods and systems evaluate a text-containing digital image for sharpness, that is, for the absence or near-absence of noise, optical blur, and other defects and deficiencies. The sharpness-evaluation process uses computationally efficient steps, including convolution operations with small kernels to generate contour images and intensity-based evaluation of contour-image pixels for strength and proximity to intensity edges, in order to estimate how sharp the text-containing digital image is for image-processing purposes.
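The contour-image idea can be sketched as follows: convolve the image with a small kernel to obtain a contour (gradient) image, then score the strongest edge pixels. The Laplacian kernel, the top-fraction statistic, and the parameter names are illustrative assumptions, not the patented method:

```python
import numpy as np

# Small fixed kernel used to produce a contour image (assumed; the patent
# only specifies "small kernels").
LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def contour_image(img: np.ndarray) -> np.ndarray:
    """Convolve a 2D grayscale image with the 3x3 kernel (valid region only)."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * img[i:i + h - 2, j:j + w - 2]
    return np.abs(out)

def sharpness_score(img: np.ndarray, edge_frac: float = 0.1) -> float:
    """Mean contour response over the strongest edge_frac of pixels."""
    c = contour_image(img).ravel()
    k = max(1, int(len(c) * edge_frac))
    return float(np.sort(c)[-k:].mean())
```

A crisp step edge yields a higher score than the same edge spread over a ramp, which is the qualitative behavior the abstract relies on.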

Photograph-based assessment of dental treatments and procedures

The current document is directed to methods and systems for monitoring a dental patient's progress during a course of treatment. A three-dimensional model of the expected positions of the patient's teeth can be projected, in time, from a three-dimensional model of the patient's teeth prepared prior to beginning the treatment. A digital camera is used to take one or more two-dimensional photographs of the patient's teeth, which are input to a monitoring system. The monitoring system determines virtual-camera parameters for each two-dimensional input image with respect to the time-projected three-dimensional model, uses the determined virtual-camera parameters to generate two-dimensional images from the three-dimensional model, and then compares each input photograph to the corresponding generated two-dimensional image in order to determine how closely the three-dimensional arrangement of the patient's teeth corresponds to the time-projected three-dimensional arrangement.
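The virtual-camera fitting step can be sketched as a search over camera parameters that minimizes the mismatch between the projected model and the photograph. The pinhole projection, the single yaw parameter, and the brute-force candidate search below are simplifying assumptions for illustration, not the patented procedure:

```python
import numpy as np

def project(points3d, yaw, f=1.0):
    """Pinhole-project 3D points after rotating by `yaw` about the y-axis.

    The fixed translation pushes the model in front of the camera; both the
    parameterization and the focal length are illustrative assumptions.
    """
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    p = points3d @ R.T + np.array([0.0, 0.0, 5.0])
    return f * p[:, :2] / p[:, 2:3]

def fit_yaw(points3d, observed2d, candidates):
    """Return the candidate yaw whose projection best matches observed2d."""
    costs = [np.mean((project(points3d, a) - observed2d) ** 2)
             for a in candidates]
    return candidates[int(np.argmin(costs))]
```

A real system would optimize all six pose parameters (and intrinsics) rather than a single angle, but the compare-and-minimize structure is the same.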

HANDWRITING RECOGNITION METHOD AND APPARATUS, AND ELECTRONIC DEVICE AND STORAGE MEDIUM
20230245483 · 2023-08-03

A handwriting recognition method and apparatus, and an electronic device and a storage medium are provided. The method includes: acquiring a text image containing handwritten text; inputting the text image into a convolutional neural network, and extracting a CNN feature and a HOG feature of the text image; and extracting the handwritten text from the text image according to the CNN feature and the HOG feature.
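The feature-combination step can be sketched as concatenating a learned convolutional feature with a hand-crafted HOG feature. The tiny "CNN" below is a single fixed 3x3 filter with ReLU and global average pooling, and the HOG uses one whole-image cell with 8 orientation bins; both are toy assumptions standing in for a trained network and a full HOG descriptor:

```python
import numpy as np

def hog_feature(img, bins=8):
    """Gradient-orientation histogram over the whole image (one HOG cell)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientations
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def cnn_feature(img, kernel):
    """One 3x3 convolution + ReLU + global average pooling (toy 'CNN')."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * img[i:i + h - 2, j:j + w - 2]
    return np.array([np.maximum(out, 0).mean()])

def combined_feature(img, kernel):
    """Concatenate the CNN feature and the HOG feature, as the abstract describes."""
    return np.concatenate([cnn_feature(img, kernel), hog_feature(img)])
```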

IMAGE BASED ASSESSMENT OF DENTAL TREATMENTS MONITORING

Systems and methods for monitoring a dental patient's progress during treatment. A camera coordinate system of a virtual camera is aligned to be coincident with a world coordinate system of a 3D model representing an expected configuration of the patient's teeth at a particular time during treatment. One or more expected 2D images are generated by mapping points from the 3D model to points on an image plane of the virtual camera. One or more 2D images of the patient's teeth taken at the particular time during treatment are compared to the expected 2D images to determine whether a configuration of the patient's teeth is within a threshold level of correspondence to the expected configuration of the patient's teeth. An indication about whether the dental treatment is proceeding as expected, based on whether the configuration of the patient's teeth is within the threshold level, can be provided.
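With the camera and world coordinate frames coincident, the mapping step reduces to a perspective divide: a 3D point (x, y, z) lands on the image plane at (f·x/z, f·y/z). The focal length and the mean-distance threshold test below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def map_to_image_plane(points3d, f=35.0):
    """Map 3D model points to the virtual camera's image plane (frames aligned)."""
    p = np.asarray(points3d, dtype=float)
    return f * p[:, :2] / p[:, 2:3]

def within_threshold(expected2d, captured2d, tol=1.0):
    """Stand-in correspondence check: mean point-to-point distance vs. tol."""
    d = np.linalg.norm(expected2d - captured2d, axis=1)
    return bool(d.mean() <= tol)
```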

Handwriting recognition method and apparatus, and electronic device and storage medium
12380721 · 2025-08-05

A handwriting recognition method and apparatus, and an electronic device and a storage medium are provided. The method includes: acquiring a text image containing handwritten text; inputting the text image into a convolutional neural network, and extracting a CNN feature and a HOG feature of the text image; and extracting the handwritten text from the text image according to the CNN feature and the HOG feature.

Image based assessment for dental treatment monitoring

Systems and methods for monitoring a dental patient's progress during treatment. A first teeth mask for a captured 2D image of teeth at a particular time during treatment and a second teeth mask for an expected 2D image of the teeth may be generated. The expected 2D image may be generated from an expected 3D model representing an expected configuration of the teeth at the particular time. The captured 2D image and the expected 2D image may be compared, with the first and second teeth masks aligned, to determine whether the teeth are within a threshold level of correspondence to the expected configuration. An indication as to whether the dental treatment is proceeding as expected based on whether the configuration of the teeth is within the threshold level of correspondence may be provided.
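The mask-based comparison can be sketched as: build binary teeth masks for the captured and expected 2D images, align them, and score their overlap. Centroid alignment and the IoU threshold below are illustrative assumptions standing in for whatever registration and correspondence measure the system actually uses:

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary mask's foreground pixels."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def align(mask, to_mask):
    """Shift `mask` so its centroid matches `to_mask`'s centroid."""
    shift = np.array(centroid(to_mask)) - np.array(centroid(mask))
    dy, dx = shift.round().astype(int)
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def masks_match(captured, expected, iou_threshold=0.8):
    """Compare aligned masks by intersection-over-union against a threshold."""
    a = align(captured, expected)
    inter = np.logical_and(a, expected).sum()
    union = np.logical_or(a, expected).sum()
    return bool(inter / union >= iou_threshold)
```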

IMAGE BASED ASSESSMENT FOR DENTAL TREATMENT MONITORING

Dental treatment monitoring systems and methods may include accessing an input image of teeth taken at a particular time during dental treatment, and determining virtual-camera parameters that represent an estimated position and orientation of a virtual camera for producing a generated image from a time-projected 3D model of the teeth. The virtual-camera parameters may be iteratively adjusted by: generating a first generated image by modifying the virtual-camera parameters based on a first jaw in the generated image; determining a pixel-associated cost based on a comparison of the first generated image to the input image; generating a second generated image by modifying the virtual-camera parameters based on a second jaw in the first generated image; and determining a second pixel-associated cost based on a comparison of the second generated image to the input image. The generated image may be generated from the time-projected 3D model using the adjusted virtual-camera parameters.
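The alternating per-jaw refinement can be sketched as coordinate descent on a pixel cost: adjust the parameters against one jaw, score the generated image against the input image, then repeat for the other jaw. The `render` callable is a hypothetical stand-in for generating a 2D image from the time-projected 3D model, and the step size and iteration count are illustrative assumptions:

```python
import numpy as np

def pixel_cost(generated, observed):
    """Per-pixel squared-difference cost between two images."""
    return float(np.mean((generated - observed) ** 2))

def refine(params, render, observed, jaws=("upper", "lower"),
           step=0.1, iters=20):
    """Alternately tweak one jaw-associated parameter at a time,
    keeping any change that lowers the pixel cost."""
    params = dict(params)
    for _ in range(iters):
        for jaw in jaws:
            for delta in (+step, -step):
                trial = dict(params)
                trial[jaw] += delta
                if pixel_cost(render(trial), observed) < \
                        pixel_cost(render(params), observed):
                    params = trial
    return params
```

With a toy renderer that maps each parameter straight to a pixel value, the loop converges to the observed values, mirroring the cost-driven adjustment the abstract describes.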