Patent classifications
G06T5/002
License plate detection and recognition system
A license plate detection and recognition (LPDR) system receives training data comprising images of license plates. The system prepares ground truth data from the training data based on predefined parameters. The system trains a first machine learning algorithm based on the ground truth data to generate a license plate detection model. The license plate detection model is configured to detect one or more regions in the images, each region containing a candidate for a license plate. The LPDR system generates a bounding box for each region. The LPDR system trains a second machine learning algorithm based on the ground truth data and the license plate detection model to generate a license plate recognition model. The license plate recognition model generates a sequence of alphanumeric characters with a level of recognition confidence for the sequence.
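The two-stage flow described above (detect candidate regions, then recognize characters with a confidence score) can be sketched as follows. This is a minimal illustration, not the patented system: `detect_plates` and `recognize_plate` are hypothetical stand-ins for the two trained models, and the box coordinates, plate string, and confidence threshold are invented for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]  # (x, y, w, h) bounding box of a plate candidate

def detect_plates(image) -> List[Detection]:
    # Stand-in for the trained detection model; a real system would run
    # the first machine learning model here. Returns a fixed box for demo.
    return [Detection(box=(40, 80, 120, 30))]

def recognize_plate(image, det: Detection) -> Tuple[str, float]:
    # Stand-in for the trained recognition model: returns a character
    # sequence plus a recognition confidence for that sequence.
    return "ABC1234", 0.97

def lpdr_pipeline(image, min_confidence: float = 0.5):
    # Detection feeds recognition; only confident reads are kept.
    results = []
    for det in detect_plates(image):
        text, conf = recognize_plate(image, det)
        if conf >= min_confidence:
            results.append((det.box, text, conf))
    return results
```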
Method of image processing based on plurality of frames of images, electronic device, and storage medium
A method of image processing based on a plurality of frames of images, an electronic device, and a storage medium are provided. The method includes: capturing a plurality of frames of original images; obtaining a high dynamic range (HDR) image by performing image synthesis on the plurality of frames of original images; and performing artificial intelligence-based denoising on the HDR image to obtain a target denoised image.
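The synthesize-then-denoise order above can be sketched with numpy. This is an assumed, simplified stand-in: the exposure-weighted merge and the 3×3 box filter below merely occupy the places of the patent's image synthesis and AI-based denoising steps, which are not specified at this level of detail.

```python
import numpy as np

def synthesize_hdr(frames, exposure_times):
    # Merge differently exposed frames into an HDR estimate by
    # exposure-normalised weighted averaging; mid-range pixels get the
    # highest weight (a simple hat weighting over 8-bit values).
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    weights = [1.0 - np.abs(f / 255.0 - 0.5) * 2.0 + 1e-6 for f in frames]
    num = sum(w * f / t for w, f, t in zip(weights, frames, exposure_times))
    den = sum(weights)
    return num / den

def denoise(hdr):
    # Stand-in for the learned denoiser: a 3x3 box filter with edge padding.
    p = np.pad(hdr, 1, mode="edge")
    h, w = hdr.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def process_frames(frames, exposure_times):
    # Synthesis first, denoising second, as in the claimed method order.
    return denoise(synthesize_hdr(frames, exposure_times))
```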
Image denoising model training method, image denoising method, devices and storage medium
A training method for an image denoising model includes collecting multiple sample image groups through a shooting device, each sample image group including multiple frames of sample images captured with the same photographic sensitivity, and sample images in different sample image groups having different photographic sensitivities. The method further includes acquiring the photographic sensitivity of each sample image group; determining a noise characterization image corresponding to each sample image group based on the photographic sensitivity; determining a training input image group and a target image associated with each sample image group, each training input image group including all or part of the sample images in the corresponding sample image group and the corresponding noise characterization image; constructing multiple training pairs, each including a training input image group and a target image; and training the image denoising model based on the multiple training pairs until the image denoising model converges.
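The construction of one training pair from a sample group can be sketched as follows. The linear ISO-to-noise mapping (`k * iso`) and the use of the temporal mean of the group as the clean target are assumptions for illustration; the patent does not commit to either.

```python
import numpy as np

def noise_characterization_image(shape, iso, k=0.01):
    # Hypothetical noise model: the per-pixel noise level is assumed to
    # grow linearly with photographic sensitivity (ISO); k is an assumed
    # calibration gain, not a value from the patent.
    return np.full(shape, k * iso, dtype=np.float64)

def build_training_pair(sample_group, iso):
    # One training pair = (sample frames + noise characterization image,
    # target image). The target here is the temporal mean of the group,
    # a common proxy for a clean image when noisy frames share one scene.
    frames = np.stack([np.asarray(f, dtype=np.float64) for f in sample_group])
    target = frames.mean(axis=0)
    noise_map = noise_characterization_image(frames.shape[1:], iso)
    inputs = list(frames) + [noise_map]
    return inputs, target
```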
Method for processing image, electronic device, and storage medium
An image processing method for identifying text on production-line components obtains an image to be recognized and a standard image for reference, and extracts a first text area from the image to be recognized. A second text area of the standard image is obtained, and a text window is extracted based on the second text area. The method further obtains a target text area of the image to be recognized based on the first text area and the text window, and obtains a first set of first text sub-areas and a second set of second text sub-areas by dividing the text areas according to sub-windows of the text window. The method further marks the image to be recognized as a qualifying image when each first text sub-area of the first set is the same as the corresponding second text sub-area of the second set.
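The final sub-area comparison step can be sketched as below. The grid split and exact pixel equality are simplifying assumptions; a production system would compare recognized text or use a tolerance, and the 2×2 grid is arbitrary.

```python
import numpy as np

def split_into_subwindows(area, rows, cols):
    # Divide a text area into a rows x cols grid of sub-windows.
    h, w = area.shape
    return [area[r * h // rows:(r + 1) * h // rows,
                 c * w // cols:(c + 1) * w // cols]
            for r in range(rows) for c in range(cols)]

def is_qualifying(candidate_area, standard_area, rows=2, cols=2):
    # The image qualifies when every candidate sub-area matches the
    # corresponding sub-area of the standard (reference) text area.
    subs_a = split_into_subwindows(candidate_area, rows, cols)
    subs_b = split_into_subwindows(standard_area, rows, cols)
    return all(np.array_equal(a, b) for a, b in zip(subs_a, subs_b))
```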
Image processing device
An image processing device includes a rotation processor and an image processor. The rotation processor receives an input image and generates a temporary image according to the input image. The image processor is coupled to the rotation processor and outputs a processed image according to the temporary image, wherein the image processor has a predetermined image processing width, a width of the input image is larger than the predetermined image processing width, and a width of the temporary image is less than the predetermined image processing width.
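The rotate-to-fit idea can be sketched as follows: when the input is wider than the processor's line limit but its height is not, rotating by 90° makes the (smaller) height the new width. The width limit and the identity "processor" are assumptions for the example, not values from the patent.

```python
import numpy as np

MAX_PROCESSING_WIDTH = 512  # assumed hardware line-buffer limit

def process_rows(image):
    # Stand-in for the width-limited image processor (identity here);
    # it can only accept images up to MAX_PROCESSING_WIDTH wide.
    assert image.shape[1] <= MAX_PROCESSING_WIDTH
    return image

def process_with_rotation(image):
    # If the input exceeds the processing width, generate a temporary
    # image by rotating 90 degrees (width becomes the original height),
    # process it, then rotate back to the original orientation.
    if image.shape[1] > MAX_PROCESSING_WIDTH:
        temp = np.rot90(image)
        out = process_rows(temp)
        return np.rot90(out, k=-1)  # undo the rotation
    return process_rows(image)
```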
Navigation device capable of estimating contamination and denoising image frame
There is provided an optical navigation device including an image sensor and a processing unit. The image sensor outputs successive image frames. The processing unit calculates a contamination level and a motion signal based on filtered image frames, and determines whether to update a fixed pattern noise (FPN) stored in a frame buffer according to a level of FPN subtraction, the calculated contamination level and the calculated motion signal to optimize the update of the fixed pattern noise.
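The update decision described above can be sketched as a simple rule. The specific thresholds and the exact combination logic are hypothetical; the patent only states that the contamination level, motion signal, and level of FPN subtraction jointly gate the update.

```python
def should_update_fpn(contamination_level, motion_signal, fpn_subtraction_level,
                      contamination_max=0.3, motion_max=0.1, subtraction_min=0.8):
    # Hypothetical decision rule: refresh the stored fixed pattern noise
    # only when the imaging window is clean enough, the device is
    # (nearly) static, and the current FPN subtraction is no longer
    # effective. All three thresholds are assumed values.
    return (contamination_level < contamination_max
            and motion_signal < motion_max
            and fpn_subtraction_level < subtraction_min)
```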
3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network
A 3-D convolutional autoencoder for low-dose CT via transfer learning from a 2-D trained network is described. A machine learning method for low-dose computed tomography (LDCT) image correction is provided. The method includes training, by a training circuitry, a neural network (NN) based, at least in part, on two-dimensional (2-D) training data. The 2-D training data includes a plurality of 2-D training image pairs. Each 2-D image pair includes one training input image and one corresponding target output image. The training includes adjusting at least one of a plurality of 2-D weights based, at least in part, on an objective function. The method further includes refining, by the training circuitry, the NN based, at least in part, on three-dimensional (3-D) training data. The 3-D training data includes a plurality of 3-D training image pairs. Each 3-D training image pair includes a plurality of adjacent 2-D training input images and at least one corresponding target output image. The refining includes adjusting at least one of a plurality of 3-D weights based, at least in part, on the plurality of 2-D weights and based, at least in part, on the objective function. The plurality of 2-D weights includes the at least one adjusted 2-D weight.
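One common way to initialize 3-D weights from trained 2-D weights, which can serve as a sketch of the transfer step above, is kernel "inflation": replicate the 2-D kernel along the new depth axis and scale by the depth so a depth-constant input produces the same response. Whether the patent uses exactly this scheme is not stated; this is an illustrative assumption.

```python
import numpy as np

def inflate_2d_to_3d(weights_2d, depth):
    # Build a (depth, kh, kw) 3-D kernel from a trained (kh, kw) 2-D
    # kernel by replication along depth, divided by depth so that the
    # total kernel mass (and hence the response to a constant volume)
    # is preserved. The 3-D weights are then refined by further training.
    w = np.asarray(weights_2d, dtype=np.float64)
    return np.repeat(w[np.newaxis, ...], depth, axis=0) / depth
```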
INFORMATION PROCESSING DEVICE AND INFORMATION PROCESSING METHOD
An information processing device (200A, 200B, and 200C) according to the present disclosure includes a control unit (220, 220B, and 220C). The control unit (220, 220B, and 220C) acquires a captured image of a target imaged by a sensor. The captured image is obtained from reflected light of light emitted to the target from a plurality of light sources arranged at different positions. The control unit (220, 220B, and 220C) extracts a flat region from the captured image based on a luminance value of the captured image. The control unit (220, 220B, and 220C) calculates shape information regarding a shape of a surface of the target based on information regarding the sensor and the flat region of the captured image.
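The flat-region extraction step can be sketched as a luminance-gradient test: pixels whose local luminance varies little are marked flat. The gradient criterion and threshold are assumptions for illustration; the patent only says the extraction is based on the luminance value.

```python
import numpy as np

def extract_flat_region(luminance, grad_threshold=2.0):
    # Mark pixels whose local luminance gradient magnitude is small as
    # "flat". np.gradient approximates derivatives along each axis with
    # central differences; the threshold is an assumed tuning value.
    lum = np.asarray(luminance, dtype=np.float64)
    gy, gx = np.gradient(lum)
    return np.hypot(gx, gy) < grad_threshold
```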
IMAGE PROCESSING USING FILTERING FUNCTION COVARIANCE
An image processing method and an image processing unit for performing image processing determine a set of one or more filtered pixel values, wherein the one or more filtered pixel values represent a result of processing image data using a set of one or more filtering functions. A total covariance of the set of one or more filtering functions is identified. A refinement filtering function is applied to the set of one or more filtered pixel values to determine a set of one or more refined pixel values, wherein the refinement filtering function has a covariance that is determined based on the total covariance of the set of one or more filtering functions.
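For Gaussian filters, the covariance bookkeeping above has a simple closed form that can serve as a sketch: convolving filters in sequence adds their variances, and a refinement filter can be sized so the cascade reaches a desired overall smoothing. The Gaussian assumption and the "hit a target total sigma" objective are illustrative, not the patent's stated construction.

```python
import numpy as np

def total_covariance(sigmas):
    # Successive convolutions with 1-D Gaussian filters of standard
    # deviations sigma_i yield a combined Gaussian whose variance is
    # the sum of the individual variances.
    return sum(s * s for s in sigmas)

def refinement_sigma(total_var, target_sigma):
    # Choose the refinement filter's standard deviation so that the
    # full cascade reaches the desired overall smoothing. Assumes
    # target_sigma**2 >= total_var.
    return float(np.sqrt(target_sigma ** 2 - total_var))
```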
SYSTEMS AND METHODS FOR REAL-TIME VIDEO ENHANCEMENT
A computer-implemented method is provided for improving live video quality. The method comprises: acquiring, using a medical imaging apparatus, a stream of consecutive image frames of a subject, wherein the stream of consecutive image frames is acquired with a reduced radiation dose; applying a deep learning network model to the stream of consecutive image frames to generate an image frame with improved quality; and displaying the image frame with improved quality in real-time on a display.
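The streaming application of the model can be sketched as a sliding window over consecutive frames. Here a temporal mean stands in for the deep learning network, and the window size is an assumed parameter; the real method would run a trained model per window.

```python
import numpy as np

def enhance_frame(window):
    # Stand-in for the deep learning model: average the sliding window
    # of consecutive low-dose frames to suppress noise.
    return np.mean(window, axis=0)

def stream_enhance(frames, window_size=3):
    # Feed each incoming frame together with its recent neighbours to
    # the model and yield the improved frame for real-time display.
    buf = []
    for f in frames:
        buf.append(np.asarray(f, dtype=np.float64))
        if len(buf) > window_size:
            buf.pop(0)
        yield enhance_frame(np.stack(buf))
```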