Patent classifications
G06T2207/10024
Enhanced Illumination-Invariant Imaging
Devices, systems, and methods for generating illumination-invariant images are disclosed. A method may include activating, by a device, a camera to capture first image data; while the camera is capturing the first image data, activating a first light source; receiving the first image data, the first image data having pixels having first color values; identifying first light generated by the first light source while the camera is capturing the first image data; identifying, based on the first image data, second light generated by a second light source; generating, based on the first light and the second light, second image data that are illumination-invariant; and presenting the second image data.
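The abstract does not fix an algorithm, but a flash/no-flash style decomposition illustrates the idea: if the contribution of the controlled first light source can be isolated, normalizing it suppresses the dependence on the ambient second source. The function name, the subtraction step, and the per-pixel intensity normalization below are illustrative assumptions, not the claimed method.

```python
import numpy as np

def illumination_invariant(frame_with_first_light, frame_ambient_only):
    """Estimate the controlled ('first') light contribution by subtracting
    an ambient-only frame, then normalize per pixel so the result no
    longer depends on the overall strength of either light source."""
    first_light = np.clip(frame_with_first_light.astype(np.float64)
                          - frame_ambient_only.astype(np.float64), 0, None)
    # Divide each pixel by its summed channel intensity to keep only
    # chromaticity, which is invariant to illumination scale.
    intensity = first_light.sum(axis=-1, keepdims=True) + 1e-8
    return first_light / intensity
```

With both frames available, the output chromaticities stay fixed even if the ambient source brightens or dims between captures.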
CONTOUR SHAPE RECOGNITION METHOD
Provided is a contour shape recognition method, including: sampling and extracting salient feature points of a contour of a shape sample; calculating a feature function of the shape sample at a semi-global scale by using three types of shape descriptors; dividing the scale with a single pixel as a spacing to acquire a shape feature function in a full-scale space; storing feature function values at various scales into a matrix to acquire three types of feature grayscale map representations of the shape sample in the full-scale space; synthesizing the three types of grayscale map representations of the shape sample, as three channels of RGB, into a color feature representation image; constructing a two-stream convolutional neural network by taking the shape sample and the feature representation image as inputs at the same time; and training the two-stream convolutional neural network, and inputting a test sample into a trained network model to achieve shape classification.
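The pipeline of per-scale feature functions stored as grayscale maps and fused into an RGB feature image can be sketched as follows. The three descriptors here (radial distance, chord length, arch height) are illustrative stand-ins; the abstract does not name its three shape descriptors.

```python
import numpy as np

def scale_space_feature_maps(contour, scales):
    """For each scale s, compute three toy contour descriptors at every
    sampled contour point; collect the values over all scales into three
    grayscale maps, then stack the maps as the R, G, B channels of one
    color feature image (descriptor choices are illustrative)."""
    n = len(contour)
    maps = np.zeros((3, len(scales), n))
    centroid = contour.mean(axis=0)
    for i, s in enumerate(scales):
        for j in range(n):
            p = contour[j]
            a, b = contour[(j - s) % n], contour[(j + s) % n]
            maps[0, i, j] = np.linalg.norm(p - centroid)      # radial distance
            maps[1, i, j] = np.linalg.norm(a - b)             # chord length at scale s
            maps[2, i, j] = np.linalg.norm(p - (a + b) / 2)   # arch height at scale s
    # Normalize each map to [0, 1] so the three channels are comparable.
    maps = (maps - maps.min(axis=(1, 2), keepdims=True)) / (
        np.ptp(maps, axis=(1, 2)).reshape(3, 1, 1) + 1e-8)
    return np.moveaxis(maps, 0, -1)  # shape: (num_scales, num_points, 3)
```

The resulting image, alongside the raw contour, would feed the two input streams of the described network.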
DEEP PALETTE PREDICTION
Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
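The quantization step that follows palette prediction is standard nearest-color mapping; a minimal sketch is below. The learned encoder would supply `palette`; here any fixed array of K colors stands in for that prediction.

```python
import numpy as np

def quantize_with_palette(image, palette):
    """Map every pixel of `image` (H, W, 3) to the nearest color in
    `palette` (K, 3), producing the reduced-color-depth image the
    abstract describes."""
    flat = image.reshape(-1, 3).astype(np.float64)
    # Squared distance from every pixel to every palette entry.
    d = ((flat[:, None, :] - palette[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)
    quantized = palette[idx].reshape(image.shape)
    return quantized, idx.reshape(image.shape[:2])
```

During training, the difference between `image` and `quantized` (made differentiable in practice) is what supervises the encoder.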
MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, DEVICE, STORAGE MEDIUM, AND PRODUCT
A computer device obtains a medical image set including a reference medical image and a target medical image. The device identifies a difference between the reference medical image and the target medical image to obtain a candidate non-lesion region in the target medical image. The device determines area size information of the candidate non-lesion region as candidate area size information. When the candidate area size information does not match annotated area size information, the device adjusts the candidate non-lesion region according to the annotated area size information, so as to obtain a target non-lesion region in the target medical image.
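The adjustment step can be pictured as growing or shrinking the candidate region until its pixel count matches the annotated area size. The crude 4-neighbour dilation/erosion below is an assumption standing in for whatever adjustment the disclosure actually uses.

```python
import numpy as np

def adjust_region_to_area(mask, target_area, max_iters=100):
    """Grow or shrink a binary candidate region until its pixel count
    reaches the annotated area size (to within one morphological step)."""
    def dilate(m):
        out = m.copy()
        out[1:, :] |= m[:-1, :]; out[:-1, :] |= m[1:, :]
        out[:, 1:] |= m[:, :-1]; out[:, :-1] |= m[:, 1:]
        return out
    def erode(m):
        return ~dilate(~m)  # erosion = complement of dilated complement
    for _ in range(max_iters):
        area = int(mask.sum())
        if area < target_area:
            mask = dilate(mask)
        elif area > target_area and erode(mask).sum() >= target_area:
            mask = erode(mask)
        else:
            break
    return mask
```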
METHODS FOR CONVERTING AN IMAGE AND CORRESPONDING DEVICES
The invention concerns a method for converting an input image comprising an input luminance component made of elements into an output image comprising an output luminance component made of elements, the respective ranges of the output luminance component element values and input luminance component element values being of different extents. The method comprises, for the input image: computing a value of a general variable representative of at least two input luminance component element values; transforming each input luminance component element value into a corresponding output luminance component element value according to the computed general variable value; and converting the input image using the determined output luminance component element values. The transforming step uses a set of pre-determined output values organized into a 2D Look-Up-Table (2D LUT) comprising two input arrays indexing a set of chosen input luminance component values and a set of chosen general variable values respectively, each pre-determined output value matching a pair of values made of an indexed input luminance component value and an indexed general variable value, the input luminance component element value being transformed into the output luminance component element value using at least one predetermined output value.
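A minimal sketch of the 2D LUT lookup, assuming the general variable is the frame's mean luminance and that values between LUT entries are interpolated (both assumptions; the abstract leaves these choices open). Both index axes must be sorted ascending.

```python
import numpy as np

def apply_2d_lut(luma, lut, luma_axis, gvar_axis):
    """Transform input luminance values through a 2D LUT indexed by
    (chosen luminance values, chosen general-variable values).
    `lut` has shape (len(luma_axis), len(gvar_axis))."""
    g = float(luma.mean())  # general variable: mean luminance (assumed)
    # Reduce the 2D LUT to a 1D LUT at this general-variable value
    # by linear interpolation along the general-variable axis.
    j = int(np.clip(np.searchsorted(gvar_axis, g) - 1, 0, len(gvar_axis) - 2))
    t = np.clip((g - gvar_axis[j]) / (gvar_axis[j + 1] - gvar_axis[j]), 0.0, 1.0)
    lut_1d = (1 - t) * lut[:, j] + t * lut[:, j + 1]
    # Then interpolate each element's luminance through the 1D LUT.
    return np.interp(luma, luma_axis, lut_1d)
```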
METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT FOR DETECTING IMAGE FRAME LOSS
An image frame loss detection method is performed by a computer device, including: acquiring first coded data respectively corresponding to a plurality of first image frames and a color signal corresponding to at least one second image frame; obtaining second coded data corresponding to at least one second image frame generated by a terminal device through image rendering of a color signal based on the coded data respectively corresponding to the plurality of first image frames; and comparing the first coded data respectively corresponding to the plurality of first image frames with the second coded data corresponding to the at least one second image frame to determine whether a frame loss occurs. The first coded data and the second coded data each include color-coded data respectively corresponding to M image blocks of a corresponding image frame, and each of the M image blocks has a color in the image frame.
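The comparison step reduces to matching two sequences of per-frame color codes. In the sketch below each frame's code is a tuple of its M block colors, and a sent code absent from the rendered sequence marks a lost frame; the exact matching rule is an assumption.

```python
def detect_frame_loss(sent_codes, rendered_codes):
    """Compare color codes embedded in the sent frames (first coded data)
    with codes decoded from the rendered frames (second coded data);
    return indices of sent frames whose code never appears rendered."""
    rendered = set(map(tuple, rendered_codes))
    return [i for i, code in enumerate(sent_codes)
            if tuple(code) not in rendered]
```

A non-empty return value signals that a frame loss occurred, and identifies which frames were dropped.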
DEVICE FOR DETECTING SURFACE DEFECTS IN AN OBJECT
The present invention relates to a device (1) for detecting surface defects in an object (100), for example an industrial gasket. The detection device comprises lighting means (2) configured to illuminate said object with a first light radiation (L.sub.1) having a first lighting direction (D.sub.1) or with a second light radiation (L.sub.2) having a second lighting direction (D.sub.2). According to the invention, the detection device comprises acquisition means (30) configured to acquire a plurality of B/W images of said object, when illuminated by said lighting means.
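One way such a two-direction acquisition can be exploited: a surface defect scatters the two directional illuminations differently, so a large per-pixel difference between the two B/W images flags a candidate defect. Thresholding that difference, as below, is a simple illustrative rule, not the patented processing.

```python
import numpy as np

def defect_map(img_dir1, img_dir2, threshold):
    """Flag pixels whose brightness differs strongly between the image
    lit from direction D1 and the image lit from direction D2."""
    diff = np.abs(img_dir1.astype(np.int32) - img_dir2.astype(np.int32))
    return diff > threshold
```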
METHOD FOR TRAINING IMAGE PROCESSING MODEL
This disclosure relates to a model training method and apparatus and an image processing method and apparatus. The model training method includes: obtaining a first sample image and a first standard region proportion corresponding to a first object in the first sample image; obtaining a standard region segmentation result corresponding to the first sample image based on the first standard region proportion; and training a first initial segmentation model based on the first sample image and the standard region segmentation result, to obtain a first target segmentation model.
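The step of deriving a standard region segmentation from a known region proportion can be sketched by keeping the top fraction of pixels ranked by intensity. Intensity ranking is an illustrative assumption; the disclosure only says the result is obtained based on the proportion.

```python
import numpy as np

def proportion_to_mask(image_gray, region_proportion):
    """Build a pseudo ground-truth segmentation mask covering the given
    proportion of the image, taking the brightest pixels first."""
    flat = image_gray.ravel()
    k = max(1, int(round(region_proportion * flat.size)))
    cutoff = np.partition(flat, -k)[-k]  # k-th largest intensity
    return image_gray >= cutoff
```

The resulting mask would serve as the standard region segmentation result when training the first initial segmentation model.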
METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND MEDIUM
The present disclosure provides a method of processing an image, a device, and a medium. The method of processing the image includes: performing image processing on an original image to obtain a brightness component image of the original image; determining at least one of the original image and the component image as an image to be processed; classifying a pixel in the image to be processed, so as to obtain a classification result; processing the image to be processed according to the classification result, so as to obtain a target image; and determining an image quality of the original image according to the target image.
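A toy version of the classify-then-score flow: take the max-of-channels value as the brightness component (one common choice, assumed here), classify each pixel as dark, normal, or bright, and score quality as the fraction of pixels in the normal band. Both the thresholds and the quality measure are assumptions.

```python
import numpy as np

def brightness_quality(rgb, dark=40, bright=215):
    """Classify each pixel of an RGB image by its brightness component
    (0 = dark, 1 = normal, 2 = bright) and return the class map plus a
    simple quality score: the fraction of pixels in the normal band."""
    v = rgb.max(axis=-1)  # brightness component (HSV 'value')
    classes = np.where(v < dark, 0, np.where(v > bright, 2, 1))
    return classes, float((classes == 1).mean())
```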
METHODS AND SYSTEMS FOR GENERATING END-TO-END DE-SMOKING MODEL
The disclosure herein relates to methods and systems for generating an end-to-end de-smoking model for removing smoke present in a video. Conventional data-driven de-smoking approaches are limited mainly by the lack of suitable training data. Further, the conventional data-driven de-smoking approaches are not end-to-end for removing the smoke present in the video. The de-smoking model of the present disclosure is trained end-to-end using synthesized smoky video frames obtained by a source-aware smoke synthesis approach. The end-to-end de-smoking model localizes and removes the smoke present in the video using dynamic properties of the smoke. Hence the end-to-end de-smoking model simultaneously identifies the regions affected by the smoke and performs the de-smoking with minimal artifacts, achieving localized smoke removal and color restoration of a real-time video.
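The training-data side, synthesizing smoky frames from clean ones, can be pictured as compositing a smoke layer over each clean frame. Plain alpha blending below stands in for the source-aware synthesis the disclosure refers to; the smoke color and alpha map are assumed inputs.

```python
import numpy as np

def synthesize_smoke(frame, smoke_alpha, smoke_color=200.0):
    """Composite a synthetic smoke layer over a clean frame to create a
    training pair: out = alpha * smoke_color + (1 - alpha) * frame,
    with `smoke_alpha` an (H, W) opacity map in [0, 1]."""
    a = smoke_alpha[..., None]  # broadcast alpha over the color channels
    return a * smoke_color + (1 - a) * frame.astype(np.float64)
```

The clean frame then serves as the de-smoking target for its synthesized smoky counterpart.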