Patent classifications
G06V10/56
DEEP PALETTE PREDICTION
Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, from the input image, a quantized, reduced-color-depth image that corresponds to the input image. Differences between a plurality of such input images and their corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into reduced-color-depth image file formats. Such an encoder also offers benefits in memory use and computational cost relative to the median-cut algorithm and other methods for producing reduced-color-depth color palettes for images.
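The core of the training objective is the difference between an input image and its palette-quantized version. A minimal sketch of that quantization and loss, assuming a simple nearest-color assignment (the function names `quantize` and `palette_loss` are illustrative; in the embodiment the palette would come from the trained encoder, and the hard nearest-color assignment would be softened to keep the loss differentiable):

```python
import numpy as np

def quantize(image, palette):
    """Map each pixel to its nearest palette color (Euclidean distance in RGB)."""
    # image: (H, W, 3), palette: (K, 3); broadcast to per-pixel, per-color distances
    d = np.linalg.norm(image[:, :, None, :] - palette[None, None, :, :], axis=-1)
    idx = d.argmin(axis=-1)        # (H, W) index of the nearest palette color
    return palette[idx], idx

def palette_loss(image, palette):
    """Reconstruction error between an image and its quantized counterpart."""
    quantized, _ = quantize(image, palette)
    return float(np.mean((image - quantized) ** 2))

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))        # toy input image, RGB in [0, 1)
pal = rng.random((4, 3))           # K = 4 palette colors, e.g. an encoder's output
loss = palette_loss(img, pal)      # averaged over many images, this trains the encoder
```

A palette that contains every color in the image drives this loss to zero, which is why minimizing it pushes the encoder toward palettes that cover the image's color distribution.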
INFORMATION PROCESSING APPARATUS, SENSING APPARATUS, MOBILE OBJECT, METHOD FOR PROCESSING INFORMATION, AND INFORMATION PROCESSING SYSTEM
An information processing apparatus includes an input interface, a processor, and an output interface. The input interface obtains observation data from an observation space. The processor detects a subject image of a detection target in the observation data, calculates a plurality of individual indices indicating degrees of reliability, each relating to at least one of identification information or measurement information regarding the detection target, and also calculates an integrated index obtained by integrating the calculated individual indices. The output interface outputs the integrated index.
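The integration step can be sketched as follows. This is a hypothetical illustration: the abstract does not fix the integration rule, so a weighted average is used here as one plausible choice, and the function name `integrated_index` is an assumption:

```python
def integrated_index(individual, weights=None):
    """Combine per-attribute reliability indices (each in [0, 1]) into one
    integrated index via a weighted average; the specific integration rule
    is not fixed by the abstract and this is just one plausible choice."""
    if weights is None:
        weights = [1.0] * len(individual)   # unweighted average by default
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, individual)) / total

# e.g. identification confidence 0.9 and distance-measurement reliability 0.6
score = integrated_index([0.9, 0.6])
```

Weighting lets a downstream consumer (e.g., a mobile object's controller) emphasize whichever individual index matters most for its task while still receiving a single scalar.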
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
The invention relates to a computer-implemented method comprising:
- acquiring (406) first training images (108) using a first image acquisition technique (104), each first training image depicting a plant-related motif;
- acquiring (402) second training images (106) using a second image acquisition technique (102), each second training image depicting the motif depicted in a respective one of the first training images;
- automatically assigning (404) at least one label (150, 152, 154) to each of the acquired second training images;
- spatially aligning (408) the first and second training images that depict the same motif into an aligned training image pair;
- training (410) a machine-learning model (132) as a function of the aligned training image pairs and the labels, wherein during the training the machine-learning model (132) learns to automatically assign one or more labels (250, 252, 254) to any test image (205) acquired with the first image acquisition technique which depicts a plant-related motif; and
- providing (412) the trained machine-learning model (132).
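The construction of the training set from steps 402–408 can be sketched as below. All names here (`build_training_pairs`, `auto_label`, `align`) are hypothetical; the actual alignment and automatic labeling depend on the two acquisition techniques (e.g., labels that are easy to derive from the second technique but not from the first):

```python
def build_training_pairs(first_images, second_images, auto_label, align):
    """Pair each first-technique image with the labels derived automatically
    from its second-technique counterpart, after spatial alignment."""
    pairs = []
    for first, second in zip(first_images, second_images):
        labels = auto_label(second)              # automatic labeling (step 404)
        first_aligned, _ = align(first, second)  # spatial alignment (step 408)
        pairs.append((first_aligned, labels))
    return pairs                                 # training set for step 410
```

The trained model then needs only first-technique test images at inference time, which is the point of transferring labels across the aligned pairs.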
METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND MEDIUM
The present disclosure provides a method of processing an image, an electronic device, and a medium. The method of processing the image includes: performing image processing on an original image to obtain a component image for the brightness of the original image; determining at least one of the original image and the component image as an image to be processed; classifying pixels in the image to be processed to obtain a classification result; processing the image to be processed according to the classification result to obtain a target image; and determining an image quality of the original image according to the target image.
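A minimal sketch of the brightness-component extraction, pixel classification, and quality determination, assuming BT.601 luma as the brightness component and illustrative thresholds (the `low`/`high` values and the "fraction of normally-exposed pixels" quality proxy are assumptions, not the disclosure's actual rules):

```python
def brightness_component(rgb):
    """Brightness (luma) of an RGB pixel using ITU-R BT.601 weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def classify_pixels(brightness_image, low=50, high=200):
    """Classify each pixel of the brightness component image as dark,
    normal, or bright; the thresholds here are purely illustrative."""
    counts = {"dark": 0, "normal": 0, "bright": 0}
    for row in brightness_image:
        for y in row:
            if y < low:
                counts["dark"] += 1
            elif y > high:
                counts["bright"] += 1
            else:
                counts["normal"] += 1
    return counts

def quality_score(brightness_image):
    """Toy quality proxy: fraction of pixels with usable (normal) brightness."""
    counts = classify_pixels(brightness_image)
    return counts["normal"] / sum(counts.values())
```

An image dominated by dark or bright pixels scores low, which is one simple way a classification result over the brightness component could feed an image-quality judgment.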
METHOD AND APPARATUS FOR EVALUATING THE COMPOSITION OF PIGMENT IN A COATING BASED ON AN IMAGE
A coating analyzer is configured to receive electronic image data of a physical coating and to generate information regarding the pigments of the physical coating. The coating analyzer applies a computer vision model trained on baseline image data to the electronic image data. The coating analyzer assigns color values to the pigments forming the electronic image data and generates pigment groups based on the assigned color values. The pigment groups provide color palette data regarding the pigments forming the coating.
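The grouping of assigned color values into pigment groups can be sketched as follows. This is a stand-in for the trained computer vision model's learned grouping: the coarse per-channel bucketing and the names `pigment_groups`/`group_palette` are assumptions for illustration:

```python
def pigment_groups(pixels, bucket=64):
    """Group assigned RGB color values into coarse pigment groups by
    bucketing each channel (a simple stand-in for the model's grouping)."""
    groups = {}
    for r, g, b in pixels:
        key = (r // bucket, g // bucket, b // bucket)
        groups.setdefault(key, []).append((r, g, b))
    return groups

def group_palette(groups):
    """Average color of each pigment group, i.e. the color palette data."""
    return {key: tuple(sum(ch) / len(colors) for ch in zip(*colors))
            for key, colors in groups.items()}
```

Averaging within each group yields one representative color per pigment group, which matches the abstract's notion of the groups providing color palette data for the coating.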
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM
There is provided an information processing apparatus. An approximate discrimination unit discriminates an approximate type of an object from a first captured image obtained by capturing the object to which identification information is added. A setting unit sets, based on the approximate type of the object, an image capturing condition for capturing an image to obtain the identification information. A detail discrimination unit identifies the identification information from a second captured image obtained by capturing the object under the image capturing condition and discriminates a detailed type of the object based on a result of the identification.
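The three units form a two-stage pipeline: approximate discrimination, capture-condition setting, then detail discrimination. A hypothetical sketch (the condition table, its keys, and every callback name are illustrative assumptions, not the apparatus's actual interfaces):

```python
# Hypothetical mapping from approximate object type to the image capturing
# condition used for the second, detail-oriented shot.
CAPTURE_CONDITIONS = {
    "small_part": {"zoom": 4.0, "exposure_ms": 20},
    "large_part": {"zoom": 1.0, "exposure_ms": 5},
}

def two_stage_discriminate(first_image, approx_classifier, capture, read_id,
                           lookup_detail):
    """Approximate discrimination -> condition setting -> detail
    discrimination, mirroring the three units of the apparatus."""
    approx = approx_classifier(first_image)   # approximate type from 1st image
    condition = CAPTURE_CONDITIONS[approx]    # set the image capturing condition
    second_image = capture(condition)         # recapture under that condition
    identifier = read_id(second_image)        # e.g. decode the attached code
    return lookup_detail(approx, identifier)  # detailed type of the object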