Patent classifications
G06T7/90
DEEP PALETTE PREDICTION
Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
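The training signal described above (differences between an input image and its palette-quantized version) can be sketched with a simple hard-assignment quantizer. This is an illustrative reconstruction, not the patent's implementation; the function names and the choice of mean squared error as the difference measure are assumptions.

```python
import numpy as np

def quantize_with_palette(image, palette):
    """Map each pixel to its nearest palette color (hard assignment).

    image:   (H, W, 3) float array of RGB values in [0, 1]
    palette: (K, 3) float array of K palette colors
    Returns the quantized, reduced-color-depth (H, W, 3) image.
    """
    pixels = image.reshape(-1, 3)                       # (H*W, 3)
    # Squared distance from every pixel to every palette entry.
    d = ((pixels[:, None, :] - palette[None, :, :]) ** 2).sum(axis=2)
    nearest = d.argmin(axis=1)                          # (H*W,)
    return palette[nearest].reshape(image.shape)

def reconstruction_loss(image, palette):
    """Mean squared error between the input image and its quantized
    version -- one plausible form of the training difference."""
    return float(((image - quantize_with_palette(image, palette)) ** 2).mean())
```

In an actual training loop the hard `argmin` would typically be replaced by a differentiable relaxation (e.g. a softmax over distances) so gradients can flow back to the encoder producing the palette.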
METHOD, DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT FOR DETECTING IMAGE FRAME LOSS
An image frame loss detection method is performed by a computer device, including: acquiring first coded data respectively corresponding to a plurality of first image frames and a color signal corresponding to at least one second image frame; obtaining second coded data corresponding to the at least one second image frame generated by a terminal device through image rendering of a color signal based on the coded data respectively corresponding to the plurality of first image frames; and comparing the first coded data respectively corresponding to the plurality of first image frames with the second coded data corresponding to the at least one second image frame to determine whether a frame loss occurs. The first coded data and the second coded data each include color-coded data respectively corresponding to M image blocks of a corresponding image frame, and each of the M image blocks has a corresponding color in the image frame.
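The comparison step can be sketched as follows: each frame carries a color code per block, and frame loss shows up as source-side codes that never appear on the rendered side. The block-coding scheme (one representative pixel color per block) and the function names are illustrative assumptions, not the patented encoding.

```python
def block_colors(frame, m):
    """Hypothetical encoder: split a frame (a flat list of pixel colors)
    into M blocks and code each block by its first pixel's color."""
    size = max(1, len(frame) // m)
    return tuple(frame[i * size] for i in range(m))

def detect_frame_loss(first_coded, second_coded):
    """Compare source-side frame codes against rendered-side frame codes
    and return the codes of frames missing from the rendered sequence."""
    rendered = set(second_coded)
    return [code for code in first_coded if code not in rendered]
```

For example, if the source sequence carries three coded frames and only two of their codes are recovered from the rendered output, the remaining code identifies the lost frame.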
APPARATUS AND SYSTEM FOR DISPENSING COSMETIC MATERIAL
A system is provided that includes a mobile user device (300) that executes an application and determines and transmits a recipe for generating a target cosmetic material that is based on a combination of a plurality of separate ingredients that are associated with the user. The system includes a dispensing device (100) configured to receive the transmitted recipe from the mobile user device 300) and dispense each of the plurality of separate ingredients onto a common dispensing surface such that when the dispensed amounts of each of the plurality of separate ingredients is blended on the dispensing surface, the target cosmetic material is achieved.
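The recipe concept above can be illustrated with a minimal data model: a recipe maps each ingredient to a dispensed amount, and a simple (hypothetical) blending model predicts the resulting color as the amount-weighted average of the ingredient colors. Nothing here reflects the actual device protocol; it is a sketch of the recipe/blend relationship only.

```python
def blend_recipe(recipe, ingredient_colors):
    """Predict the blended color for a recipe.

    recipe:            dict mapping ingredient name -> dispensed amount
    ingredient_colors: dict mapping ingredient name -> (r, g, b) color
    Assumes (hypothetically) that blending is a linear, amount-weighted
    mix of the ingredient colors.
    """
    total = sum(recipe.values())
    return tuple(
        sum(amount * ingredient_colors[name][c] for name, amount in recipe.items()) / total
        for c in range(3)
    )
```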
METHOD FOR TRAINING IMAGE PROCESSING MODEL
This disclosure relates to a model training method and apparatus and an image processing method and apparatus. The model training method includes: obtaining a first sample image and a first standard region proportion corresponding to a first object in the first sample image; obtaining a standard region segmentation result corresponding to the first sample image based on the first standard region proportion; and training a first initial segmentation model based on the first sample image and the standard region segmentation result, to obtain a first target segmentation model.
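One way to read "obtaining a standard region segmentation result based on the region proportion" is to derive a pseudo-label mask whose foreground covers exactly that fraction of the image. The sketch below assumes a per-pixel score map is available and labels the top-`proportion` fraction of pixels as the object region; the score map and function name are illustrative assumptions, not the disclosed method.

```python
import numpy as np

def proportion_to_mask(score_map, proportion):
    """Turn a region proportion into a binary segmentation label.

    score_map:  (H, W) float array of per-pixel object scores
    proportion: target fraction of pixels to label as the object region
    Returns a (H, W) uint8 mask whose foreground covers that fraction.
    """
    flat = score_map.ravel()
    k = max(1, int(round(proportion * flat.size)))
    # k-th largest score becomes the foreground threshold.
    threshold = np.partition(flat, -k)[-k]
    return (score_map >= threshold).astype(np.uint8)
```

A mask built this way could then pair with the sample image to supervise the initial segmentation model in the usual way (e.g. a per-pixel cross-entropy loss).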
METHOD AND APPARATUS FOR EVALUATING THE COMPOSITION OF PIGMENT IN A COATING BASED ON AN IMAGE
A coating analyzer is configured to receive electronic image data of a physical coating and to generate information regarding the pigments of the physical coating. The coating analyzer applies a computer vision model trained on baseline image data to the electronic image data. The coating analyzer assigns color values to the pigments forming the electronic image data and generates pigment groups based on the assigned color values. The pigment groups provide color palette data regarding the pigments forming the coating.
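The "pigment groups based on assigned color values" step can be approximated with ordinary color clustering. The sketch below uses a basic k-means loop as a stand-in for the trained computer vision model's grouping; the function name and the choice of k-means are assumptions for illustration.

```python
import numpy as np

def group_pigments(colors, k, iters=20, seed=0):
    """Cluster per-pixel color values into k pigment groups (k-means).

    colors: (N, 3) float array of RGB color values
    Returns (centers, labels): k group colors and each pixel's group index.
    """
    rng = np.random.default_rng(seed)
    # Initialize group colors from k distinct pixels.
    centers = colors[rng.choice(len(colors), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign every pixel to its nearest group color.
        d = ((colors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Move each group color to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = colors[labels == j].mean(axis=0)
    return centers, labels
```

The resulting group colors would serve as the color palette data the abstract describes, with each group's pixel count indicating the relative prominence of that pigment in the coating.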