Patent classifications
G06T11/001
DEEP PALETTE PREDICTION
Example embodiments allow for training of encoders (e.g., artificial neural networks (ANNs)) to generate a color palette based on an input image. The color palette can then be used to generate, using the input image, a quantized, reduced color depth image that corresponds to the input image. Differences between a plurality of such input images and corresponding quantized images are used to train the encoder. Encoders trained in this manner are especially suited for generating color palettes used to convert images into different reduced color depth image file formats. Such an encoder also has benefits, with respect to memory use and computational time or cost, relative to the median-cut algorithm or other methods for producing reduced color depth color palettes for images.
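The training signal described above can be sketched in a few lines: quantize an image against a palette and measure the reconstruction error that would drive encoder training. This is a minimal illustration, not the patent's implementation; the palette here is fixed, whereas in the patent it would be predicted by the encoder ANN.

```python
# Minimal sketch of the palette-quantization step that supplies the
# training signal: map each pixel to its nearest palette color and
# measure the reconstruction error. Palette is hand-picked here; in the
# patent it would be the encoder's prediction for this image.

def nearest_palette_color(pixel, palette):
    """Return the palette entry closest to `pixel` in squared RGB distance."""
    return min(palette, key=lambda c: sum((a - b) ** 2 for a, b in zip(pixel, c)))

def quantize(image, palette):
    """Map every pixel of `image` (a list of RGB tuples) onto the palette."""
    return [nearest_palette_color(p, palette) for p in image]

def reconstruction_loss(image, palette):
    """Mean squared error between the image and its quantized version --
    the difference whose aggregate over many images trains the encoder."""
    quantized = quantize(image, palette)
    return sum(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in zip(image, quantized)) / len(image)

image = [(250, 10, 10), (8, 240, 12), (150, 128, 130)]
palette = [(255, 0, 0), (0, 255, 0), (128, 128, 128)]
print(quantize(image, palette))
print(reconstruction_loss(image, palette))  # → 382.0
```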
GENERATIVE SYSTEM FOR THE CREATION OF DIGITAL IMAGES FOR PRINTING ON DESIGN SURFACES
A generative system for the creation of digital images for printing on design surfaces comprises a training dataset comprising a plurality of sample images for printing on design surfaces, and a generative adversarial network comprising a generator and a discriminator. The generator receives noise at input and is trained to generate at output, starting from the noise, a new artificially generated image suitable for printing on design surfaces. The discriminator receives at input the new artificially generated image and is trained to distinguish the newly generated image from the sample images of the training dataset.
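The adversarial objective wiring can be illustrated with the standard GAN losses. The generator and discriminator networks themselves are assumed; the discriminator is stood in for here by a probability score in (0, 1), which is a generic sketch, not the patent's specific architecture.

```python
import math

# Standard adversarial losses: the discriminator is trained to score
# training samples as real (1) and generator output as fake (0), while
# the generator is trained so its output scores as real.

def bce(prob, target):
    """Binary cross-entropy for one sample; `target` is 1 (real) or 0 (fake)."""
    eps = 1e-12
    return -(target * math.log(prob + eps) + (1 - target) * math.log(1 - prob + eps))

def discriminator_loss(d_real, d_fake):
    # low when D correctly separates real samples from generated ones
    return bce(d_real, 1) + bce(d_fake, 0)

def generator_loss(d_fake):
    # low when D is fooled into scoring the generated image as real
    return bce(d_fake, 1)

print(discriminator_loss(0.9, 0.1))  # confident D: low loss
print(generator_loss(0.1))           # G easily detected: high loss
```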
NOISE SYNTHESIS FOR DIGITAL IMAGES
Apparatus and methods for providing software and hardware based solutions to the problem of synthesizing noise for a digital image. According to one aspect, a probability image is generated and noise blocks are randomly placed at locations in the probability image where the locations have probability values that are compared to a threshold criterion, creating a synthesized noise image. Embodiments include generating synthesized film grain images and synthesized digital camera noise images.
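The probability-image thresholding can be sketched as follows. The grid sizes, block size, threshold, and Gaussian noise model are all illustrative assumptions; the patent covers film-grain and camera-noise models generally.

```python
import random

# Sketch of noise-block placement: a block of grain-like noise is
# stamped wherever the probability image passes the threshold criterion.

random.seed(0)
H, W, BLOCK, THRESHOLD = 8, 8, 2, 0.8

prob_image = [[random.random() for _ in range(W)] for _ in range(H)]
noise_image = [[0.0] * W for _ in range(H)]

for y in range(0, H, BLOCK):
    for x in range(0, W, BLOCK):
        if prob_image[y][x] > THRESHOLD:          # threshold criterion
            for dy in range(BLOCK):
                for dx in range(BLOCK):
                    # stamp a noise block at this location
                    noise_image[y + dy][x + dx] = random.gauss(0, 1)

placed = sum(1 for row in noise_image for v in row if v != 0.0)
print(f"noise values placed: {placed}")
```

Because placement is block-wise, the number of noise values written is always a multiple of the block area.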
APPARATUS AND SYSTEM FOR DISPENSING COSMETIC MATERIAL
A system is provided that includes a mobile user device (300) that executes an application and determines and transmits a recipe for generating a target cosmetic material based on a combination of a plurality of separate ingredients associated with the user. The system includes a dispensing device (100) configured to receive the transmitted recipe from the mobile user device (300) and dispense each of the plurality of separate ingredients onto a common dispensing surface such that, when the dispensed amounts of the separate ingredients are blended on the dispensing surface, the target cosmetic material is achieved.
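A recipe can be modelled as a mapping from ingredients to dispensed amounts, with the blended result approximated as an amount-weighted average of the ingredients' colors. The ingredient names, colors, and the weighted-average blend model are illustrative assumptions, not from the patent.

```python
# Hedged sketch of a recipe and its blended result. The blend is
# modelled as the amount-weighted average of each ingredient's RGB color.

def blend(recipe, ingredient_colors):
    """Weighted-average RGB of the dispensed amounts once blended."""
    total = sum(recipe.values())
    return tuple(
        round(sum(amount * ingredient_colors[name][ch]
                  for name, amount in recipe.items()) / total)
        for ch in range(3)
    )

# hypothetical ingredients and amounts (e.g., grams to dispense)
ingredient_colors = {"base": (240, 220, 200), "pigment": (120, 60, 40)}
recipe = {"base": 3.0, "pigment": 1.0}
print(blend(recipe, ingredient_colors))  # → (210, 180, 160)
```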
NON-TRANSITORY COMPUTER READABLE MEDIUM AND METHOD FOR STYLE TRANSFER
According to one or more embodiments, a non-transitory computer readable medium storing a program which, when executed, causes a computer to perform processing comprising acquiring image data, applying style transfer to the image data a plurality of times based on one or more style images, and outputting data after the style transfer is applied.
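The repeated-application structure can be sketched with a simple stand-in for the style-transfer step. Per-channel mean matching (Reinhard-style color transfer) is used here purely as a placeholder; the patent's neural style transfer is assumed and not reproduced.

```python
# Sketch of applying style transfer a plurality of times, cycling over
# one or more style images. "transfer_step" is a simple color-statistics
# stand-in for a real style-transfer model.

def channel_means(image):
    n = len(image)
    return [sum(p[ch] for p in image) / n for ch in range(3)]

def transfer_step(image, style_image, strength=0.5):
    """Shift the image's per-channel means partway toward the style's."""
    src, dst = channel_means(image), channel_means(style_image)
    shift = [(d - s) * strength for s, d in zip(src, dst)]
    return [tuple(p[ch] + shift[ch] for ch in range(3)) for p in image]

def stylize(image, style_images, repeats=3):
    """Apply the style step `repeats` times, cycling over the style images."""
    for i in range(repeats):
        image = transfer_step(image, style_images[i % len(style_images)])
    return image

content = [(100.0, 100.0, 100.0), (120.0, 110.0, 90.0)]
style_images = [[(200.0, 50.0, 50.0)]]
print(channel_means(stylize(content, style_images, repeats=3)))
```

Each repeat closes half of the remaining gap to the style statistics, so more repetitions push the output further toward the style.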
ADAPTIVE SUB-PIXEL SPATIAL TEMPORAL INTERPOLATION FOR COLOR FILTER ARRAY
The present disclosure describes devices and methods for generating RGB images from Bayer filter images using adaptive sub-pixel spatiotemporal interpolation. An electronic device includes a processor configured to estimate green values at red and blue pixel locations of an input Bayer frame based on green values at green pixel locations of the input Bayer frame and a kernel for green pixels, generate a green channel of a joint demosaiced-warped output RGB pixel from the input Bayer frame based on the green values at the green pixel locations, the kernel for green pixels, and an alignment vector map, and generate red and blue channels of the joint demosaiced-warped output RGB pixel from the input Bayer frame based on the estimated green values at the red and blue pixel locations, kernels for red and blue pixels, and the alignment vector map.
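One sub-step is easy to illustrate: estimating green at the red and blue pixel locations of a Bayer frame. This sketch assumes an RGGB pattern and a uniform averaging kernel over the 4-connected green neighbours; the patent's learned kernels and alignment-vector warping are omitted.

```python
# Estimate green at red/blue locations of an RGGB Bayer frame by
# averaging the 4-connected green neighbours (a uniform kernel).

def is_green(y, x):
    # RGGB pattern: green sits where row and column parity differ
    return (y + x) % 2 == 1

def estimate_green(bayer):
    H, W = len(bayer), len(bayer[0])
    green = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            if is_green(y, x):
                green[y][x] = bayer[y][x]          # measured green value
            else:
                # at red/blue sites, all 4-connected neighbours are green
                neighbours = [bayer[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < H and 0 <= nx < W]
                green[y][x] = sum(neighbours) / len(neighbours)
    return green

bayer = [[10, 80, 12, 82],
         [78, 14, 76, 16],
         [11, 79, 13, 81],
         [77, 15, 75, 17]]
print(estimate_green(bayer))
```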
SYSTEM AND METHOD FOR GENERATING 3D OBJECTS FROM 2D IMAGES OF GARMENTS
A system for generating three-dimensional (3D) objects from two-dimensional (2D) images of garments is presented. The system includes a data module configured to receive a 2D image of a selected garment and a target 3D model. The system further includes a computer vision model configured to generate a UV map of the 2D image of the selected garment. The system moreover includes a training module configured to train the computer vision model based on a plurality of 2D training images and a plurality of ground truth (GT) panels for a plurality of 3D training models. The system furthermore includes a 3D object generator configured to generate a 3D object corresponding to the selected garment based on the UV map generated by a trained computer vision model and the target 3D model. A related method is also presented.
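The final step, using the UV map to carry garment colors onto the target 3D model, can be sketched as a texture lookup. The computer vision model that predicts the UV map is assumed; the tiny image, the vertices, and nearest-neighbour sampling are illustrative choices.

```python
# Transfer colors from a 2D garment image onto 3D vertices via their
# UV coordinates (nearest-neighbour texture lookup).

def sample_uv(image, u, v):
    """Nearest-neighbour lookup of the image color at UV coords in [0, 1]."""
    h, w = len(image), len(image[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return image[y][x]

def texture_mesh(vertices_uv, image):
    """Assign each 3D vertex the garment color at its UV coordinate."""
    return [(xyz, sample_uv(image, u, v)) for xyz, (u, v) in vertices_uv]

garment_image = [[(200, 30, 30), (30, 30, 200)],
                 [(30, 200, 30), (220, 220, 220)]]
vertices_uv = [((0.0, 0.0, 0.0), (0.1, 0.1)),   # maps to top-left texel
               ((1.0, 0.0, 0.0), (0.9, 0.9))]   # maps to bottom-right texel
print(texture_mesh(vertices_uv, garment_image))
```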
System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
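The per-patch diffuse estimate can be sketched directly: for each planar patch, take the minimal color vector among all image regions mapped to it. The intuition, which the code comments state as an assumption, is that the darkest observation of a surface point is the one least contaminated by specular highlights, so it approximates the diffuse albedo.

```python
# For each patch, pick the minimal color vector among all observations
# mapped to it; the darkest view is least affected by specularities and
# so serves as the diffuse BRDF component.

def minimal_color_vector(color_vectors):
    """Color vector with the smallest brightness (sum of channels)."""
    return min(color_vectors, key=lambda c: sum(c))

def diffuse_brdf(patch_observations):
    """Map each patch id to its diffuse-component estimate."""
    return {patch: minimal_color_vector(colors)
            for patch, colors in patch_observations.items()}

# Observations of two patches from several viewpoints; patch "a" shows
# a specular highlight in one view, which taking the minimum discards.
patch_observations = {
    "a": [(120, 80, 60), (250, 245, 240), (118, 82, 61)],
    "b": [(40, 90, 140), (42, 88, 139)],
}
print(diffuse_brdf(patch_observations))
```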
Context-aware text color recommendation system
Embodiments are disclosed for determining a context-aware text color recommendation for text at a text location on an image. In particular, in one or more embodiments, the disclosed systems and methods comprise obtaining an image and a text location on the image, identifying at least one color theme based on a color harmonic template associated with the image, modifying the at least one color theme based on characteristics of the image, determining accessibility for at least one color in the at least one color theme based on the text location on the image, and determining a color palette recommendation for text at the text location on the image based on the determined accessibility for the at least one color in the at least one color theme.
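The accessibility determination can be illustrated with a concrete metric. The abstract does not name one, so this sketch assumes the well-known WCAG 2.x contrast ratio between a candidate theme color and the background color at the text location.

```python
# Filter a color theme by WCAG contrast against the background at the
# text location (AA threshold for normal text is 4.5:1).

def rel_luminance(rgb):
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((rel_luminance(fg), rel_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def accessible_colors(theme, background, min_ratio=4.5):
    """Keep only theme colors that meet the contrast threshold."""
    return [c for c in theme if contrast_ratio(c, background) >= min_ratio]

theme = [(255, 255, 255), (200, 40, 40), (20, 20, 20)]
background = (245, 245, 245)
print(accessible_colors(theme, background))  # white is rejected on a light background
```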
Method for simulating the rendering of a make-up product on a body area
A method for simulating a rendering of a makeup product on a body area, including the steps of: acquiring an image of a subject's body area without makeup; determining first color parameters of the pixels of the image corresponding to the body area without makeup; identifying the pixels of the body area without makeup exhibiting the highest brightness or red component value; and determining second color parameters of the pixels of the image corresponding to the body area, wherein the second color parameters render a making up of the body area by the makeup product.
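The pixel-selection and recoloring steps can be sketched as follows. The scoring rule, the top-fraction cutoff, and the alpha-blend used for the second color parameters are illustrative assumptions; the patent only specifies selecting by highest brightness or red component.

```python
# Select the brightest / reddest skin pixels, then derive "second color
# parameters" by blending the makeup product color over the original
# ("first") color parameters.

def brightness(rgb):
    return sum(rgb) / 3.0

def highlight_pixels(pixels, top_fraction=0.25):
    """Indices of the top fraction of pixels by brightness or red value."""
    scored = sorted(range(len(pixels)),
                    key=lambda i: max(brightness(pixels[i]), pixels[i][0]),
                    reverse=True)
    k = max(1, int(len(pixels) * top_fraction))
    return set(scored[:k])

def apply_makeup(pixels, makeup_rgb, alpha=0.6):
    """Blend the product color over each pixel, keeping some of the
    original shading from the first color parameters."""
    return [tuple(round((1 - alpha) * p[ch] + alpha * makeup_rgb[ch])
                  for ch in range(3))
            for p in pixels]

skin = [(210, 170, 150), (180, 140, 120), (230, 185, 160), (190, 150, 130)]
print(highlight_pixels(skin))            # index of the brightest pixel
print(apply_makeup(skin, (180, 40, 60))) # simulated made-up colors
```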