Patent classifications
G06T11/10
SYSTEMS AND METHODS FOR HIERARCHICAL TEXT-CONDITIONAL IMAGE GENERATION
Disclosed herein are methods, systems, and computer-readable media for generating an image corresponding to a text input. In an embodiment, operations may include accessing a text description and inputting the text description into a text encoder. The operations may include receiving, from the text encoder, a text embedding, and inputting at least one of the text description or the text embedding into a first sub-model configured to generate, based on at least one of the text description or the text embedding, a corresponding image embedding. The operations may include inputting at least one of the text description or the corresponding image embedding, generated by the first sub-model, into a second sub-model configured to generate, based on at least one of the text description or the corresponding image embedding, an output image. The operations may include making the output image, generated by the second sub-model, accessible to a device.
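A minimal sketch of the two-stage pipeline this abstract describes (a text encoder, a first "prior" sub-model, and a second "decoder" sub-model), assuming PyTorch; every class, dimension, and layer below is an illustrative placeholder, not the patent's actual architecture:

```python
import torch
import torch.nn as nn

EMB_DIM = 64  # illustrative embedding size

class TextEncoder(nn.Module):
    """Maps a tokenized text description to a text embedding."""
    def __init__(self, vocab_size=1000):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, EMB_DIM)

    def forward(self, token_ids):
        return self.embed(token_ids)

class PriorSubModel(nn.Module):
    """First sub-model: text embedding -> corresponding image embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, EMB_DIM), nn.ReLU(),
                                 nn.Linear(EMB_DIM, EMB_DIM))

    def forward(self, text_emb):
        return self.net(text_emb)

class DecoderSubModel(nn.Module):
    """Second sub-model: image embedding -> output image (here 3x32x32)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(EMB_DIM, 3 * 32 * 32)

    def forward(self, img_emb):
        return self.net(img_emb).view(-1, 3, 32, 32)

def generate(token_ids):
    text_emb = TextEncoder()(token_ids)   # text description -> text embedding
    img_emb = PriorSubModel()(text_emb)   # first sub-model
    return DecoderSubModel()(img_emb)     # second sub-model

image = generate(torch.randint(0, 1000, (1, 8)))  # one 8-token "description"
print(image.shape)  # torch.Size([1, 3, 32, 32])
```

The essential point is the hand-off: the first sub-model maps the text embedding to an image embedding, and the second sub-model maps that image embedding to the output image.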
DETECTION DEVICE
According to an aspect, a detection device includes: an optical sensor including photodetection elements; an object placement member having a light-transmitting property and configured such that objects to be detected are placed thereon; and a control circuit. The optical sensor is configured to acquire image data at intervals of a predetermined period. The control circuit is configured to: extract a first outline of at least one region from first image data, calculate first coordinates corresponding to the first outline, and label the first coordinates with first identification information; extract a second outline of at least one region from second image data, calculate second coordinates corresponding to the second outline not containing the first coordinates, and newly add second identification information corresponding to the second outline not containing the first coordinates; and calculate a total number of pieces of the first identification information and the second identification information.
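A simplified sketch of the labeling and counting logic, assuming each extracted outline has been reduced to a bounding box and that "not containing the first coordinates" is approximated by an overlap test; both simplifications are assumptions, not the device's actual method:

```python
def overlaps(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class ObjectCounter:
    def __init__(self):
        self.labeled = {}   # identification info (ID) -> outline coordinates
        self.next_id = 0

    def update(self, outlines):
        """Label outlines from one frame; new IDs only for unseen regions."""
        for outline in outlines:
            if not any(overlaps(outline, o) for o in self.labeled.values()):
                self.labeled[self.next_id] = outline   # newly added ID
                self.next_id += 1
        return len(self.labeled)  # total pieces of identification information

counter = ObjectCounter()
counter.update([(0, 0, 4, 4), (10, 10, 14, 14)])  # first image data: 2 objects
total = counter.update([(1, 1, 5, 5), (20, 20, 24, 24)])  # one new region
print(total)  # 3
```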
DISPLAY METHOD, INFORMATION PROCESSING APPARATUS, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
A display method, executed by a processor, includes: first displaying, in a first area of a screen and in a format including conditional branches, a path indicating a relationship diagram between services, in different colors for the respective types of services; changing a display position of the path with respect to a map such that display positions of the institutions providing the services included in the path are superimposed on the map; and second displaying the map and the color-coded path in a second area of the screen.
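A small illustrative sketch of the repositioning step, assuming services, institutions, and map coordinates are plain Python records; the color table and data shapes are invented for the example:

```python
SERVICE_COLORS = {"medical": "red", "welfare": "blue"}  # one color per type

institutions = {"clinic_a": (120, 80), "office_b": (300, 150)}  # map x, y

path = [  # ordered services in the relationship diagram
    {"service": "checkup", "type": "medical", "institution": "clinic_a"},
    {"service": "benefits", "type": "welfare", "institution": "office_b"},
]

def layout_on_map(path, institutions):
    """Return (x, y, color) draw instructions superimposed on the map."""
    return [
        (*institutions[step["institution"]], SERVICE_COLORS[step["type"]])
        for step in path
    ]

for x, y, color in layout_on_map(path, institutions):
    print(f"draw node at ({x}, {y}) in {color}")
```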
Machine learning-based 2D structured image generation
Techniques are described for a multiple-phase process that uses machine learning (ML) models to produce a texturized version of an input image. During a first phase, a pix2pix-based ML model automatically generates an image depicting structured texture from an input image that visually identifies a plurality of image areas for the structured texture. During a second phase, a neural style transfer-based ML model applies the style of a style image (e.g., a target image from the training dataset of the pix2pix-based ML model) to the texture image generated in the first phase (the content image), producing a modified texture image. According to an embodiment, during a third phase, the texture image produced in the first phase and the modified texture image produced in the second phase are combined to produce a structured texture image with a moderated amount of detail.
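A minimal sketch of the third (combination) phase, assuming the phase-one and phase-two outputs are already available as NumPy arrays; the weighted blend and the `alpha` knob are one plausible way to moderate detail, not necessarily the described method:

```python
import numpy as np

def combine(texture_img, styled_img, alpha=0.5):
    """Weighted blend of the structured texture and its stylized version."""
    blended = (alpha * texture_img.astype(float)
               + (1 - alpha) * styled_img.astype(float))
    return blended.clip(0, 255).astype(np.uint8)

texture = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # phase 1
styled = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # phase 2
print(combine(texture, styled, alpha=0.7).shape)  # (64, 64, 3)
```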
Image generation using one or more neural networks
Apparatuses, systems, and techniques are presented to generate or manipulate digital images. In at least one embodiment, a network is trained to generate modified images including user-selected features.
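Since the abstract leaves the architecture open, the following is only a bare-bones sketch of conditioning generation on user-selected features, assuming PyTorch; the feature vector, dimensions, and generator body are placeholders:

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=16, feature_dim=4):
        super().__init__()
        self.net = nn.Linear(noise_dim + feature_dim, 3 * 32 * 32)

    def forward(self, noise, features):
        x = torch.cat([noise, features], dim=1)  # inject selected features
        return self.net(x).view(-1, 3, 32, 32)

gen = ConditionalGenerator()
features = torch.tensor([[1.0, 0.0, 1.0, 0.0]])  # e.g., user toggles
print(gen(torch.randn(1, 16), features).shape)   # torch.Size([1, 3, 32, 32])
```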
Computer-based techniques for generating and applying digital color palettes that correspond to user interest digital color
The disclosed systems and methods provide for automatically generating digital color palettes associated with user interest colors. A method may include receiving, by a computer, a user interest digital color value; retrieving a color palette ruleset whose rules produce Boolean outputs indicating whether a color palette rule is satisfied by candidate color values; applying the color palette ruleset to the user interest color value; and identifying digital color values for a digital color palette associated with the user interest color value. Identifying the digital color values can include: identifying a complement color value to the user interest color, identifying a neutral gray color corresponding to the user interest color, mapping the user interest color, complement color, and neutral gray color in a 3D color space, and applying triangulation techniques to identify a subset of color values within a triangular range according to the color palette ruleset.
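A sketch of one plausible reading of this pipeline in RGB space; the complement rule, the luminance-based gray, the barycentric sampling used for "triangulation", and the example ruleset are all assumptions:

```python
import random

def complement(c):
    """Per-channel RGB complement (one plausible complement rule)."""
    return tuple(255 - v for v in c)

def neutral_gray(c):
    """Gray with the same luminance as the user interest color."""
    g = round(0.299 * c[0] + 0.587 * c[1] + 0.114 * c[2])
    return (g, g, g)

def in_triangle_samples(a, b, c, n=200):
    """Sample colors inside the triangle spanned by a, b, c (barycentric)."""
    for _ in range(n):
        u, v = random.random(), random.random()
        if u + v > 1:
            u, v = 1 - u, 1 - v
        w = 1 - u - v
        yield tuple(round(u * a[i] + v * b[i] + w * c[i]) for i in range(3))

def build_palette(user_color, ruleset, size=5):
    tri = (user_color, complement(user_color), neutral_gray(user_color))
    passing = [c for c in in_triangle_samples(*tri)
               if all(rule(c) for rule in ruleset)]  # Boolean rule outputs
    return passing[:size]

# Example ruleset: exclude near-black and near-white candidates.
ruleset = [lambda c: sum(c) > 90, lambda c: sum(c) < 675]
print(build_palette((200, 40, 60), ruleset))
```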
Non-linear latent filter techniques for image editing
Systems and methods use a non-linear latent filter neural network for editing an image. An image editing system trains a first neural network by minimizing a loss based upon a predicted attribute value for a target attribute in a training image. The image editing system obtains a latent space representation of an input image to be edited and a target attribute value for the target attribute in the input image. The image editing system provides the latent space representation and the target attribute value as input to the trained first neural network for modifying the target attribute in the input image to generate a modified latent space representation of the input image. The image editing system provides the modified latent space representation as input to a second neural network to generate an output image with a modification to the target attribute corresponding to the target attribute value.
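A sketch of the inference path, assuming PyTorch; `LatentFilter` stands in for the trained first neural network, a plain linear layer stands in for the second (generator) network, and the dimensions are illustrative:

```python
import torch
import torch.nn as nn

LATENT_DIM = 512  # e.g., a StyleGAN-like latent size (assumption)

class LatentFilter(nn.Module):
    """Non-linear map: (latent code, target attribute value) -> new latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT_DIM))

    def forward(self, w, target_value):
        return self.net(torch.cat([w, target_value], dim=1))

latent_filter = LatentFilter()
generator = nn.Linear(LATENT_DIM, 3 * 64 * 64)  # placeholder second network

w = torch.randn(1, LATENT_DIM)                  # latent space representation
target = torch.tensor([[0.8]])                  # target attribute value
w_edited = latent_filter(w, target)             # modified latent representation
image = generator(w_edited).view(1, 3, 64, 64)  # output image with the edit
print(image.shape)
```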
HEATMAP IN LOW-CODE INTEGRATION ENVIRONMENT
Conventional problem detection for integration processes in an integration platform is inefficient and requires significant expertise. Disclosed embodiments generate a heatmap as an overlay over the components of an integration process, represented on a virtual canvas. The heatmap may comprise a color map with color regions that each represent the value of one or more predicted and/or actual performance metrics for the components of the integration process overlaid with that color region. Examples of performance metrics include the number of errors, the severity of errors, data throughput, bandwidth utilization, data volume, processing time, and the like. The heatmap may comprise a plurality of levels of resolution, transitioned between by zooming in and out of the virtual canvas. Higher levels of resolution may comprise indicators conveying additional information about the performance of the corresponding areas.
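An illustrative sketch of mapping a per-component metric to heatmap colors at two zoom levels; the chosen metric, the green-to-red ramp, and the grouping of components into coarse regions are all assumptions:

```python
def metric_to_color(value, lo=0.0, hi=1.0):
    """Linear green (low) -> red (high) ramp over the metric range."""
    t = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    return (int(255 * t), int(255 * (1 - t)), 0)  # (R, G, B)

components = {"connector_a": 0.9, "mapper_b": 0.2, "router_c": 0.6}  # error rate
regions = {"ingest": ["connector_a", "mapper_b"], "route": ["router_c"]}

# Zoomed in (higher resolution): one color per component.
detail = {name: metric_to_color(v) for name, v in components.items()}

# Zoomed out (lower resolution): one color per region, aggregated.
overview = {r: metric_to_color(sum(components[c] for c in cs) / len(cs))
            for r, cs in regions.items()}

print(detail)
print(overview)
```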
INFORMATION PROCESSING APPARATUS, METHOD FOR CONTROLLING INFORMATION PROCESSING APPARATUS, AND STORAGE MEDIUM
A non-transitory computer-readable storage medium storing a program which causes a computer to execute: obtaining image data; and displaying, based on automatic extraction of a specific color included in the obtained image data, a first color palette including a color object indicating the specific color, the specific color being at least one, but not all, of a plurality of colors included in the obtained image data, wherein the displaying includes changing a color of editing target contents to the specific color based on selection, in the first color palette, of the color object indicating the specific color.
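A rough sketch of the extraction step, assuming the image is available as a flat list of RGB pixels; the quantization used to merge near-duplicate colors and the most-frequent rule for choosing the specific colors are assumptions:

```python
from collections import Counter

def extract_palette(pixels, num_colors=3, step=32):
    """Return the most frequent quantized colors (some, not all, colors)."""
    quantized = [tuple((v // step) * step for v in px) for px in pixels]
    return [color for color, _ in Counter(quantized).most_common(num_colors)]

pixels = [(250, 10, 10)] * 50 + [(10, 250, 10)] * 30 + [(5, 5, 240)] * 5
palette = extract_palette(pixels)        # first color palette
selected = palette[0]                    # user selects a color object
content = {"fill_color": selected}       # editing target takes that color
print(palette, content)
```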
EFFECT PROCESSING METHOD, ELECTRONIC DEVICE AND NON-TRANSITORY STORAGE MEDIUM
The present disclosure relates to an effect processing method, an electronic device, and a non-transitory storage medium. The effect processing method includes: in response to an effect processing request input on a mobile terminal, acquiring an image to be processed; performing effect processing on the image by a first generation model deployed on the mobile terminal to obtain a target effect image; and displaying the target effect image. The first generation model is obtained by training a first generative adversarial network; at least part of the training data of the first generative adversarial network is generated by a second generation model deployed on a server, and the second generation model is obtained by training a second generative adversarial network.
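A schematic sketch of the data flow this abstract describes, not a real GAN: both models are placeholder callables, and the only point shown is that the server-side second model synthesizes training pairs for the smaller on-device first model:

```python
import random

def second_generation_model(image):     # large model, deployed on a server
    return [min(v + 0.1, 1.0) for v in image]  # stand-in "effect"

def synthesize_training_data(images):
    """Server model generates at least part of the training pairs."""
    return [(img, second_generation_model(img)) for img in images]

def train_first_generation_model(pairs):
    """Stand-in for adversarial training of the on-device model."""
    bias = sum(t[0] - s[0] for s, t in pairs) / len(pairs)  # toy "learning"
    return lambda img: [v + bias for v in img]              # first model

images = [[random.random() for _ in range(4)] for _ in range(8)]
first_model = train_first_generation_model(synthesize_training_data(images))
print(first_model(images[0]))  # effect processing on the mobile terminal
```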