Patent classifications
G06F16/5838
Determining fine-grain visual style similarities for digital images by extracting style embeddings disentangled from image content
The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and flexibly identifying digital images with similar style to a query digital image using fine-grain style determination via weakly supervised style extraction neural networks. For example, the disclosed systems can extract a style embedding from a query digital image using a style extraction neural network such as a novel two-branch autoencoder architecture or a weakly supervised discriminative neural network. The disclosed systems can generate a combined style embedding by combining complementary style embeddings from different style extraction neural networks. Moreover, the disclosed systems can search a repository of digital images to identify digital images with similar style to the query digital image. The disclosed systems can also learn parameters for one or more style extraction neural networks through weakly supervised training without a specifically labeled style ontology for sample digital images.
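The retrieval step described above can be sketched as follows. This is a minimal illustration, not the patented method: the embeddings are assumed to be precomputed vectors, combination by concatenation and ranking by cosine similarity are plausible choices the abstract does not specify, and all names (`combine_embeddings`, `search_by_style`) are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two style embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def combine_embeddings(e1: np.ndarray, e2: np.ndarray) -> np.ndarray:
    """Combine complementary style embeddings; concatenation is one
    plausible combination (the abstract does not specify the operator)."""
    return np.concatenate([e1, e2])

def search_by_style(query_emb: np.ndarray, repository: dict, k: int = 5):
    """Rank repository images by style similarity to the query embedding
    and return the top-k (name, score) pairs."""
    scored = [(name, cosine_similarity(query_emb, emb))
              for name, emb in repository.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

With a toy repository of 2-D embeddings, a query close in direction to one stored embedding ranks that image first, independent of embedding magnitude, which is why cosine similarity is a common choice for style retrieval.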
TEMPORAL-BASED VISUALIZED IDENTIFICATION OF COHORTS OF DATA POINTS PRODUCED FROM WEIGHTED DISTANCES AND DENSITY-BASED GROUPING
A user-selected group of data points is received. Weighted distances between further data points and the user-selected group of data points are computed, the weighted distances computed based on respective weights assigned to dimensions of data points. Density-based grouping of the further data points is performed based on the computed weighted distances, the density-based grouping producing cohorts of data points. A graphical visualization is generated including pixels representing the user-selected group of data points and the cohorts of data points. The graphical visualization provides a temporal-based visualized identification of the cohorts with the user-selected group of data points.
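The weighted-distance and density-based grouping steps above can be sketched as follows. This is a simplified stand-in, assuming a weighted Euclidean metric and a DBSCAN-like grouping (points within `eps` of each other, with at least `min_pts` in a neighborhood, form a cohort); the abstract does not name a specific density algorithm.

```python
import numpy as np

def weighted_distance(x, y, weights) -> float:
    """Euclidean distance with a per-dimension weight on each squared term."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(np.sum(weights * d * d)))

def density_groups(points, weights, eps, min_pts=2):
    """Group points whose weighted distance is <= eps into cohorts
    (connected components), skipping isolated low-density points."""
    n = len(points)
    adj = [[j for j in range(n) if j != i and
            weighted_distance(points[i], points[j], weights) <= eps]
           for i in range(n)]
    seen, cohorts = set(), []
    for i in range(n):
        if i in seen or len(adj[i]) + 1 < min_pts:
            continue  # isolated point: treat as noise, not a cohort
        stack, component = [i], []
        while stack:
            p = stack.pop()
            if p in seen:
                continue
            seen.add(p)
            component.append(p)
            stack.extend(adj[p])
        cohorts.append(sorted(component))
    return cohorts
```

Raising the weight on one dimension stretches distances along it, so the same `eps` produces tighter cohorts in that dimension, which is the role the per-dimension weights play in the abstract.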
Contextual local image recognition dataset
A contextual local image recognition module of a device retrieves a primary content dataset from a server and then generates and updates a contextual content dataset based on an image captured with the device. The device stores the primary content dataset and the contextual content dataset. The primary content dataset comprises a first set of images and corresponding virtual object models. The contextual content dataset comprises a second set of images and corresponding virtual object models retrieved from the server.
Method and system for providing a compact graphical user interface for flexible filtering of data
There is presented a method and system for providing a compact graphical user interface for flexible filtering of data. The method comprises showing a search interface on a display device for filtering a content set by a plurality of domains, including a first domain, displaying, within the search interface, a first graphical representation of a parameter set of the first domain in response to a selecting of the first domain, receiving a first parameter subset from the first graphical representation, filtering the content set using the first parameter subset to obtain a search result, and displaying the search result on a display device. The search interface includes a temporarily visible menu for selecting parameter sets of the domains and a compact single line query box to display graphical representations of parameter sets or to provide a conventional text entry box.
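The core filtering step above reduces to keeping the items whose value in the selected domain falls in the chosen parameter subset. A minimal sketch, with content items modeled as dictionaries (a representation the abstract does not prescribe):

```python
def filter_content(content, domain, parameter_subset):
    """Filter the content set: keep items whose value in the selected
    domain is one of the chosen parameters."""
    return [item for item in content if item.get(domain) in parameter_subset]
```

Repeating the call with a second domain and subset composes the filters, which is how the "plurality of domains" in the abstract would narrow one shared content set.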
METHOD AND SYSTEM FOR MULTI-DIMENSIONAL IMAGE MATCHING WITH CONTENT IN RESPONSE TO A SEARCH QUERY
According to one embodiment, in response to a search query received from a client, a search is performed in a content database to identify a list of one or more content items based on one or more keywords of the search query. A first search is performed in an image store to identify a first set of one or more images using a first image searching method. A second search is performed in the image store to identify a second set of one or more images using a second image searching method that is different than the first image searching method. A search result having at least a portion of the content items is transmitted to the client. Each content item is associated with one of the images selected from the first set of images or the second set of images.
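The flow above can be sketched as two pluggable image-searching methods run over the same store, with each content item paired to an image from one of the two result sets. The alternation policy below is purely illustrative; the abstract does not say how a content item's image is selected between the sets.

```python
def multi_method_image_search(query, content_items, image_store,
                              method_a, method_b):
    """Run two different image-searching methods over the image store and
    attach one image, drawn from either result set, to each content item."""
    set_a = method_a(query, image_store)
    set_b = method_b(query, image_store)
    result = []
    for i, item in enumerate(content_items):
        # Alternate between the two result sets (one plausible policy).
        pool = set_a if i % 2 == 0 else set_b
        image = pool[i % len(pool)] if pool else None
        result.append({"item": item, "image": image})
    return result
```

Passing the methods in as callables keeps the two searching strategies (e.g., keyword match vs. visual similarity) interchangeable, which matches the abstract's requirement that they merely be different from each other.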
System and Method of Identifying Visual Objects
A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images and displays them on the display, the processor compares the information retrieved in connection with one image with information retrieved in connection with subsequent images. The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
SYSTEMS AND METHODS FOR LOCATING AND RECOVERING KEY POPULATIONS OF DESIRED DATA
A system and a method for locating populations of content-specific data portions. The method includes determining a current population of data portions to be searched based on at least one prioritization criterion; accessing the current population of data portions; examining at least one data portion of the current population of data portions and extracting content-specific data; comparing the content-specific data to at least one suspect criterion; determining whether the current population meets at least one population criterion by analyzing the content-specific data; determining at least one next population of data portions to be searched based on proximity to the current population; and determining the at least one next population of data portions to be searched based on the at least one prioritization criterion.
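The determining steps above form an iterative loop: search the highest-priority population first, and when a population qualifies, move next to populations adjacent to it. The greedy traversal below is one plausible reading; the concrete criteria, adjacency structure, and names (`search_populations`, `min_hits`) are hypothetical.

```python
def search_populations(populations, adjacency, priority, suspect, min_hits):
    """Visit populations in priority order; when one meets the population
    criterion (enough suspect data portions), examine its neighbours next,
    modeling the abstract's proximity-based selection of the next population."""
    order = sorted(populations, key=priority, reverse=True)
    by_name = {p["name"]: p for p in populations}
    visited, queue, qualifying = set(), list(order), []
    while queue:
        pop = queue.pop(0)
        if pop["name"] in visited:
            continue
        visited.add(pop["name"])
        hits = [portion for portion in pop["portions"] if suspect(portion)]
        if len(hits) >= min_hits:
            qualifying.append(pop["name"])
            # Proximity: push adjacent populations to the front of the queue.
            queue = [by_name[n] for n in adjacency.get(pop["name"], [])] + queue
    return qualifying
```

Note how a low-priority population adjacent to a qualifying one is examined before a higher-priority but unrelated population, which is the behavior the proximity criterion in the abstract is meant to produce.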
Systems and methods for matching color and appearance of target coatings
Systems and methods for matching color and appearance of a target coating are provided herein. The system includes an electronic imaging device configured to receive target image data of the target coating. The target image data includes target image features. The system further includes one or more feature extraction algorithms that extract the target image features from the target image data. The system further includes a machine-learning model that identifies a calculated match sample image from a plurality of sample images utilizing the target image features. The machine-learning model includes pre-specified matching criteria representing the plurality of sample images for identifying the calculated match sample image from the plurality of sample images. The calculated match sample image is utilized for matching color and appearance of the target coating.
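The pipeline above — extract features from the target image, then pick the closest sample image — can be sketched as a nearest-neighbor match. The per-channel mean-color feature and Euclidean matching criterion below are toy stand-ins; the patent's actual feature-extraction algorithms and machine-learning model are not specified in the abstract.

```python
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy feature extractor: mean value per color channel of an
    (H, W, C) image array (a stand-in for the unspecified algorithms)."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def match_coating(target_image: np.ndarray, sample_images: dict) -> str:
    """Return the name of the sample image whose features lie closest
    to the target image's features."""
    target = extract_features(target_image)
    feats = {name: extract_features(img) for name, img in sample_images.items()}
    return min(feats, key=lambda name: float(np.linalg.norm(feats[name] - target)))
```

A learned model would replace the Euclidean `min` with criteria fitted to the sample images, but the input/output contract — target features in, best-matching sample image out — is the same.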
OBJECT RECOGNITION BASED IMAGE OVERLAYS
Systems and methods for distributing photo filters based on the location of the object in the image are described. A photo filter publication system detects that a client device in communication with the system has captured an image, identifies an object in the image, identifies a location of the object in the image, identifies an image overlay associated with the identified location and having object criteria satisfied by the identified object, and provides the identified image overlay to the client device.
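The overlay-selection logic above can be sketched as a lookup over candidate overlays, each carrying a location and an object-criteria predicate. The dictionary representation and names below are illustrative assumptions, not the patent's data model.

```python
def select_overlay(overlays, detected_object, object_location):
    """Return the first overlay associated with the identified location
    whose object criteria are satisfied by the identified object."""
    for overlay in overlays:
        if (overlay["location"] == object_location
                and overlay["criteria"](detected_object)):
            return overlay["name"]
    return None  # no overlay matches: nothing is provided to the client
```

Both conditions must hold: an overlay registered for the right location is still skipped if the detected object fails its criteria, mirroring the abstract's two-part match.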
IMAGE-BASED POPULARITY PREDICTION
A machine may be configured to access an image of an item described by a description of the item. The machine may determine an image quality score of the image based on an analysis of the image. A request for search results that pertain to the description may be received by the machine, and the machine may present a search result that references the item's image, based on its image quality score. Also, the machine may access images of items and descriptions of items and generate a set of most frequent text tokens included in the item descriptions. The machine may identify an image feature exhibited by an item's image and determine that a text token from the corresponding item description matches one of the most frequent text tokens. A data structure may be generated by the machine to correlate the identified image feature with the text token.
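The token-correlation part of the abstract — build the most frequent text tokens across item descriptions, then tie an identified image feature to the matching tokens in one item's description — can be sketched as below. Whitespace tokenization and the output structure are simplifying assumptions.

```python
from collections import Counter

def most_frequent_tokens(descriptions, top_n=3):
    """Build the set of most frequent text tokens across item descriptions."""
    counts = Counter(tok for desc in descriptions for tok in desc.lower().split())
    return {tok for tok, _ in counts.most_common(top_n)}

def correlate_feature(image_feature, description, frequent_tokens):
    """Correlate an identified image feature with the tokens of one item's
    description that also appear in the frequent-token set."""
    matches = [tok for tok in description.lower().split() if tok in frequent_tokens]
    return {"feature": image_feature, "tokens": matches}
```

The resulting data structure pairs a visual feature with the popular description vocabulary it co-occurs with, which is the correlation the abstract's final sentence describes.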