Patent classifications
G06F16/5838
System and method for selecting sponsored images to accompany text
A system for selecting an image to accompany user-submitted text in a social media post. The system performs operations including: receiving text from the user; identifying one or more search terms based on the text; identifying candidate images from images in one or more image databases using the search terms, where the candidate images comprise a sponsored image; presenting one or more candidate images to the user, where the sponsored image is presented preferentially compared to other candidate images; receiving from the user a selected image from the one or more candidate images; generating the social media post comprising the selected image and the user-submitted text; and transmitting the social media post for display.
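As a concrete illustration, the pipeline above might be sketched as follows; the keyword-based term extraction, tag-matching retrieval, and sponsored-first ordering are simplifying assumptions for this sketch, not the patented method:

```python
def extract_search_terms(text):
    # Naive keyword extraction: lowercase, strip punctuation, drop stopwords.
    stopwords = {"a", "an", "the", "to", "of", "and", "in", "on", "for"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return [w for w in words if w not in stopwords and len(w) > 2]

def find_candidates(terms, image_db):
    # image_db: list of records with "tags" and "sponsored" fields (assumed schema).
    hits = [img for img in image_db if set(terms) & set(img["tags"])]
    # Present sponsored images preferentially (sorted to the front).
    return sorted(hits, key=lambda img: not img["sponsored"])

db = [
    {"id": 1, "tags": {"beach", "sunset"}, "sponsored": False},
    {"id": 2, "tags": {"beach", "resort"}, "sponsored": True},
]
candidates = find_candidates(extract_search_terms("Loving this beach sunset!"), db)
```

The selected image would then be composed with the original text into the post.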
In-store card activation
A user having an account with a payment provider receives an unregistered payment card that is associated with the payment provider, and that includes a magnetic stripe encoded with a number unique to the card and a machine-readable code such as a QR/barcode embossed thereon. The user may then open an application on the user's mobile device to capture the number associated with the card by, for example, scanning the QR/barcode, capturing an image of the number, speaking the number into the device, or manually entering the number into the user's device. The user may also authenticate with the payment provider by entering login credentials. The user may then confirm a request to link the number of the card with the user's payment provider account, which activates and links the card to the user account so that the user can immediately use the card for purchases.
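The activation flow above can be sketched as follows; the 16-digit format check, the `normalize_card_number` helper, and the dictionary-backed provider store are illustrative assumptions, not part of the patent:

```python
import re

def normalize_card_number(raw):
    # Accepts scanned, typed, or transcribed input; keep digits only.
    digits = re.sub(r"\D", "", raw)
    if len(digits) != 16:  # assumed card-number length
        raise ValueError("expected a 16-digit card number")
    return digits

def activate_card(account, raw_number, credentials, provider_db):
    # Authenticate with the payment provider, then link and activate the card.
    if provider_db["credentials"].get(account) != credentials:
        raise PermissionError("authentication failed")
    number = normalize_card_number(raw_number)
    provider_db["cards"][number] = {"account": account, "active": True}
    return number

db = {"credentials": {"alice": "pw"}, "cards": {}}
card = activate_card("alice", "4111 1111 1111 1111", "pw", db)
```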
Similar case retrieval apparatus, similar case retrieval method, non-transitory computer-readable storage medium, similar case retrieval system, and case database
A similar case retrieval apparatus includes: a lesion portion acquirer that acquires a plurality of partial images including lesion portion images; an image feature extractor that extracts image features of each of the plurality of partial images; a location information acquirer that acquires location information of each of the partial images; a lateral position determiner that determines, based on the location information, whether each of the lesion portions exists in the right organ or the left organ; a unilateral distribution identifier that determines whether or not a distribution of the lesion portions is a unilateral distribution; and a similar case retriever that, when the unilateral distribution identifier identifies that the distribution of the lesion portions is the unilateral distribution, retrieves case data from a case database including both case data for the unilateral distribution in the right organ and case data for the unilateral distribution in the left organ.
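The laterality-aware retrieval logic might be sketched as below; the side labels, the all-on-one-side unilaterality test, and the `distribution` field on case records are illustrative assumptions:

```python
from collections import Counter

def is_unilateral(lesion_sides):
    # lesion_sides: "left"/"right" labels from the lateral position
    # determiner; unilateral here means all lesions share one side.
    counts = Counter(lesion_sides)
    return len(counts) == 1

def retrieve_similar(lesion_sides, case_db):
    # case_db entries carry a "distribution" field: "left", "right",
    # or "bilateral" (assumed schema).
    if is_unilateral(lesion_sides):
        # Search both left- and right-unilateral cases, as described above.
        return [c for c in case_db if c["distribution"] in ("left", "right")]
    return [c for c in case_db if c["distribution"] == "bilateral"]

db = [{"id": 1, "distribution": "left"},
      {"id": 2, "distribution": "right"},
      {"id": 3, "distribution": "bilateral"}]
matches = retrieve_similar(["left", "left", "left"], db)
```

Retrieving across both sides lets a right-lung case match clinically similar left-lung cases, which is the point of the combined database.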
METHODS AND SYSTEMS FOR GENERATING A STRING IMAGE
Methods, systems, and apparatuses are described for receiving image data and generating, based on the image data, a set of instructions. The instructions may be configured to describe a method of weaving string around a loom so as to generate a string art representation of the received image.
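A common way to realize such instructions is a greedy chooser that repeatedly picks the next pin whose connecting string best darkens the target image; the circular pin layout and the precomputed `darkness` lookup table below are illustrative assumptions, not the patented method:

```python
import math

def pin_positions(n_pins, radius):
    # Evenly spaced pins around a circular loom.
    return [(radius * math.cos(2 * math.pi * k / n_pins),
             radius * math.sin(2 * math.pi * k / n_pins))
            for k in range(n_pins)]

def greedy_string_path(darkness, n_pins, n_lines):
    # darkness[(a, b)] (a < b): how much a string from pin a to pin b
    # contributes toward the target image; greedily pick the darkest
    # unused line from the current pin.
    path, current, used = [0], 0, set()
    for _ in range(n_lines):
        best = max((p for p in range(n_pins)
                    if p != current
                    and (min(current, p), max(current, p)) not in used),
                   key=lambda p: darkness.get((min(current, p), max(current, p)), 0))
        used.add((min(current, best), max(current, best)))
        path.append(best)
        current = best
    return path

path = greedy_string_path({(0, 2): 5, (1, 2): 3, (0, 1): 1}, n_pins=4, n_lines=2)
```

The resulting pin sequence is the "set of instructions": wind the string from each pin in `path` to the next.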
DATA COLLECTION FOR OBJECT DETECTORS
A computer-implemented method of generating metadata from an image may comprise sending the image to an object detection service, which generates detections metadata from the image. The image may also be sent to a visual features extractor, which extracts visual features metadata from the image. The generated detections metadata may then be sent to an uncertainty score calculator, which computes an uncertainty score from the detections metadata. The uncertainty score may be related to a level of uncertainty within the detections metadata. The image, the visual features metadata, the detections metadata and the uncertainty score may then be stored in a database accessible over a computer network.
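One plausible uncertainty score calculator is the mean binary entropy of the per-detection confidence scores; both the entropy choice and the detection record format below are assumptions for illustration, not the patented calculator:

```python
import math

def detection_uncertainty(detections):
    # detections: list of {"label": ..., "score": p} records from an
    # object detection service; low-confidence detections contribute
    # high entropy, i.e., high uncertainty.
    if not detections:
        return 1.0  # no detections: treat as maximally uncertain
    def binary_entropy(p):
        p = min(max(p, 1e-12), 1 - 1e-12)  # clamp to avoid log(0)
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(binary_entropy(d["score"]) for d in detections) / len(detections)

score = detection_uncertainty([{"label": "cat", "score": 0.5}])
```

Storing this score alongside the image and metadata supports later active-learning style selection of images the detector is least sure about.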
Generating sentiment metrics using emoji selections
Methods, devices, and systems for measuring emotions expressed through emoji responses to videos are described. An example method includes receiving user input corresponding to an emoji at a selected time; assigning at least one meaning-bearing word to the emoji, wherein the at least one meaning-bearing word has an intended use or meaning that is represented by the emoji; associating a corresponding vector with the at least one meaning-bearing word, wherein the corresponding vector is one of a plurality of vectors in a vector space; and aggregating the plurality of vectors to generate an emoji vector that corresponds to the user's sentiment.
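The word-to-vector aggregation step might be sketched as follows, with toy two-dimensional word vectors and an averaging aggregator standing in for a real embedding space (both assumed):

```python
# Toy word vectors standing in for a learned embedding space; each emoji
# maps to one or more meaning-bearing words (mappings are illustrative).
WORD_VECTORS = {
    "joy":     [0.9, 0.1],
    "love":    [0.8, 0.3],
    "sadness": [-0.7, -0.2],
}
EMOJI_WORDS = {"😂": ["joy"], "❤️": ["love"], "😢": ["sadness"]}

def emoji_vector(emoji_events):
    # emoji_events: (timestamp, emoji) pairs of user input; aggregate
    # (here: average) the word vectors into one sentiment vector.
    vecs = [WORD_VECTORS[w] for _, e in emoji_events for w in EMOJI_WORDS[e]]
    n = len(vecs)
    return [sum(v[i] for v in vecs) / n for i in range(len(vecs[0]))]

sentiment = emoji_vector([(1.0, "😂"), (2.0, "❤️")])
```

Keeping the timestamps alongside the emoji allows the same aggregation to be windowed over time, yielding a sentiment trajectory across the video.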
DECOMPOSITIONAL LEARNING FOR COLOR ATTRIBUTE PREDICTION
The present disclosure describes a model for large scale color prediction of objects identified in images. Embodiments of the present disclosure include an object detection network, an attention network, and a color classification network. The object detection network generates object features for an object in an image and may include a convolutional neural network (CNN), region proposal network, or a ResNet. The attention network generates an attention vector for the object based on the object features, wherein the attention network takes as input a query vector based on the object features, and a plurality of key vectors and a plurality of value vectors corresponding to a plurality of colors. The color classification network generates a color attribute vector based on the attention vector, wherein the color attribute vector indicates a probability of the object including each of the plurality of colors.
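The attention network described above follows the familiar query/key/value pattern; a scaled dot-product sketch in pure Python (an assumption for illustration, not the patented architecture):

```python
import math

def color_attention(query, keys, values):
    # Scaled dot-product attention: the object-feature query attends
    # over per-color key vectors, and the output is a softmax-weighted
    # sum of the corresponding color value vectors.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Query aligned with the first (e.g., "red") key dominates the output.
attended = color_attention([1.0, 0.0],
                           [[10.0, 0.0], [0.0, 10.0]],
                           [[1.0, 0.0], [0.0, 1.0]])
```

The color classification network would then map this attention vector to per-color probabilities, one per color in the vocabulary.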
Image search using intersected predicted queries
A method for receiving a first user query from a user for searching an item, forming a first filter based on the first user query, and forming a first filtered item collection is provided. The method includes predicting a new query based on the first user query and a historical query log, forming a second filter for the new query, and applying the second filter to the first filtered item collection to form a second filtered item collection. Further, the method includes associating an item score with each of a plurality of items in the first and second filtered item collections, sorting the plurality of items in the first and second filtered item collections according to the item score associated with each of the plurality of items, and providing, to a user display, an item in the plurality of items in the first or second filtered item collections according to a sorting order.
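The two-stage filtering and sorting might look like this sketch; the tag-set item representation and the single-follow-up query log are illustrative assumptions:

```python
def predict_query(user_query, query_log):
    # query_log maps a query to its most likely follow-up query,
    # standing in for the "historical query log" in the abstract.
    return query_log.get(user_query)

def filtered_search(items, user_query, query_log):
    # First filter: items matching the user's query.
    first = [it for it in items if user_query in it["tags"]]
    # Second filter: intersect with the predicted query.
    new_q = predict_query(user_query, query_log)
    second = [it for it in first if new_q in it["tags"]] if new_q else []
    # Sort by item score; items in the narrower intersected collection rank first.
    ranked = sorted(second, key=lambda it: it["score"], reverse=True)
    ranked += sorted([it for it in first if it not in second],
                     key=lambda it: it["score"], reverse=True)
    return ranked

items = [{"id": 1, "tags": {"shoe"}, "score": 0.5},
         {"id": 2, "tags": {"shoe", "red"}, "score": 0.4}]
results = filtered_search(items, "shoe", {"shoe": "red"})
```

Note the lower-scored item ranks first because it also satisfies the predicted query, which is the effect of intersecting the two filters before sorting.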
COMPUTERIZED TECHNICAL AUTHENTICATION AND GRADING SYSTEM FOR COLLECTIBLE OBJECTS
A computerized system, apparatus, and method of grading collectibles. The system comprises a grading apparatus that receives at least one image of the collectible. The grading apparatus applies at least one processing routine to said at least one image. The grading apparatus generates a grade report of the collectible based at least on results of the at least one processing routine. The system comprises an encasing apparatus configured to encase the graded collectible within a protective slab.
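A minimal sketch of turning processing-routine results into a grade report; the routine names (centering, corners, edges, surface) and the 1-10 scale are assumed conventions from collectible grading, not specified by the abstract:

```python
def grade_collectible(routine_results, weights=None):
    # routine_results: per-routine scores in [0, 1] from the image
    # processing routines; combine as a weighted average mapped onto
    # a 1-10 grade scale.
    if weights is None:
        weights = {k: 1.0 for k in routine_results}
    total = sum(weights[k] * routine_results[k] for k in routine_results)
    return round(1 + 9 * total / sum(weights.values()), 1)

report_grade = grade_collectible(
    {"centering": 1.0, "corners": 1.0, "edges": 1.0, "surface": 1.0})
```

The grade report would accompany the collectible when the encasing apparatus seals it in its protective slab.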
Display condition analysis device, display condition analysis method, and program recording medium
Disclosed is a display condition analysis device which is capable of analyzing the display conditions of products. This display condition analysis device is provided with: a product recognition means for recognizing, from a display image taken of products on display, the products in the display image; and a display condition analysis means for analyzing, on the basis of the positions of the recognized products, the display conditions of the products on display.
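The two means described above can be sketched as a recognition step feeding a position-based analysis; the planogram-style `expected_slots` mapping is an assumed representation of the intended display condition:

```python
def analyze_display(recognized_products, planogram):
    # recognized_products: (product_id, shelf_slot) pairs produced by
    # the product recognition means; planogram: slot -> expected product.
    occupied = {slot: pid for pid, slot in recognized_products}
    # Empty slots: expected positions with no recognized product.
    empty = sorted(s for s in planogram if s not in occupied)
    # Misplaced products: recognized in a slot meant for something else.
    misplaced = sorted((pid, slot) for slot, pid in occupied.items()
                       if planogram.get(slot) not in (None, pid))
    return {"empty_slots": empty, "misplaced": misplaced}

report = analyze_display(
    [("cola", "A1"), ("chips", "A2")],
    {"A1": "cola", "A2": "cola", "B1": "chips"})
```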