Patent classifications
G06F16/5854
Sea face
Embodiments of the present invention provide a method, system, and computer program product for dynamically personalizing a respiratory mask. A method for the dynamic personalization of a respiratory mask includes electronically scanning an identification of a guest on an oceangoing vessel, querying a data store of images of guest faces for an image of a face matching the identification of the guest, retrieving the matching image of the face into memory of a computing system, loading a printer with a respiratory mask such as an N95 mask or a KN95 mask, printing onto the mask the portion of the matching image of the face corresponding to the portion of the face obscured by the mask when the mask is worn by the guest, and ejecting the printed mask from the printer.
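A minimal Python sketch of the claimed workflow: the patent does not specify an implementation, so the dict-backed data store, the function name, and the fixed fractional mask region are all illustrative assumptions.

```python
def personalize_mask(guest_id, face_images, mask_region=(0.45, 1.0)):
    """Return the rows of the guest's face image covered by the mask.

    face_images: dict mapping guest ID -> face image as a list of pixel rows
                 (a stand-in for the patent's queried data store).
    mask_region: fractional (top, bottom) of the face the mask obscures;
                 the lower ~55% of the face is an assumed default.
    """
    image = face_images.get(guest_id)          # "query the data store"
    if image is None:
        raise KeyError(f"no face image on record for guest {guest_id}")
    top = int(len(image) * mask_region[0])
    bottom = int(len(image) * mask_region[1])
    return image[top:bottom]                   # rows to be printed onto the mask
```

The returned crop would then be sent to the loaded printer; printing and ejection are hardware steps outside this sketch.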
System and method for providing similar or related products based on deep-learning
A method for providing similar or related products based on deep-learning, which is performed by a data processing unit of a shopping mall server, includes: acquiring an item image and item information for an item registered in a shopping mall; detecting bounding boxes for one or more objects by object-detecting the item image; setting a bounding box for an object associated with the item based on the item information; creating a main bounding box image by cropping a portion of the item image in the set bounding box; creating a padding image by padding-processing the main bounding box image; extracting a feature vector for the padding image; matching the feature vector with the item and storing the feature vector in a database; and creating the database for a similar or related product search service.
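The crop/pad/extract steps of the pipeline above can be sketched in plain Python. The histogram "feature vector" is a deliberately toy stand-in for the deep-learning embedding the patent describes; all function names are illustrative.

```python
def crop_box(image, box):
    """Crop (x0, y0, x1, y1) from an image stored as a 2-D list of rows."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

def pad_to_square(image, fill=0):
    """Pad a cropped image to a square so the extractor sees a fixed aspect ratio."""
    h, w = len(image), len(image[0])
    side = max(h, w)
    padded = [row + [fill] * (side - w) for row in image]
    padded += [[fill] * side for _ in range(side - h)]
    return padded

def feature_vector(image, bins=4):
    """Toy feature: normalized intensity histogram (stand-in for a CNN embedding)."""
    flat = [p for row in image for p in row]
    hist = [0] * bins
    for p in flat:
        hist[min(p * bins // 256, bins - 1)] += 1
    return [count / len(flat) for count in hist]
```

In the claimed system the resulting vector is stored in a database keyed by the item, and similar-product search reduces to nearest-neighbour lookup over those vectors.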
INDIVIDUAL IDENTIFICATION SYSTEM
The system includes: a registration means for storing an image of a product as a registration image, in association with information representing the passing sequence in which the product passed through an upstream process; a management means for managing the matching sequence in a downstream process; and a matching means for matching an image of a product carried into the downstream process against the registration images according to the matching sequence. Each time the matching means succeeds in a match, the management means updates the matching sequence to a sequence in which the registration images not yet matched with any downstream image are ordered by the passing sequence in which the products passed through the upstream process.
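The registration/matching bookkeeping described above can be sketched as follows. The class name, the tuple representation, and the pluggable similarity function are illustrative assumptions, not the patented implementation.

```python
class MatchingManager:
    """Keeps registrations in upstream passing order; each successful
    downstream match removes its registration, so the remaining
    candidates (the updated matching sequence) stay ordered."""

    def __init__(self):
        self.registrations = []   # (passing_seq, registration_image), in order

    def register(self, passing_seq, image):
        self.registrations.append((passing_seq, image))
        self.registrations.sort(key=lambda r: r[0])

    def match(self, downstream_image, similarity, threshold=0.9):
        """Try candidates in passing order; return the matched passing
        sequence, or None if no registration clears the threshold."""
        for i, (seq, reg_image) in enumerate(self.registrations):
            if similarity(reg_image, downstream_image) >= threshold:
                del self.registrations[i]   # update the matching sequence
                return seq
        return None
```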
Identifying product metadata from an item image
A metadata extraction machine accesses an image that depicts an item. The item depicted in the image may have an attribute that describes a characteristic of the item and an attribute descriptor that corresponds to the attribute of the item and specifies a value of the attribute. The metadata extraction machine performs an analysis of the image. The analysis may include identifying the attribute descriptor corresponding to the attribute based on image segmentation of the image. The metadata extraction machine transmits a communication to a device of a user based on the identifying of the attribute descriptor corresponding to the attribute of the item depicted in the image.
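A toy sketch of the attribute-descriptor step, assuming segmentation has already produced labelled regions with measured properties; the region names, property keys, and mapping rules here are entirely hypothetical.

```python
def extract_attribute_descriptors(segments):
    """Map hypothetical segmentation output (region label -> measured
    properties) to attribute/descriptor pairs for the depicted item."""
    descriptors = {}
    for region, properties in segments.items():
        if region == "body":                      # assumed region label
            descriptors["color"] = properties["color"]
        elif region == "sleeve":                  # assumed region label
            descriptors["sleeve_length"] = properties["length"]
    return descriptors
```

In the claimed system these descriptors would then be sent in a communication to the user's device.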
Information processing apparatus, information processing system, control method, and program
An information processing apparatus (2000) acquires a shelf rack image (12) in which a product shelf rack on which a product is displayed is imaged. The information processing apparatus (2000) performs image analysis on the shelf rack image (12) and generates information (actual display information) describing the display situation of the product on a product shelf rack (20). The information processing apparatus (2000) acquires reference display information representing a reference for display of the product on the product shelf rack (20). The information processing apparatus (2000) compares the actual display information generated from the shelf rack image (12) with the acquired reference display information, and generates comparison information representing the result of the comparison.
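The comparison step could be sketched as below, assuming both the actual display information (from image analysis) and the reference display information reduce to facing counts per product; that representation is an assumption for illustration.

```python
def compare_display(actual, reference):
    """Compare actual shelf contents against the reference layout.

    Both arguments map product name -> facing count. Returns
    product -> (actual_count, expected_count) for every discrepancy.
    """
    issues = {}
    for product in set(actual) | set(reference):
        a, r = actual.get(product, 0), reference.get(product, 0)
        if a != r:
            issues[product] = (a, r)
    return issues
```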
Electronic device for generating video comprising character and method thereof
An electronic device and method are disclosed. The electronic device includes a display, a processor, and memory. The processor may implement the method, which includes: analyzing a first video to identify any characters included in it; displaying, via the display, one or more icons representing the characters identified in the first video; receiving, via input circuitry, a first user input selecting a first icon representing a first character from among the one or more icons; selecting, based on the first user input, the image frames of the first video that include the first character; and generating a second video from the selected image frames. A second embodiment automatically selects, from a gallery, images including one or more characters for generation of a video.
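The frame-selection core of the method reduces to a filter over analyzed frames. In this sketch the character-recognition step is assumed to have already tagged each frame with the set of characters it contains.

```python
def generate_character_video(frames, selected_character):
    """frames: list of (frame, characters_in_frame) pairs produced by the
    analysis step. Returns the subsequence of frames (the "second video")
    that contain the character selected by the user."""
    return [frame for frame, characters in frames
            if selected_character in characters]
```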
System for object identification
A multidimensional system for generating a multimedia search engine is provided. A computing device identifies a plurality of independently separable aspects of a multimedia file. The computing device provides at least one independently separable aspect of the plurality of independently separable aspects as input into an object detection model. The computing device receives, from the object detection model, an identification of at least one object and a corresponding level of confidence that the object is present in the multimedia file. The computing device classifies the object as either confident or not confident, based on whether the level of confidence meets a threshold level of confidence. The computing device generates a multimedia search engine based, at least in part, on the object and the classification.
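The threshold classification and the index-building step can be sketched as follows; the 0.8 threshold, the tuple shapes, and the inverted-index representation of the "search engine" are illustrative assumptions.

```python
def classify_detections(detections, threshold=0.8):
    """detections: list of (object_label, confidence) from the model.
    Tag each detection as 'confident' or 'not confident'."""
    return [(label, conf,
             "confident" if conf >= threshold else "not confident")
            for label, conf in detections]

def build_index(media_id, classified, index=None):
    """Add only confident objects to an inverted index
    (object label -> media files), so search favours reliable labels."""
    index = index if index is not None else {}
    for label, conf, status in classified:
        if status == "confident":
            index.setdefault(label, []).append(media_id)
    return index
```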
SCENE GRAPH EMBEDDINGS USING RELATIVE SIMILARITY SUPERVISION
Systems and methods for image processing are described. One or more embodiments of the present disclosure identify an image including a plurality of objects, generate a scene graph of the image including a node representing an object and an edge representing a relationship between two of the objects, generate a node vector for the node, wherein the node vector represents semantic information of the object, generate an edge vector for the edge, wherein the edge vector represents semantic information of the relationship, generate a scene graph embedding based on the node vector and the edge vector using a graph convolutional network (GCN), and assign metadata to the image based on the scene graph embedding.
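As a drastically simplified stand-in for the GCN described above, node and edge vectors can be mean-pooled into a single scene-graph embedding; real message passing over the graph structure is omitted, and this pooling is an illustrative assumption.

```python
def scene_graph_embedding(node_vectors, edge_vectors):
    """Pool semantic node vectors (objects) and edge vectors (relationships)
    into one fixed-size graph embedding by element-wise averaging."""
    vectors = node_vectors + edge_vectors
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
```

The resulting embedding would then drive metadata assignment, e.g. by nearest-neighbour comparison against embeddings of labelled images.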
DATA COLLECTION FOR OBJECT DETECTORS
A computer-implemented method of generating metadata from an image may comprise sending the image to an object detection service, which generates detections metadata from the image. The image may also be sent to a visual features extractor, which extracts visual features metadata from the image. The generated detections metadata may then be sent to an uncertainty score calculator, which computes an uncertainty score from the detections metadata. The uncertainty score may be related to a level of uncertainty within the detections metadata. The image, the visual features metadata, the detections metadata and the uncertainty score may then be stored in a database accessible over a computer network.
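One plausible form for the uncertainty score calculator is the mean per-detection uncertainty; the patent does not define the score's formula, so this is an assumed example.

```python
def uncertainty_score(detections):
    """detections: list of (label, confidence) pairs from the detections
    metadata. Returns the mean of (1 - confidence); high scores flag
    images worth routing to human annotation or retraining sets."""
    if not detections:
        return 1.0   # no detections at all: treat as maximally uncertain
    return sum(1.0 - conf for _, conf in detections) / len(detections)
```

The image, its visual-features metadata, its detections metadata, and this score would then be stored together as one database record.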
TECHNIQUES FOR IMAGE CONTENT EXTRACTION
Embodiments are directed to techniques for image content extraction. Some embodiments include extracting contextually structured data from document images, such as by automatically identifying document layout, document data, document metadata, and/or correlations therebetween in a document image, for instance. Some embodiments utilize breakpoints to enable the system to match different documents with internal variations to a common template. Several embodiments include extracting contextually structured data from table images, such as gridded and non-gridded tables. Many embodiments are directed to generating and utilizing a document template database for automatically extracting document image contents into a contextually structured format. Several embodiments are directed to automatically identifying and associating document metadata with corresponding document data in a document image to generate a machine-facilitated annotation of the document image. In some embodiments, the machine-facilitated annotation may be used to generate a template for the template database.
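A crude sketch of template-driven extraction: the template is an ordered list of (field, regex) pairs, and unmatched lines are simply skipped, loosely playing the role of the "breakpoints" that absorb internal layout variation. The field names and patterns are hypothetical.

```python
import re

def extract_with_template(lines, template):
    """lines: OCR'd text lines of a document image.
    template: ordered list of (field_name, regex-with-one-group) pairs.
    Returns field -> captured value for the fields found, in order."""
    extracted = {}
    remaining = iter(template)
    field, pattern = next(remaining)
    for line in lines:
        match = re.search(pattern, line)
        if match:
            extracted[field] = match.group(1)
            try:
                field, pattern = next(remaining)   # advance to next field
            except StopIteration:
                break                              # template exhausted
    return extracted
```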