G06V10/7747

DIGITAL IMAGING SYSTEMS AND METHODS OF ANALYZING PIXEL DATA OF AN IMAGE OF A SKIN AREA OF A USER FOR DETERMINING DARK EYE CIRCLES

Digital imaging systems and methods are described for analyzing pixel data of an image of a skin area of a user for determining dark eye circles. A plurality of training images of a plurality of individuals are aggregated, each of the training images comprising pixel data of a respective skin area of an individual. A dark eye circles model, trained with the pixel data, is operable to output, across a range of a dark eye circles scale, dark eye circles values associated with a degree of dark eye circles. An image of a user comprising pixel data of at least a portion of a user skin area is received and analyzed, by the dark eye circles model, to determine a user-specific dark eye circles value of the user skin area. A user-specific electronic recommendation addressing at least one feature identifiable within the pixel data is generated and rendered on a display screen of a user computing device.
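The abstract describes mapping a model's score on a dark eye circles scale to a user-specific recommendation. A minimal sketch of that last step, with an entirely hypothetical scale and recommendation wording (the patent does not specify either):

```python
def recommend(dark_circle_value, scale_max=10):
    """Map a model's dark-eye-circles score, assumed here to lie on a
    0..scale_max scale, to a hypothetical user-facing recommendation."""
    severity = dark_circle_value / scale_max
    if severity < 0.3:
        return "low: no treatment needed"
    if severity < 0.7:
        return "moderate: suggest an eye cream"
    return "high: suggest a targeted serum routine"

print(recommend(8))  # high: suggest a targeted serum routine
```

The thresholds (0.3, 0.7) and messages are placeholders; a production system would derive them from the trained scale.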

LANDMARK DETECTION USING MACHINE LEARNING TECHNIQUES

Described herein are systems, methods, and instrumentalities associated with landmark detection. The detection may be accomplished by determining a graph representation of a plurality of hypothetical landmarks detected in one or more medical images. The graph representation may include nodes that represent the hypothetical landmarks and edges that represent the relationships between paired hypothetical landmarks. The graph representation may be processed using a graph neural network, such as a message passing graph neural network, by which the landmark detection problem may be converted into and solved as a graph node labeling problem.
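A toy sketch of the node-labeling idea: candidate landmarks become nodes with feature vectors, message passing mixes each node's features with its neighbors', and a final projection labels each node as a true landmark or a spurious detection. The weights, feature sizes, and thresholding here are illustrative, not from the patent:

```python
import numpy as np

def message_passing_round(node_feats, adj, w_self, w_msg):
    """One round of message passing: each node aggregates the summed
    features of its neighbors with its own transformed features."""
    messages = adj @ node_feats               # sum of neighbor features
    return np.tanh(node_feats @ w_self + messages @ w_msg)

def label_nodes(node_feats, adj, w_self, w_msg, w_out, rounds=2):
    """Label each hypothetical landmark node as kept (1) or rejected (0)."""
    h = node_feats
    for _ in range(rounds):
        h = message_passing_round(h, adj, w_self, w_msg)
    logits = h @ w_out
    return (logits.ravel() > 0).astype(int)   # binary node labels

# toy graph: 3 hypothetical landmarks, nodes 0 and 1 connected
rng = np.random.default_rng(0)
feats = rng.normal(size=(3, 4))
adj = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)
labels = label_nodes(feats, adj,
                     rng.normal(size=(4, 4)), rng.normal(size=(4, 4)),
                     rng.normal(size=(4, 1)))
print(labels.shape)  # one label per node
```

In a trained system the weight matrices would be learned so that true landmarks are separated from false positives by their graph context.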

Systems And Methods For Improved Training Data Acquisition

This disclosure describes systems and methods for improved training data acquisition. An example method may include sending, by a processor, an indication for a user to capture data relating to a first area of interest using a first mobile device. The example method may also include determining, by the processor, that first data captured by the first mobile device would fail to satisfy a quality requirement. The example method may also include causing, by the processor, presentation of an indication through the first mobile device to the user to adjust the first mobile device. The example method may also include determining, by the processor, that second data captured by the first mobile device after being adjusted would satisfy the quality requirement. The example method may also include receiving, by the processor, the second data from the first mobile device. The example method may also include receiving, by the processor, third data from a second mobile device, wherein the second data and third data are used to train a neural network associated with a vehicle.
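The capture-check-adjust loop described above can be sketched in a few lines. The device API, quality metric, and prompt text below are all hypothetical stand-ins:

```python
def acquire_training_data(device, quality_check, max_attempts=5):
    """Ask a device to capture data, re-prompting the user to adjust
    the device until the captured data passes the quality check."""
    for _ in range(max_attempts):
        data = device.capture()
        if quality_check(data):
            return data
        device.notify("Please adjust the device and try again")
    raise RuntimeError("quality requirement not met")

class FakeDevice:
    """Hypothetical stand-in for a mobile device capture API."""
    def __init__(self, readings):
        self.readings = iter(readings)
        self.prompts = []
    def capture(self):
        return next(self.readings)
    def notify(self, msg):
        self.prompts.append(msg)

dev = FakeDevice([0.3, 0.9])            # first capture fails, second passes
data = acquire_training_data(dev, quality_check=lambda d: d > 0.5)
print(data, len(dev.prompts))           # 0.9 1
```

Data accepted this way (plus data from the second device) would then feed the vehicle-related neural network.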

SYSTEMS AND METHODS FOR AUTOMATED PRODUCT CLASSIFICATION

A data partitioning system receives an input dataset for e-commerce products, each sample containing attributes and associated values for each product, including at least an image; represents each sample as a node to provide a graph of nodes for the dataset; measures a relative similarity distance between each pair of nodes based on comparing at least image values for the attributes; determines, for each pair of nodes, whether they are related, the nodes being related if the similarity distance between them is below a defined threshold and, if related, generates an edge between them on the graph; and groups the connected nodes into a first or a second group such that the grouped nodes have no edges connecting them to nodes in the other group and have the shortest relative similarity distance with each other. The groups are used as training and testing datasets for a supervised machine learning classifier.
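A simplified sketch of this partitioning: threshold pairwise distances to form edges, find connected components with union-find, and keep each component whole on one side of the train/test split so near-duplicates never straddle the boundary. Alternating components between groups is a simplification of the patent's distance-based grouping criterion:

```python
import numpy as np

def partition_by_similarity(features, threshold):
    """Connect samples whose pairwise distance is below the threshold,
    then split the connected components into two disjoint groups so
    that no edge crosses the group boundary."""
    n = len(features)
    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(features[i] - features[j]) < threshold:
                parent[find(i)] = find(j)
    components = {}
    for i in range(n):
        components.setdefault(find(i), []).append(i)
    # assign whole components alternately to the two groups
    train, test = [], []
    for k, members in enumerate(components.values()):
        (train if k % 2 == 0 else test).extend(members)
    return train, test

feats = np.array([[0.0], [0.1], [5.0], [5.1]])
train, test = partition_by_similarity(feats, threshold=1.0)
print(sorted(train), sorted(test))  # [0, 1] [2, 3]
```

In the patented system the per-pair distance would compare image embeddings and other attribute values rather than raw scalars.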

EXTRACTED IMAGE SEGMENTS COLLAGE

Described are systems and methods to extract image segments from an image and include those extracted image segments in a collage. The origin information, such as the source image, source image location, etc., from which the extracted image segment is generated is maintained as metadata so that interaction with the extracted image segment on the collage can be used to determine and/or return to the origin of the extracted image segment. Collages may be updated, shared, adjusted, etc., by the creator of the collage or other users.
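The key mechanism above is carrying origin metadata with each extracted segment. A minimal data-structure sketch, with hypothetical field names and identifiers:

```python
from dataclasses import dataclass

@dataclass
class CollageSegment:
    """An extracted image segment plus the origin metadata the collage
    keeps so an interaction can lead back to the source image."""
    pixels: bytes
    source_image: str     # identifier of the image it was extracted from
    source_bbox: tuple    # (x, y, width, height) within the source image

def origin_of(segment):
    """Resolve an interaction with a collage segment to its origin."""
    return segment.source_image, segment.source_bbox

seg = CollageSegment(b"...", "beach_photo_001", (40, 60, 128, 96))
print(origin_of(seg))  # ('beach_photo_001', (40, 60, 128, 96))
```

A real implementation would likely store a richer record (source URL, extraction parameters, sharing permissions) per segment.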

METHOD AND SYSTEM FOR TRAINING NEURAL NETWORK FOR ENTITY DETECTION

Disclosed is a system and method for training a neural network to be implemented for detecting at least one entity in a document to derive relevant inferences therefrom. The method comprises obtaining at least one document; processing the at least one document via a detection module to detect a widget entity, wherein the detected widget entity is classified as active or inactive based on a detected state of the widget entity; modifying the classified widget entity into a corresponding machine-readable widget entity based on the detected state; processing the at least one document via an extraction module to detect a text entity in the near vicinity of the classified widget entity; generating a training pair comprising the machine-readable widget entity and the corresponding text entity; and training the neural network using the generated training pair.
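The pairing step can be sketched as a nearest-text search around each detected widget. The coordinate scheme, distance metric, and machine-readable encoding below are illustrative assumptions, not the patented format:

```python
def build_training_pairs(widgets, texts, max_distance=50):
    """Pair each detected widget (here a checkbox with a detected
    state) with the nearest text entity within max_distance pixels."""
    pairs = []
    for wx, wy, state in widgets:
        nearby = [(abs(tx - wx) + abs(ty - wy), label)
                  for tx, ty, label in texts]
        dist, label = min(nearby)          # closest text entity
        if dist <= max_distance:
            # machine-readable widget entity encodes the detected state
            pairs.append(({"type": "checkbox", "checked": state}, label))
    return pairs

widgets = [(100, 200, True)]
texts = [(130, 205, "I agree to the terms"), (400, 400, "Submit")]
print(build_training_pairs(widgets, texts))
```

Each resulting pair is one training example for the network: the widget's machine-readable form as input context, the nearby text as its associated entity.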

Training apparatus, recognition apparatus, training method, recognition method, and program

Provided are a training apparatus, a recognition apparatus, a training method, a recognition method, and a program that can accurately recognize what an object represented in an image associated with depth information is. An object data acquiring section acquires three-dimensional data representing an object. A training data generating section generates a plurality of training data items, each representing a different part of the object, on the basis of the three-dimensional data. A training section trains a machine learning model using the generated training data as the training data for the object.
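One plausible reading of "training data each representing a different part of the object" is slicing a 3-D representation into parts that each become a labeled sample. The point-cloud slicing and label below are illustrative assumptions:

```python
import numpy as np

def parts_as_training_data(points, label, n_parts=4):
    """Slice a 3-D point cloud into parts along the depth axis so each
    part becomes its own training sample carrying the object's label."""
    order = np.argsort(points[:, 2])            # sort points by depth (z)
    slices = np.array_split(points[order], n_parts)
    return [(part, label) for part in slices]   # one sample per part

rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))               # toy object point cloud
samples = parts_as_training_data(cloud, "mug")
print(len(samples), samples[0][1])  # 4 mug
```

Training on partial views this way lets the recognizer identify the object even when only part of it is visible in a depth image.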

Machine learning with data synthesization
11682036 · 2023-06-20

In some examples, a computing device may receive data from a plurality of groups of data sources. The computing device may create a training data set from a first portion of the received data and may create a plurality of validation data sets from a second portion of the received data. For example, each validation data set may correspond to a respective one of the groups of data sources. The computing device may train, using the training data set, a plurality of machine learning models configured for synthesizing data. For instance, respective ones of the machine learning models may correspond to respective ones of the groups of data sources. Further, the computing device may validate each machine learning model using the validation data set corresponding to that model's group of data sources.
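The split described above — one shared training set, one validation set per source group — can be sketched as follows. The 50/50 fraction and group names are hypothetical:

```python
def split_by_group(data_by_group, train_fraction=0.5):
    """The first portion of each group feeds a shared training set; the
    second portion becomes that group's own validation set, so each
    per-group model can be validated against its matching sources."""
    train, validation = [], {}
    for group, samples in data_by_group.items():
        cut = int(len(samples) * train_fraction)
        train.extend(samples[:cut])        # pooled training data
        validation[group] = samples[cut:]  # group-specific validation data
    return train, validation

data = {"sensors_a": [1, 2, 3, 4], "sensors_b": [5, 6]}
train, val = split_by_group(data)
print(train, val)  # [1, 2, 5] {'sensors_a': [3, 4], 'sensors_b': [6]}
```

Each synthesizing model trained for a group would then be scored only on `val[group]`, keeping validation faithful to that group's data distribution.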

Face detector training method, face detection method, and apparatuses

A face detector training method, a face detection method, and apparatuses are provided. In the present invention, during a training phase, a flexible-block-based local binary pattern feature and a corresponding second classifier are constructed, appropriate second classifiers are selected to generate multiple first classifiers, and multiple layers of first classifiers obtained by using a cascading method form a final face detector. During a detection phase, face detection is performed on a to-be-detected image by using a first classifier or a face detector learned during the training process, so that faces are differentiated from non-faces, and a combined face detection result is output.
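The cascading idea is that cheap early stages reject most non-face windows so expensive later stages run rarely. A minimal sketch with made-up stage functions and thresholds (real stages would score LBP feature responses learned during training):

```python
def cascade_detect(window, stages):
    """Evaluate a window through cascaded stage classifiers; a window
    must pass every stage's threshold to be reported as a face."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False    # rejected early by this stage
    return True

# hypothetical stages: each sums a few feature responses
stages = [
    (lambda w: sum(w[:2]), 1.0),   # cheap first stage
    (lambda w: sum(w), 3.0),       # stricter later stage
]
print(cascade_detect([0.8, 0.9, 1.5], stages))  # True
print(cascade_detect([0.1, 0.2, 9.0], stages))  # False (fails stage 1)
```

Because most image windows contain no face, this early-rejection structure is what makes exhaustive sliding-window detection tractable.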

IMAGE PROCESSING METHODS AND SYSTEMS FOR LOW-LIGHT IMAGE ENHANCEMENT USING MACHINE LEARNING MODELS

The present disclosure relates to an image processing method for enhancing illumination in an input image representing a scene, said image processing method comprising: down-sampling the input image; processing the down-sampled input image with a machine learning model, wherein said machine learning model is previously trained to generate a multiplicative correction map, said multiplicative correction map comprising multiplicative correcting factors for enhancing the illumination of the down-sampled input image; up-sampling the multiplicative correction map; and generating an output image by multiplying the input image by the up-sampled multiplicative correction map.
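The pipeline — downsample, predict a low-resolution gain map, upsample, multiply — can be sketched with NumPy. The average-pool downsampling, nearest-neighbour upsampling, and the toy stand-in "model" are illustrative choices, not the patented network:

```python
import numpy as np

def downsample(img, k):
    """Average-pool by factor k (assumes dimensions divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def upsample(img, k):
    """Nearest-neighbour upsampling by factor k."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def enhance(img, model, k=2):
    """Predict a multiplicative correction map at low resolution,
    upsample it, and apply it to the full-resolution image."""
    correction = model(downsample(img, k))   # low-res gain map
    return img * upsample(correction, k)

# stand-in "model": brighten dark regions more than bright ones
model = lambda low: 0.5 / np.maximum(low, 0.05)
img = np.full((4, 4), 0.25)                  # uniformly dark toy image
out = enhance(img, model)
print(out[0, 0])  # 0.5
```

Running the model at low resolution keeps inference cheap, while the multiplicative (rather than additive) correction preserves the relative structure of the full-resolution input.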