Patent classifications
G06V10/273
Apparatus, method, and storage medium
An apparatus includes an extract unit configured to extract features of a first image based on an electromagnetic wave in a first frequency band, an acquire unit configured to acquire motion information about the features, a classify unit configured to classify the features into a first group and a second group based on the motion information, and a remove unit configured to remove, from the first image, a signal corresponding to the feature belonging to the first group.
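The abstract's pipeline (extract features, classify them by motion, remove the first group's signal) can be sketched as follows. This is an illustrative assumption, not the patented implementation: the motion threshold, the feature layout, and the pixel-zeroing removal rule are all placeholders.

```python
# Hypothetical sketch: split features into a first (static) group and a
# second (moving) group by motion magnitude, then zero the image signal at
# pixels of first-group features. Threshold and layout are assumptions.

MOTION_THRESHOLD = 0.5  # assumed cutoff between "static" and "moving"

def classify_features(features):
    """Partition features into (first_group, second_group) by motion magnitude."""
    first_group = [f for f in features if f["motion"] < MOTION_THRESHOLD]
    second_group = [f for f in features if f["motion"] >= MOTION_THRESHOLD]
    return first_group, second_group

def remove_signal(image, group):
    """Zero the pixel at each feature location belonging to the given group."""
    out = [row[:] for row in image]  # copy so the input image is untouched
    for f in group:
        y, x = f["pos"]
        out[y][x] = 0
    return out
```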
METHOD FOR TRAINING FACE RECOGNITION MODEL
A method for training a face recognition model includes: acquiring a plurality of first training images being uncovered face images, and acquiring a plurality of covering object images; generating a plurality of second training images by separately fusing the plurality of covering object images with the uncovered face images; and training the face recognition model by inputting the plurality of first training images and the plurality of second training images into the face recognition model.
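The fusion step of this abstract can be sketched as alpha-compositing a covering-object patch onto an uncovered face image; every uncovered face paired with every covering object yields the second training set. The alpha-blend rule and data layout are assumptions for illustration, not the claimed method.

```python
# Illustrative sketch: paste a covering-object patch (with a per-pixel alpha
# channel) onto an uncovered face image to synthesize a covered training
# image. Names and layout are assumptions.

def fuse(face, cover, alpha, top, left):
    """Alpha-blend `cover` onto `face` at (top, left); images are 2-D lists."""
    out = [row[:] for row in face]
    for i, row in enumerate(cover):
        for j, value in enumerate(row):
            a = alpha[i][j]
            out[top + i][left + j] = a * value + (1 - a) * out[top + i][left + j]
    return out

def make_second_training_set(faces, covers):
    """Fuse every covering object with every uncovered face image."""
    return [fuse(face, c["img"], c["alpha"], c["top"], c["left"])
            for face in faces for c in covers]
```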
SEGMENTING AND REMOVING OBJECTS FROM MEDIA ITEMS
A media application generates training data that includes a first set of media items and a second set of media items, where the first set of media items correspond to the second set of media items and include distracting objects that are manually segmented. The media application trains a segmentation machine-learning model based on the training data to receive a media item with one or more distracting objects and to output a segmentation mask for one or more segmented objects that correspond to the one or more distracting objects.
METHOD AND SYSTEM FOR IDENTIFICATION AND CLASSIFICATION OF DIFFERENT GRAIN AND ADULTERANT TYPES
State-of-the-art techniques mostly rely on computationally intensive, time-consuming neural networks. Embodiments provide a method and system for identification and classification of different grain and adulterant types for grain grading analysis. The method analyzes an input image of a grain sample to determine morphological features of its elements, using a calibration factor dynamically determined from a reference object in the image. Variation in the perimeter of elements is used to classify elements into target grain size, low-size adulterants, and higher-size adulterants. The aspect ratio of the target grain determines the grain variety, and the adulterants determine the adulteration percentage. Elements are classified into grain-colored and non-grain-colored adulterants. Grain-colored adulterants are further classified as Grain Like Impurities (GLI) and non-GLI, using predefined ranges of the standard deviation of the perimeter metric. The weight of grain-colored and non-grain-colored adulterants is obtained by mapping predefined weights to the aspect ratio.
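The perimeter-based bucketing described above can be sketched as a simple threshold rule: convert pixel perimeters to physical units with the calibration factor, bucket each element against the target grain range, and compute the adulteration percentage from the bucket counts. The target range, calibration factor, and scoring here are illustrative assumptions.

```python
# Sketch of the size-based classification: elements whose calibrated perimeter
# falls inside the target grain range are grains; smaller/larger perimeters
# are low-size / higher-size adulterants. Ranges and factor are assumptions.

def classify_by_perimeter(perimeters_px, calibration_factor, target_range):
    """Convert pixel perimeters with the calibration factor, then bucket."""
    low, high = target_range
    groups = {"target_grain": [], "low_size": [], "higher_size": []}
    for p_px in perimeters_px:
        p_mm = p_px * calibration_factor
        if p_mm < low:
            groups["low_size"].append(p_mm)
        elif p_mm > high:
            groups["higher_size"].append(p_mm)
        else:
            groups["target_grain"].append(p_mm)
    return groups

def adulteration_percentage(groups):
    """Share of elements that are not target grains, in percent."""
    total = sum(len(v) for v in groups.values())
    adulterants = total - len(groups["target_grain"])
    return 100.0 * adulterants / total if total else 0.0
```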
System and method for occlusion correction
In variants, the method for occlusion correction can include: determining a measurement depicting an occluded object of interest (OOI), optionally infilling the occluded portion of the object of interest within the measurement, and determining an attribute of the object of interest based on the infilled measurement.
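A minimal sketch of the variant's two steps, infill then attribute estimation, is below. The infill rule (fill occluded pixels with the mean of the visible pixels) and the attribute (mean intensity) are placeholder assumptions; the patent does not specify either.

```python
# Minimal sketch: fill occluded pixels (marked None) with the mean of the
# visible pixels, then compute an attribute from the infilled measurement.
# Both the infill rule and the attribute are placeholder assumptions.

def infill(measurement):
    """Replace None (occluded) pixels with the mean of visible pixels."""
    visible = [v for row in measurement for v in row if v is not None]
    fill = sum(visible) / len(visible)
    return [[fill if v is None else v for v in row] for row in measurement]

def attribute(measurement):
    """Attribute of interest; mean intensity stands in for a real estimator."""
    flat = [v for row in measurement for v in row]
    return sum(flat) / len(flat)
```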
DATA SELECTION FOR IMAGE GENERATION
A method includes obtaining waveform return data including waveform return records for multiple sampling events associated with an observed area and determining a relevance score for the waveform return records of the waveform return data. The relevance score for a particular waveform return record is based, at least partially, on estimated information gain associated with the particular waveform return record. The method also includes, based on the relevance scores, selecting a first subset of waveform return records, where one or more waveform return records are excluded from the first subset of waveform return records. The method also includes generating image data based on the first subset of waveform return records.
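The selection step can be sketched as scoring each waveform return record and keeping only the top-scoring subset for image formation. The scoring function below (sample variance as a proxy for estimated information gain) and the fixed subset size are assumptions, not the claimed scoring.

```python
# Sketch: score each waveform return record by a proxy for estimated
# information gain, keep the highest-scoring subset, and pass only that
# subset to image generation. The variance proxy is an assumption.

def relevance_score(record):
    """Proxy for estimated information gain: sample variance of the return."""
    n = len(record)
    mean = sum(record) / n
    return sum((x - mean) ** 2 for x in record) / n

def select_records(records, keep):
    """Keep the `keep` highest-scoring records; exclude the rest."""
    scored = sorted(records, key=relevance_score, reverse=True)
    return scored[:keep]
```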
Recovering occluded image data using machine learning
Examples disclosed herein are related to using a machine learning model to generate image data. One example provides a system, comprising one or more processors, and storage comprising instructions executable by the one or more processors to obtain image data comprising an image with unoccluded features, apply a mask to the unoccluded features in the image to form partial observation training data comprising a masked region that obscures at least a portion of the unoccluded features, and train a machine learning model comprising a generator and a discriminator at least in part by generating image data for the masked region and comparing the image data generated for the masked region to the image with unoccluded features.
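The training-data preparation described above can be sketched as masking a region of an unoccluded image while keeping the original as ground truth; the generator's output for the masked region is then compared against that ground truth. The rectangular mask, sentinel value, and L1 comparison below are illustrative stand-ins for the GAN's actual loss.

```python
# Sketch: apply a rectangular mask to an image with unoccluded features to
# form partial observation training data, keeping the original as ground
# truth. Mask geometry, sentinel, and L1 comparison are assumptions.

MASKED = -1  # sentinel marking obscured pixels

def apply_mask(image, top, left, h, w):
    """Obscure an h-by-w region of `image` starting at (top, left)."""
    out = [row[:] for row in image]
    for i in range(top, top + h):
        for j in range(left, left + w):
            out[i][j] = MASKED
    return out

def reconstruction_error(generated, ground_truth):
    """Pixelwise L1 error, a stand-in for the discriminator's comparison."""
    return sum(abs(g - t) for grow, trow in zip(generated, ground_truth)
               for g, t in zip(grow, trow))
```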
Automatically removing moving objects from video streams
The present disclosure describes systems, non-transitory computer-readable media, and methods for accurately and efficiently removing objects from digital images taken from a camera viewfinder stream. For example, the disclosed systems access digital images from a camera viewfinder stream in connection with an undesired moving object depicted in the digital images. The disclosed systems generate a temporal window of the digital images concatenated with binary masks indicating the undesired moving object in each digital image. The disclosed systems further utilize a 3D-to-2D generator, as part of a 3D-to-2D generative adversarial neural network, in connection with the temporal window to generate a target digital image with the region associated with the undesired moving object in-painted. In at least one embodiment, the disclosed systems provide the target digital image to a camera viewfinder display to show a user how a future digital photograph will look without the undesired moving object.
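The temporal-window construction can be sketched as pairing each viewfinder frame with its binary mask over a sliding window of frames; that window is what the 3D-to-2D generator would consume. The window length and (frame, mask) pairing layout are assumptions for illustration.

```python
# Sketch: pair each viewfinder frame with a binary mask of the undesired
# moving object over a sliding window centered on the current frame.
# Window length and data layout are assumptions.

def build_temporal_window(frames, masks, center, length):
    """Return `length` (frame, mask) pairs centered on frame `center`,
    clamped to the valid frame range."""
    half = length // 2
    start = max(0, center - half)
    end = min(len(frames), start + length)
    return [(frames[i], masks[i]) for i in range(start, end)]
```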
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
There is provided an information processing method, an information processing apparatus, and a program by which the accuracy of facial authentication can be improved even in a case where there is an occluded region. The information processing apparatus includes a determination unit and a generation unit. The determination unit determines, on the basis of an occluded region of the face in an input facial image for authentication, a trimming facial range from the input facial image and a resolution. The generation unit generates a facial image for authentication at the trimming facial range and resolution determined by the determination unit.
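The determination step can be sketched as a rule that picks a trimming range excluding the occluded region and a resolution compensating for the smaller crop. The half-face rule and the resolution labels below are illustrative assumptions; the patent does not disclose the concrete rule.

```python
# Sketch: choose a trimming range that avoids the occluded region and a
# resolution that compensates for the reduced range. The half-face rule
# and resolution labels are assumptions.

def determine_trim_and_resolution(face_h, face_w, occluded_box):
    """occluded_box = (top, left, h, w) of the occlusion inside the face."""
    top, left, h, w = occluded_box
    if top >= face_h // 2:
        # occlusion in the lower half (e.g. a face mask): keep the upper
        # half of the face and request a higher resolution for that range
        trim = (0, 0, face_h // 2, face_w)
        resolution = "high"
    else:
        trim = (0, 0, face_h, face_w)  # keep the full face
        resolution = "normal"
    return trim, resolution
```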
GESTURE DETECTION APPARATUS AND GESTURE DETECTION METHOD
Provided is a gesture detection apparatus that accurately detects the hand of an occupant making a gesture. The gesture detection apparatus includes a face frame information acquisition unit, a hand candidate detection unit, and a determination unit. The face frame information acquisition unit acquires face frame information; the face frame is set so as to surround the face of the occupant detected on the basis of the video. The hand candidate detection unit detects a hand candidate on the basis of the video. The determination unit rejects the information of the hand candidate, so that the hand candidate is not detected as the hand of the gesturing occupant, on the basis of a predetermined condition regarding the overlap between the face frame in the video and a hand candidate frame set to surround the hand candidate.
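The overlap condition can be sketched as computing the intersection of the face frame and the hand-candidate frame and rejecting the candidate when the overlap ratio exceeds a threshold. The box convention and the threshold value are assumptions; the patent only states that a predetermined overlap condition is used.

```python
# Sketch of the rejection rule: reject the hand candidate when its frame
# overlaps the face frame too much. Boxes are (top, left, bottom, right);
# the threshold is an assumption.

OVERLAP_THRESHOLD = 0.3

def intersection_area(a, b):
    """Area of the intersection of two boxes, 0 if they do not overlap."""
    top, left = max(a[0], b[0]), max(a[1], b[1])
    bottom, right = min(a[2], b[2]), min(a[3], b[3])
    return max(0, bottom - top) * max(0, right - left)

def reject_candidate(face_frame, hand_frame):
    """True if the candidate overlaps the face frame too much to be a hand."""
    hand_area = (hand_frame[2] - hand_frame[0]) * (hand_frame[3] - hand_frame[1])
    return intersection_area(face_frame, hand_frame) / hand_area > OVERLAP_THRESHOLD
```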