Patent classifications
G06V10/454
APPARATUS AND METHOD FOR CLASSIFYING CLOTHING ATTRIBUTES BASED ON DEEP LEARNING
Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program. The program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image; a mask generation unit for outputting a mask tensor in which multiple mask layers, respectively corresponding to principal part regions obtained by segmenting the body of the person included in the input image, are stacked; a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor; and a final classification unit for determining and outputting a final classification result for the input image based on the first and second classification results.
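For orientation, a minimal PyTorch sketch of the two-branch arrangement this abstract describes, assuming a tiny convolutional backbone, a (parts x H x W) mask tensor, and simple averaging as the final classification step; none of these choices come from the patent itself.

import torch
import torch.nn as nn

class TwoBranchAttributeClassifier(nn.Module):
    # Sketch: fuse a whole-image classifier with a body-part-masked classifier.
    def __init__(self, num_attrs=10, num_parts=5):
        super().__init__()
        # First classification unit: attributes from the whole image.
        self.first = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_attrs))
        # Second classification unit: attributes from the mask-weighted image.
        self.second = nn.Sequential(
            nn.Conv2d(3 * num_parts, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, num_attrs))

    def forward(self, image, mask_tensor):
        # image: (B, 3, H, W); mask_tensor: (B, num_parts, H, W), one stacked
        # layer per segmented body-part region.
        first_logits = self.first(image)
        masked = image.unsqueeze(1) * mask_tensor.unsqueeze(2)  # (B, P, 3, H, W)
        second_logits = self.second(masked.flatten(1, 2))       # (B, P*3, H, W)
        # Final classification unit: here simply the mean of both results
        # (the fusion rule is an assumption).
        return (first_logits + second_logits) / 2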
METHOD AND APPARATUS FOR VIDEO RECOGNITION
Broadly speaking, the present techniques generally relate to a method and apparatus for video recognition, and in particular relate to a computer-implemented method for performing video recognition using a transformer-based machine learning, ML, model. Put another way, the present techniques provide new methods of image processing in order to automatically extract feature information from a video.
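As a rough illustration of transformer-based video recognition, the sketch below embeds each frame into patch tokens and runs joint space-time self-attention over all of them; the patch size, token layout, and mean pooling are assumptions, since the abstract gives no architecture details.

import torch
import torch.nn as nn

class TinyVideoTransformer(nn.Module):
    # Sketch: extract feature information from a clip with a transformer encoder.
    def __init__(self, dim=64, num_classes=10, patch=16):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)  # per-frame patches
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, clip):
        # clip: (B, T, 3, H, W); H and W divisible by the patch size.
        b, t, c, h, w = clip.shape
        x = self.embed(clip.flatten(0, 1))    # (B*T, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)      # (B*T, N, dim) patch tokens
        x = x.reshape(b, t * x.shape[1], -1)  # (B, T*N, dim) space-time tokens
        x = self.encoder(x)                   # joint attention (an assumption)
        return self.head(x.mean(dim=1))       # pooled clip feature -> logits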
EXPLAINING A MODEL OUTPUT OF A TRAINED MODEL
The invention relates to a computer-implemented method (500) of generating explainability information for explaining a model output of a trained model. The method uses one or more aspect recognition models configured to indicate a presence of respective characteristics in the input instance. A saliency method is applied to obtain a masked source representation of the input instance at a source layer of the trained model (e.g., the input layer or an internal layer), comprising those elements at the source layer that are relevant to the model output. The masked source representation is mapped to a target layer (e.g., input or internal layer) of an aspect recognition model, and the aspect recognition model is then applied to obtain a model output indicating a presence of the given characteristic relevant to the model output of the trained model. As explainability information, the characteristics indicated by the aspect recognition models are output.
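A minimal sketch of the pipeline the abstract outlines, assuming the input layer serves as both source and target layer, plain input gradients as the saliency method, and a top-20% threshold for "relevant" elements; each of these is an assumption, not the claimed method.

import torch

def explain_with_aspects(model, aspect_models, x, target_class):
    # Sketch: input-gradient saliency, input layer as both source and target
    # layer, top-20% relevance threshold; all three choices are assumptions.
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    sal = x.grad.abs()
    thresh = sal.flatten().quantile(0.8)
    # Masked source representation: keep only the most relevant elements.
    masked = torch.where(sal >= thresh, x, torch.zeros_like(x)).detach()
    # Apply each aspect recognition model to the masked representation
    # (each model is assumed to output a single presence logit).
    with torch.no_grad():
        return {name: torch.sigmoid(m(masked)).item()
                for name, m in aspect_models.items()}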
METHOD FOR TRAINING STUDENT NETWORK AND METHOD FOR RECOGNIZING IMAGE
Disclosed are a method for training a Student Network and a method for recognizing an image. The training method includes: acquiring first prediction feature information of a sample image at a first granularity and second prediction feature information of the sample image at a second granularity by inputting the sample image into a Student Network; acquiring first feature information of the sample image at the first granularity and second feature information of the sample image at the second granularity by inputting the sample image into a Teacher Network; and acquiring a target Student Network.
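One plausible reading of the two-granularity setup is a distillation training step like the sketch below; the assumption that both networks return (fine, coarse, logits) tuples, and the MSE matching losses, are illustrative choices rather than the patented procedure.

import torch
import torch.nn.functional as F

def distillation_step(student, teacher, images, labels, alpha=0.5):
    # Sketch: match the student to the teacher at two feature granularities.
    # Both networks are assumed to return (fine_feat, coarse_feat, logits).
    s_fine, s_coarse, s_logits = student(images)
    with torch.no_grad():                                      # teacher stays fixed
        t_fine, t_coarse, _ = teacher(images)
    loss = F.cross_entropy(s_logits, labels)                   # task loss
    loss = loss + alpha * (F.mse_loss(s_fine, t_fine)          # first granularity
                           + F.mse_loss(s_coarse, t_coarse))   # second granularity
    return loss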
ADDING AN ADAPTIVE OFFSET TERM USING CONVOLUTION TECHNIQUES TO A LOCAL ADAPTIVE BINARIZATION EXPRESSION
An apparatus comprising an interface, a structured light projector and a processor. The interface may receive pixel data. The structured light projector may generate a structured light pattern. The processor may process the pixel data arranged as video frames, perform operations using a convolutional neural network to determine a binarization result and an offset value, and generate disparity and depth maps in response to the video frames, the structured light pattern, the binarization result, the offset value and a removal of error points. The convolutional neural network may perform a partial block summation to generate a convolution result, compare the convolution result to a speckle value to determine the offset value, generate an adaptive result in response to performing a convolution operation, compare the video frames to the adaptive result to generate the binarization result for the video frames, and remove the error points from the binarization result.
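The core binarization idea can be sketched in a few lines: a box-filter convolution (one reading of the "partial block summation") yields a local mean, the offset term is added to form the adaptive result, and each pixel is compared against it. In this sketch the offset is passed in directly rather than predicted by the CNN, and the block size is an assumption.

import torch
import torch.nn.functional as F

def adaptive_binarize(frame, offset, block=11):
    # Sketch of local adaptive binarization with an added offset term.
    # frame: (B, 1, H, W) grayscale; offset: scalar or per-pixel tensor.
    kernel = torch.ones(1, 1, block, block) / (block * block)
    local_mean = F.conv2d(frame, kernel, padding=block // 2)  # block summation
    adaptive = local_mean + offset          # adaptive result = mean + offset
    return (frame > adaptive).float()       # compare frame to adaptive result

bits = adaptive_binarize(torch.rand(1, 1, 64, 64), offset=0.02)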
SENSOR TRANSFORMATION ATTENTION NETWORK (STAN) MODEL
A sensor transformation attention network (STAN) model including sensors configured to collect input signals, attention modules configured to calculate attention scores of feature vectors corresponding to the input signals, a merge module configured to calculate attention values of the attention scores, and generate a merged transformation vector based on the attention values and the feature vectors, and a task-specific module configured to classify the merged transformation vector is provided.
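A compact sketch of the STAN data flow, assuming fixed-size feature vectors and one shared linear scoring module (the abstract describes per-sensor attention modules; sharing one is a simplification):

import torch
import torch.nn as nn

class STAN(nn.Module):
    # Sketch: per-sensor attention scores, softmax merge, task-specific head.
    def __init__(self, dim=32, num_classes=5):
        super().__init__()
        self.score = nn.Linear(dim, 1)           # attention module (shared here)
        self.task = nn.Linear(dim, num_classes)  # task-specific module

    def forward(self, feats):
        # feats: (B, num_sensors, dim), one feature vector per sensor signal.
        scores = self.score(feats)               # attention scores (B, S, 1)
        attn = torch.softmax(scores, dim=1)      # attention values across sensors
        merged = (attn * feats).sum(dim=1)       # merged transformation vector
        return self.task(merged)                 # classification of the merged vector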
Agricultural pattern analysis system
A pattern recognition system including an image gathering unit that gathers at least one digital representation of a field, an image analysis unit that pre-processes the at least one digital representation of the field, and an annotation unit that provides a visualization of at least one channel for each of the at least one digital representation of the field, where the image analysis unit generates a plurality of image samples from each digital representation of the field and splits each of the image samples into a plurality of categories.
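The sample-generation step might look like the following sketch, which tiles a field image into fixed-size samples on a non-overlapping grid; the tile size and the grid layout are assumptions, and each sample would then be assigned to a category downstream.

import numpy as np

def tile_field_image(field, size=128):
    # Sketch: split one digital representation of a field into image samples.
    h, w = field.shape[:2]
    return [field[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

samples = tile_field_image(np.zeros((512, 640, 4)))  # e.g. 4 spectral channels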
Deep learning based methods and systems for nucleic acid sequencing
Methods and systems for determining a plurality of sequences of nucleic acid (e.g., DNA) molecules in a sequencing-by-synthesis process are provided. In one embodiment, the method comprises obtaining images of fluorescent signals obtained in a plurality of synthesis cycles. The images of fluorescent signals are associated with a plurality of different fluorescence channels. The method further comprises preprocessing the images of fluorescent signals to obtain processed images. Based on a set of the processed images, the method further comprises detecting center positions of clusters of the fluorescent signals using a trained convolutional neural network (CNN) and extracting, based on the center positions of the clusters of fluorescent signals, features from the set of the processed images to generate feature embedding vectors. The method further comprises determining, in parallel, the plurality of sequences of DNA molecules from the extracted features using a trained attention-based neural network.
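A toy sketch of the two-stage idea, assuming a heatmap CNN for cluster centers, top-K peak picking, and a small transformer over per-cluster embeddings; the layer sizes, the value of K, and the pixel-level feature extraction are all illustrative, not the patented design.

import torch
import torch.nn as nn

class BaseCaller(nn.Module):
    # Sketch: CNN proposes cluster centers; an attention model maps
    # per-cluster feature embeddings to base logits (A/C/G/T).
    def __init__(self, channels=4, dim=32, k=16):
        super().__init__()
        self.k = k
        self.center_cnn = nn.Sequential(         # per-pixel center score
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1))
        self.embed = nn.Linear(channels, dim)     # feature embedding vectors
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.attn = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(dim, 4)             # one logit per base

    def forward(self, cycle_images):
        # cycle_images: (B, channels, H, W), one channel per fluorescence dye.
        heat = self.center_cnn(cycle_images)                 # center heatmap
        b, _, h, w = heat.shape
        idx = heat.flatten(1).topk(self.k, dim=1).indices    # (B, K) strongest pixels
        ys, xs = idx // w, idx % w
        feats = cycle_images[torch.arange(b)[:, None], :, ys, xs]  # (B, K, channels)
        tokens = self.embed(feats)
        return self.head(self.attn(tokens))                  # (B, K, 4), in parallel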
Domain adaptation of deep neural networks
Disclosed herein are system, method, and computer program product embodiments for adapting machine learning models for use in additional applications. For example, feature extraction models are readily available for use in applications such as image detection. These feature extraction models can be used to label inputs (such as images) in conjunction with other deep neural network models. However, in adapting the feature extraction models to these uses, it becomes problematic to improve the quality of their results on target data sets, as these feature extraction models are large and resistant to retraining. Approaches disclosed herein include a transfer layer for providing fast retraining of machine learning models.
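The transfer-layer idea reduces to freezing the large feature extractor and training only a small adapter and head, as in this sketch (the module sizes and the single linear adapter are assumptions):

import torch
import torch.nn as nn

# Sketch: keep the large pretrained feature extractor fixed and retrain only
# a small "transfer layer" between it and the task head.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())
for p in feature_extractor.parameters():
    p.requires_grad = False            # large model is resistant to retraining

transfer_layer = nn.Linear(256, 256)   # small, fast-to-retrain adapter
head = nn.Linear(256, 10)

optimizer = torch.optim.Adam(
    list(transfer_layer.parameters()) + list(head.parameters()), lr=1e-3)

def forward(x):
    with torch.no_grad():              # frozen features on the target data set
        f = feature_extractor(x)
    return head(transfer_layer(f))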
System and method for providing unsupervised domain adaptation for spatio-temporal action localization
A system and method for providing unsupervised domain adaptation for spatio-temporal action localization that includes receiving video data associated with a source domain and a target domain, both associated with a surrounding environment of a vehicle. The system and method also include analyzing the video data associated with the source domain and the target domain and determining a key frame of the source domain and a key frame of the target domain. The system and method additionally include completing an action localization model to model a temporal context of actions occurring within the key frame of the source domain and the key frame of the target domain and completing an action adaptation model to localize individuals and their actions and to classify the actions based on the video data. The system and method further include combining losses to complete spatio-temporal action localization of individuals and actions.
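A minimal sketch of the loss combination, assuming a supervised classification loss on the labelled source domain plus a simple feature-mean alignment term on the unlabelled target domain; the alignment term stands in for whatever adaptation loss the patent actually combines.

import torch
import torch.nn.functional as F

def combined_uda_loss(model, src_clips, src_labels, tgt_clips, lam=0.1):
    # Sketch: source-supervised action classification plus an unsupervised
    # source/target alignment term. The model is assumed to return
    # (features, logits) for a batch of clips.
    src_feats, src_logits = model(src_clips)
    tgt_feats, _ = model(tgt_clips)                # no labels in the target domain
    cls_loss = F.cross_entropy(src_logits, src_labels)
    align_loss = (src_feats.mean(0) - tgt_feats.mean(0)).pow(2).sum()
    return cls_loss + lam * align_loss             # combined losses for localization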