Patent classifications
G06V10/7747
Knowledge distillation for neural networks using multiple augmentation strategies
The present disclosure relates to systems, methods, and non-transitory computer readable media for accurately and efficiently learning parameters of a distilled neural network from parameters of a source neural network utilizing multiple augmentation strategies. For example, the disclosed systems can generate lightly augmented digital images and heavily augmented digital images. The disclosed systems can further learn parameters for a source neural network from the lightly augmented digital images. Moreover, the disclosed systems can learn parameters for a distilled neural network from the parameters learned for the source neural network. For example, the disclosed systems can compare classifications of heavily augmented digital images generated by the source neural network and the distilled neural network to transfer learned parameters from the source neural network to the distilled neural network via a knowledge distillation loss function.
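The distillation step described above, comparing teacher and student classifications of the heavily augmented images via a knowledge distillation loss, can be sketched in plain numpy. The function names and the temperature-scaled KL-divergence form are assumptions for illustration, not the patent's actual claimed loss:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Mean KL divergence between the source (teacher) network's and the
    distilled (student) network's softened class distributions, computed
    on the same batch of heavily augmented images."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)),
                                axis=-1)))
```

Minimizing this loss with respect to the student's parameters is one common way such parameter transfer is realized; identical logits give zero loss, and the loss grows as the student's predictions diverge from the teacher's.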
METHOD OF TRAINING A MACHINE LEARNING ALGORITHM TO IDENTIFY OBJECTS OR ACTIVITIES IN VIDEO SURVEILLANCE DATA
A method of training a machine learning algorithm to identify objects or activities in video surveillance data comprises generating a 3D simulation of a real environment from video surveillance data captured by at least one video surveillance camera installed in the real environment. Objects or activities are synthesized within the simulated 3D environment and the synthesized objects or activities within the simulated 3D environment are used as training data to train the machine learning algorithm to identify objects or activities, wherein the synthesized objects or activities within the simulated 3D environment used as training data are all viewed from the same viewpoint in the simulated 3D environment.
DEVICE AND METHOD FOR DETECTING COUNTERFEIT IDENTIFICATION CARD
A device for detecting a counterfeit ID card includes an image input unit to acquire an initial image including an ID card image; an image pre-processing unit to generate a processed image by removing the portion of the initial image other than the ID card image, and to generate, based on the processed image, a first training image having a first resolution value and a second training image having a second resolution value; an image determining unit to determine whether an identifying mark is present on the processed image, based on an artificial intelligence (AI) model based on a neural network trained with training data including the first training image and the second training image; and a model evaluating unit to calculate a plurality of parameters by using a determination result of the image determining unit and to evaluate the AI model based on the plurality of parameters.
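The pre-processing unit's derivation of two training images at different resolution values can be sketched as follows; the average-pooling downsample, the grayscale assumption, and the function name are illustrative choices, not details from the patent:

```python
import numpy as np

def make_training_pair(processed_image, factor=2):
    """From the cropped (pre-processed) grayscale ID-card image, derive two
    training images at different resolutions: the original-resolution image
    and a copy downsampled by average pooling with the given factor."""
    h, w = processed_image.shape
    h2, w2 = h // factor, w // factor
    low = processed_image[:h2 * factor, :w2 * factor]
    low = low.reshape(h2, factor, w2, factor).mean(axis=(1, 3))
    return processed_image, low
```

Training on both resolutions is one way to make the identifying-mark classifier less sensitive to the capture resolution of the submitted card photo.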
Methods, Devices, and Systems for Identifying the Composition of Materials
Computer-implemented methods for determining the material composition of an object are disclosed. The method may include obtaining a target image of the object; determining a material cross-sectional image corresponding to a material to be detected and included in the target image; determining a most similar candidate image to the target image and corresponding candidate material composition information, where the most similar candidate image is selected from a target database; generating a material composition area image set comprising a plurality of material composition area images associated with the material cross-sectional image and the most similar candidate image; determining a set of material composition information corresponding to each of the plurality of material composition area images in the material composition area image set; and obtaining a material composition information set from the set of material composition information. This method improves the recognition accuracy and recognition efficiency of the material composition.
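The retrieval step, selecting the most similar candidate image from the target database, can be sketched as a nearest-neighbor lookup over feature vectors. The Euclidean metric and the function name are assumptions; the patent does not specify how similarity is measured:

```python
import numpy as np

def most_similar_candidate(target_feature, candidate_features):
    """Return the index of the candidate image (row of candidate_features)
    whose feature vector is closest, in Euclidean distance, to the target
    image's feature vector."""
    distances = np.linalg.norm(candidate_features - target_feature, axis=1)
    return int(np.argmin(distances))
```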
DIRECT CLASSIFICATION OF RAW BIOMOLECULE MEASUREMENT DATA
Disclosed herein are systems and methods for direct classification of biological datasets. The datasets may include raw mass spectrometry data. Some aspects include training a classifier for direct classification of raw data, and some aspects include applying the classifier.
ELECTRONIC DEVICE, METHOD, AND COMPUTER READABLE STORAGE MEDIUM FOR DETECTION OF VEHICLE APPEARANCE
According to various embodiments, an electronic device includes a display, an input circuit, at least one memory, and at least one processor configured to: obtain a first image; display, in response to cropping an area comprising a visual object corresponding to a potential vehicle appearance from the first image, fields for inputting an attribute for the area, wherein the fields include a first field for inputting a vehicle type as the attribute and a second field for inputting, as the attribute, a positional relationship between a subject corresponding to the potential vehicle appearance and the camera that obtained the first image; obtain information about the attribute by receiving a user input for each of the fields, including the first field and the second field, through the input circuit; and store a second image composed of the area in a data set for training a computer vision model for vehicle detection.
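The stored record, the cropped area plus its two user-entered attributes, can be sketched as a simple dataset entry; the field names and function signature are hypothetical, chosen only to mirror the first and second fields described above:

```python
def add_training_example(dataset, cropped_image, vehicle_type,
                         positional_relationship):
    """Append one vehicle-detection training example: the cropped area
    (second image) together with the user-entered vehicle type (first
    field) and subject-camera positional relationship (second field)."""
    dataset.append({
        "image": cropped_image,
        "vehicle_type": vehicle_type,
        "position_vs_camera": positional_relationship,
    })
    return dataset
```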
SYSTEMS AND METHODS FOR TRAINING AND USING MACHINE LEARNING MODELS AND ALGORITHMS
Systems and methods for training a machine learning model. The methods comprise, by a computing device: obtaining a training data set comprising a collection of training examples, each training example comprising data point(s); selecting a first subset of training examples from the collection of training examples based on at least one of a derivative vector of a loss function for each training example in the collection of training examples and an importance of each training example relative to other training examples of the collection of training examples; and training the machine learning model using the first subset of training examples. A total number of training examples in the first subset of training examples is unequal to a total number of training examples in the collection of training examples.
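One plausible reading of the selection step, ranking examples by the magnitude of their loss-gradient vectors and keeping the most important ones, can be sketched as follows. Using the gradient norm as the importance score is an assumption; the patent leaves the exact criterion open:

```python
import numpy as np

def select_subset(loss_gradients, k):
    """Given one loss-gradient vector per training example (rows of
    loss_gradients), return the sorted indices of the k examples with the
    largest gradient norms -- a proxy for their relative importance."""
    norms = np.linalg.norm(loss_gradients, axis=1)
    order = np.argsort(norms)[::-1]      # descending by norm
    return np.sort(order[:k])
```

By construction the selected subset has k examples, so its size differs from that of the full collection whenever k is smaller than the collection, matching the abstract's final condition.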
DICTIONARY LEARNING METHOD AND MEANS FOR ZERO-SHOT RECOGNITION
A dictionary learning method and means for zero-shot recognition can establish alignment between the visual space and the semantic space at both the category layer and the image level, so as to realize high-precision zero-shot image recognition. The dictionary learning method includes the following steps: (1) training a cross-domain dictionary of the category layer based on a cross-domain dictionary learning method; (2) generating semantic attributes of an image based on the cross-domain dictionary of the category layer learned in step (1); (3) training a cross-domain dictionary of the image layer based on the image semantic attributes generated in step (2); (4) completing a recognition task on unseen-category images based on the cross-domain dictionary of the image layer learned in step (3).
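Step (2), mapping a visual feature into the semantic space through a learned dictionary, can be sketched with a least-squares code; this is a simplified stand-in for the sparse coding a cross-domain dictionary learning method would typically use, and the names are hypothetical:

```python
import numpy as np

def semantic_attributes(visual_feature, dictionary):
    """Step (2) sketch: given a learned dictionary D (columns = atoms) and a
    visual feature x, return the code a minimizing ||x - D a||^2. The code
    serves as the image's semantic attributes in this simplified setting."""
    code, *_ = np.linalg.lstsq(dictionary, visual_feature, rcond=None)
    return code
```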
Edge devices utilizing personalized machine learning and methods of operating the same
Edge devices utilizing personalized machine learning and methods of operating the same are disclosed. An example edge device includes a model accessor to access a first machine learning model from a cloud service provider. A local data interface is to collect local user data. A model trainer is to train the first machine learning model to create a second machine learning model using the local user data. A local permissions data store is to store permissions indicating constraints on the local user data with respect to sharing outside of the edge device. A permissions enforcer is to apply permissions to the local user data to create a sub-set of the local user data to be shared outside of the edge device. A transmitter is to provide the sub-set of the local user data to a public data repository.
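The permissions enforcer's job, deriving the shareable sub-set of local user data before anything leaves the edge device, can be sketched as per-field filtering. The dict-of-booleans permission format and function name are illustrative assumptions:

```python
def filter_shareable(local_records, permissions):
    """Apply the stored permissions to local user data: keep, in each
    record, only the fields marked shareable, producing the sub-set of the
    local user data allowed to leave the edge device."""
    shareable = {field for field, allowed in permissions.items() if allowed}
    return [{k: v for k, v in record.items() if k in shareable}
            for record in local_records]
```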
Apparatus and method of using AI metadata related to image quality
An image providing apparatus is configured to generate, by using a first artificial intelligence (AI) network, AI metadata including class information and at least one class map, in which the class information includes at least one class corresponding to a type of an object, among a plurality of predefined objects, included in a first image, and the at least one class map indicates a region corresponding to each class in the first image; generate an encoded image by encoding the first image; and output the encoded image and the AI metadata through an output interface.
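The AI metadata described above, the list of classes present in the image plus one per-class region map, can be sketched from a per-pixel class-id array; representing each class map as a binary mask is an illustrative choice:

```python
import numpy as np

def build_ai_metadata(class_ids, class_names):
    """From a per-pixel class-id array, derive AI metadata: the class
    information (names of classes present in the image) and one binary
    class map per present class, marking that class's region."""
    present = sorted({int(c) for c in class_ids.ravel()})
    class_info = [class_names[c] for c in present]
    class_maps = {class_names[c]: (class_ids == c).astype(np.uint8)
                  for c in present}
    return class_info, class_maps
```

The metadata would then travel alongside the encoded image so a decoder can apply class-specific quality processing per region.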