G06V10/772

Method, apparatus, and system for generating synthetic image data for machine learning
11475677 · 2022-10-18

An approach is provided for generating synthetic image data for machine learning. The approach, for instance, involves determining, by a processor, a set of parameters for indicating an action by one or more objects. The action is a dynamic movement of the one or more objects through a geographic space over a period of time. The approach also involves processing the set of parameters to generate synthetic image data. The synthetic image data includes a computer-generated image sequence of the one or more objects performing the action in the geographic space over the period of time. The approach further involves automatically labeling the synthetic image data with at least one label representing the action, the set of parameters, or a combination thereof. The approach further involves providing the labeled synthetic image data for training or evaluating a machine learning model to detect the action.
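The parameter-to-labeled-data flow described above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: the renderer is replaced by a trivial position update, and all names (`generate_labeled_sequence`, the parameter keys) are hypothetical. The point it shows is that because the action is generated from known parameters, the label comes for free.

```python
def generate_labeled_sequence(params, num_frames=8):
    """Toy renderer stand-in: one frame per time step showing the object's
    position as it moves through the space, plus an automatic label."""
    x, y = params["start"]
    dx, dy = params["velocity"]
    frames = [
        {"t": t, "object": params["object"], "pos": (x + dx * t, y + dy * t)}
        for t in range(num_frames)
    ]
    # The label is derived from the generating parameters themselves,
    # so no manual annotation pass is needed.
    return {"frames": frames, "label": params["action"], "params": params}

sample = generate_labeled_sequence(
    {"object": "pedestrian", "action": "crossing",
     "start": (0, 0), "velocity": (1, 0)}
)
```

A real system would render image frames rather than coordinate dicts, but the labeling step would attach the same parameter-derived metadata.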

Dictionary learning device, dictionary learning method, and program storage medium
11600086 · 2023-03-07

A reference data extraction unit extracts, from a photographic image captured by an imaging device that images an object to be recognized, an image of a reference image region that serves as a reference and contains a detection subject in the object. An expanded data extraction unit extracts from the photographic image an image of an expanded-image region, an image region that includes the reference image region and is larger than it. A reduced data extraction unit extracts from the photographic image an image of a reduced-image region, an image region that includes the detection subject but is smaller than the reference image region, with the result that a portion of the object falls outside the region. A learning unit uses the extracted images of these image regions to learn a dictionary.
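The three extraction units can be sketched as crops at three scales around the same detection subject. This is a minimal sketch with hypothetical box arithmetic (`margin`, `shrink`, and the function names are assumptions, and clamping to the image border is omitted for brevity):

```python
def crop(image, box):
    """Crop a 2-D list image to box = (top, left, bottom, right)."""
    top, left, bottom, right = box
    return [row[left:right] for row in image[top:bottom]]

def extract_training_crops(image, ref_box, margin=1, shrink=1):
    """Return the reference crop, a larger expanded crop containing it,
    and a smaller reduced crop that still contains the detection subject."""
    t, l, b, r = ref_box
    expanded = (t - margin, l - margin, b + margin, r + margin)
    reduced = (t + shrink, l + shrink, b - shrink, r - shrink)
    return {
        "reference": crop(image, ref_box),
        "expanded": crop(image, expanded),
        "reduced": crop(image, reduced),
    }
```

Training a dictionary on all three scales exposes it to both extra context and partial views of the object.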

INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
20230117180 · 2023-04-20

An information processing method includes: inputting sensing data into a first inference model to obtain a feature map that is a first inference result of the sensing data from the first inference model; inputting the first inference result into a third inference model to obtain, from the third inference model, model selection information indicating at least one second inference model selected from among a plurality of second inference models; inputting the first inference result into the at least one second inference model indicated by the model selection information to obtain, from the at least one second inference model, at least one second inference result of the first inference result; and outputting the at least one second inference result.
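The three-model flow above (shared backbone, selector, dynamically chosen heads) can be sketched like this. All model bodies are trivial placeholders and every name (`backbone`, `selector`, `HEADS`) is hypothetical; the sketch only shows the control flow:

```python
def backbone(x):
    # First inference model: turn sensing data into a feature map
    # (here, a trivial element-wise transform).
    return [v * 2 for v in x]

def selector(feature_map):
    # Third inference model: emit model-selection information naming
    # which second model(s) to run on this feature map.
    return ["detector"] if max(feature_map) > 5 else ["classifier"]

HEADS = {  # the plurality of second inference models
    "detector": lambda fm: {"task": "detector", "score": max(fm)},
    "classifier": lambda fm: {"task": "classifier", "score": sum(fm)},
}

def run_pipeline(x):
    fm = backbone(x)         # first inference result
    selected = selector(fm)  # model selection information
    return [HEADS[name](fm) for name in selected]  # second inference results
```

Routing through a selector means only the relevant downstream model runs, instead of evaluating every head on every input.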

METHOD AND SYSTEM FOR IDENTIFICATION AND CLASSIFICATION OF DIFFERENT GRAIN AND ADULTERANT TYPES

State-of-the-art techniques mostly rely on computationally intensive, time-consuming neural networks. Embodiments provide a method and system for identification and classification of different grain and adulterant types for grain-grading analysis. The method analyzes an input image of a grain sample of elements to determine morphological features of the elements, using a calibration factor determined dynamically from a reference object in the image. Variation in the perimeter of the elements is used to classify the elements into target grain size, lower-size adulterants, and higher-size adulterants. The aspect ratio of the target grain determines the grain variety, and the adulterants determine the adulteration percentage. Elements are further classified into grain-colored and non-grain-colored adulterants. Grain-colored adulterants are further classified as Grain-Like Impurities (GLI) and non-GLI, using predefined ranges of the standard deviation of the perimeter metric. The weight of grain-colored and non-grain-colored adulterants is obtained by mapping predefined weights to the aspect ratio.
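The perimeter-based size split at the heart of the method can be sketched without any neural network, which is the abstract's point. Thresholds and names here are hypothetical, not the patent's calibrated values:

```python
def classify_by_perimeter(perimeters, target, tol=0.2):
    """Bucket detected elements by perimeter relative to the target grain
    perimeter: within tolerance -> grain, below -> lower-size adulterant,
    above -> higher-size adulterant."""
    buckets = {"grain": [], "low_size": [], "high_size": []}
    for p in perimeters:
        if p < target * (1 - tol):
            buckets["low_size"].append(p)
        elif p > target * (1 + tol):
            buckets["high_size"].append(p)
        else:
            buckets["grain"].append(p)
    return buckets
```

In the described system, `target` would come from the calibration factor derived from the reference object, and further splits (color, aspect ratio) would refine each bucket.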

Neural style transfer for image varietization and recognition

Systems and methods for image recognition are provided. A style-transfer neural network is trained for each real image to obtain a trained style-transfer neural network. The texture or style features of the real images are transferred, via the trained style-transfer neural network, to a target image to generate styled images which are used for training an image-recognition machine learning model (e.g., a neural network). In some cases, the real images are clustered and representative style images are selected from the clusters.
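The clustering-and-representative-selection step mentioned at the end can be sketched in miniature. This is a toy one-dimensional stand-in: a real system would cluster deep texture statistics (e.g., with k-means) rather than sorted scalars, and all names here are hypothetical:

```python
def cluster_styles(style_vectors, k=2):
    """Group style features into k equal-size clusters (toy scheme: sort
    and slice) and pick one representative style per cluster -- here,
    the median member."""
    ordered = sorted(style_vectors)
    size = max(1, len(ordered) // k)
    clusters = [ordered[i:i + size] for i in range(0, len(ordered), size)]
    return [c[len(c) // 2] for c in clusters]
```

Transferring only the representative styles to the target image keeps the generated training set varied without running the style-transfer network once per real image.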

Characterization of amount of training for an input to a machine-learned network

The user is to be informed of the reliability of the machine-learned model based on the current input relative to the training data used to train the model, or relative to the model itself. In a medical situation, the data for a current patient is compared to the training data used to train a prediction model and/or to a decision function of the prediction model. The comparison indicates the training content relative to the current patient, and so provides the user with information on the reliability of the prediction for the current situation. The indication deals with the variation of the data of the current patient from the training data or relative to the prediction model, allowing the user to see how well trained the prediction model is relative to the current patient. This indication is in addition to any global confidence output through application of the prediction model to the data of the current patient.
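One simple way to quantify how far a current input sits from the training data, in the spirit of the comparison described above, is a z-score against the training distribution. This is a minimal sketch reduced to a single feature; a real system would compare full feature vectors (e.g., with a Mahalanobis distance), and the function name is an assumption:

```python
from statistics import mean, stdev

def training_support(value, training_values):
    """Distance of the current patient's value from the training data,
    in training-set standard deviations. Small values suggest the model
    was trained on similar cases; large values flag extrapolation."""
    mu, sigma = mean(training_values), stdev(training_values)
    return abs(value - mu) / sigma
```

Such a score would be shown alongside the model's own confidence output, since the two measure different things: the score measures coverage by the training data, not correctness of the prediction.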

HIGH FIDELITY DATA-DRIVEN MULTI-MODAL SIMULATION
20230159033 · 2023-05-25

Provided are methods for generating high-fidelity synthetic sensor data representing hypothetical driving scenarios for a vehicle. Some described methods include accessing sensor data associated with operation of a vehicle in an environment while traversing a first path. Operation of a simulated vehicle is then simulated in a synthetic driving scenario in a simulated environment, along a second path different from the first path and together with a plurality of simulated agents. Systems and computer program products are also provided.
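The first-path/second-path relationship can be sketched as deriving a hypothetical trajectory from a recorded one and stepping a simulated vehicle along it. This is an illustrative skeleton only: a real simulator would re-render sensor data at each step, and all names and the lateral-offset scheme are assumptions:

```python
def resample_path(recorded_path, lateral_offset):
    """Derive a second path from recorded (x, y) waypoints by offsetting
    each one laterally -- a stand-in for authoring a synthetic scenario."""
    return [(x, y + lateral_offset) for x, y in recorded_path]

def simulate(path, agents_per_step):
    """Step a simulated vehicle along the path; each log entry records the
    ego pose and the simulated agents present at that time step."""
    return [{"t": t, "ego": pose, "agents": agents_per_step[t]}
            for t, pose in enumerate(path)]
```

The simulation log is where synthetic sensor data would be generated in the described methods, conditioned on the recorded real-world data for fidelity.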

IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND LEARNING SYSTEM

An image processing device for generating learning data used in machine learning includes a processor that obtains image data. The processor specifies an unprocessable region, that is, a region of the image data's image region in which a predetermined process cannot be performed or in which the predetermined process is not performed, and generates, as the learning data, image data on which the predetermined process has been performed in the image region except for the unprocessable region.
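The masked-processing step can be sketched at pixel level. This is a hypothetical stand-in (the abstract's regions need not be pixel sets, and `apply_outside_mask` and its arguments are assumed names): pixels in the unprocessable region are left unchanged while the predetermined process runs everywhere else.

```python
def apply_outside_mask(image, process, unprocessable):
    """Apply `process` to every pixel of a 2-D list image except those whose
    (row, col) coordinates are in the unprocessable set, which pass through
    unchanged."""
    return [
        [px if (r, c) in unprocessable else process(px)
         for c, px in enumerate(row)]
        for r, row in enumerate(image)
    ]
```

The output image, processed everywhere it legitimately can be, then serves as the learning data.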

Input method and electronic device

Embodiments of this application provide an input method. The input method may be implemented in an electronic device that has a fingerprint collection device, and the method includes: when a text input application runs, obtaining, by the electronic device, a fingerprint of a user on a touchscreen; determining, by the electronic device when the fingerprint is a prestored registered fingerprint, a target lexicon associated with the fingerprint; and providing, by the electronic device by using the target lexicon, at least one candidate word corresponding to a current input event.
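The fingerprint-to-lexicon lookup described above can be sketched as a keyed dictionary of per-user lexicons. The registry contents, fingerprint identifiers, and function name are all illustrative assumptions; real fingerprint matching is far more involved than a dict lookup:

```python
REGISTERED = {  # prestored registered fingerprints -> associated lexicons
    "fp_alice": {"lexicon": ["medical", "diagnosis", "dosage"]},
    "fp_bob": {"lexicon": ["ledger", "invoice", "audit"]},
}

def candidates(fingerprint, prefix):
    """Return candidate words for the current input event from the lexicon
    tied to a registered fingerprint; unregistered prints get no
    personalized candidates."""
    entry = REGISTERED.get(fingerprint)
    if entry is None:
        return []
    return [w for w in entry["lexicon"] if w.startswith(prefix)]
```

Tying the lexicon to the fingerprint lets the same device offer different candidate words to different registered users for the same keystrokes.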