G06N3/09

APPARATUS AND METHOD FOR SPEECH-EMOTION RECOGNITION WITH QUANTIFIED EMOTIONAL STATES
20230048098 · 2023-02-16

A method for training a speech-emotion recognition classifier under a continuous self-updating and re-trainable ASER machine learning model, wherein the training data is generated by: obtaining an utterance of a human speech source; processing the utterance in an emotion evaluation and rating process with normalization; extracting the features of the utterance; quantifying the feature attributes of the extracted features by labelling, tagging, and weighting the feature attributes, with their values assigned under measurable scales; and hashing the quantified feature attributes in a feature attribute hashing process to obtain hash values for creating a feature vector space. The run-time speech-emotion recognition comprises: extracting the features of an utterance; recognizing, by the trained recognition classifier, the emotions and levels of intensity of the utterance units; and computing a quantified emotional state of the utterance by fusing the recognized emotions and levels of intensity with the quantified extracted feature attributes according to their respective weightings.
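The attribute hashing and fusion steps described above can be sketched as follows. This is an illustrative Python sketch only, not the claimed implementation: the attribute names, weightings, vector dimension, and the fusion rule are all assumptions.

```python
import hashlib

def hash_attribute(name, dim=16):
    # Hash a feature-attribute name to a stable slot in a fixed-size
    # feature vector space (illustrative feature-attribute hashing step).
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % dim

def build_feature_vector(attributes, dim=16):
    # attributes: (name, value on a measurable scale, weighting) triples,
    # i.e. labelled, tagged, and weighted feature attributes.
    vec = [0.0] * dim
    for name, value, weight in attributes:
        vec[hash_attribute(name, dim)] += value * weight
    return vec

def quantified_emotional_state(emotions, attributes, dim=16):
    # Fuse recognized emotions/intensity levels with the weighted,
    # quantified attributes into one score (toy fusion rule, assumed).
    intensity = sum(level for _, level in emotions) / max(len(emotions), 1)
    attr_mass = sum(abs(v) for v in build_feature_vector(attributes, dim))
    return intensity + attr_mass
```

The hashing step fixes the dimensionality of the feature vector space regardless of how many attributes are quantified, which is the usual motivation for feature hashing.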

METHOD OF GENERATING PRE-TRAINING MODEL, ELECTRONIC DEVICE, AND STORAGE MEDIUM

A method of generating a pre-training model, an electronic device and a storage medium, which relate to the field of artificial intelligence technology, in particular to computer vision and deep learning technology. The method includes: determining a performance index set corresponding to a candidate model structure set, wherein the candidate model structure set is determined from a plurality of model structures included in a search space, and the search space is a super-network-based search space; determining, from the candidate model structure set, a target model structure corresponding to each chip according to the performance index set, wherein each target model structure is a model structure meeting a performance index condition; and determining, for each chip, the target model structure corresponding to the chip as a pre-training model corresponding to the chip, wherein the chip is configured to run the pre-training model corresponding to the chip.
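The per-chip selection step can be sketched as below. The performance-index fields (`latency_ms`, `accuracy`) and the condition (accuracy floor, then lowest latency) are assumptions for illustration; the abstract only requires that each target structure meet a performance index condition.

```python
def select_target_structures(candidates, perf_index, min_accuracy=0.9):
    # perf_index[chip][structure] -> {"latency_ms": ..., "accuracy": ...}
    # For each chip: keep candidate structures meeting the performance
    # index condition, then pick the fastest as its pre-training model.
    targets = {}
    for chip, by_structure in perf_index.items():
        feasible = [s for s in candidates
                    if by_structure[s]["accuracy"] >= min_accuracy]
        targets[chip] = min(feasible,
                            key=lambda s: by_structure[s]["latency_ms"])
    return targets
```

Because the index is keyed per chip, the same candidate set can yield a different target structure on each chip, which is the point of the per-chip selection.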

HUMAN-OBJECT INTERACTION DETECTION

A human-object interaction detection method, a neural network and a training method therefor are provided. The human-object interaction detection method includes: extracting a plurality of first target features and one or more first motion features from an image feature of an image to be detected; fusing each first target feature and some of the first motion features to obtain enhanced first target features; fusing each first motion feature and some of the first target features to obtain enhanced first motion features; processing the enhanced first target features to obtain target information of a plurality of targets including human targets and object targets; processing the enhanced first motion features to obtain motion information of one or more motions, where each motion is associated with one human target and one object target; and matching the plurality of targets with the one or more motions to obtain a human-object interaction detection result.
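The mutual fusion of target and motion features can be sketched with a simple dot-product attention. The abstract does not fix the fusion operator, so the attention-weighted sum below is an assumption:

```python
import math

def _softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def enhance(queries, keys):
    # Fuse each query feature with an attention-weighted sum of the key
    # features; called once with targets as queries and motions as keys
    # to enhance targets, and once the other way round to enhance motions.
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) for k in keys]
        w = _softmax(scores)
        out.append([qi + sum(wj * k[i] for wj, k in zip(w, keys))
                    for i, qi in enumerate(q)])
    return out
```

Usage: `enhanced_targets = enhance(target_feats, motion_feats)` and `enhanced_motions = enhance(motion_feats, target_feats)`.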

DATA LABELING SYSTEM AND METHOD, AND DATA LABELING MANAGER
20230048473 · 2023-02-16

Embodiments of this application disclose a data labeling system and method, and a data labeling manager. The system includes a data labeling manager, a labeling model storage repository, and a basic computing unit storage repository. The data labeling manager receives a data labeling request, obtains a target basic computing unit, allocates a hardware resource to the target basic computing unit, establishes a target computing unit, obtains first storage path information of basic parameter data of a first labeling model, and sends the first storage path information to the target computing unit. The target computing unit obtains the basic parameter data of the first labeling model by using the first storage path information, combines a target model inference framework and the basic parameter data of the first labeling model to obtain the first labeling model, and labels to-be-labeled data by using the first labeling model.
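The request flow can be sketched as follows. Class names, the repository layout, and the parameter loader are all illustrative placeholders, not from the application:

```python
class DataLabelingManager:
    def __init__(self, model_repo, unit_repo, hardware):
        self.model_repo = model_repo   # labeling model storage repository
        self.unit_repo = unit_repo     # basic computing unit storage repository
        self.hardware = hardware       # allocatable hardware resources

    def handle(self, request, inference_framework):
        # Obtain a basic computing unit, allocate hardware to it, and
        # hand it the first labeling model's storage path information.
        unit = self.unit_repo[request["unit"]]
        unit["hardware"] = self.hardware.pop()
        path = self.model_repo[request["model"]]   # first storage path info
        params = load_params(path)                 # unit fetches parameter data
        model = inference_framework(params)        # framework + params -> model
        return [model(x) for x in request["data"]] # label the to-be-labeled data

def load_params(path):
    # Stand-in for reading basic parameter data from storage (assumed).
    return {"bias": 1} if path.endswith("v1") else {"bias": 0}
```

Separating the inference framework from the stored parameter data is what lets the same computing unit be reused for different labeling models.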

SYSTEMS AND METHODS FOR AUTOMATED X-RAY INSPECTION
20230050479 · 2023-02-16

A computer-implemented method of automated X-ray inspection during the production of printed circuit board (PCB) assemblies. The method includes capturing an X-ray image of a PCB assembly, determining a first error indicator based on image processing of the captured X-ray image, determining, when the first error indicator indicates that the PCB assembly is faulty, a second error indicator based on the captured X-ray image using a trained adaptive algorithm, and outputting the second error indicator as a result of the inspection.
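The two-stage decision can be sketched as below; the function names are placeholders. The key point is that the trained adaptive algorithm is only invoked for boards the image-processing stage flags as faulty:

```python
def inspect(xray_image, image_processing_check, trained_classifier):
    # Stage 1: error indicator from classical image processing.
    first_error = image_processing_check(xray_image)
    if not first_error:
        # Board passes the first stage; the second stage is skipped.
        return {"first": first_error, "second": None}
    # Stage 2: the trained adaptive algorithm re-assesses the flagged
    # board; its indicator is output as the inspection result.
    return {"first": first_error, "second": trained_classifier(xray_image)}
```

Gating the learned classifier behind the cheaper image-processing check keeps the expensive model off boards that already pass.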

SEMANTIC ANNOTATION OF SENSOR DATA USING UNRELIABLE MAP ANNOTATION INPUTS

Provided are methods for semantic annotation of sensor data using unreliable map annotation inputs, which can include training a machine learning model to accept inputs including images representing sensor data for a geographic area and unreliable semantic annotations for the geographic area. The machine learning model can be trained against validated semantic annotations for the geographic area, such that subsequent to training, additional images representing sensor data and additional unreliable semantic annotations can be passed through the trained machine learning model to provide predicted semantic annotations for the additional images. Systems and computer program products are also provided.
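The training setup can be sketched as supervised pairs whose inputs bundle a sensor image with its unreliable annotation and whose targets are the validated annotations. The data layout below is an assumption for illustration:

```python
def make_training_pairs(images, unreliable_annotations, validated_annotations):
    # Each model input couples sensor data with an unreliable map
    # annotation; the validated annotation is the supervision target.
    return [((img, noisy), gold)
            for img, noisy, gold in zip(images, unreliable_annotations,
                                        validated_annotations)]

def predict(model, image, noisy_annotation):
    # After training, the model maps (image, unreliable annotation)
    # to a predicted semantic annotation for that geographic area.
    return model((image, noisy_annotation))
```

The unreliable annotation acts as a noisy prior at inference time rather than as a label, which is what distinguishes this setup from ordinary supervised segmentation.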

METHOD OF FUSING IMAGE, AND METHOD OF TRAINING IMAGE FUSION MODEL

A method of fusing an image, a method of training an image fusion model, an electronic device, and a storage medium are provided. The method of fusing the image includes: encoding a stitched image obtained by stitching a foreground image and a background image, so as to obtain a feature map; and decoding the feature map to obtain a fused image, wherein the feature map is decoded by: performing a weighting on the feature map by using an attention mechanism, so as to obtain a weighted feature map; performing a fusion on the feature map according to feature statistical data of the weighted feature map, so as to obtain a fused feature; and decoding the fused feature to obtain the fused image.
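The decode step — attention weighting, then a fusion driven by the weighted map's feature statistics — can be sketched on a 1-D feature map. Using instance-normalization-style statistics (mean and standard deviation) is an assumption; the abstract only specifies "feature statistical data":

```python
def decode_step(feature_map, attention):
    # 1) Weight the feature map with the attention scores.
    weighted = [a * f for a, f in zip(attention, feature_map)]
    # 2) Compute feature statistics of the weighted map ...
    mean = sum(weighted) / len(weighted)
    std = (sum((w - mean) ** 2 for w in weighted) / len(weighted)) ** 0.5
    # 3) ... and fuse: normalize the original map by those statistics
    #    (the fused feature would then go through further decoding).
    return [(f - mean) / (std + 1e-5) for f in feature_map]
```

With uniform attention this reduces to plain feature normalization; non-uniform attention shifts the statistics toward the attended regions.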

HUMAN-OBJECT INTERACTION DETECTION

A human-object interaction detection method, a neural network and a training method therefor are provided. The human-object interaction detection method includes: performing first target feature extraction on image features of an image to obtain first target features; performing first interaction feature extraction on the image features to obtain first interaction features and scores thereof; determining at least some first interaction features in the first interaction features based on the score of each of the first interaction features; determining first motion features based on the at least some first interaction features and the image features; processing the first target features to obtain target information of targets in the image; processing the first motion features to obtain motion information of one or more motions in the image; and matching the targets with the motions to obtain a human-object interaction detection result.
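The score-based selection of interaction features can be sketched as a top-k filter; `k` and the toy motion-feature derivation are assumptions for illustration:

```python
def select_by_score(interaction_features, scores, k=2):
    # Keep the k interaction features with the highest scores; the
    # motion features are then derived from these and the image features.
    order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
    return [interaction_features[i] for i in order[:k]]

def motion_features(selected, image_features):
    # Toy derivation: add pooled image context to each kept feature.
    ctx = sum(image_features) / len(image_features)
    return [[v + ctx for v in f] for f in selected]
```

Filtering by score before deriving motion features keeps the later matching stage from wasting work on low-confidence interactions.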

METHOD OF TRAINING DEEP LEARNING MODEL AND METHOD OF PROCESSING NATURAL LANGUAGE

A method of training a deep learning model, a method of processing a natural language, an electronic device, and a storage medium are provided, which relate to the field of artificial intelligence, in particular to deep learning technology and natural language processing technology. The method includes: inputting first sample data into a first deep learning model to obtain a first output result; training the first deep learning model according to the first output result and a first target output result, wherein the first target output result is obtained by processing the first sample data using a reference deep learning model; inputting second sample data into a second deep learning model to obtain a second output result; and training the second deep learning model according to the second output result and a second target output result, to obtain a trained second deep learning model.
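The first training stage is a distillation-style step: the reference (teacher) model supplies the target output. A minimal sketch with a scalar linear student follows; the squared-error loss, learning rate, and training schedule are assumptions:

```python
def distill_step(w, x, teacher_out, lr=0.1):
    # One gradient step on the squared error between the student's
    # output w*x and the teacher's target output.
    student_out = w * x
    grad = 2 * (student_out - teacher_out) * x
    return w - lr * grad

def train_student(w, samples, teacher, lr=0.1, epochs=50):
    # Train the student so its outputs track the reference model's.
    for _ in range(epochs):
        for x in samples:
            w = distill_step(w, x, teacher(x), lr)
    return w
```

The second stage in the abstract repeats the same pattern with the second model and second target outputs, so one step function covers both.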

MACHINE LEARNING FOR RF IMPAIRMENT DETECTION

Systems and methods are provided for automatically analyzing spectral power measurements to identify abnormalities. The systems and methods may receive measurements comprising RF power measured over a contiguous range of frequencies, where at least a first portion of the contiguous range is used to transmit signals and at least a second portion of the contiguous range is unused. The boundaries of each unused portion may be identified and the unused portion infilled to provide modified measurements. The modified measurements may be automatically analyzed to identify the abnormalities.
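The boundary identification and infilling can be sketched as linear interpolation across each unused sub-band. The abstract does not specify the infill method, so interpolation between the nearest used samples is an assumption:

```python
def infill_unused(power_dbm, used):
    # Walk the spectrum, find the boundaries of each unused run, and
    # replace its samples by interpolating between the nearest used
    # samples, yielding the modified measurements that the abnormality
    # analysis then consumes.
    out = list(power_dbm)
    i = 0
    while i < len(out):
        if used[i]:
            i += 1
            continue
        j = i
        while j < len(out) and not used[j]:
            j += 1                     # j = right boundary (first used sample)
        left = out[i - 1] if i > 0 else (out[j] if j < len(out) else 0.0)
        right = out[j] if j < len(out) else left
        span = j - i + 1
        for k in range(i, j):
            t = (k - i + 1) / span
            out[k] = left + t * (right - left)
        i = j
    return out
```

Infilling removes the artificial discontinuities at band edges so that the downstream analysis flags genuine impairments rather than the unused gaps themselves.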