Patent classifications
G06F16/65
ARTIFICIAL INTELLIGENCE-BASED AUDIO PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
An artificial intelligence-based audio processing method includes: obtaining an audio clip of an audio scene, the audio clip including noise; performing audio scene classification processing based on the audio clip to obtain an audio scene type corresponding to the noise in the audio clip; and determining a target audio processing mode corresponding to the audio scene type, and applying the target audio processing mode to the audio clip of the audio scene according to a degree of interference caused by the noise in the audio clip.
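The claimed flow (classify the scene from a noisy clip, pick a processing mode for that scene type, apply it according to how much the noise interferes) can be sketched as follows. This is a minimal illustration, not the patented implementation: the scene labels, the mode table, the RMS-based interference estimate, and the gain factors standing in for real suppressors are all hypothetical.

```python
import numpy as np

# Hypothetical mapping from audio scene type to a target processing mode
SCENE_MODES = {
    "street": "aggressive_denoise",
    "office": "light_denoise",
    "quiet_room": "passthrough",
}

def interference_db(clip, noise_floor=1e-3):
    """Rough degree-of-interference estimate: RMS level above a noise floor, in dB."""
    rms = float(np.sqrt(np.mean(clip ** 2)))
    return 20.0 * np.log10(max(rms, noise_floor) / noise_floor)

def process_clip(clip, scene_type, threshold_db=20.0):
    """Apply the scene's target mode only when the noise interferes enough."""
    mode = SCENE_MODES.get(scene_type, "passthrough")
    if interference_db(clip) < threshold_db:
        return clip, "passthrough"  # interference too mild to warrant processing
    if mode == "aggressive_denoise":
        return clip * 0.2, mode  # placeholder for a real noise suppressor
    if mode == "light_denoise":
        return clip * 0.6, mode
    return clip, mode
```

In a real system the classifier would be a trained model and the modes would be actual suppression pipelines; the point here is only the classify-then-select-mode control flow.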
Sound recognition model training method and system and non-transitory computer-readable medium
A sound recognition model training method comprises determining a relationship between a sound event and a first parameter and deciding a second parameter in response to the relationship, performing sampling on the sound event using the first parameter and the second parameter to generate training audio files, and inputting at least part of the training audio files to a sound recognition model for training the sound recognition model, wherein a length of each of the training audio files is associated with the first parameter, a time difference between every two of the training audio files is associated with the second parameter, and the sound recognition model is used for determining a sound classification.
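Reading the first parameter as a window length and the second as a hop between consecutive windows, the sampling step can be sketched as a sliding-window slicer. This is an illustrative interpretation only; the names `sample_event` and `choose_hop`, and the density rule in `choose_hop`, are assumptions, not the patented rule.

```python
def sample_event(event, window_len, hop):
    """Slice a sound event into fixed-length training clips.

    window_len (the first parameter) sets each training file's length;
    hop (the second parameter) sets the time difference between every
    two consecutive training files.
    """
    clips = []
    start = 0
    while start + window_len <= len(event):
        clips.append(event[start:start + window_len])
        start += hop
    return clips

def choose_hop(event_len, window_len):
    """Illustrative second-parameter rule: sample short events more densely."""
    if event_len < 2 * window_len:
        return max(1, window_len // 4)
    return max(1, window_len // 2)
```

The resulting clips would then be fed to the recognition model for training; the clip length and inter-clip spacing are exactly the two parameters the abstract ties to the sound event.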
MUSIC AND DIGITAL RIGHTS MANAGEMENT SYSTEMS AND METHODS
A method and associated system for matching and delivering digital work metadata to one or more digital service providers including modifying one or more digital work metadata files, which includes removing non-critical data or segment-erroneous data or performing a language translation; reformatting the one or more digital work metadata files for compatibility with a transformer model-based AI matching operation; performing a block grouping operation on the one or more digital work metadata files, where data associated with the one or more digital work metadata files is grouped in blocks and analyzed for one or more pairs of data records; performing the transformer model-based AI matching operation to determine whether each pair of the one or more pairs of data records comprises a matching pair of data records; and transmitting output data from the transformer model-based AI matching operation to the one or more digital service providers.
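The clean-then-block-then-match pipeline above can be sketched in a few lines. Everything here is illustrative: the field whitelist, the first-letter blocking key, and the string-similarity matcher (standing in for the transformer model, which the sketch does not implement) are all assumptions.

```python
import itertools
import difflib

def clean(record):
    """Keep only critical fields, normalized; a real pipeline might also
    translate non-English metadata here."""
    return {k: v.strip().lower() for k, v in record.items() if k in {"title", "artist"}}

def block_key(record):
    return record["title"][:1]  # hypothetical blocking key: first title character

def candidate_pairs(records):
    """Group cleaned records into blocks, then yield record pairs within each block."""
    blocks = {}
    for r in map(clean, records):
        blocks.setdefault(block_key(r), []).append(r)
    for group in blocks.values():
        yield from itertools.combinations(group, 2)

def is_match(a, b, threshold=0.85):
    """Stand-in for the transformer model-based matcher: string similarity."""
    score = difflib.SequenceMatcher(
        None, a["title"] + a["artist"], b["title"] + b["artist"]
    ).ratio()
    return score >= threshold
```

Blocking is what keeps the pairwise comparison tractable: only records sharing a block key are ever scored, so the expensive matcher runs on a small fraction of all possible pairs.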
DATA RECOVERY METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A data recovery method includes receiving a request to recover target audio and video behavior data generated during use of an audio and video application by a target user. The target audio and video behavior data has been deleted from a database. The method includes obtaining a target data category of the target audio and video behavior data; searching a blockchain system for the target audio and video behavior data based on the target data category, the blockchain system being configured to store operation data generated by the audio and video application that includes first operation data of audio and video behavior data. The method includes storing the target audio and video behavior data in the database; and returning the target audio and video behavior data to the target user.
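The recovery flow (look up deleted behavior data in the blockchain by category, write it back to the database, return it to the user) can be sketched with the ledger modeled as an append-only list of blocks. The record shape and the `category` field are hypothetical; a real blockchain system would involve hashing, consensus, and verification that this sketch omits entirely.

```python
# The "blockchain" modeled as an append-only list of blocks, each holding
# operation records the audio/video application wrote as it ran.
ledger = []

def append_block(records):
    """Append one block of operation records to the ledger."""
    ledger.append(list(records))

def recover(category, database):
    """Search the ledger for behavior data of the target category,
    restore it to the database, and return it to the requester."""
    found = [r for block in ledger for r in block if r["category"] == category]
    database.setdefault(category, []).extend(found)
    return found
```

The key property the abstract relies on is that the ledger is append-only: a row deleted from the mutable database still survives as an operation record on chain, so recovery reduces to a category-filtered search.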
IDENTIFYING MUSIC ATTRIBUTES BASED ON AUDIO DATA
The present disclosure describes techniques for identifying music attributes. The described techniques comprise receiving audio data of a piece of music; determining at least one attribute of the piece of music based on the audio data of the piece of music using a model; the model comprising a convolutional neural network and a transformer; the model being pre-trained using training data, wherein the training data comprise labelled data associated with a first plurality of music samples and unlabelled data associated with a second plurality of music samples, the labelled data comprise audio data of the first plurality of music samples and label information indicative of attributes of the first plurality of music samples, and the unlabelled data comprise audio data of the second plurality of music samples.
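One concrete way to feed both labelled and unlabelled music samples into a single pre-training loop is to interleave them and tag unlabelled items with a sentinel label, so the loop can route each item to a supervised attribute loss or a self-supervised loss. This batching sketch is an assumption about the data side only; it does not implement the CNN + transformer model itself, and `pretraining_batches` is a hypothetical name.

```python
import random

def pretraining_batches(labelled, unlabelled, batch_size=4, seed=0):
    """Interleave labelled and unlabelled music samples for pre-training.

    labelled: list of (audio, attributes) pairs; unlabelled: list of audio.
    Unlabelled items carry a None label so the training loop can send them
    to a self-supervised objective instead of the attribute loss.
    """
    pool = [(audio, attrs) for audio, attrs in labelled]
    pool += [(audio, None) for audio in unlabelled]
    random.Random(seed).shuffle(pool)  # fixed seed keeps batching reproducible
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]
```

After pre-training on such mixed batches, the model would be applied to new audio to predict attributes such as genre or mood, per the abstract.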