G10H2250/625

Timbre fitting method and system based on time-varying multi-segment spectrum

The disclosure provides a timbre fitting method and system based on a time-varying multi-segment spectrum. The system includes an input device for obtaining audio signals of musical instruments and a segmented multi-model compensation module. The segmented multi-model compensation module learns the timbres of a source musical instrument and a target musical instrument, and establishes a multi-segment model of the sound feature of the source musical instrument and a multi-segment model of the sound feature of the target musical instrument. The sound feature is based on the maximum amplitude of the audio signal obtained by playing the same sequence on the target musical instrument and on the source musical instrument, and the audio signal of the sequence is divided into multiple segments according to amplitude. The sound feature includes the frequency spectra of the notes within each amplitude range. The segmented multi-model compensation module establishes a multi-model structure with time-varying gain.
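The amplitude-based segmentation in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the number of segments, the equal-width amplitude bands, and the FFT size are all assumptions.

```python
import numpy as np

def segment_by_amplitude(signal, n_segments=3):
    """Assign each sample to an amplitude range relative to the signal's
    maximum amplitude, as in the multi-segment model."""
    peak = np.max(np.abs(signal))
    # Illustrative equal-width amplitude bands from 0 up to the peak.
    edges = np.linspace(0.0, peak, n_segments + 1)
    # Band index 0 .. n_segments-1 per sample, via the interior edges.
    return np.digitize(np.abs(signal), edges[1:-1])

def per_segment_spectra(signal, bands, n_segments=3, n_fft=64):
    """Compute a magnitude spectrum from the samples falling in each
    amplitude band (the 'frequency spectra within each amplitude range')."""
    spectra = []
    for k in range(n_segments):
        seg = signal[bands == k]
        if len(seg) == 0:
            spectra.append(np.zeros(n_fft // 2 + 1))
        else:
            spectra.append(np.abs(np.fft.rfft(seg, n=n_fft)))
    return spectra
```

Fitting one such model for the source instrument and one for the target instrument would then allow per-band spectral compensation between the two.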

METHOD, SYSTEM AND ARTIFICIAL NEURAL NETWORK

Disclosed is a method comprising: obtaining a target spectrum; obtaining a set of non-target spectra, the set comprising one or more non-target spectra; summing the target spectrum and the set of non-target spectra to obtain a mixture spectrum; and training an artificial neural network by using the mixture spectrum as input to the network and by using a spectrum based on the target spectrum as the desired output of the network.
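The training scheme described here — summing a target spectrum with non-target spectra and training a network to recover the target from the mixture — can be sketched with a minimal stand-in model. Everything below (the synthetic spectra, the single linear layer, the learning rate) is an illustrative assumption; the abstract does not specify a network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_examples = 16, 200

# Synthetic magnitude spectra standing in for real target and
# non-target sounds.
targets = rng.random((n_examples, n_bins))
non_targets = rng.random((n_examples, n_bins))

# Sum target and non-target spectra to obtain the mixture spectrum
# (network input); the target spectrum serves as the desired output.
mixtures = targets + non_targets

# Minimal stand-in for the artificial neural network: one linear layer
# trained by gradient descent on mean-squared error.
W = np.zeros((n_bins, n_bins))
lr = 0.05
for _ in range(2000):
    pred = mixtures @ W
    grad = mixtures.T @ (pred - targets) / n_examples
    W -= lr * grad

mse = np.mean((mixtures @ W - targets) ** 2)
```

After training, applying `W` to an unseen mixture yields an estimate of the target spectrum alone, which is the essence of the disclosed separation scheme.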

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes a memory that stores a machine-learning trained acoustic model mimicking the voice of a singer, and at least one processor. When a vocoder mode is on, prescribed lyric data and pitch data corresponding to a user operation of an operation element of the musical instrument are input to the trained acoustic model, and inferred singing voice data that infers the singing voice of the singer is synthesized on the basis of acoustic feature data output by the trained acoustic model and on the basis of instrument sound waveform data synthesized in accordance with the pitch data corresponding to the user operation of the operation element. When the vocoder mode is off, the inferred singing voice data is synthesized based on the acoustic feature data alone, without using the instrument sound waveform data.
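The two synthesis paths in this abstract can be sketched as below. The filtering scheme — imposing the model's spectral envelope on the instrument waveform when the vocoder mode is on, and zero-phase synthesis directly from the features when it is off — is a hypothetical simplification; the abstract does not specify the vocoder algorithm.

```python
import numpy as np

def synthesize(features, instrument_wave=None, vocoder_mode=False):
    """Hypothetical sketch of the two synthesis paths.

    `features` stands in for the acoustic feature data produced by the
    trained acoustic model (here, a per-bin magnitude envelope)."""
    if vocoder_mode and instrument_wave is not None:
        # Vocoder on: impose the model's spectral envelope on the
        # instrument sound waveform, which acts as the excitation.
        spectrum = np.fft.rfft(instrument_wave)
        shaped = spectrum * features[: spectrum.shape[0]]
        return np.fft.irfft(shaped, n=len(instrument_wave))
    # Vocoder off: synthesize directly from the acoustic features,
    # e.g. by treating them as a zero-phase magnitude spectrum.
    return np.fft.irfft(features)
```

With an all-ones envelope the vocoder path passes the instrument waveform through unchanged, which makes the role of the features as a shaping filter easy to see.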

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor. In accordance with a user operation on one of a plurality of operation elements, the at least one processor inputs prescribed lyric data and pitch data corresponding to the user operation to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data synthesized in accordance with the pitch data corresponding to the user operation.

Information processing method and information processing system for sound synthesis utilizing identification data associated with sound source and performance styles

An information processing system includes at least one memory storing a program and at least one processor. The at least one processor executes the program to: input, into a synthesis model generated by machine learning, sound source data obtained by encoding first identification data representative of a sound source, style data obtained by encoding second identification data representative of a performance style, and synthesis data representative of sounding conditions; generate, using the synthesis model, feature data representative of acoustic features of a target sound of the sound source to be generated in the performance style and according to the sounding conditions; and generate an audio signal corresponding to the target sound using the generated feature data.
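The conditioning described in this abstract — encoding identification data for the sound source and the performance style and feeding them, together with the sounding conditions, into the synthesis model — can be sketched as below. The embedding tables, the dimensions, and the `synthesis_input` helper are illustrative assumptions, not names from the disclosure.

```python
import numpy as np

# Hypothetical sizes; the abstract does not specify any dimensions.
N_SOURCES, N_STYLES, EMB_DIM = 4, 3, 8

rng = np.random.default_rng(0)
# One learned embedding table per kind of identification data, standing
# in for the encoded "sound source data" and "style data".
source_table = rng.normal(size=(N_SOURCES, EMB_DIM))
style_table = rng.normal(size=(N_STYLES, EMB_DIM))

def synthesis_input(source_id, style_id, synthesis_data):
    """Concatenate the encoded sound source id, the encoded performance
    style id, and the synthesis data (sounding conditions such as pitch
    and duration) into one input vector for the synthesis model."""
    return np.concatenate([source_table[source_id],
                           style_table[style_id],
                           synthesis_data])
```

The synthesis model would map this combined vector to acoustic feature data, from which the audio signal of the target sound is then generated.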
