Patent classification: G10H2250/625
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
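The flow described above (lyric data → trained model → acoustic feature data, then those features combined with the first-tone-color waveform to produce a second tone color) can be sketched in miniature. This is a toy stand-in, not the patented implementation: the "trained model" is replaced by a hypothetical envelope generator, and the acoustic features are reduced to per-frame amplitudes.

```python
import math

def acoustic_features_from_lyrics(lyrics, n_frames):
    """Stand-in for the trained model: map a lyric to a per-frame
    amplitude envelope (a toy 'acoustic feature')."""
    # Hypothetical mapping: envelope depth keyed to syllable count.
    depth = min(len(lyrics), 9) / 10.0
    return [1.0 - depth * abs(math.sin(math.pi * i / n_frames))
            for i in range(n_frames)]

def synthesize(instrument_wave, features, frame_len):
    """Shape the first-tone-color waveform with the acoustic features
    to yield a waveform of a second, voice-like tone color."""
    out = []
    for i, s in enumerate(instrument_wave):
        out.append(s * features[min(i // frame_len, len(features) - 1)])
    return out

# A 440 Hz 'first tone color' waveform: 8 frames of 64 samples.
sr, frame_len, n_frames = 8000, 64, 8
wave = [math.sin(2 * math.pi * 440 * t / sr)
        for t in range(frame_len * n_frames)]
feats = acoustic_features_from_lyrics("la", n_frames)
voice = synthesize(wave, feats, frame_len)
```

The key structural point is that the output tone color is derived from both the model's acoustic features and the original instrument waveform, not from the model alone.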
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
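The polyphonic variant applies one set of acoustic features (for the current lyric) to a separate waveform per chord pitch, then mixes the results. The sketch below uses a hypothetical chord table and a toy feature envelope; no names here come from the patent.

```python
import math

CHORDS = {"Cmaj": [261.63, 329.63, 392.00]}  # hypothetical chord table (Hz)

def pitch_waveform(freq, n, sr=8000):
    """One piece of waveform data for a single chord pitch."""
    return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

def apply_features(wave, envelope):
    """Impose the model's acoustic-feature envelope on one pitch."""
    n = len(wave)
    return [s * envelope[min(i * len(envelope) // n, len(envelope) - 1)]
            for i, s in enumerate(wave)]

def polyphonic_voice(chord, envelope, n=512):
    """Synthesize each chord pitch with the features, then mix."""
    parts = [apply_features(pitch_waveform(f, n), envelope)
             for f in CHORDS[chord]]
    return [sum(col) / len(parts) for col in zip(*parts)]

env = [0.2, 0.8, 1.0, 0.6]          # toy acoustic features for one lyric
out = polyphonic_voice("Cmaj", env)
```

The design choice to synthesize per pitch and mix afterwards is what makes the output polyphonic: each chord note carries the same sung lyric.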
ARTIFICIAL NEURAL NETWORK
A computer-implemented method of training an artificial neural network (ANN), by generating one or more learned parameters for use during a subsequent inference phase of the trained ANN, comprises: providing training data representing first and second input signals, the second input signal exhibiting, relative to the first input signal, one or more transformations selected from a set of transformations; using the ANN, and in response to the one or more parameters, generating a magnitude and phase representation of each of the first and second input signals; and training the one or more parameters in dependence upon a constraint which causes the magnitude representation of the first input signal and the magnitude representation of the second input signal to tend to become more similar to one another, the training step comprising: detecting an error signal; and updating the one or more parameters in dependence upon the error signal.
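The magnitude-similarity constraint can be illustrated numerically. A circular time shift (one plausible member of the "set of transformations") changes a signal's DFT phases but leaves its DFT magnitudes intact, so a loss over magnitude representations is already near zero for such a pair. The plain DFT below is a stand-in for the ANN's learned magnitude/phase front end, not the claimed method.

```python
import math, cmath

def dft(x):
    """Naive discrete Fourier transform (fine for small n)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def mag_phase(x):
    spec = dft(x)
    return [abs(c) for c in spec], [cmath.phase(c) for c in spec]

def magnitude_constraint_loss(a, b):
    """Penalty that pushes the two magnitude representations together,
    as in the training constraint described above."""
    ma, _ = mag_phase(a)
    mb, _ = mag_phase(b)
    return sum((p - q) ** 2 for p, q in zip(ma, mb)) / len(ma)

n = 32
x = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
shifted = x[5:] + x[:5]   # a time-shift 'transformation' of x
loss = magnitude_constraint_loss(x, shifted)
```

During actual training, this loss would be the error signal used to update the learned parameters; here it merely demonstrates why a magnitude representation is the natural target for transformation invariance.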
INFORMATION PROCESSING DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND INFORMATION PROCESSING METHOD
A voice synthesis device includes at least one processor implementing a first voice model and a second voice model different from the first voice model, the at least one processor performing the following: receiving data indicating a specified pitch; causing the first voice model to output first data and the second voice model to output second data; and generating and outputting third data corresponding to the specified pitch based on the first data and the second data.
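One simple way to generate "third data" from the outputs of two voice models is a pitch-dependent cross-fade. The models, the transition range, and the scalar outputs below are all hypothetical illustrations, not the patent's actual models.

```python
def model_a(pitch):
    """First voice model (hypothetical): one scalar of output data per pitch."""
    return 0.5 * pitch

def model_b(pitch):
    """Second voice model, with a different character."""
    return 0.5 * pitch + 10.0

def third_data(pitch, lo=48, hi=72):
    """Cross-fade the two models' outputs according to where the
    specified pitch sits in a hypothetical transition range [lo, hi]."""
    w = min(max((pitch - lo) / (hi - lo), 0.0), 1.0)
    return (1.0 - w) * model_a(pitch) + w * model_b(pitch)

low, mid, high = third_data(48), third_data(60), third_data(72)
```

At the range edges the output equals one model alone; in between it blends both, which is one reading of "based on the first data and the second data".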
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
An information processing system includes at least one memory storing a program and at least one processor. The at least one processor implements the program to input a piece of sound source data representative of a sound source, a piece of style data representative of a performance style, and synthesis data representative of sounding conditions into a synthesis model generated by machine learning, and to generate, using the synthesis model, feature data representative of acoustic features of a target sound of the sound source to be generated in the performance style and according to the sounding conditions, and to generate an audio signal corresponding to the target sound using the generated feature data.
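The three-input structure above (sound-source data + style data + sounding conditions → feature data → audio signal) can be mimicked with a toy function in place of the machine-learned synthesis model. Every embedding, field name, and formula here is an assumption for illustration only.

```python
import math

def synthesis_model(source, style, conditions):
    """Toy stand-in for the learned synthesis model: combine a
    sound-source embedding, a style embedding, and per-note sounding
    conditions into per-note feature data (here, amplitudes)."""
    base = sum(source) / len(source)
    colour = sum(style) / len(style)
    return [base + colour * c["velocity"] for c in conditions]

def render(features, conditions, sr=8000, dur=64):
    """Turn feature data plus pitch conditions into an audio signal."""
    signal = []
    for amp, c in zip(features, conditions):
        signal += [amp * math.sin(2 * math.pi * c["freq"] * t / sr)
                   for t in range(dur)]
    return signal

source = [0.2, 0.4]                       # hypothetical sound-source data
style = [0.5, 0.7]                        # hypothetical style data
conds = [{"freq": 440.0, "velocity": 0.8},
         {"freq": 660.0, "velocity": 0.5}]
feats = synthesis_model(source, style, conds)
audio = render(feats, conds)
```

The two-stage split mirrors the claim: feature generation is conditioned on source, style, and sounding conditions, and audio rendering consumes only the generated features plus the conditions.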
Timbre fitting method and system based on time-varying multi-segment spectrum
The disclosure provides a timbre fitting method and system based on a time-varying multi-segment spectrum. The system includes an input device for obtaining audio signals of musical instruments and a segmented multi-model compensation module. The segmented multi-model compensation module learns the timbres of a source musical instrument and a target musical instrument, and establishes a multi-segment model of the sound feature of each instrument. The sound feature is based on the maximum amplitude of the audio signal of the same sequence played on the target musical instrument and the source musical instrument, and the audio signal of the sequence is divided into multiple segments according to amplitude. The sound feature includes the frequency spectra of notes within each amplitude range. The segmented multi-model compensation module establishes a multi-model structure with time-varying gain.
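The amplitude-segmented modelling idea can be sketched as follows: bucket samples by amplitude range relative to the signal's peak, then fit one source-to-target correction per bucket. Real per-segment models would operate on frequency spectra; this stand-in uses a scalar gain per segment purely to show the segmentation structure.

```python
def segment_by_amplitude(samples, n_bins=4):
    """Assign each sample's |amplitude| to one of n_bins ranges,
    relative to the signal's maximum amplitude."""
    peak = max(abs(s) for s in samples) or 1.0
    return [min(int(abs(s) / peak * n_bins), n_bins - 1) for s in samples]

def fit_segment_gains(source, target, n_bins=4):
    """Per-amplitude-range gain mapping source timbre toward target
    timbre (a toy stand-in for the per-segment spectral models)."""
    bins = segment_by_amplitude(source, n_bins)
    gains = {}
    for b in range(n_bins):
        idx = [i for i, k in enumerate(bins) if k == b]
        if idx:
            num = sum(abs(target[i]) for i in idx)
            den = sum(abs(source[i]) for i in idx) or 1.0
            gains[b] = num / den
    return gains

src = [0.1, 0.4, 0.9, -0.2, -0.8, 0.05]   # same sequence on source instrument
tgt = [0.2, 0.8, 1.8, -0.4, -1.6, 0.10]   # same sequence on target instrument
gains = fit_segment_gains(src, tgt)
```

Because the toy target is exactly the source doubled, every occupied amplitude segment learns the same gain; with real recordings each segment would learn its own spectral correction, which is the point of the multi-segment structure.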
Singing expression transfer system
Disclosed are a system and a method for singing expression transplantation. A singing expression transplantation method performed by a singing expression transplantation system according to an embodiment may comprise the steps of: synchronizing a first sound source and a second sound source that contain different pieces of voice information for an identical song; modifying the pitch of the first sound source on the basis of pitch information extracted from each of the synchronized first and second sound sources; and extracting volume information from each of the first and second sound sources and adjusting the volume of the pitch-modified first sound source according to the extracted volume information.
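Of the three steps (synchronize, modify pitch, adjust volume), the final volume-adjustment step is the simplest to sketch: extract a per-frame volume measure (RMS here, as an assumed stand-in for the patent's volume information) from both sources and rescale each frame of the first source to follow the second.

```python
def frame_rms(x, frame_len):
    """Per-frame RMS as a toy 'volume information' extractor."""
    frames = [x[i:i + frame_len] for i in range(0, len(x), frame_len)]
    return [(sum(s * s for s in f) / len(f)) ** 0.5 for f in frames]

def match_volume(first, second, frame_len=4):
    """Scale each frame of the first sound source so its volume
    follows the volume information extracted from the second."""
    v1, v2 = frame_rms(first, frame_len), frame_rms(second, frame_len)
    out = []
    for i, s in enumerate(first):
        f = i // frame_len
        gain = v2[f] / v1[f] if v1[f] > 1e-12 else 0.0
        out.append(s * gain)
    return out

a = [0.5, -0.5, 0.5, -0.5, 0.1, -0.1, 0.1, -0.1]   # first sound source
b = [0.2, -0.2, 0.2, -0.2, 0.4, -0.4, 0.4, -0.4]   # second sound source
adj = match_volume(a, b)
```

The synchronization step is what makes this frame-by-frame correspondence valid: both sources must be aligned to the same song positions before per-frame gains are meaningful.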
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.