Patent classifications
G10H7/004
WESTERN AND NATIONAL ENSEMBLE INSTRUMENT
The invention discloses a Western and national ensemble instrument which comprises an instrument body and, arranged on the instrument body, a key assembly, a matrix circuit, a main board, a function control device, and a timbre control device. The key assembly is electrically connected with the matrix circuit; the matrix circuit is electrically connected with the main board; and the main board is electrically connected with the function control device. The main board comprises an electrically connected timbre device in which a plurality of Western musical instrument timbres and a plurality of national opera timbres are packaged. Because the timbre control device is arranged on the panel of the instrument body and is electrically connected with the function control device through the main board, it can invoke the timbre information corresponding to a control and perform any combination and superposition of timbres.
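The abstract above describes a key assembly wired through a matrix circuit to a main board that layers timbres. A minimal sketch of how such a key matrix could be decoded and how timbres could be superimposed is below; all function names and data shapes are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch: decode a key matrix into pressed-key coordinates,
# then layer any combination of timbres on a note (the abstract's
# "superposition control"). Names and representations are invented.

def scan_matrix(rows):
    """Scan a key matrix given per-row column states (True = pressed).

    Returns (row, col) coordinates of pressed keys, which a main board
    could translate into note-on events.
    """
    pressed = []
    for r, cols in enumerate(rows):
        for c, state in enumerate(cols):
            if state:
                pressed.append((r, c))
    return pressed

def layer_timbres(note, timbres):
    """Pair one note with every selected timbre so the voices sound together."""
    return [(note, t) for t in timbres]
```

In a real instrument the scan would run on a timer interrupt and the (row, col) pairs would index into a note/timbre lookup table; this sketch only shows the data flow.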
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
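The claimed flow can be sketched as: for each pitch in the specified chord, obtain waveform data, combine it with acoustic feature data inferred from the lyric, and mix the per-pitch results into one polyphonic output. The stand-ins `infer_features` and `apply_features` below are hypothetical placeholders for the trained model and the synthesis step; the data are toy integer "samples".

```python
# Illustrative sketch of chord-driven polyphonic singing synthesis.
# infer_features / apply_features are invented placeholders, not the
# patent's trained model or vocoder.

def infer_features(lyric):
    # placeholder for the trained acoustic model's feature output
    return [ord(ch) % 8 for ch in lyric]

def apply_features(waveform, features):
    # placeholder spectral shaping: scale samples by the feature envelope
    return [w * f for w, f in zip(waveform, features)]

def synthesize_chord(lyric, waveforms_by_pitch):
    """Shape each pitch's waveform with the lyric's features, then mix."""
    features = infer_features(lyric)
    voices = [apply_features(w, features) for w in waveforms_by_pitch.values()]
    # sum per-sample to produce one polyphonic singing voice
    return [sum(samples) for samples in zip(*voices)]
```

The key structural point the abstract claims is that the same acoustic feature data is applied to every chord pitch's waveform before mixing.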
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
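This variant claims converting waveform data of a first tone color into waveform data of a second tone color by way of model-inferred acoustic features. A hedged sketch of that reshaping step follows; `features_for` and `reshape` are invented placeholders standing in for the trained model and the generation step.

```python
# Hypothetical sketch of tone-color conversion: a first-tone waveform is
# reshaped by acoustic features inferred from the lyric, yielding a
# waveform of a different (second) tone color. Not the patent's method.

def features_for(lyric):
    # stand-in for the trained model's acoustic feature output
    return [len(lyric)] * 3

def reshape(first_tone_wave, features):
    # apply the acoustic features to the first-tone waveform to produce
    # the second-tone waveform carrying the singer's vocal character
    return [w + f for w, f in zip(first_tone_wave, features)]
```

The point of the claim is that the output tone color differs from the input one: the model's features, not the source waveform alone, determine the voiced result.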
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes: a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has learned singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if an operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
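The per-timing branching claimed above reduces to a simple override rule: use the stored score pitch unless the player pressed a key at that timing. A minimal sketch, with all names assumed for illustration:

```python
# Hypothetical sketch of the claimed per-timing pitch selection.

def select_pitch(score_pitch, key_pitch):
    """Use the score pitch unless a key press (key_pitch) overrides it."""
    return key_pitch if key_pitch is not None else score_pitch

def render(lyrics, score_pitches, key_pitches):
    # pair each lyric with the effective pitch for that timing
    return [(lyric, select_pitch(sp, kp))
            for lyric, sp, kp in zip(lyrics, score_pitches, key_pitches)]
```

Each (lyric, pitch) pair would then be sent to the trained model to obtain the singing voice feature for that timing.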
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model.
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.
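The second variant above mixes only a portion of the model's acoustic feature data (e.g. a vocal envelope) with instrument sound waveform data synthesized at the played pitch. A toy sketch of that combination follows; both functions are hypothetical placeholders, not the patent's implementation.

```python
# Hypothetical sketch: impose a model-derived vocal envelope on an
# instrument waveform synthesized at the played pitch.

def instrument_wave(pitch, n):
    # trivial placeholder oscillator keyed to the played pitch
    return [(pitch + i) % 4 for i in range(n)]

def mix_voice_and_instrument(envelope, pitch):
    wave = instrument_wave(pitch, len(envelope))
    # shape the instrument waveform by the vocal envelope, sample-wise
    return [e * w for e, w in zip(envelope, wave)]
```

The distinction from the preceding abstract is that here the output carries instrument waveform content shaped by the singer model, rather than being synthesized from acoustic features alone.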
Music control device and method of operating same
Disclosed methods may involve causing a music control device to associate a plurality of controls with respective ones of a plurality of parameters. Music control devices and computer-readable media are also disclosed.
Virtual musical instrument based on UWB chip identification
The present disclosure provides a virtual musical instrument based on Ultra-Wide Band (UWB) chip identification, including at least one motion capture component and a conversion component. A UWB transmitting chip is arranged in each motion capture component, and the motion trajectory of that component is captured through the UWB transmitting chip. A UWB receiving chip is arranged in the conversion component and receives the signals of the UWB transmitting chips in the motion capture components. The virtual musical instrument further includes a sound output component wirelessly connected to the conversion component; the sound output component receives the signal of the conversion component and plays the audio of the corresponding region.
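The conversion component's core job, as described, is to map a captured position to a region and trigger that region's audio. A minimal sketch of such a region lookup, with invented region names and bounds (the patent does not disclose a concrete mapping):

```python
# Hypothetical region lookup for a UWB-tracked position. Region names
# and boundaries are illustrative assumptions.

REGIONS = {
    "low": (0.0, 0.33),
    "mid": (0.33, 0.66),
    "high": (0.66, 1.0),
}

def region_for(x):
    """Return the region name whose [lo, hi) span contains position x."""
    for name, (lo, hi) in REGIONS.items():
        if lo <= x < hi:
            return name
    return None
```

In practice the position would be a 2D or 3D coordinate derived from UWB time-of-flight ranging, and each region would map to a sample or note the sound output component plays.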