Patent classifications
G10H2210/231
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model.
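The flow this abstract describes can be sketched as follows. This is an illustrative outline only, not the patented implementation: all class and function names (`TrainedAcousticModel`, `synthesize`, `on_key_press`) and the flat placeholder features are assumptions; a real system would run a trained neural model and a vocoder here.

```python
# Sketch: a key press supplies pitch data, a lyric syllable supplies lyric
# data; a trained acoustic model maps both to acoustic features (spectral
# envelope + fundamental frequency), from which voice samples are synthesized.

from dataclasses import dataclass

@dataclass
class AcousticFeatures:
    spectral_envelope: list   # spectral feature frames (placeholder)
    f0: list                  # fundamental-frequency contour in Hz

class TrainedAcousticModel:
    """Stand-in for a model trained on score data and a singer's voice."""
    def infer(self, lyric: str, midi_note: int) -> AcousticFeatures:
        # A real model would run network inference; we emit flat frames.
        freq = 440.0 * 2 ** ((midi_note - 69) / 12)   # MIDI note -> Hz
        return AcousticFeatures(spectral_envelope=[0.0] * 80, f0=[freq] * 10)

def synthesize(features: AcousticFeatures) -> list:
    # Placeholder for a vocoder that turns features into a waveform.
    return [f / 1000.0 for f in features.f0]

def on_key_press(model: TrainedAcousticModel, lyric: str, midi_note: int) -> list:
    features = model.infer(lyric, midi_note)   # lyric + pitch -> features
    return synthesize(features)                # features -> voice samples

samples = on_key_press(TrainedAcousticModel(), "la", 69)  # A4 = 440 Hz
```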
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.
EFFECT ADDING APPARATUS, METHOD, AND ELECTRONIC MUSICAL INSTRUMENT
An effect adding apparatus includes: at least one first operation element on which a first user operation is performed; a plurality of second operation elements on which a second user operation is performed after the first user operation; and at least one processor, in which the at least one processor determines two or more effects including at least a first effect and a second effect, from a plurality of effects in which each of the effects is associated with a plurality of parameters, based on the first user operation on the at least one first operation element, and determines a parameter associated with each of the plurality of second operation elements, based on data indicating significance of each of a plurality of first parameters associated with the first effect determined and data indicating significance of each of a plurality of second parameters associated with the second effect determined.
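One way to read the parameter-assignment step above: after the first operation selects two effects, the parameters of both effects are ranked by their significance data, and the most significant ones are mapped onto the second operation elements (e.g. knobs). The sketch below is an assumption-laden illustration, not the claimed method; the effect table, significance values, and `assign_knobs` helper are all invented for the example.

```python
# Each effect's parameters carry a significance value in [0, 1]; the
# highest-significance parameters across both selected effects win the knobs.

EFFECTS = {
    "delay":  {"time": 0.9, "feedback": 0.8, "mix": 0.6, "tone": 0.3},
    "reverb": {"size": 0.95, "decay": 0.7, "pre_delay": 0.4, "damp": 0.2},
}

def assign_knobs(first_effect: str, second_effect: str, num_knobs: int):
    """Map the highest-significance parameters of both effects to knobs."""
    candidates = []
    for effect in (first_effect, second_effect):
        for param, significance in EFFECTS[effect].items():
            candidates.append((significance, effect, param))
    candidates.sort(reverse=True)          # highest significance first
    return [(effect, param) for _, effect, param in candidates[:num_knobs]]

knobs = assign_knobs("delay", "reverb", 4)
# With the sample table above, the four knobs get:
# reverb.size, delay.time, delay.feedback, reverb.decay
```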
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a machine-learning trained acoustic model that mimics the voice of a singer; and at least one processor. When a vocoder mode is on, prescribed lyric data and pitch data corresponding to a user operation of an operation element of the musical instrument are inputted to the trained acoustic model, and inferred singing voice data that infers a singing voice of the singer is synthesized on the basis of acoustic feature data output by the trained acoustic model and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element. When the vocoder mode is off, the inferred singing voice data is synthesized based on the acoustic feature data without using the instrument sound waveform data.
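The two-mode behavior in this abstract reduces to a branch at synthesis time. The following is a minimal sketch under assumed names (`render_note`, a dict of features, gain-style shaping): with the vocoder mode on, the instrument's own waveform is shaped by the voice's spectral features; with it off, the voice is synthesized from the acoustic features alone.

```python
def render_note(features, instrument_wave, vocoder_mode):
    if vocoder_mode:
        # Vocoder on: apply the voice's spectral envelope to the
        # instrument waveform, sample by sample (crude stand-in for
        # real spectral filtering).
        return [s * g for s, g in zip(instrument_wave, features["envelope"])]
    # Vocoder off: synthesize directly from the acoustic features.
    return features["envelope"][:]  # stand-in for full vocoder synthesis

features = {"envelope": [0.5, 1.0, 0.5, 0.25]}   # toy spectral gains
instrument_wave = [1.0, -1.0, 1.0, -1.0]          # toy instrument waveform
voiced = render_note(features, instrument_wave, True)    # mode on
plain = render_note(features, instrument_wave, False)    # mode off
```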
Music effect pedal
A method of coating a music effect pedal with a phosphorescent coating.
Gesture pad and integrated transducer-processor unit for use with stringed instrument
An integrated transducer-processor unit for use with a stringed instrument having one or more strings. When the instrument is played, the unit produces electrical output signals for conversion into musical sounds. A transducer converts mechanical vibrations of each of the strings into corresponding electrical signals, and a processor processes the electrical signals to produce selected analog or digital output signals for conversion into musical sounds. The unit processor is integrated with the transducer into a pickup, for mounting on the instrument in proximity to the strings without modification of the instrument. In addition, a gesture pad-processor system provides an interface for a user to send control signals to a device to control at least one function of the device. A touch pad receives positional and pressure inputs entered by the user making a selected predefined manual gesture for conversion into a control signal by the system processor.
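The gesture-pad interface described above can be sketched as a small classify-then-map pipeline: positional and pressure samples are matched against predefined gestures, and each recognized gesture is converted into a control signal. Everything here is an illustrative assumption (gesture names, thresholds, and the MIDI-style control-change mapping are invented), not the patented system.

```python
def classify_gesture(touches):
    """touches: list of (x, y, pressure) samples from the pad, in [0, 1]."""
    if not touches:
        return None
    dx = touches[-1][0] - touches[0][0]   # net horizontal movement
    dy = touches[-1][1] - touches[0][1]   # net vertical movement
    if abs(dx) > abs(dy) and abs(dx) > 0.2:
        return "swipe_right" if dx > 0 else "swipe_left"
    if abs(dy) > 0.2:
        return "swipe_up" if dy > 0 else "swipe_down"
    return "tap"

GESTURE_TO_CONTROL = {           # gesture -> (controller number, value)
    "swipe_right": (7, 127),     # e.g. volume up
    "swipe_left":  (7, 0),       # e.g. volume down
    "tap":         (64, 127),    # e.g. sustain on
}

def control_signal(touches):
    """Convert a gesture on the pad into a device control message."""
    return GESTURE_TO_CONTROL.get(classify_gesture(touches))

msg = control_signal([(0.1, 0.5, 0.8), (0.9, 0.5, 0.7)])  # rightward swipe
```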