
Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

An electronic musical instrument includes an operation unit that receives a user performance; and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
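The claimed dataflow — one lyric, several chord pitches, one synthesized voice per pitch, mixed into a polyphonic output — can be sketched roughly as follows. This is only an illustrative reading of the abstract, not the patented implementation; `model` and `get_waveform` stand in for the trained model and the per-pitch waveform source, whose real interfaces the abstract does not specify, and the per-sample multiplication is a placeholder for the claimed synthesis step.

```python
def synthesize_polyphonic(lyric, chord_pitches, model, get_waveform):
    """Hypothetical sketch of the abstract's pipeline: the trained model
    maps lyric data to acoustic feature data (modeled here as a per-sample
    envelope), which is combined with each chord pitch's waveform, and the
    resulting voices are mixed into one polyphonic output."""
    envelope = model(lyric)            # acoustic feature data, one value per sample
    n = len(envelope)
    mixed = [0.0] * n
    for pitch in chord_pitches:
        wave = get_waveform(pitch)     # waveform data for this chord pitch
        for i in range(n):
            mixed[i] += wave[i] * envelope[i]   # "synthesize" step (assumed form)
    # scale so the polyphonic mix stays in range
    return [s / len(chord_pitches) for s in mixed] if chord_pitches else mixed
```

With stub callables for the model and waveform source, a two-note chord yields the averaged, envelope-shaped mix of both voices.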

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes: a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained on singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if an operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
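The per-timing branch in this abstract reduces to a simple selection rule: follow the stored pitch data unless the player presses a key, in which case the played pitch wins. A minimal sketch, with `feature_model` as a hypothetical stand-in for the trained model's lookup:

```python
def pitch_at_timing(i, lyric_data, pitch_data, key_pressed, feature_model):
    """Sketch of the claimed per-timing logic: the lyric always comes from
    the stored lyric data; the pitch comes from the operation unit when it
    is operated (key_pressed is a pitch), else from the stored pitch data."""
    lyric = lyric_data[i]
    pitch = key_pressed if key_pressed is not None else pitch_data[i]
    return feature_model(lyric, pitch)   # singing voice feature to synthesize
```

So with no key pressed the instrument sings the song as stored, and a pressed key substitutes its pitch at that timing only.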

Musical instrument effects processor
09812106 · 2017-11-07

A method in accord with certain implementations involves, at a data interface of a musical instrument effects processor, receiving an extracted characteristic of an audible sound that is captured at a microphone; transferring the extracted characteristic to a digital signal processor residing in the musical instrument effects processor; receiving input signals at an input to the musical instrument effects processor; at the digital signal processor of the musical instrument effects processor, modifying the received input signals using the extracted characteristic to create the electronic audio effect; and outputting the modified input signals as an output signal from the musical instrument effects processor. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
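One plausible concrete reading of "modifying the received input signals using the extracted characteristic" is amplitude modulation: the characteristic extracted from the microphone sound (assumed here to be an amplitude envelope) shapes the instrument's input signal sample by sample. The abstract does not commit to any particular characteristic, so this is an illustrative sketch only:

```python
def apply_envelope_effect(input_samples, extracted_envelope):
    """Hypothetical DSP step: modulate the instrument's input signal with
    an amplitude envelope extracted from a microphone-captured sound."""
    n = min(len(input_samples), len(extracted_envelope))
    return [input_samples[i] * extracted_envelope[i] for i in range(n)]
```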

Gesture pad and integrated transducer-processor unit for use with stringed instrument
09799316 · 2017-10-24 ·

An integrated transducer-processor unit for use with a stringed instrument having one or more strings. When the instrument is played, the unit produces electrical output signals for conversion into musical sounds. A transducer converts mechanical vibrations of each of the strings into corresponding electrical signals, and a processor processes the electrical signals to produce selected analog or digital output signals for conversion into musical sounds. The unit processor is integrated with the transducer into a pickup, for mounting on the instrument in proximity to the strings without modification of the instrument. In addition, a gesture pad-processor system provides an interface for a user to send control signals to a device to control at least one function of the device. A touch pad receives positional and pressure inputs entered by the user making a selected predefined manual gesture for conversion into a control signal by the system processor.

EFFECT ADDING APPARATUS, METHOD, AND ELECTRONIC MUSICAL INSTRUMENT
20220020347 · 2022-01-20

An effect adding apparatus includes: at least one first operation element on which a first user operation is performed; a plurality of second operation elements on which a second user operation is performed after the first user operation; and at least one processor, in which the at least one processor determines two or more effects including at least a first effect and a second effect, from a plurality of effects in which each of the effects is associated with a plurality of parameters, based on the first user operation on the at least one first operation element, and determines a parameter associated with each of the plurality of second operation elements, based on data indicating significance of each of a plurality of first parameters associated with the first effect determined and data indicating significance of each of a plurality of second parameters associated with the second effect determined.
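The parameter-assignment step described here — ranking the parameters of the two selected effects by their significance data and mapping the most significant ones onto the available second operation elements (e.g. knobs) — can be sketched as below. The ranking-and-truncation strategy is an assumption; the abstract only says the assignment is "based on" the significance data.

```python
def assign_parameters(knob_count, first_params, second_params):
    """Hypothetical assignment: first_params and second_params are lists of
    (name, significance) pairs for the first and second effects. Pool both
    lists, sort by descending significance, and map the top entries onto
    the available second operation elements."""
    ranked = sorted(first_params + second_params,
                    key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:knob_count]]
```

With three knobs, a distortion's "drive" (0.9) and a delay's "mix" (0.8) and "time" (0.6) would claim the knobs ahead of a less significant "tone" (0.4).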

Effect adding apparatus, method, and electronic musical instrument
11170746 · 2021-11-09

An effect adding apparatus includes: at least one first operation element on which a first user operation is performed; a plurality of second operation elements on which a second user operation is performed after the first user operation; and at least one processor, in which the at least one processor determines two or more effects including at least a first effect and a second effect, from a plurality of effects in which each of the effects is associated with a plurality of parameters, based on the first user operation on the at least one first operation element, and determines a parameter associated with each of the plurality of second operation elements, based on data indicating significance of each of a plurality of first parameters associated with the first effect determined and data indicating significance of each of a plurality of second parameters associated with the second effect determined.
