Patent classification: G10H2210/165
SOUND CONTROL DEVICE, SOUND CONTROL METHOD, AND SOUND CONTROL PROGRAM
A sound control device includes: a reception unit that receives a start instruction indicating a start of output of a sound; a reading unit that reads a control parameter that determines an output mode of the sound, in response to the start instruction being received; and a control unit that causes the sound to be output in a mode according to the read control parameter.
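The three-unit structure in the abstract (reception, reading, control) can be sketched as follows; all class, method, and parameter names here are illustrative stand-ins, not taken from the patent:

```python
# Hypothetical sketch: a start instruction triggers a parameter lookup,
# and the looked-up parameters determine the output mode of the sound.

class SoundControlDevice:
    def __init__(self, control_parameters: dict):
        self._control_parameters = control_parameters

    def receive_start_instruction(self, note: int) -> str:
        # Reception unit: a start instruction for `note` arrives.
        params = self._read_control_parameters()  # reading unit
        return self._output_sound(note, params)   # control unit

    def _read_control_parameters(self) -> dict:
        # Reading unit: fetch the parameters that determine the output mode.
        return dict(self._control_parameters)

    def _output_sound(self, note: int, params: dict) -> str:
        # Control unit: "output" the sound in the mode the parameters select.
        return f"note={note} mode={params['envelope']} velocity={params['velocity']}"

device = SoundControlDevice({"velocity": 100, "envelope": "pluck"})
rendered = device.receive_start_instruction(60)
```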
Information processing method and apparatus
An information processing method according to the present invention includes providing, to a learner that has been trained on a specific tendency relating to a performance, first musical piece information representing the contents of a musical piece and performance information relating to past performance prior to one unit period within the musical piece, and generating, with the learner, performance information for the one unit period that is based on the specific tendency.
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
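The pipeline in this abstract (lyrics into a trained model, acoustic features combined with first-tone-color waveform data to yield a second tone color) can be sketched with stand-in functions; the trained model, the waveform source, and the synthesis step are all illustrative placeholders:

```python
# Minimal sketch of the described pipeline, assuming toy stand-ins for
# the acoustic model and the synthesis step.

def trained_model(lyrics: str) -> list[float]:
    # Stand-in acoustic model: maps each lyric character to a fake
    # spectral-envelope coefficient.
    return [ord(c) / 128.0 for c in lyrics]

def first_tone_waveform(length: int) -> list[float]:
    # Stand-in for the waveform data corresponding to the first tone color.
    return [0.5] * length

def synthesize_second_tone(features: list[float],
                           waveform: list[float]) -> list[float]:
    # Shape the first-tone waveform with the acoustic features to obtain
    # the second tone color (here: simple element-wise scaling).
    return [f * w for f, w in zip(features, waveform)]

features = trained_model("la")
voice = synthesize_second_tone(features, first_tone_waveform(len(features)))
```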
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
Information processing method and apparatus for processing performance of musical piece
Provided is an information processing apparatus that generates various kinds of time series data according to a performance tendency of a user. The information processing apparatus includes an index specifying unit 22 that specifies performance tendency information indicating a performance tendency of a user's performance of a musical piece by inputting observational performance data X representing the performance into a learned model La, and an information processing unit 23 that generates time series data Z regarding the musical piece according to the performance tendency information.
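The two stages named in the abstract (model La infers a tendency from observed data X; a second stage generates time series data Z conditioned on it) can be sketched with deterministic toy stand-ins; neither function resembles the patent's actual models:

```python
# Illustrative sketch: a "learned model" classifies a timing tendency,
# and a generation stage produces a time series reflecting it.

def learned_model_La(observed_onsets: list[float]) -> str:
    # Toy "model": classify the tendency from the average inter-onset gap.
    gaps = [b - a for a, b in zip(observed_onsets, observed_onsets[1:])]
    return "rushing" if sum(gaps) / len(gaps) < 0.5 else "steady"

def generate_time_series(tendency: str, length: int) -> list[float]:
    # Generate onset times Z whose spacing reflects the inferred tendency.
    step = 0.4 if tendency == "rushing" else 0.5
    return [i * step for i in range(length)]

tendency = learned_model_La([0.0, 0.45, 0.9, 1.35])
Z = generate_time_series(tendency, length=4)
```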
INFORMATION PROCESSING DEVICE, METHOD AND RECORDING MEDIA
An information processing device includes: an input interface; and at least one processor, configured to perform the following: selecting an instrument, a musical tone of which is to be digitally synthesized based on corresponding musical tone data, via the input interface; acquiring a parameter value that has been set for the selected instrument; generating a random number based on a random function; and changing a pitch of the musical tone of the selected instrument based on the generated random number and the acquired parameter value.
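The pitch-changing step in this abstract can be sketched as a random detune scaled by a per-instrument parameter; the function name, the cents-based parameter, and the uniform distribution are illustrative assumptions:

```python
import random

# Sketch of the described step: a random number, scaled by the parameter
# value acquired for the instrument, offsets the pitch.

def randomized_pitch(base_pitch_hz: float, detune_depth_cents: float,
                     rng: random.Random) -> float:
    # Generate a random number and scale it by the instrument's parameter
    # value to get a detune offset in cents.
    offset_cents = rng.uniform(-1.0, 1.0) * detune_depth_cents
    # Convert the cent offset to a frequency ratio (1200 cents per octave).
    return base_pitch_hz * 2 ** (offset_cents / 1200.0)

rng = random.Random(0)  # seeded for reproducibility
pitch = randomized_pitch(440.0, detune_depth_cents=10.0, rng=rng)
```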
Performance analysis method
A performance analysis method according to the present invention includes generating information related to a performance tendency of a user from observed performance information relating to a performance of a musical piece by the user and inferred performance information representing how the musical piece would be performed based on a specific tendency.
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: an operation unit; a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained on singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
INFORMATION PROCESSING METHOD, ESTIMATION MODEL CONSTRUCTION METHOD, INFORMATION PROCESSING DEVICE, AND ESTIMATION MODEL CONSTRUCTING DEVICE
An information processing device includes a memory storing instructions, and a processor configured to implement the stored instructions to execute a plurality of tasks. The tasks include: a first generating task that generates a series of fluctuations of a target sound to be synthesized based on first control data of the target sound, using a first model trained to estimate a series of fluctuations of the target sound based on the first control data, and a second generating task that generates a series of features of the target sound based on second control data of the target sound and the generated series of fluctuations, using a second model trained to estimate a series of features of the target sound based on the second control data and a series of fluctuations of the target sound.
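The two-model chain in this abstract (first model: control data to fluctuations; second model: control data plus fluctuations to features) can be sketched with deterministic stand-ins; both functions are illustrative placeholders for the trained models:

```python
# Sketch of the two generating tasks, with each trained model replaced
# by a toy deterministic function.

def first_model(control_data_1: list[float]) -> list[float]:
    # Stand-in for the model that estimates a fluctuation series
    # (e.g. vibrato depth per frame) from the first control data.
    return [0.1 * c for c in control_data_1]

def second_model(control_data_2: list[float],
                 fluctuations: list[float]) -> list[float]:
    # Stand-in for the model that estimates the feature series from the
    # second control data plus the generated fluctuations.
    return [c + f for c, f in zip(control_data_2, fluctuations)]

fluctuations = first_model([1.0, 2.0, 3.0])
features = second_model([60.0, 62.0, 64.0], fluctuations)
```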
Systems, devices, and methods for harmonic structure in digital representations of music
Systems, devices, and methods for encoding the harmonic structure of a musical composition in a digital data structure are described. Tonal and rhythmic commonalities are identified across the musical bars that make up a musical composition. Individual bars of the musical composition are each analyzed to characterize their respective harmonic fingerprints in various forms, and the respective harmonic fingerprints are compared to sort the musical bars into harmonic equivalence categories. Isomorphic mappings between hierarchical data structures that encode the musical composition based on musicality and harmony, respectively, are also described. The systems, devices, and methods for encoding the harmonic structure of a musical composition in a digital data structure have broad applicability in computer-based composition and variation of music.
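The sorting of bars into harmonic equivalence categories can be sketched by grouping bars on a fingerprint key; taking the fingerprint to be the bar's set of pitch classes is an illustrative assumption (the abstract mentions several fingerprint forms):

```python
from collections import defaultdict

# Hypothetical sketch: each bar's fingerprint is its sorted set of pitch
# classes, and bars with equal fingerprints fall into the same category.

def harmonic_fingerprint(bar: list[int]) -> tuple[int, ...]:
    # Reduce the bar's notes (MIDI numbers) to a sorted tuple of
    # pitch classes (mod 12).
    return tuple(sorted({note % 12 for note in bar}))

def group_by_fingerprint(bars: list[list[int]]) -> dict:
    groups = defaultdict(list)
    for index, bar in enumerate(bars):
        groups[harmonic_fingerprint(bar)].append(index)
    return dict(groups)

bars = [[60, 64, 67], [48, 52, 55], [62, 65, 69]]  # C major, C major, D minor
categories = group_by_fingerprint(bars)
```

Bars 0 and 1 share a fingerprint despite being in different octaves, so they land in one equivalence category.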