Patent classifications
G10H2210/165
Systems, Devices, and Methods for Harmonic Structure in Digital Representations of Music
Systems, devices, and methods for encoding the harmonic structure of a musical composition in a digital data structure are described. Tonal and rhythmic commonalities are identified across the musical bars that make up a musical composition. Individual bars of the musical composition are each analyzed to characterize their respective harmonic fingerprints in various forms, and the respective harmonic fingerprints are compared to sort the musical bars into harmonic equivalence categories. Isomorphic mappings between hierarchical data structures that encode the musical composition based on musicality and harmony, respectively, are also described.
The systems, devices, and methods for encoding the harmonic structure of a musical composition in a digital data structure have broad applicability in computer-based composition and variation of music.
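As an illustrative sketch only (not the patent's actual encoding), the sorting of musical bars into harmonic equivalence categories can be pictured as grouping bars by a comparable per-bar fingerprint. The fingerprint below, built from pitch classes and onset times of a toy bar representation, is a hypothetical simplification.

```python
from collections import defaultdict

def harmonic_fingerprint(bar):
    """Illustrative harmonic fingerprint of one bar: the set of pitch classes
    sounded in the bar plus its onset pattern (a hypothetical simplification)."""
    pitch_classes = frozenset(note["pitch"] % 12 for note in bar)
    onsets = tuple(sorted(note["onset"] for note in bar))
    return (pitch_classes, onsets)

def group_bars_by_harmony(bars):
    """Sort bars into harmonic equivalence categories by comparing fingerprints."""
    categories = defaultdict(list)
    for index, bar in enumerate(bars):
        categories[harmonic_fingerprint(bar)].append(index)
    return list(categories.values())

# Two bars sharing pitch classes and rhythm fall into one equivalence category.
bars = [
    [{"pitch": 60, "onset": 0}, {"pitch": 64, "onset": 1}],  # C, E
    [{"pitch": 72, "onset": 0}, {"pitch": 76, "onset": 1}],  # C, E an octave up
    [{"pitch": 62, "onset": 0}],                             # D
]
print(group_bars_by_harmony(bars))  # → [[0, 1], [2]]
```

Bars 0 and 1 differ by an octave but share the same pitch-class set and onset pattern, so this toy fingerprint places them in the same category.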
INFORMATION PROCESSING DEVICE FOR MUSICAL SCORE DATA
An information processing method includes generating performance data that represent a performance of a musical piece, reflecting a change caused by a factor that alters the performance, by inputting musical score data, which represent a musical score of the musical piece, and variability data, which represent the factor, into a trained model.
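The method amounts to a single conditioned inference step. In the sketch below, the trained model is stubbed out with a hypothetical lambda that shifts note timings by the variability factor; the real learned mapping would be far richer.

```python
def render_performance(score_data, variability_data, trained_model):
    """Generate performance data reflecting a performance-altering factor by
    feeding score data and variability data into a trained model (stubbed)."""
    return trained_model(score_data, variability_data)

# Toy 'trained model': shift each note's timing by the variability factor.
model = lambda score, var: [{"pitch": n["pitch"], "time": n["time"] + var}
                            for n in score]

out = render_performance(
    [{"pitch": 60, "time": 0}, {"pitch": 62, "time": 1}], 2, model)
print(out)  # → [{'pitch': 60, 'time': 2}, {'pitch': 62, 'time': 3}]
```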
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
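The control flow of this instrument — expanding a specified chord into per-pitch waveforms, conditioning each on acoustic features inferred from the lyric, and mixing the result — might be sketched as follows. The chord table, the integer "amplitude" feature, and the tuple "waveform" are hypothetical placeholders for the trained model and the synthesis path.

```python
def chord_to_pitches(chord_root, chord_type="major"):
    """Expand a chord specification into MIDI pitches (simplified illustration)."""
    intervals = {"major": (0, 4, 7), "minor": (0, 3, 7)}[chord_type]
    return [chord_root + i for i in intervals]

def infer_acoustic_features(lyric):
    """Stand-in for the trained model mapping lyric data to acoustic feature
    data; here a single integer amplitude in percent, purely hypothetical."""
    return {"amplitude": 50 + 10 * (len(lyric) % 3)}

def synthesize_voice(pitch, features):
    """Stand-in for synthesizing one piece of waveform data with the features."""
    return (pitch, features["amplitude"])

def play_chord_with_lyric(chord_root, lyric, chord_type="major"):
    """Condition every chord pitch on the same inferred acoustic features,
    then mix the per-pitch results into one polyphonic output."""
    features = infer_acoustic_features(lyric)
    return [synthesize_voice(p, features)
            for p in chord_to_pitches(chord_root, chord_type)]

print(play_chord_with_lyric(60, "la"))  # → [(60, 70), (64, 70), (67, 70)]
```

Note that one lyric inference conditions all voices of the chord, matching the abstract's single pass through the trained model per user operation.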
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
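A minimal sketch of the tone-color conversion step, assuming (hypothetically) that the acoustic feature data reduces to a simple per-sample gain applied to the first-tone-color waveform; the real mapping would be a learned spectral transformation, not a scalar multiply.

```python
def convert_tone_color(first_waveform, acoustic_features):
    """Generate waveform data of a second tone color from the first-tone-color
    waveform and acoustic feature data output by a trained model. The scalar
    'gain' feature here is an illustrative assumption, not the patent's method."""
    gain = acoustic_features["gain"]
    return [sample * gain for sample in first_waveform]

second = convert_tone_color([1, -1, 2], {"gain": 3})
print(second)  # → [3, -3, 6]
```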
ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM
An electronic musical instrument includes: an operation unit; a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained on singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
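The per-timing branch — stored pitch when no key is operated, player's pitch when one is — can be sketched like this; the `sing` callable stands in for the trained model's lyric-plus-pitch to singing-voice-feature lookup and is a hypothetical stub.

```python
def render_song(lyrics, pitches, key_presses, sing):
    """At each timing, use the stored pitch unless the operation unit was
    operated, in which case the operated pitch overrides it; `sing` stubs the
    trained model's singing-voice-feature lookup (hypothetical)."""
    output = []
    for timing, (lyric, stored_pitch) in enumerate(zip(lyrics, pitches)):
        pressed = key_presses.get(timing)               # None if not operated
        pitch = pressed if pressed is not None else stored_pitch
        output.append(sing(lyric, pitch))
    return output

# A toy 'model' that just pairs each lyric with the pitch it was given.
result = render_song(
    lyrics=["do", "re", "mi"],
    pitches=[60, 62, 64],
    key_presses={1: 67},          # the player overrides the pitch at timing 1
    sing=lambda lyric, pitch: (lyric, pitch),
)
print(result)  # → [('do', 60), ('re', 67), ('mi', 64)]
```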
INFORMATION PROCESSING METHOD AND APPARATUS FOR PROCESSING PERFORMANCE OF MUSICAL PIECE
Provided is an information processing apparatus that generates various kinds of time series data according to a performance tendency of a user. The information processing apparatus includes an index specifying unit 22 that specifies performance tendency information indicating a tendency of the user's performance of a musical piece, by inputting observational performance data X representing the performance into a learned model La, and an information processing unit 23 that generates time series data Z regarding the musical piece according to the performance tendency information.
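The two-stage pipeline — unit 22 specifying a tendency from observed data X via model La, unit 23 generating time series Z conditioned on that tendency — might be sketched as below. The mean-interval "model" and the linear tempo curve are hypothetical stand-ins for the learned components.

```python
def specify_tendency(observed_X, model_La):
    """Stand-in for the index specifying unit 22: maps observational
    performance data X to performance tendency information via model La."""
    return model_La(observed_X)

def generate_time_series(tendency, length):
    """Stand-in for the information processing unit 23: generates time series
    data Z conditioned on the tendency (here, a hypothetical tempo curve)."""
    return [tendency * step for step in range(length)]

# Toy 'learned model La': mean inter-onset interval of the observed performance.
model_La = lambda X: sum(X) / len(X)

tendency = specify_tendency([0.4, 0.6], model_La)   # → 0.5
Z = generate_time_series(tendency, 4)               # → [0.0, 0.5, 1.0, 1.5]
```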
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model so as to cause the trained acoustic model to output acoustic feature data in response to the inputted prescribed lyric data and the inputted pitch data, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of the acoustic feature data output by the trained acoustic model.
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes: a memory that stores a trained acoustic model obtained by performing machine learning on training musical score data and training singing voice data of a singer; and at least one processor, wherein the at least one processor: in accordance with a user operation on an operation element in a plurality of operation elements, inputs prescribed lyric data and pitch data corresponding to the user operation of the operation element to the trained acoustic model, and digitally synthesizes and outputs inferred singing voice data that infers a singing voice of the singer on the basis of at least a portion of acoustic feature data output by the trained acoustic model, and on the basis of instrument sound waveform data that are synthesized in accordance with the pitch data corresponding to the user operation of the operation element.
PERFORMANCE ANALYSIS METHOD
A performance analysis method according to the present invention includes generating information related to a performance tendency of a user from observed performance information relating to a performance of a musical piece by the user and inferred performance information representing how the musical piece would be performed based on a specific tendency.
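Comparing the observed performance against a performance inferred under a reference tendency can be sketched as a per-note deviation measure; the onset-time difference used here is an illustrative choice, not the patent's actual measure.

```python
def analyze_tendency(observed, inferred):
    """Generate performance-tendency information by comparing observed
    performance information with performance information inferred under a
    specific (e.g. metronomic reference) tendency. Per-note onset deviation
    is a hypothetical simplification of the comparison."""
    return [obs - ref for obs, ref in zip(observed, inferred)]

# Observed onset times vs. onsets inferred for a metronomic reference tendency.
deviations = analyze_tendency([0.0, 1.05, 1.98], [0.0, 1.0, 2.0])
print(deviations)  # positive values → the user plays behind the reference
```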