G10H2210/121

Information processing method and apparatus
11568244 · 2023-01-31

An information processing method according to the present invention includes: providing first musical piece information, representing the contents of a musical piece, and performance information relating to a past performance prior to one unit period within the musical piece, to a learner that has undergone learning relating to a specific performance tendency; and generating, with the learner, performance information for the one unit period based on that specific tendency.
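The abstract describes an autoregressive scheme: for each unit period, a trained "learner" receives the musical-piece information plus the performance of the preceding periods and emits performance information reflecting a learned tendency. A minimal sketch, with the learner faked as a fixed-offset stub (the offset and all names are invented for illustration):

```python
def learner(piece_info, past_performance):
    """Stub model: predicts the next unit period's performance value
    from the score and the performance generated so far."""
    tendency = 0.1  # stands in for the learned performance tendency
    base = piece_info[len(past_performance)]  # score value for this period
    return base + tendency

piece = [60.0, 62.0, 64.0]          # musical piece information per unit period
performance = []                     # performance info, generated period by period
for _ in piece:
    performance.append(learner(piece, performance))
```

Each call sees the full past performance, matching the abstract's "past performance prior to one unit period" conditioning.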

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes at least one processor that, in accordance with a user operation on an operation unit, obtains lyric data and waveform data corresponding to a first tone color; inputs the obtained lyric data to a trained model so as to cause the trained model to output acoustic feature data in response thereto; generates waveform data corresponding to a singing voice of a singer and corresponding to a second tone color that is different from the first tone color, based on the acoustic feature data outputted from the trained model and the obtained waveform data corresponding to the first tone color; and outputs a singing voice based on the generated waveform data corresponding to the second tone color.
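The data flow described above can be sketched as a three-stage pipeline: lyric data goes into the trained model, the model emits acoustic feature data, and those features are combined with first-tone-color waveform data to produce a second tone color. All function names and the stub model below are hypothetical placeholders, not the patent's implementation:

```python
import numpy as np

def trained_model(lyric: str) -> np.ndarray:
    """Stub acoustic model: maps lyric text to acoustic feature
    frames (here: random frames, one per character)."""
    rng = np.random.default_rng(len(lyric))  # deterministic stub
    return rng.standard_normal((len(lyric), 16))  # frames x features

def synthesize_second_tone(features: np.ndarray,
                           first_tone_wave: np.ndarray) -> np.ndarray:
    """Stub synthesis: modulate the first-tone-color waveform with the
    acoustic features to yield a waveform of a different tone color."""
    gain = 1.0 + 0.1 * np.tanh(features.mean())
    return first_tone_wave * gain

lyric_data = "la"
first_tone = np.sin(2 * np.pi * 440 * np.linspace(0, 0.01, 441))
acoustic = trained_model(lyric_data)                       # model output
second_tone = synthesize_second_tone(acoustic, first_tone)  # second tone color
```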

ELECTRONIC MUSICAL INSTRUMENT, ELECTRONIC MUSICAL INSTRUMENT CONTROL METHOD, AND STORAGE MEDIUM

An electronic musical instrument includes an operation unit that receives a user performance, and at least one processor, wherein the at least one processor performs the following: in accordance with a user operation specifying a chord on the operation unit, obtaining lyric data of a lyric and obtaining a plurality of pieces of waveform data respectively corresponding to a plurality of pitches indicated by the specified chord; inputting the obtained lyric data to a trained model that has been trained on singing voices of a singer so as to cause the trained model to output acoustic feature data in response thereto; synthesizing each of the plurality of pieces of waveform data with the acoustic feature data so as to generate a plurality of pieces of synthesized waveform data; and outputting a polyphonic synthesized singing voice based on the generated plurality of pieces of synthesized waveform data.
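The polyphonic path above amounts to: one waveform per chord pitch, each combined with the same acoustic feature data, then summed into a single polyphonic voice. A minimal sketch, with the feature-driven synthesis reduced to a scalar gain (all names are invented):

```python
import numpy as np

def pitch_waveform(midi_pitch, n=256):
    """Sine waveform for one pitch of the specified chord."""
    freq = 440.0 * 2 ** ((midi_pitch - 69) / 12)  # MIDI note -> Hz
    t = np.arange(n) / 8000.0
    return np.sin(2 * np.pi * freq * t)

def synthesize(wave, acoustic_gain):
    """Stand-in for synthesizing a waveform with the model's
    acoustic feature data."""
    return wave * acoustic_gain

chord = [60, 64, 67]        # C major, as specified on the operation unit
acoustic_gain = 0.5         # stand-in for the trained model's output
voices = [synthesize(pitch_waveform(p), acoustic_gain) for p in chord]
polyphonic = np.sum(voices, axis=0)   # summed polyphonic singing voice
```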

Electronic musical instrument, electronic musical instrument control method, and storage medium

An electronic musical instrument includes: an operation unit; a memory that stores lyric data including lyrics for a plurality of timings, pitch data including pitches for said plurality of timings, and a trained model that has been trained on singing voice features of a singer; and at least one processor, wherein at each of said plurality of timings, the at least one processor: if the operation unit is not operated, obtains, from the trained model, a singing voice feature associated with a lyric indicated by the lyric data and a pitch indicated by the pitch data; if the operation unit is operated, obtains, from the trained model, a singing voice feature associated with the lyric indicated by the lyric data and a pitch indicated by the operation of the operation unit; and synthesizes and outputs singing voice data based on the obtained singing voice feature of the singer.
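The per-timing control flow above is a simple override rule: the pitch comes from the stored pitch data unless the operation unit was played at that timing, in which case the played pitch wins. A hypothetical sketch (the data and function are illustrative only):

```python
def select_pitches(lyrics, stored_pitches, key_events):
    """key_events maps a timing index to the pitch the user played;
    timings absent from it fall back to the stored pitch data."""
    out = []
    for t, (lyric, pitch) in enumerate(zip(lyrics, stored_pitches)):
        chosen = key_events.get(t, pitch)  # user operation overrides stored pitch
        out.append((lyric, chosen))        # the feature lookup would use this pair
    return out

# timing 0: no key pressed -> stored pitch 60; timing 1: user played 65
result = select_pitches(["ha", "ppy"], [60, 62], {1: 65})
```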

Generative composition with texture groups
11646007 · 2023-05-09

A computer-implemented method of generating a musical composition containing a plurality of musical texture groups is disclosed. The method includes assembling musical texture groups from musical instrument components and associating with each group a tag expressing an emotional textural connotation. The instrument components have musical textural classifiers selected from a set of pre-defined textural classifiers, such that different instrument components may have different subsets of the pre-defined textural classifiers. The textural classifiers within a texture group possess either no musical feature attribute or a single musical feature attribute, and any number of musical accompaniment attributes. The method then generates at least one chord scheme according to a narrative brief, to provide an emotional connotation to a series of events; the chord scheme is generated by selecting and assembling Form Atoms. The final step applies a texture to the chord scheme to generate the musical composition reflecting the narrative brief.
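The structure the abstract describes (instrument components carrying textural classifiers, texture groups tagged with an emotional connotation, and a texture applied to a chord scheme) can be sketched with a couple of data classes. Every class, field, and value below is invented to mirror that structure, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class InstrumentComponent:
    name: str
    textural_classifiers: set   # a subset of the pre-defined classifier set

@dataclass
class TextureGroup:
    components: list            # assembled instrument components
    emotion_tag: str            # tag expressing the emotional connotation

def apply_texture(chord_scheme, group):
    """Pair each chord of the scheme with the texture group's components."""
    return [(chord, [c.name for c in group.components]) for chord in chord_scheme]

strings = InstrumentComponent("strings", {"legato", "pad"})
piano = InstrumentComponent("piano", {"arpeggio"})
group = TextureGroup([strings, piano], emotion_tag="uplifting")
composition = apply_texture(["C", "Am", "F", "G"], group)
```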

System and methods for automatically generating a musical composition having audibly correct form
11514877 · 2022-11-29

A generative composition system reduces existing musical artefacts to constituent elements termed “Form Atoms”. These Form Atoms may each be of varying length and have musical properties and associations that link together through Markov chains. To provide myriad new compositions, a set of heuristics ensures that musical textures between concatenated musical sections follow a supplied and defined briefing narrative for the new composition, whilst contiguous concatenated Form Atoms are also automatically selected so that similarities in the respective identified attributes of musical textures for those sections are maintained, preserving good musical form. Within the composition, chord spacing and control are applied to maintain musical sense, and a primitive-heuristics structure maintains pitch and permits key transformation. The system provides signal analysis and music generation by allowing emotional connotations to be specified and reproduced from cross-referenced Form Atoms.
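The chaining mechanism above (Form Atoms linked by Markov transitions, with a heuristic preferring successors whose texture attributes resemble the current atom's) can be sketched as weighted sampling over a transition table. The atoms, textures, and weighting rule below are all illustrative assumptions:

```python
import random

ATOMS = {
    "A": {"texture": {"sparse", "calm"}},
    "B": {"texture": {"calm", "warm"}},
    "C": {"texture": {"dense", "driving"}},
}
TRANSITIONS = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}

def next_atom(current, rng):
    """Pick a successor atom, weighting by texture overlap (+1 floor so
    every Markov-linked successor stays reachable)."""
    candidates = TRANSITIONS[current]
    weights = [len(ATOMS[current]["texture"] & ATOMS[c]["texture"]) + 1
               for c in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]

rng = random.Random(0)
sequence = ["A"]
for _ in range(7):
    sequence.append(next_atom(sequence[-1], rng))
```

The weight floor is a design choice: similarity biases the walk toward coherent texture, but never forbids a transition the Markov chain allows.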

GENERATIVE COMPOSITION WITH DEFINED FORM ATOM HEURISTICS
20230186881 · 2023-06-15

A generative composition system reduces existing musical artefacts to constituent elements termed “Form Atoms”. These Form Atoms may each be of varying length and have musical properties and associations that link together through Markov chains. To provide myriad new compositions, a set of heuristics ensures that musical textures between concatenated musical sections follow a supplied and defined briefing narrative for the new composition, whilst contiguous concatenated Form Atoms are also automatically selected so that similarities in the respective identified attributes of musical textures for those sections are maintained, supporting the maintenance of musical form. Independent aspects of the disclosure further ensure that, within the composition work, such as a media product or a real-time audio stream, chord-spacing determination and control are applied to maintain musical sense in the new composition. Further, a structure of primitive heuristics operates to maintain pitch and permit key transformation. The system and its functionality provide signal analysis and music generation by allowing emotional connotations to be specified and reproduced from cross-referenced Form Atoms.

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

A method of generating music content from input music content. The method includes developing models of music-composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in the sound repository. Models, in the form of source code, are then sent to a melody generator. First, the generator is set with specific parameters using a MIDI-conformant controller, supplemented with composition characteristics read from the user preference database. Next, the contents are sent to automatic generation based on artificial-intelligence algorithms, and the digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its record are verified by the critic module using algorithms based on neural networks.
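The pipeline described above (configure the generator, generate a digital score, render per-instrument tracks, mix, then have a critic verify the result) chains cleanly. Every function below is a stub standing in for a real component; none of the names or data come from the patent:

```python
def set_generator(midi_params, user_prefs):
    """Combine MIDI-controller settings with user-preference characteristics."""
    return {**midi_params, **user_prefs}

def generate_score(params):
    """Stand-in for AI-based generation of the digital score."""
    return [("piano", [60, 64, 67]), ("bass", [36, 36, 43])]

def render_tracks(score):
    """Render each instrument's notes to a (toy) sample track."""
    return {inst: [n / 127 for n in notes] for inst, notes in score}

def mix(tracks):
    """Mix the rendered tracks into the final record by summing samples."""
    length = max(len(t) for t in tracks.values())
    return [sum(t[i] for t in tracks.values()) for i in range(length)]

def critic(record):
    """Stand-in for the neural-network critic: accept non-empty records."""
    return len(record) > 0

params = set_generator({"tempo": 120}, {"mood": "calm"})
record = mix(render_tracks(generate_score(params)))
accepted = critic(record)
```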

Electronic music box
09728171 · 2017-08-08

The music data memory includes pieces of music within a group and other pieces of music outside the group. The next piece to be played is determined automatically, by a random table, from among the pieces within the group. A favorite or the newest piece is weighted so that it is played more frequently within the group. A piece in the music data memory is automatically included in the group by the random table, and a newly downloaded piece is included in the group with priority. The most frequently played piece is excluded from the group in place of a newly included piece; a favorite or the newest piece may be exempt from exclusion. The next piece can be played at a tempo similar to that of the preceding piece, by means of tempo adjustment, piece replacement, or repetition of the same piece, for the purpose of continued baby cradling in synchronism with the same tempo across successive pieces.
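The weighted "random table" selection described above can be sketched by expanding each piece into table entries proportional to its weight and drawing uniformly. The pieces and weight values are illustrative assumptions, not from the patent:

```python
import random

def build_random_table(group, favorite=None, newest=None):
    """Random table: each piece appears once, with extra entries for the
    favorite and the newest piece so they are drawn more frequently."""
    table = []
    for piece in group:
        weight = 1
        if piece == favorite:
            weight += 2   # favorite played more frequently
        if piece == newest:
            weight += 2   # newest played more frequently
        table.extend([piece] * weight)
    return table

def next_piece(table, rng):
    """Automatically determine the next piece by drawing from the table."""
    return rng.choice(table)

group = ["lullaby1", "lullaby2", "lullaby3"]
table = build_random_table(group, favorite="lullaby2", newest="lullaby3")
rng = random.Random(42)
choice = next_piece(table, rng)
```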

Method and system for template based variant generation of hybrid AI generated song

According to an embodiment, there is provided a system and method for automatic AI-based song construction based on ideas of a user. In some embodiments, an embodiment is provided with a database that contains harmony templates which can be used by the user to augment the playback of a given music work. Various embodiments of the instant invention also benefit from a combination of expert knowledge resident in an expert engine which contains rules for musically correct song generation and machine learning in an AI-based audio loop selection engine for the selection of compatible audio loops from a database of audio loops.
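The combination described above (an expert engine holding rules for musically correct generation, plus a learned engine scoring audio-loop compatibility) suggests a filter-then-rank structure. The loop database, rules, and scores below are invented placeholders for that idea:

```python
LOOPS = [
    {"name": "drums1", "bpm": 120, "key": "C", "score": 0.9},
    {"name": "bass1",  "bpm": 90,  "key": "C", "score": 0.8},
    {"name": "pad1",   "bpm": 120, "key": "C", "score": 0.6},
]

def expert_rules_ok(loop, song):
    """Expert-engine stand-in: hard musical-correctness constraints,
    e.g. the loop's tempo and key must match the song's."""
    return loop["bpm"] == song["bpm"] and loop["key"] == song["key"]

def select_loops(loops, song):
    """Filter by expert rules, then rank survivors by the learned
    compatibility score (the AI selection engine's stand-in)."""
    candidates = [l for l in loops if expert_rules_ok(l, song)]
    return sorted(candidates, key=lambda l: l["score"], reverse=True)

song = {"bpm": 120, "key": "C"}
selected = select_loops(LOOPS, song)
```

Hard rules first, learned ranking second, keeps musically invalid loops out of the pool regardless of how highly the model scores them.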