Patent classifications
G10H2210/111
Music yielder with conformance to requisites
There is disclosed a music yielding system including a controller, a music yielding device, a music analyzing device, and a musical data transferring device. The music yielding device may yield one or more sets of musical notes conforming to one or more attributes. The controller may cause one or more criteria to be set for determining conformance of one or more of the sets of musical notes to one or more of the attributes. The music analyzing device may calculate and transmit one or more correlations within one or more of the sets of musical notes. The musical data transferring device may transfer one or more of the sets of musical notes between one or more origins and one or more destinations.
METHOD AND APPARATUS FOR CONVERTING COLOR DATA INTO MUSICAL NOTES
An approach is provided for converting color data into one or more musical notes. The approach involves reading, using a color-reading device, respective colors applied to a canvas by a plurality of drawing instruments as color data, wherein each of the plurality of drawing instruments is configured to draw in respective colors of a color palette, and wherein the respective colors of the color palette correspond respectively to a set of musical notes. The approach also involves processing, using a color processing module, the color data to generate a composition of the one or more musical notes from the color data based on a set of musical notes that correspond to the respective colors in the color data.
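The palette-to-note correspondence described above can be illustrated with a minimal sketch. The palette, the MIDI note numbers, and the function names are hypothetical stand-ins, not the disclosed implementation:

```python
# Hypothetical palette: each drawing instrument's color maps to a MIDI note.
COLOR_TO_NOTE = {
    "red": 60,     # C4
    "orange": 62,  # D4
    "yellow": 64,  # E4
    "green": 65,   # F4
    "blue": 67,    # G4
    "violet": 69,  # A4
}

def compose_from_colors(color_data):
    """Generate a note sequence from colors read off the canvas,
    skipping any color that is not part of the palette."""
    return [COLOR_TO_NOTE[c] for c in color_data if c in COLOR_TO_NOTE]

notes = compose_from_colors(["red", "blue", "pink", "green"])
```

A real system would read colors as RGB samples and quantize them to the nearest palette entry before this lookup; the dictionary here assumes that step has already happened.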
Auto-generated accompaniment from singing a melody
A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of: acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the step of generating a music accompaniment (23) as a function of the estimated musical key (15b), the estimated tempo (15a), and an arrangement database (22), and mixing the aligned voice recording (20) with the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
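The tuning and timing controls can be sketched as two small operations: snapping sung pitches to the estimated key, and snapping note onsets to the estimated tempo grid. The scale model, function names, and subdivision choice are assumptions for illustration, not the disclosed controls:

```python
# Estimated key assumed here: C major, expressed as pitch classes.
C_MAJOR_PITCH_CLASSES = {0, 2, 4, 5, 7, 9, 11}

def tune(pitch, scale=C_MAJOR_PITCH_CLASSES):
    """Tuning control sketch: move a MIDI pitch to the nearest
    pitch whose pitch class lies in the estimated key."""
    return min(
        (pitch + d for d in range(-6, 7) if (pitch + d) % 12 in scale),
        key=lambda q: abs(q - pitch),
    )

def quantize(onset_sec, bpm, subdivision=4):
    """Timing control sketch: snap an onset time (seconds) to the
    nearest subdivision of the beat at the estimated tempo."""
    step = 60.0 / bpm / subdivision
    return round(onset_sec / step) * step
```

Applying both controls to every detected note event in the recording would yield the aligned voice recording that the accompaniment is then generated against.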
SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).
SYSTEMS, DEVICES, AND METHODS FOR ASSIGNING MOOD LABELS TO MUSICAL COMPOSITIONS
Computer-based systems, devices, and methods for assigning mood labels to musical compositions are described. A mood classifier is trained based on mood-labeled musically-coherent segments of musical compositions and subsequently applied to automatically assign mood labels to musically-coherent segments of musical compositions. In both cases, the musically-coherent segments are generated using automated segmentation algorithms.
SYSTEMS, DEVICES, AND METHODS FOR DECOUPLING NOTE VARIATION AND HARMONIZATION IN COMPUTER-GENERATED VARIATIONS OF MUSIC DATA OBJECTS
Computer-based systems, devices, and methods for generating variations of musical compositions are described. Musical compositions stored in digital media include one or more music data object(s) that encode notes. A first set of notes is characterized and a transformation is applied to replace at least one note in the first set of notes with at least one note in a second set of notes. The transformation may explore or call upon the full range of musical notes available without being constrained by conventions of musicality and harmony. For each particular note in the second set of notes that replaces a note in the first set of notes, whether the particular note is in musical harmony with other notes in the music data object is separately assessed and, if not, the particular note is adjusted to bring it into musical harmony with other notes in the music data object.
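The decoupled "vary first, harmonize afterward" idea can be sketched as follows. The harmony test used here, membership of the note's pitch class in the concurrent chord's pitch classes, is an assumed stand-in for whatever criterion the patent actually employs, and all names are illustrative:

```python
def in_harmony(note, chord_pcs):
    """Assumed harmony test: the note's pitch class belongs to the
    pitch-class set implied by the other concurrent notes."""
    return note % 12 in chord_pcs

def harmonize(note, chord_pcs):
    """Return the note unchanged if consonant; otherwise nudge it to
    the nearest pitch whose pitch class belongs to the chord."""
    if in_harmony(note, chord_pcs):
        return note
    return min(
        (note + d for d in range(-6, 7) if (note + d) % 12 in chord_pcs),
        key=lambda q: abs(q - note),
    )

C_MAJOR_TRIAD = {0, 4, 7}
# Unconstrained variation produced [61, 64, 66]; harmonization then
# adjusts each replacement note separately against the triad.
adjusted = [harmonize(n, C_MAJOR_TRIAD) for n in [61, 64, 66]]
```

The key point mirrored from the abstract is that the substitution step is free to pick any pitch at all; consonance is restored only in this separate, per-note pass.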
SYSTEMS, DEVICES, AND METHODS FOR COMPUTER-GENERATED MUSICAL COMPOSITIONS
Computer-based systems, devices, and methods for generating musical compositions are described. Each musical composition in a population stored in digital media is segmented to produce abridged samples. The samples are analyzed to identify “parent” compositions that best exhibit or evoke a particular desired quality. The parent compositions are cross-bred to generate a set of child compositions, which are similarly segmented and analyzed. The child compositions that best exhibit or evoke the particular desired quality are re-cast as parent compositions from which another generation of child compositions is bred. Mutations in the form of musical variations are inserted in at least some iterations, and the process is repeated until at least one child composition that satisfies some exit criterion is returned.
Method and system for simulating musical phrase
Disclosed is a method for simulating a musical phrase, wherein the musical phrase includes a sequence of notes, the method comprising: generating, using a linear predictive coding technique, timbral fingerprints associated with a sample library that comprises recordings of a plurality of notes in a plurality of intensities, wherein the timbral fingerprints relate to the plurality of intensities of the plurality of notes; determining an origin intensity for the musical phrase, wherein the origin intensity is one intensity selected from amongst the plurality of intensities; and simulating each note in the sequence of notes by morphing a recording of each note in the origin intensity according to timbral fingerprints of the note in the plurality of intensities, thereby simulating the musical phrase.
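The linear predictive coding step that produces a "timbral fingerprint" can be sketched in pure Python as autocorrelation followed by the Levinson-Durbin recursion; the resulting coefficients summarize the spectral envelope of a recorded frame. The morphing stage is not shown, and the names here are illustrative, not the patented method:

```python
def autocorr(frame, max_lag):
    """Autocorrelation of a sample frame for lags 0..max_lag."""
    n = len(frame)
    return [sum(frame[i] * frame[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def lpc(frame, order):
    """Return LPC coefficients a[1..order] via Levinson-Durbin;
    these serve as the frame's timbral fingerprint."""
    r = autocorr(frame, order)
    a = [0.0] * (order + 1)
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] - sum(a[j] * r[i - j] for j in range(1, i))
        k = acc / err                      # reflection coefficient
        new_a = a[:]
        new_a[i] = k
        for j in range(1, i):
            new_a[j] = a[j] - k * a[i - j]
        a = new_a
        err *= (1.0 - k * k)               # remaining prediction error
    return a[1:]
```

For a decaying exponential x[i] = 0.9^i, an order-1 fit recovers a coefficient close to 0.9, since each sample is almost exactly 0.9 times its predecessor.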
Generative composition using form atom heuristics
A processor-based method of producing a generative musical composition is disclosed herein. The method includes the step of receiving a briefing narrative which describes a musical journey by referencing a plurality of emotional descriptions related to a plurality of musical sections. The generative musical composition is assembled with regard to the briefing narrative through the selection and concatenation of Form Atoms with tags that align with the emotional descriptions related to the musical sections. The Form Atoms, which have a compositional nature aligned with the emotional descriptions and self-contained constructional properties representative of the historical corpus of music, are then selected and substituted into the generative composition. The method further involves the step of generating the musical composition by mapping musical transitions between selectively chosen Form Atoms to reflect pre-established transitions between Form Atoms and groups of Form Atoms that have been identified to have similar tags but different constructional properties.
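The tag-driven selection step can be sketched as matching each section's emotional description against a library of tagged atoms and concatenating the best matches in section order. The `FormAtom` structure and the tag-overlap scoring are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class FormAtom:
    name: str
    tags: frozenset

def select_atoms(briefing, library):
    """For each set of emotional tags in the briefing narrative, pick
    the library atom whose tags best overlap it, then concatenate the
    chosen atoms in section order."""
    piece = []
    for section_tags in briefing:
        best = max(library, key=lambda a: len(a.tags & section_tags))
        piece.append(best.name)
    return piece

# Hypothetical Form Atom library with emotion tags.
library = [
    FormAtom("intro_calm", frozenset({"calm", "sparse"})),
    FormAtom("build_tense", frozenset({"tense", "rising"})),
    FormAtom("climax_joyful", frozenset({"joyful", "loud"})),
]
order = select_atoms([{"calm"}, {"tense", "rising"}, {"joyful"}], library)
```

A fuller sketch would also consult a transition table between atoms, per the abstract's final step; only the tag-matching selection is shown here.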
SYSTEMS, DEVICES, AND METHODS FOR COMPUTER-GENERATED MUSICAL NOTE SEQUENCES
Computer-based systems, devices, and methods for generating musical note sequences are described. One or more musical composition(s) stored in digital media include one or more data object(s) that encode notes and/or note sequences. At least one note sequence is processed to form a time-ordered sequence of parallel notes, which is analyzed to determine a k-back probability transition matrix for the at least one note sequence. An attribute, such as a style, of the at least one note sequence is thus encoded and used to generate new note sequences that embody a similar attribute or style. In some implementations, the at least one note sequence may include a concatenated set of note sequences representative of a particular library of musical compositions.
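A k-back transition matrix is an order-k Markov model: count how often each length-k context of notes is followed by each next note, normalize the counts to probabilities, and sample new sequences from the result. This sketch is a generic stand-in for the patent's matrix, with illustrative names throughout:

```python
import random
from collections import defaultdict

def k_back_model(notes, k):
    """Build an order-k transition model: context tuple -> {next note: prob}."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(notes) - k):
        context = tuple(notes[i:i + k])
        counts[context][notes[i + k]] += 1
    # Normalize each context's counts into a probability distribution.
    return {ctx: {n: c / sum(nxt.values()) for n, c in nxt.items()}
            for ctx, nxt in counts.items()}

def generate(model, seed, length, rng=random):
    """Sample a new note sequence in the style encoded by the model.
    Assumes len(seed) equals the model's k."""
    out = list(seed)
    while len(out) < length:
        dist = model.get(tuple(out[-len(seed):]))
        if dist is None:
            break  # unseen context: stop (a real system would back off)
        next_notes, probs = zip(*dist.items())
        out.append(rng.choices(next_notes, weights=probs)[0])
    return out
```

Training on a concatenated library of note sequences, as the abstract suggests, simply means feeding the concatenation to `k_back_model` so the matrix reflects the whole catalog's style.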