Patent classifications
G10H1/0025
Cognitive music engine using unsupervised learning
A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector, which is supplied as input to a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net comprises a plurality of computing layers, each composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher-level hidden layer in the unsupervised neural net, and the output vector is used to create an output musical piece.
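The perturb-and-propagate flow described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the noise model, layer sizes, weights, and sigmoid activation are all assumptions chosen for clarity.

```python
import math
import random

def perturb(vector, noise_scale=0.05, seed=0):
    """Perturb the input feature vector with small Gaussian noise
    (a hypothetical stand-in for the patent's perturbation step)."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_scale) for x in vector]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One computing layer: each node weights all inputs, adds a bias,
    and applies a sigmoid activation."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

def forward(visible, layers):
    """Propagate the perturbed visible-layer vector up through the hidden
    layers; the top layer's activations serve as the output vector."""
    activations = visible
    for weights, biases in layers:
        activations = layer(activations, weights, biases)
    return activations

features = [0.2, 0.8, 0.5, 0.1]  # toy musical characteristics
layers = [
    ([[0.1, -0.2, 0.3, 0.0], [0.0, 0.2, -0.1, 0.4], [0.3, 0.1, 0.0, -0.2]],
     [0.0, 0.0, 0.0]),                                   # visible -> hidden 1
    ([[0.2, -0.3, 0.1], [0.1, 0.4, -0.2]], [0.0, 0.0]),  # hidden 1 -> hidden 2
]
output_vector = forward(perturb(features), layers)
print(len(output_vector))  # 2
```

In practice the output vector would be decoded back into musical characteristics to render the output piece; that decoding step is outside this sketch.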
SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinct from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).
Autonomous generation of melody
Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is further used to generate a first melody that conforms to the music style and is different from the corresponding reference melody. Thus, a melody that matches the user's emotion and/or environment information may be automatically created.
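The selection step above amounts to a lookup from a detected emotion to a style parameter. The emotion labels and parameter fields below are invented for illustration; the patent does not specify them.

```python
# Hypothetical mapping from detected user emotion to a melody feature
# parameter; each parameter summarizes the style of one reference melody.
MELODY_FEATURES = {
    "calm":  {"tempo": 72,  "mode": "major", "note_density": 0.3},
    "happy": {"tempo": 120, "mode": "major", "note_density": 0.6},
    "tense": {"tempo": 140, "mode": "minor", "note_density": 0.8},
}

def select_feature_parameter(emotion, default="calm"):
    """Select the first melody feature parameter based on user emotion,
    falling back to a default style for unknown emotions."""
    return MELODY_FEATURES.get(emotion, MELODY_FEATURES[default])

params = select_feature_parameter("happy")
print(params["tempo"])  # 120
```

A generator conditioned on `params` would then produce a new melody in that style rather than reproducing the reference melody itself.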
AUDIO STEM IDENTIFICATION SYSTEMS AND METHODS
Methods, systems and computer program products are provided for determining acoustic feature vectors of query and target items in a first vector space, and mapping the acoustic feature vectors to a second vector space having a lower dimension. The distribution of vectors in the second vector space can then be used to identify items from the same songs, and/or items that are complementary. A mapping function is trained using a machine learning algorithm, such that complementary audio items are closer in the second vector space than the first, according to a given distance metric.
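The two-space scheme can be illustrated with a toy linear projection. The 6-dimensional vectors and the fixed 2-D mapping below are assumptions standing in for real acoustic features and the trained mapping function; the point is only that related items end up closer in the second space under the chosen distance metric.

```python
def project(vector, mapping):
    """Map an acoustic feature vector from the first (high-dimensional)
    space to the second (lower-dimensional) space via a linear mapping.
    Each row of `mapping` holds one output dimension's weights."""
    return [sum(v * m for v, m in zip(vector, row)) for row in mapping]

def distance(a, b):
    """Euclidean distance, used here as the given distance metric."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Toy 6-D acoustic feature vectors: a query stem, a stem from the same
# song, and an unrelated stem.
query = [1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
same_song = [0.9, 0.1, 1.1, 0.0, 0.9, 0.1]
unrelated = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]

# A fixed 6 -> 2 mapping standing in for the trained mapping function.
mapping = [[1, 0, 1, 0, 1, 0],
           [0, 1, 0, 1, 0, 1]]

q, s, u = (project(v, mapping) for v in (query, same_song, unrelated))
print(distance(q, s) < distance(q, u))  # True
```

In the patented system the mapping would be learned with a machine learning algorithm so that complementary items satisfy this closeness property by construction.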
Music composition aid
Disclosed herein are computer-implemented method, computer-readable storage medium, and digital audio workstation (DAW) embodiments for implementing a music composition aid. An embodiment includes retrieving a first constraint value, receiving a selection of a set of musical elements, and accepting a second constraint value corresponding to the set of musical elements. Some embodiments further include invoking an iterator function using at least the second constraint value as an argument, generating an output of the iterator function, and limiting the size of that output according to the lesser of the first constraint value or a transform of the second constraint value. The output of the iterator function may include a subset of the set of musical elements determined by the second constraint value, with the size of the output being no more than the first constraint value. Further embodiments may render the output of the iterator function visually and/or audibly, for example.
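The constraint logic above can be sketched directly. The filter criterion (element length) and the doubling transform are placeholders invented for illustration; only the min-of-two-constraints limiting rule comes from the abstract.

```python
def transform(value):
    """Hypothetical transform of the second constraint value."""
    return value * 2

def music_element_iterator(elements, second_constraint):
    """Yield the subset of musical elements selected by the second
    constraint value (here: names at least that many characters long,
    a stand-in criterion for illustration)."""
    for element in elements:
        if len(element) >= second_constraint:
            yield element

def limited_output(elements, first_constraint, second_constraint):
    """Collect iterator output, capped at the lesser of the first
    constraint value or a transform of the second constraint value."""
    limit = min(first_constraint, transform(second_constraint))
    out = []
    for element in music_element_iterator(elements, second_constraint):
        if len(out) >= limit:
            break
        out.append(element)
    return out

chords = ["C", "Dm", "Em7", "Fmaj7", "G7"]
print(limited_output(chords, first_constraint=3, second_constraint=2))
# ['Dm', 'Em7', 'Fmaj7']
```

Here four chords pass the filter, but the cap `min(3, transform(2)) = 3` truncates the output to three elements, so the size never exceeds the first constraint value.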
Systems, devices, and methods for segmenting a musical composition into musical segments
Systems, devices, and methods for segmenting musical compositions are described. Discrete, musically-coherent segments (such as intro, verse, chorus, bridge, solo, and the like) of a musical composition are identified. Distance measures are used to evaluate whether each bar of a musical composition is more like the bars that directly precede it or more like the bars that directly succeed it, and each respective series of musically similar bars is assigned to the same respective segment. Large changes in the distance measure(s) between adjacent bars may be used to identify boundaries between abutting musical segments. Computer systems and computer program products for implementing segmentation are also described. The results of segmentation may advantageously be applied in computer-based composition of music and musical variations, as well as in other applications involving labelling, characterizing, or otherwise processing music.
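The bar-by-bar segmentation idea can be sketched as follows. The two-number bar features and the threshold are toy assumptions; the abstract only specifies that a large jump in the distance measure between adjacent bars marks a segment boundary.

```python
def bar_distance(a, b):
    """Toy distance between two bars represented as feature tuples
    (e.g. average pitch and note density)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def segment(bars, threshold):
    """Assign each bar a segment label; start a new segment wherever the
    distance to the preceding bar exceeds the threshold, so runs of
    musically similar bars share a label."""
    labels = [0]
    for prev, cur in zip(bars, bars[1:]):
        boundary = bar_distance(prev, cur) > threshold
        labels.append(labels[-1] + (1 if boundary else 0))
    return labels

# Two musically similar bars, then an abrupt change (e.g. verse -> chorus).
bars = [(1.0, 0.2), (1.1, 0.2), (3.0, 0.9), (3.1, 1.0)]
print(segment(bars, threshold=1.0))  # [0, 0, 1, 1]
```

A fuller implementation would compare each bar against windows of preceding and succeeding bars rather than a single neighbor, as the abstract describes, but the boundary-at-large-distance principle is the same.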
Method and System for Processing Input Data
A method for analyzing one or more notes in a musical composition comprises, for each note: getting a note, a chord, and a scale; and computing note properties using the note's value, the chord, and the scale. A method for transforming one or more input notes into one or more new notes comprises, for each input note: getting an input note and its note properties; getting a new chord and a new scale for the input note; getting a list of candidate notes; computing distances between the input note and every note in the list, using the input note's value, the input note's note properties, each candidate note's value, and each candidate note's note properties; finding the candidate that has the minimal distance; and setting a new note value using the note value of the candidate with the minimal distance.
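The transformation step reduces to a nearest-candidate search. The distance function below, mixing pitch difference with a single numeric property, is a hypothetical stand-in; the abstract leaves the actual distance formula and property computation unspecified.

```python
def note_distance(input_note, input_props, candidate, candidate_props):
    """Hypothetical distance combining pitch difference (MIDI note
    numbers) with a difference in a scalar note property."""
    return abs(input_note - candidate) + abs(input_props - candidate_props)

def transform_note(input_note, input_props, candidates):
    """Return the candidate note with minimal distance to the input note.
    `candidates` maps candidate note value -> candidate note properties,
    e.g. the notes available under a new chord and scale."""
    return min(candidates,
               key=lambda c: note_distance(input_note, input_props,
                                           c, candidates[c]))

# Input note 61 (C#4) with property 0.5; candidates drawn from a new
# chord/scale context.
candidates = {60: 0.4, 62: 0.9, 64: 0.1}
print(transform_note(61, 0.5, candidates))  # 60
```

Here note 60 wins with distance 1.1 (pitch term 1.0, property term 0.1), beating 62 (1.4) and 64 (3.4), so the input note's new value is set to 60.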
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus according to the present disclosure includes: an acquisition unit that acquires music information; an extraction unit that extracts a plurality of types of feature amounts from the music information acquired by the acquisition unit; and a generation unit that generates music feature information in which the plurality of types of feature amounts extracted by the extraction unit are associated with predetermined identification information, the music feature information being used as learning data in composition processing using machine learning.
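The extract-and-associate flow can be sketched with toy feature types. The specific features (mean pitch, pitch range, note count) and the record layout are assumptions for illustration; the abstract does not enumerate the feature amounts.

```python
def extract_features(music_info):
    """Extract a plurality of types of feature amounts from raw music
    information (here, a list of MIDI note numbers)."""
    notes = music_info["notes"]
    return {
        "mean_pitch": sum(notes) / len(notes),
        "pitch_range": max(notes) - min(notes),
        "note_count": len(notes),
    }

def make_music_feature_info(music_id, music_info):
    """Associate the extracted feature amounts with identification
    information, yielding a record usable as machine-learning
    training data for composition processing."""
    return {"id": music_id, **extract_features(music_info)}

record = make_music_feature_info("song-001", {"notes": [60, 64, 67, 72]})
print(record["pitch_range"])  # 12
```

A collection of such records, keyed by their identification information, is what the companion storage-and-retrieval apparatus described below would hold.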
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing apparatus according to the present disclosure includes: a storage unit that stores a plurality of pieces of music feature information in which a plurality of types of feature amounts extracted from music information are associated with predetermined identification information, the music feature information being used as learning data in composition processing using machine learning; a reception unit that receives instruction information transmitted from a terminal apparatus; an extraction unit that extracts the music feature information from the storage unit according to the instruction information; and an output unit that outputs presentation information of the music feature information extracted by the extraction unit.
ELECTRONIC MUSICAL INSTRUMENTS, METHOD AND STORAGE MEDIA THEREFOR
An electronic musical instrument includes: a performance controller; and at least one processor configured to perform the following: instructing sound generation of a first musical tone in response to a first operation on the performance controller; in response to a second operation on the performance controller during the sound generation of the first musical tone, obtaining a first amplitude value of the first musical tone at a time of the second operation, and obtaining a second amplitude value at which a second musical tone is to be produced in response to the second operation on the performance controller; acquiring a parameter value for determining at least one of pitch, timbre, and volume of the second musical tone based on a ratio of the first amplitude value to the second amplitude value; and instructing sound generation of the second musical tone in accordance with the acquired parameter value.