Patent classifications
G10H2210/131
MUSIC CONTEXT SYSTEM AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT HAVING REGARD TO MUSICAL TIMING
Due to discrepancies in musical timing signatures, the invention assesses whether a recorded displacement, expressed in beats and fractions, between the exit and entry points of a potential musical splice or cut permits seamless splicing of different musical sections. Assessment is achieved by establishing a third time base of pulses whose length depends on a lowest common multiple of the fractions within the respective bars of the different sections, with the bars of the respective sections then partitioned into an equal number of fixed-length pulses. A coefficient, being the ratio between pulse counts in the different sections, aligns the different time signatures. The coefficient identifies corresponding locations of a cut point, related to a suitable anacrusis, in terms of an aligned bar, beat, quaver and fraction in the differing time signatures, and ensures that an anacrusis in one time signature is interchangeable with its counterpart in another.
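The pulse time base and coefficient described above can be sketched as follows, assuming two sections whose time signatures are given as numerator/denominator pairs; the function name and return shape are illustrative, not taken from the patent:

```python
from math import lcm
from fractions import Fraction

def common_pulse_grid(sig_a, sig_b):
    """Partition the bars of two time signatures into equal fixed-length pulses.

    sig_a, sig_b: (numerator, denominator) tuples, e.g. (4, 4) and (6, 8).
    Returns (pulses_per_bar_a, pulses_per_bar_b, coefficient), where the
    coefficient is the ratio of pulse counts between the two sections.
    """
    denom_lcm = lcm(sig_a[1], sig_b[1])          # lowest common multiple of the fractions
    pulses_a = sig_a[0] * denom_lcm // sig_a[1]  # fixed-length pulses per bar, section A
    pulses_b = sig_b[0] * denom_lcm // sig_b[1]  # fixed-length pulses per bar, section B
    coefficient = Fraction(pulses_a, pulses_b)   # aligns the two time signatures
    return pulses_a, pulses_b, coefficient

pa, pb, c = common_pulse_grid((4, 4), (6, 8))  # a 4/4 bar and a 6/8 bar share a quaver pulse
```

Here a 4/4 bar partitions into 8 pulses and a 6/8 bar into 6, so the coefficient 4/3 maps a cut-point location expressed in one grid onto the other.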
Systems, devices, and methods for decoupling note variation and harmonization in computer-generated variations of music data objects
Computer-based systems, devices, and methods for generating variations of musical compositions are described. Musical compositions stored in digital media include one or more music data object(s) that encode notes. A first set of notes is characterized and a transformation is applied to replace at least one note in the first set of notes with at least one note in a second set of notes. The transformation may explore or call upon the full range of musical notes available without being constrained by conventions of musicality and harmony. For each particular note in the second set of notes that replaces a note in the first set of notes, whether the particular note is in musical harmony with other notes in the music data object is separately assessed and, if not, the particular note is adjusted to bring it into musical harmony with other notes in the music data object.
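The two-stage scheme (an unconstrained transformation followed by a separate per-note harmony assessment and adjustment) can be sketched as below; the C-major pitch-class set and the nearest-step search are illustrative assumptions, not the patent's harmonization method:

```python
# Hypothetical harmony model: pitch classes of the C major scale.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def harmonize(note, scale=C_MAJOR):
    """Move a MIDI note by the smallest step that lands its pitch class in
    the scale; notes already in harmony are returned unchanged."""
    for delta in (0, -1, 1, -2, 2):
        if (note + delta) % 12 in scale:
            return note + delta
    return note

def vary(notes, transform, scale=C_MAJOR):
    # Stage 1: unconstrained note replacement.
    # Stage 2: separate harmony assessment/adjustment for each new note.
    return [harmonize(transform(n), scale) for n in notes]

melody = [60, 62, 64]                   # C, D, E
varied = vary(melody, lambda n: n + 6)  # free tritone shift, then snapped into harmony
```

The transformation itself is free to leave the scale entirely; only the second stage pulls each replacement note back into harmony.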
Systems, devices, and methods for computer-generated musical note sequences
Computer-based systems, devices, and methods for generating musical note sequences are described. One or more musical composition(s) stored in digital media include one or more data object(s) that encode notes and/or note sequences. At least one note sequence is processed to form a time-ordered sequence of parallel notes, which is analyzed to determine a k-back probability transition matrix for the at least one note sequence. An attribute, such as a style, of the at least one note sequence is thus encoded and used to generate new note sequences that embody a similar attribute or style. In some implementations, the at least one note sequence may include a concatenated set of note sequences representative of a particular library of musical compositions.
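A k-back probability transition matrix of this kind can be sketched as an order-k Markov model over a note sequence (a simplification: the handling of parallel notes is omitted, and all names are illustrative):

```python
from collections import defaultdict, Counter
import random

def k_back_matrix(sequence, k):
    """Count transitions from each k-note context to the note that follows it."""
    matrix = defaultdict(Counter)
    for i in range(len(sequence) - k):
        context = tuple(sequence[i:i + k])
        matrix[context][sequence[i + k]] += 1
    return matrix

def generate(matrix, seed, length, rng=random):
    """Generate new notes embodying the encoded attribute; assumes len(seed) == k."""
    out = list(seed)
    for _ in range(length):
        counter = matrix.get(tuple(out[-len(seed):]))
        if not counter:
            break  # unseen context: no transition data
        notes, weights = zip(*counter.items())
        out.append(rng.choices(notes, weights=weights)[0])
    return out
```

Training the matrix on a concatenated set of note sequences from one library would, as the abstract suggests, bias generation toward that library's style.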
Media-media augmentation system and method of composing a media product
A media-content augmentation system includes a database with a multiplicity of media files and associated metadata. Each media file is mapped to at least one contextual theme defined by beginning and end timings. A processing system couples to the database, and an input couples to the processing system. The input is in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with selected media files relevant to the themes, and then splices or fades together selected media files to reflect the events as the input varies with time, thus generating as an output a media product in which transitions between media are aligned with the temporally-varying events. The database may contain sections of digital media files. A method aligns sections in digital media files with temporally-varying events data to compose a media product.
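The theme-resolution step can be sketched as below; the data model and matching-by-theme-name logic are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class MediaSection:
    name: str     # media file or section identifier
    theme: str    # contextual theme the section is mapped to
    start: float  # beginning timing, seconds
    end: float    # end timing, seconds

def compose(events, library):
    """Resolve each (time, theme) event to the first media section mapped to
    that theme, yielding a splice list aligned with the events."""
    timeline = []
    for time, theme in sorted(events):
        match = next((m for m in library if m.theme == theme), None)
        if match is not None:
            timeline.append((time, match.name))
    return timeline

library = [MediaSection("calm.mp4", "calm", 0.0, 30.0),
           MediaSection("storm.mp4", "tense", 30.0, 55.0)]
events = [(0.0, "calm"), (42.0, "tense")]
```

The resulting timeline places each transition between media at the moment the categorized theme of the events data changes.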
AUDIO MIXING SONG GENERATION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
A method and an apparatus for generating a remix. The method comprises: obtaining at least two audios which are different singing versions of a same song; extracting, from each audio, a vocal signal and an instrumental signal to obtain a vocal set comprising the vocal signal of each audio and an instrumental set comprising the instrumental signal of each audio; aligning tracks of all vocal signals in the vocal set based on reference rhythm information selected from rhythm information of all vocal signals in the vocal set, where all vocal signals having the aligned tracks serve as to-be-mixed vocal audios; determining an instrumental signal, of which a track is aligned with those of the to-be-mixed vocal audios, from the instrumental set as a to-be-mixed instrumental audio; and mixing the to-be-mixed vocal audios with the to-be-mixed instrumental audio to obtain the remix.
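The alignment step can be sketched as computing a time-stretch factor for each vocal track against the reference rhythm; treating the rhythm information as a tempo in BPM is an assumption, as the claim only specifies "reference rhythm information":

```python
def stretch_factors(bpms, reference_index=0):
    """Duration-stretch factor aligning each vocal track's rhythm with the
    reference track selected from the vocal set.

    Stretching a track's duration by bpm/ref changes its tempo from bpm to
    ref, so all stretched tracks share the reference tempo.
    """
    ref = bpms[reference_index]
    return [bpm / ref for bpm in bpms]

factors = stretch_factors([100.0, 120.0, 90.0])  # first version is the reference
```

A track at 120 BPM would be stretched to 1.2x its duration to match a 100 BPM reference, after which the aligned vocals and a similarly aligned instrumental can be mixed sample-wise.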
Auditory augmentation system and method of composing a media product
An auditory augmentation system includes a database with a multiplicity of audio sections and associated metadata for digital audio files. Each audio section is mapped to a contextual theme, each contextual theme being mapped to an audio section having an entry point and an exit point. The entry and exit points support seamless splice or fade transitions between different audio sections. A processing system couples to the database along with an input; the input is in the form of temporally-varying events data that defines a temporal input. The processing system resolves the temporal input into one or more of a plurality of categorized contextual themes, correlates the categorized contextual themes with metadata associated with selected audio sections relevant to the one or more categorized contextual themes, and splices or fades together selected audio sections, and generates, as an output, a media product in which transitions between audio sections are seamless.
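Entry/exit compatibility between audio sections can be sketched as a matching problem over the stored points; the dictionary keys and the equality test on beat positions are illustrative assumptions:

```python
def seamless_pairs(sections):
    """Pairs (a, b) where section a's exit point coincides with section b's
    entry point, so a splice or fade from a into b is seamless."""
    return [(a["id"], b["id"])
            for a in sections for b in sections
            if a["id"] != b["id"] and a["exit"] == b["entry"]]

sections = [{"id": "intro", "entry": 0.0, "exit": 4.0},
            {"id": "verse", "entry": 4.0, "exit": 12.0}]
```

The processing system could restrict its splicing to pairs returned by such a check so that every transition in the output media product is seamless.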
COGNITIVE MUSIC ENGINE USING UNSUPERVISED LEARNING
A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector as input to a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net comprises a plurality of computing layers, each computing layer composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher-level hidden layer in the unsupervised neural net. The output vector is used to create an output musical piece.
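The perturbation step can be sketched as adding bounded noise to the input vector before it reaches the visible layer; the noise scale is an assumption, and the unsupervised net itself is omitted:

```python
import random

def perturb(input_vector, scale=0.05, rng=None):
    """Perturb each musical characteristic by a small random amount, producing
    the perturbed input vector fed to the first visible layer."""
    rng = rng or random.Random()
    return [v + rng.uniform(-scale, scale) for v in input_vector]

perturbed = perturb([0.5, 0.25, 0.75], rng=random.Random(0))
```

Seeding the generator, as above, makes the perturbation reproducible during experimentation.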
Method Of Editing Audio Signals Using Separated Objects And Associated Apparatus
A method comprises providing an audio file comprising two or more discrete tracks; separating the two or more discrete tracks; setting a limit on an amount at least one of the two or more discrete tracks may be altered; and outputting the separated and limited discrete tracks as a file for use by an end user.
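The limit on alteration can be sketched as clamping the change an end user requests on a separated track; the decibel unit and the 6 dB default are illustrative assumptions:

```python
def limit_alteration(requested_gain_db, limit_db=6.0):
    """Clamp the end user's requested level change on a separated track to
    the limit set when the file was output."""
    return max(-limit_db, min(limit_db, requested_gain_db))
```

Requests within the limit pass through unchanged; anything beyond it is held at the boundary.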
SYSTEMS AND METHODS FOR AUTOMATIC MIXING OF MEDIA
Audio mix information is received from a plurality of users. Mix rules are determined from the audio mix information from the plurality of users, wherein the mix rules include a first mix rule associated with a first audio item. The first mix rule relates to an overlap of the first audio item with another audio item. The first mix rule is made available to one or more clients. After making the first mix rule available, an indication is received from a respective client device that the first audio item is to be mixed with a second audio item at the respective client device in accordance with the first mix rule. In response to the indication, a specification of the first mix rule is transmitted to the respective client device to be applied by the respective client device to generate a transition between the first audio item and the second audio item.
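Applying an overlap-style mix rule at the client can be sketched as scheduling the second item against the first; the parameter names and return shape are assumptions for illustration:

```python
def apply_overlap_rule(duration_a, duration_b, overlap):
    """Start item B `overlap` seconds before item A ends, returning
    (b_start, total_duration) for the generated transition."""
    b_start = max(0.0, duration_a - overlap)
    return b_start, b_start + duration_b
```

For a 30-second first item and a 5-second overlap rule, the second item begins at 25 seconds, so the crossfade region spans the rule's overlap.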