Patent classifications
G10H2210/125
MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM
A music generation device includes: an acquisition unit that acquires first stream data and second stream data different from the first stream data; an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment, based on a change in the first stream data; a melody generation unit that generates melody information, which is music data indicating a melody, based on a change in the second stream data; a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information; and an output unit that outputs the generated musical piece information.
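The abstract above describes a concrete pipeline: two distinct input streams drive accompaniment and melody generation respectively, the melody is adjusted to the key of the generated accompaniment, and the two are then combined. A minimal toy sketch of that flow, with entirely illustrative generation rules and a hard-coded C-major key standing in for the claimed units:

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the assumed target key

def generate_accompaniment(stream):
    # Toy rule: derive one chord root per change in the first stream.
    return [value % 12 for value in stream]

def generate_melody(stream):
    # Toy rule: map each change in the second stream to a MIDI-style pitch.
    return [60 + value % 13 for value in stream]

def adjust_to_key(melody, scale=C_MAJOR):
    # Melody adjustment unit: snap out-of-key pitches down to the
    # nearest in-key pitch so the melody fits the accompaniment's key.
    adjusted = []
    for pitch in melody:
        while pitch % 12 not in scale:
            pitch -= 1
        adjusted.append(pitch)
    return adjusted

def combine(accompaniment, melody):
    # Music combining unit: pair chord roots with melody notes
    # to form the output "musical piece information".
    return list(zip(accompaniment, melody))

piece = combine(generate_accompaniment([0, 7, 5]),
                adjust_to_key(generate_melody([1, 6, 10])))
```

The sketch only fixes the order of operations the abstract claims (generate, adjust to key, combine); the musical rules themselves are placeholders.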
Method and system for hybrid AI-based song variant construction
According to an embodiment, there is provided a system and method for automatic AI-based song construction based on ideas of a user. It combines expert knowledge resident in an expert engine, which contains rules for musically correct song generation, with machine learning in an AI-based audio loop selection engine that selects fitting audio loops from a database of audio loops.
AI-BASED DJ SYSTEM AND METHOD FOR DECOMPOSING, MIXING AND PLAYING OF AUDIO DATA
The present invention relates to a method for processing and playing audio data comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device 10 for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit 32 and a playing unit 34 for playing recombined output data. In addition, the present invention relates to a method and a device for representing audio data, e.g. on a display.
Modular automated music production server
A music production system comprises: a computer interface comprising at least one input for receiving an external request for a piece of music and at least one output for transmitting a response to the external request which comprises or indicates a piece of music incorporating first music data; a first music production component configured to process second music data according to at least a first input setting so as to generate the first music data; a second music production component configured to receive via the computer interface an internal request, and provide the second music data based on at least a second input setting denoted by the internal request; and a controller configured to determine in response to the external request the first and second input settings, and instigate the internal request via the computer interface.
Audio Source Separation Processing Pipeline Systems and Methods
Systems and methods for audio source separation include receiving a single-track audio input sample having an unknown mixture of audio signals generated from a plurality of audio sources, and separating one or more of the audio sources from the single-track audio input sample using a sequential audio source separation model. Separating one or more of the audio sources may include defining a processing recipe comprising a plurality of source separation processes configured to receive an audio input mixture and output one or more separated source signals and a remaining complement signal mixture, and processing the single-track audio input sample in accordance with the processing recipe to generate a plurality of audio stems separated from the unknown mixture of audio signals.
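The "processing recipe" described above is a sequence of separation steps, each of which emits one separated stem and passes the remaining complement mixture to the next step. A hedged sketch of that sequential structure, with toy value-range separators standing in for real separation models:

```python
def make_band_separator(lo, hi):
    # Toy separator: treats samples with values in [lo, hi) as the source.
    # A real recipe step would wrap a trained source separation model.
    def separate(mixture):
        stem = [x for x in mixture if lo <= x < hi]
        remainder = [x for x in mixture if not (lo <= x < hi)]
        return stem, remainder
    return separate

def run_recipe(mixture, recipe):
    # Apply each step in order: peel one stem off, feed the complement on.
    stems = []
    for name, separator in recipe:
        stem, mixture = separator(mixture)
        stems.append((name, stem))
    stems.append(("residual", mixture))  # whatever no step claimed
    return stems

recipe = [("vocals", make_band_separator(5, 10)),
          ("drums", make_band_separator(0, 3))]
stems = run_recipe([1, 6, 2, 9, 4], recipe)
```

The key property the abstract claims is preserved here: every step receives the previous step's complement, so the stems plus the residual partition the original mixture.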
Method, system, and computer-readable medium for creating song mashups
A system, method and computer product for combining audio tracks. In one example embodiment herein, the method comprises determining at least one music track that is musically compatible with a base music track, aligning those tracks in time, and combining the tracks. In one example embodiment herein, the tracks may be music tracks of different songs, the base music track can be an instrumental accompaniment track, and the at least one music track can be a vocal track. Also in one example embodiment herein, the determining is based on musical characteristics associated with at least one of the tracks, such as an acoustic feature vector distance between tracks, a likelihood of at least one track including a vocal component, a tempo, or musical key. Also, determining of musical compatibility can include determining at least one of a vertical musical compatibility or a horizontal musical compatibility among tracks.
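The compatibility determination above names three signals: an acoustic feature vector distance, tempo, and musical key. A minimal sketch of scoring candidate vocal tracks against an instrumental base track on those criteria; the feature values, weights, and track dictionaries are illustrative, not taken from the patent:

```python
import math

def compatibility(base, candidate):
    # Lower score = more compatible. Equal weighting is an assumption.
    feat_dist = math.dist(base["features"], candidate["features"])
    tempo_gap = abs(base["bpm"] - candidate["bpm"]) / base["bpm"]
    key_match = 0.0 if base["key"] == candidate["key"] else 1.0
    return feat_dist + tempo_gap + key_match

base = {"features": [0.2, 0.8], "bpm": 120, "key": "Am"}
vocals = [
    {"name": "take1", "features": [0.3, 0.7], "bpm": 118, "key": "Am"},
    {"name": "take2", "features": [0.9, 0.1], "bpm": 140, "key": "F#"},
]
best = min(vocals, key=lambda t: compatibility(base, t))
```

After selection, the abstract's remaining steps (time alignment and combination) would operate on `best` and `base`.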
AI BASED REMIXING OF MUSIC: TIMBRE TRANSFORMATION AND MATCHING OF MIXED AUDIO DATA
The present invention provides a method for processing audio data, comprising the steps of: providing input audio data containing a mixture of audio data, including first audio data of a first musical timbre and second audio data of a second musical timbre different from said first musical timbre; decomposing the input audio data to provide decomposed data representative of the first audio data; and transforming the decomposed data to obtain third audio data.
SYSTEMS AND METHODS FOR DYNAMICALLY SYNTHESIZING AUDIO FILES ON A MOBILE DEVICE
Embodiments of the present invention provide for systems and methods for dynamically synthesizing audio files on a mobile device. Embodiments of the present invention are directed to an exemplary audio file recorder with unique music-related features, such as synchronization of multiple pieces of content by the beats per minute, with functionality for content creation, editing, layering of multiple tracks, and sharing. Further, embodiments of the present invention are also related to an exemplary computer software platform which allows users to record any audio into their mobile device and organize the audio files in a manner that allows the files to be easily retrieved and shared.
METHOD, SYSTEM, AND MEDIUM FOR AFFECTIVE MUSIC RECOMMENDATION AND COMPOSITION
A method, system, and medium for affective music recommendation and composition. A listener's current affective state and target affective state are identified, and an audio stream, such as a music playlist, is generated with the intent of effecting a controlled trajectory of the listener's affective state from the current state to the target state. The audio stream is generated by a machine learning system trained using data from the listener and/or other users indicating the effectiveness of specific audio segments, or audio segments having specific features, in effecting the desired affective trajectory. The audio stream is presented to the user as an auditory stimulus. The machine learning system may be updated based on the affective state changes induced in the listener after exposure to the auditory stimulus. Over time, the machine learning system gains a robust understanding of the relationship between music and human affect, and thus the machine learning system may also be used to compose, master, and/or adapt music configured to induce specific affective responses in listeners.
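The "controlled trajectory" described above can be pictured as stepwise planning in an affect space: at each step, choose the audio segment whose learned affect shift moves the listener's state closest to the target. The sketch below is a greedy toy version; the (valence, arousal) coordinates and per-segment shifts are illustrative stand-ins for what the trained machine learning system would predict:

```python
import math

def plan_stream(current, target, segments, steps=2):
    # Greedily pick, at each step, the segment whose predicted affect
    # shift lands the listener's state nearest the target state.
    playlist, state = [], list(current)
    for _ in range(steps):
        def after(seg):
            return [s + d for s, d in zip(state, seg["shift"])]
        best = min(segments, key=lambda seg: math.dist(after(seg), target))
        state = after(best)
        playlist.append(best["name"])
    return playlist, state

segments = [{"name": "calm", "shift": [0.1, -0.2]},
            {"name": "upbeat", "shift": [0.2, 0.1]}]
playlist, state = plan_stream(current=[0.0, 0.4], target=[0.4, 0.0],
                              segments=segments)
```

In the full system, the measured post-stimulus affect would be fed back to update the model rather than assumed equal to the predicted shift.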
METHODS AND SYSTEMS FOR FACILITATING GENERATING MUSIC IN REAL-TIME USING PROGRESSIVE PARAMETERS
The invention generates progressive music in real-time for video games using random, seeded random, and manually input variables to affect melody, phrase length, harmonic chords and complexity, and percussive accompaniment. As the game is played, variables may be passed in that change the music to increase or decrease complexity and tension levels and to interpolate between styles. The generated music then progresses from stable, simple, and consonant to more tense, dissonant, and complex melodies, harmonies and rhythms, and back to the original stage as a musical resolution. Through variables controlling musical parameters, music may progressively change from the atonal region where there is no clear resolution or stability, to tonal, where there is only consonance and stability, and anywhere in between. These variables are assigned through a middleware or a game-engine setup that uses the current device as an audio source plugin, or manually coded into the individual video game.
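The progression described above amounts to mapping a small set of control variables onto musical parameters. A hedged sketch, assuming a single tension variable in [0, 1] that interpolates controls from consonant and simple to dissonant and complex; the parameter names and ranges are made up for illustration:

```python
def music_params(tension):
    # tension = 0.0 -> stable, simple, consonant
    # tension = 1.0 -> tense, complex, dissonant
    return {
        "dissonance": round(tension, 2),          # 0 = purely consonant
        "phrase_length": round(4 + 4 * tension),  # bars, grows with tension
        "chord_complexity": 3 + int(tension * 4), # notes per chord
    }

# A game engine or middleware would pass tension in each frame/beat,
# ramping it up during conflict and back down for musical resolution.
resolution = music_params(0.0)
climax = music_params(1.0)
```

Seeded-random and manual variables from the abstract would simply be further inputs alongside `tension` in the same mapping.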