G10H2210/105

SYSTEM, METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM FOR COLLABORATING ON A MUSICAL COMPOSITION OVER A COMMUNICATION NETWORK

A system and method for collaborating on a musical composition over a communication network. The system has processing circuitry that obtains the musical composition stored within a data storage device of the system, the musical composition including first musical input data associated with a first channel. The processing circuitry receives, via the communication network, second musical input data from a client device, the second musical input data being associated with a second channel; generates a data block based on the received second musical input data, the generated data block including synchronization data associated with the second musical input data relative to at least a portion of the musical composition; and transmits the data block to memory that is accessible, via the communication network, to the client device and other client devices collaborating on the musical composition.
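The abstract's central object is the "data block": second-channel input plus synchronization data relating it to the stored composition. A minimal sketch of what such a block might hold (field names and the note-tuple shape are assumptions, not the patent's actual format):

```python
from dataclasses import dataclass, field
import time

# Hypothetical shape of the described "data block": musical input on a second
# channel plus synchronization data tying it to the existing composition.
@dataclass
class DataBlock:
    channel_id: int           # the second channel the input belongs to
    notes: list               # received musical input, e.g. (pitch, start, duration)
    sync_offset_beats: float  # position relative to the stored composition
    created_at: float = field(default_factory=time.time)

def make_data_block(channel_id, notes, sync_offset_beats):
    """Package received musical input into a block other collaborators can fetch."""
    return DataBlock(channel_id, notes, sync_offset_beats)

block = make_data_block(2, [("C4", 0.0, 1.0), ("E4", 1.0, 1.0)], sync_offset_beats=16.0)
print(block.channel_id, block.sync_offset_beats)
```

Other client devices would read such blocks from shared memory and merge them into their local view of the composition using the synchronization offset.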

Comparison Training for Music Generator
20220059062 · 2022-02-24 ·

Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. A music generation system includes a music generation subsystem and a music classification subsystem. The music generation subsystem may generate output music content according to music parameters that define policy for generating music. The classification subsystem may be used to classify whether music is generated by the music generation subsystem or is professionally produced music content. The music generation subsystem may implement an algorithm that is reinforced by prediction output from the music classification subsystem. Reinforcement may include tuning the music parameters to generate more human-like music content.
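The reinforcement loop described, a generator tuned against a classifier that distinguishes generated from professionally produced music, can be sketched as follows. The classifier, generator, and tuning rule here are trivial stand-ins for illustration, not the patented subsystems:

```python
import random

# Illustrative sketch of comparison training: a generator adjusts a music
# parameter based on how often a classifier scores its output as human-like.
# The scoring rule ("human-like" = small pitch leaps) is an assumption.

def classify_human_probability(piece):
    """Stand-in classifier: higher score for melodies with smaller pitch leaps."""
    leaps = [abs(b - a) for a, b in zip(piece, piece[1:])]
    return 1.0 / (1.0 + sum(leaps) / max(len(leaps), 1))

def generate(max_leap, length=8, rng=None):
    """Stand-in generator: random walk over pitches, bounded by a leap parameter."""
    rng = rng or random.Random(0)
    pitch, piece = 60, [60]
    for _ in range(length - 1):
        pitch += rng.randint(-max_leap, max_leap)
        piece.append(pitch)
    return piece

def tune(max_leap=12, steps=20):
    """Reinforcement: shrink the leap parameter while output scores machine-like."""
    rng = random.Random(1)
    for _ in range(steps):
        score = classify_human_probability(generate(max_leap, rng=rng))
        if score < 0.5 and max_leap > 1:
            max_leap -= 1  # move the music parameter toward human-like output
    return max_leap

tuned = tune()
print(tuned)
```

The real system would use learned models on both sides; the point of the sketch is the feedback direction, with classifier predictions driving parameter updates.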

MUSIC SHAPER
20230395050 · 2023-12-07 ·

A music composition, editing, and playback system and method provides a user interface based on a geometric interpretation of music theory, replacing traditional modern music notation with geometric shapes; for example, chords are represented by polygons distinguished by color or hue.
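One common geometric interpretation places the twelve pitch classes on a circle, so that a chord's tones form a polygon. A small sketch of that idea (the clock-face layout and coordinates are assumptions, not the patent's specific design):

```python
import math

# Illustrative: plot a chord as a polygon on the 12-tone pitch-class circle,
# one vertex per chord tone, with C at the top and pitch classes clockwise.
def chord_polygon(pitch_classes, radius=1.0):
    """Return (x, y) vertices for each pitch class placed on a clock face."""
    verts = []
    for pc in pitch_classes:
        angle = math.pi / 2 - 2 * math.pi * pc / 12
        verts.append((radius * math.cos(angle), radius * math.sin(angle)))
    return verts

c_major = chord_polygon([0, 4, 7])   # C, E, G -> a triangle
print(len(c_major))
```

A UI built on this mapping could then fill each polygon with a color or hue keyed to the chord's quality or root.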

COMPUTING ORDERS OF MODELED EXPECTATION ACROSS FEATURES OF MEDIA

A method implemented by a determination engine is provided. The determination engine receives a media dataset comprising target piece music information, target piece audience information, corpus music information, corpus audience information, and corpus preference data. The determination engine determines a subset of the corpus music and preference information and determines at least one surprise factor of the subset of the corpus music and preference information across features at one of a plurality of orders. The determination engine learns a model that estimates a likelihood that time-varying surprise trends across the features achieve a preference level. The determination engine determines at least one surprise factor of the target piece music information across the features at the one of the plurality of orders and predicts, using the model, preference information using the time-varying surprise trends for the target piece music information across the features.
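A common way to operationalize "surprise at one of a plurality of orders" is information content under an n-gram model of the corpus, where n is the order. The sketch below uses that reading with add-one smoothing; the modeling choice is an assumption, not the patent's exact method:

```python
import math
from collections import Counter

# Hedged sketch: the surprise of each event is its negative log-probability
# under an order-n model estimated from the corpus.
def surprise_at_order(sequence, corpus, order=1):
    """Mean -log2 P(event | previous `order` events), estimated from the corpus."""
    context_counts = Counter(tuple(corpus[i:i + order]) for i in range(len(corpus) - order))
    joint_counts = Counter(tuple(corpus[i:i + order + 1]) for i in range(len(corpus) - order))
    total, n = 0.0, 0
    for i in range(order, len(sequence)):
        ctx = tuple(sequence[i - order:i])
        joint = tuple(sequence[i - order:i + 1])
        p = (joint_counts[joint] + 1) / (context_counts[ctx] + 2)  # smoothed estimate
        total += -math.log2(p)
        n += 1
    return total / max(n, 1)

corpus = ["C", "E", "G", "C", "E", "G", "C", "E", "G"]
familiar = surprise_at_order(["C", "E", "G", "C"], corpus, order=1)
novel = surprise_at_order(["C", "G", "E", "C"], corpus, order=1)
print(familiar < novel)
```

A target piece that follows the corpus's transition patterns scores low surprise, while one that breaks them scores high; the learned model would then relate such time-varying surprise trends to preference levels.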

METHODS AND SYSTEMS FOR INTERACTIVE LYRIC GENERATION
20210335334 · 2021-10-28 ·

Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to a Lyric Engine. In various embodiments, the Lyric Engine receives, at a user interface, a selection of at least one song criteria. The Lyric Engine receives a first set of suggested song lyrics that correspond to the selected song criteria. The Lyric Engine presents, in the user interface, the first set of suggested song lyrics. The Lyric Engine receives, at the user interface, a selection of one or more of the suggested song lyrics in the first set. The Lyric Engine receives a second set of suggested song lyrics that correspond to the selected song criteria and the selected song lyrics. The Lyric Engine concurrently presents, in the user interface, the selected song lyrics and the second set of suggested song lyrics.
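The interaction loop, suggest against criteria, let the user select, then suggest again conditioned on both criteria and selections, can be sketched with a trivial stand-in suggester (the function and lyric pool are illustrative, not the patented engine):

```python
# Sketch of the Lyric Engine loop: each round suggests lyrics matching the
# chosen criteria while excluding lines the user has already accepted.
def suggest_lyrics(criteria, accepted, pool):
    """Return pool lines matching the criteria that aren't already accepted."""
    return [line for line in pool if criteria in line and line not in accepted]

pool = [
    "love under neon rain",
    "love on a broken train",
    "gravel roads and sun",
]
accepted = []

first = suggest_lyrics("love", accepted, pool)   # first suggestion set
accepted.append(first[0])                        # user selects a line
second = suggest_lyrics("love", accepted, pool)  # refined second set
print(accepted, second)
```

In the described UI, `accepted` and `second` would be presented concurrently, so the user sees chosen lines alongside the next round of suggestions.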

System and method for AI controlled song construction

According to an embodiment, there is provided a system and method for automatically generating a complete music work from a partially completed work provided by a user. One approach uses an artificial intelligence (AI) engine that is trained by creating incomplete works from a database of complete works and then instructing the AI to complete the incomplete works. A comparison is made between the completed works and the originals to determine the effectiveness of the training process. After the AI is trained, it is applied to the user's incomplete work to produce a final music item.
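The training procedure, derive incomplete works from complete ones, have the AI complete them, and compare against the originals, can be sketched as below. The completion model is a deliberately trivial stand-in (repeat the last element) so the pipeline shape is clear:

```python
# Sketch of the training scheme: mask part of each complete work, ask the
# model to fill it in, and score the completion against the original.
def make_incomplete(work, keep=4):
    """Truncate a complete work to produce a training input."""
    return work[:keep]

def complete(incomplete, target_length):
    """Stand-in AI: naively extend by repeating the final element."""
    out = list(incomplete)
    while len(out) < target_length:
        out.append(out[-1])
    return out

def effectiveness(completed, original):
    """Fraction of positions where the completion matches the original."""
    hits = sum(a == b for a, b in zip(completed, original))
    return hits / len(original)

original = ["C", "E", "G", "G", "G", "G"]
restored = complete(make_incomplete(original), len(original))
print(effectiveness(restored, original))
```

After training, the same `complete` step is applied to the user's partial work to produce the final music item; the `effectiveness` comparison is only used during training.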

SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
20210319774 · 2021-10-14 ·

The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user determined sound database sources.
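As a minimal illustration of composing audio from a visual image with a user-determined sound set, one could map pixel brightness to indices into a sound bank. Everything in this sketch (the mapping rule, the bank) is an assumption for illustration:

```python
# Illustrative: map each pixel's brightness (0-255) to a sound drawn from a
# user-selected sound bank, a toy version of image-to-audio composition.
def compose_from_image(pixels, sound_bank):
    """Bucket brightness values evenly across the sound bank."""
    step = 256 // len(sound_bank)
    return [sound_bank[min(p // step, len(sound_bank) - 1)] for p in pixels]

pixels = [0, 64, 128, 255]             # a tiny grayscale "image"
sound_bank = ["C3", "G3", "C4", "G4"]  # user-determined sound sources
print(compose_from_image(pixels, sound_bank))
```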

COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING AESTHETIC CHORD PROGRESSIONS AND KEY MODULATIONS IN MUSICAL COMPOSITIONS
20210407477 · 2021-12-30 ·

Computer-based systems, devices, and methods for automatically generating aesthetic chord progressions and key modulations in musical compositions are described. Known harmonic relationships are expanded upon to produce a much richer set of harmonic transition probability models compared to conventional music theory, and these models are leveraged by a computer-based musical composition system to generate new musical compositions and variations of existing musical compositions. Techniques for enabling a computer-based musical composition system to automatically determine when to introduce a key modulation, what key to modulate to, and what chord progression(s) to use within the new key are all described.
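A harmonic transition probability model, in its simplest form, is a first-order Markov chain over chord symbols that can be sampled to produce a progression. The probabilities below are illustrative placeholders, not the patent's expanded models:

```python
import random

# Toy harmonic transition probability model: P(next chord | current chord),
# sampled to generate a progression. Weights are assumptions for illustration.
TRANSITIONS = {
    "I":  {"IV": 0.4, "V": 0.4, "vi": 0.2},
    "IV": {"V": 0.6, "I": 0.4},
    "V":  {"I": 0.7, "vi": 0.3},
    "vi": {"IV": 0.6, "V": 0.4},
}

def next_chord(current, rng):
    choices, weights = zip(*TRANSITIONS[current].items())
    return rng.choices(choices, weights=weights)[0]

def progression(start="I", length=8, seed=0):
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        chords.append(next_chord(chords[-1], rng))
    return chords

prog = progression()
print(prog)
```

The described system enriches exactly this kind of table far beyond textbook voice-leading rules, and adds transitions between keys, so that modulations are chosen from the same probabilistic machinery.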

Media-media augmentation system and method of composing a media product
11114074 · 2021-09-07 ·

A media-content augmentation system includes a processing system that receives input data in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with at least one reference media file, and then splices or fades together selected parts of the media file, thus generating, as an output, a media product in which transitions between its contextual themes are aligned with selected temporal events in the input data. The temporally-varying events take the form of a beginning and an end in the case of a sustained feature, or a specific point in time for a hit point. A method aligns sections in digital media files with temporally-varying events data to compose a media product. The system augments a sensory experience of a user by dynamically changing and then playing selected media files within the context of the categorized themes input to the processing system.
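The two event shapes the abstract names, a begin/end span for a sustained feature and a single time for a hit point, both reduce to a set of candidate transition times that media sections can snap to. A small sketch (the event field names are assumptions):

```python
# Sketch: flatten temporally-varying events into the sorted set of times
# where media-section transitions may land. Sustained features contribute
# their begin and end; hit points contribute a single time.
def transition_times(events):
    times = set()
    for ev in events:
        if ev["type"] == "sustained":
            times.add(ev["begin"])
            times.add(ev["end"])
        elif ev["type"] == "hit":
            times.add(ev["time"])
    return sorted(times)

events = [
    {"type": "sustained", "begin": 2.0, "end": 9.5},
    {"type": "hit", "time": 4.25},
]
print(transition_times(events))
```

The splicing/fading step would then cut between theme-matched media sections at exactly these times.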

SYSTEMS AND METHODS FOR GENERATING AUDIO CONTENT IN A DIGITAL AUDIO WORKSTATION

A method includes displaying a graphical user interface (GUI) for a step sequencer in a digital audio workstation. The GUI includes a sequence of user interface elements corresponding to a portion of a roll for an audio composition, each user interface element representing a respective time interval for a note. The method includes receiving a user input interacting with a first user interface element and, in response to the user input, splitting a played note represented by the first user interface element into two or more played notes. The method further includes providing the audio composition for playback by a speaker.
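The core data operation, splitting one played note's time interval into two or more contiguous played notes, can be sketched as below. The `(pitch, start, duration)` tuple shape is an assumption for illustration:

```python
# Sketch of the note-splitting interaction: a step holds a note spanning a
# time interval; a user input splits it into `parts` equal played notes.
def split_note(note, parts=2):
    pitch, start, duration = note
    piece = duration / parts
    return [(pitch, start + i * piece, piece) for i in range(parts)]

held = ("C4", 0.0, 1.0)   # one note occupying a full step
print(split_note(held))
```

In the GUI, the first user interface element would correspondingly be redrawn as two shorter elements covering the same overall interval.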