Patent classifications
G10H2210/111
Autonomous music composition and performance system employing real-time analysis of a musical performance to automatically compose and perform music to accompany the musical performance
An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes musical signals from the set of real or synthetic musical instruments, composes and generates music in real-time that augments the music being played by the band of musicians, and/or composes and generates music for subsequent playback, review and consideration by the human musicians.
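The buffer-and-analyze loop described in this abstract can be illustrated with a minimal sketch: incoming note events are buffered, the prevailing pitch class of the live performance is estimated from a pitch-class histogram, and a simple triad is generated as accompaniment. The function names and the major-triad assumption are illustrative, not the patent's actual method.

```python
from collections import Counter

MAJOR_TRIAD = (0, 4, 7)  # root, major third, perfect fifth (semitones)

def estimate_root(note_buffer):
    """Pick the most frequent pitch class in the buffered performance."""
    counts = Counter(n % 12 for n in note_buffer)
    return counts.most_common(1)[0][0]

def accompany(note_buffer, octave=4):
    """Generate a triad (as MIDI note numbers) rooted on the
    estimated pitch class of the live performance."""
    root = estimate_root(note_buffer)
    base = 12 * (octave + 1) + root  # MIDI note number for the root
    return [base + interval for interval in MAJOR_TRIAD]

# A C-centered phrase (MIDI 60 = C4) yields a C major triad.
print(accompany([60, 64, 67, 60, 62, 60]))  # [60, 64, 67]
```

A real system would replace the histogram with proper key/chord detection and run this per buffer window in real time.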
Automated music composition and generation systems, engines and methods employing parameter mapping configurations to enable automated music composition and generation
An automated music composition and generation system includes a graphical user interface (GUI) based system user interface for enabling system users to review and select one or more musical experience descriptors as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the GUI-based system user interface, for receiving, storing and processing the musical experience descriptors and time and/or space parameters, and composing and generating digital pieces of music, each containing a set of musical notes arranged and performed in the digital piece of composed music. A system network and methods are provided for designing and developing parameter mapping configurations (PMCs) used in the automated music composition and generation engine so as to enable the automated music composition and generation engine to automatically compose and generate music in response to musical experience descriptors and time and/or space parameters provided as input to the system.
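The parameter mapping idea can be sketched as a lookup from a musical experience descriptor plus a time parameter to concrete composition parameters. The descriptor names and parameter fields below are illustrative assumptions, not the patent's actual mapping tables.

```python
# Each musical experience descriptor maps to concrete composition
# parameters the engine would consume (all values are assumptions).
PARAMETER_MAP = {
    "happy":    {"tempo_bpm": 128, "mode": "major", "key": "C"},
    "sad":      {"tempo_bpm": 70,  "mode": "minor", "key": "A"},
    "dramatic": {"tempo_bpm": 100, "mode": "minor", "key": "D"},
}

def configure_engine(descriptor, length_seconds):
    """Resolve a descriptor plus a time parameter into a parameter set."""
    params = dict(PARAMETER_MAP[descriptor])
    # Derive the number of measures from the requested duration.
    beats = params["tempo_bpm"] * length_seconds / 60
    params["measures"] = round(beats / 4)  # assuming 4/4 time
    return params

print(configure_engine("happy", 30))
# {'tempo_bpm': 128, 'mode': 'major', 'key': 'C', 'measures': 16}
```

The patent's PMCs are far richer (probabilistic tables, many subsystems); this shows only the input-to-parameter resolution step.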
Method of composing a piece of digital music using musical experience descriptors to indicate what, when and how musical events should appear in the piece of digital music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of composing a piece of digital music using musical experience descriptors to indicate what, when and how particular musical events should occur in the piece of digital music to be automatically composed and generated. The method uses the system user interface to select one or more musical experience descriptors and apply the musical experience descriptors along a timeline representation of a piece of digital music to be automatically composed and generated by the automated music composition and generation engine.
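Applying descriptors along a timeline can be pictured as pinning descriptor events to time points and asking which descriptor governs a given moment. The event model here is an assumption for illustration.

```python
from bisect import bisect_right

class DescriptorTimeline:
    """Musical experience descriptors pinned to a piece's timeline."""

    def __init__(self):
        self._events = []  # sorted (time_seconds, descriptor) pairs

    def add(self, time_seconds, descriptor):
        self._events.append((time_seconds, descriptor))
        self._events.sort()

    def descriptor_at(self, time_seconds):
        """Return the descriptor in effect at the given time."""
        times = [t for t, _ in self._events]
        i = bisect_right(times, time_seconds)
        return self._events[i - 1][1] if i else None

timeline = DescriptorTimeline()
timeline.add(0, "calm")
timeline.add(20, "dramatic")
print(timeline.descriptor_at(5), timeline.descriptor_at(25))  # calm dramatic
```

The engine would then compose each timeline segment under the descriptor in effect there.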
System and method for generating music from electrodermal activity data
The Plant Choir™ system comprises a software program and hardware that measures the electrodermal activity of a person, plant, or animal and translates those readings into music on a computing device. The EDA readings of the individual subjects are translated via the software into musical notes in real time. The creation of the notes is synchronized to a master tempo in order to allow the subjects to play together in a unified fashion similar to a choir. A riff mode allows the subjects to produce multiple notes per beat. The music is rendered using a software synthesis algorithm that employs the pre-recorded sounds of real instruments. The software can also utilize MIDI devices if the operating system has that capability. The software allows the user to load and save their settings so they can create and experiment with their own choir configurations and musical scales.
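The two core steps of this abstract, mapping an EDA reading onto a musical scale and quantizing note onsets to a master tempo, can be sketched as follows. This is an illustrative assumption about how such a mapping might work, not Plant Choir's actual implementation; the pentatonic scale, value range, and tempo are all placeholder choices.

```python
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones (assumed scale)

def eda_to_note(reading, lo=0.0, hi=10.0, base_midi=60):
    """Map an EDA reading (e.g. microsiemens) to a MIDI note on the scale."""
    frac = min(max((reading - lo) / (hi - lo), 0.0), 1.0)
    index = int(frac * (len(PENTATONIC) * 2 - 1))  # span two octaves
    octave, degree = divmod(index, len(PENTATONIC))
    return base_midi + 12 * octave + PENTATONIC[degree]

def next_beat(now, tempo_bpm=100):
    """Quantize a timestamp (seconds) to the next master-tempo beat,
    so every subject's notes land together, choir-style."""
    beat = 60.0 / tempo_bpm
    return ((now // beat) + 1) * beat

print(eda_to_note(5.0), next_beat(1.25, tempo_bpm=120))  # 69 1.5
```

The described riff mode would subdivide each beat, emitting several quantized notes per beat instead of one.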
Method of and system for automated musical arrangement and musical instrument performance style transformation supported within an automated music performance system
An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
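The rule-triggering mechanism described here can be sketched as a dispatch table: each virtual instrument carries performance rules keyed on music-theoretic state descriptors (reduced here to a tempo marking and a beat position), and the performance subsystem fires every matching rule on each note event. All names and rules are illustrative assumptions.

```python
def staccato(note):
    note["duration"] *= 0.5
    return note

def accent(note):
    note["velocity"] = min(127, note["velocity"] + 20)
    return note

# Hypothetical rule set for one virtual instrument, keyed on
# (tempo_marking, beat_position) state descriptors.
VIOLIN_RULES = {
    ("allegro", "downbeat"): [staccato, accent],
    ("allegro", "offbeat"):  [staccato],
}

def perform(note, tempo_marking, beat_position, rules):
    """Apply every rule triggered by the note's music-theoretic state."""
    for rule in rules.get((tempo_marking, beat_position), []):
        note = rule(note)
    return note

note = {"pitch": 69, "velocity": 90, "duration": 1.0}
print(perform(note, "allegro", "downbeat", VIOLIN_RULES))
# {'pitch': 69, 'velocity': 110, 'duration': 0.5}
```

The VMI library selection subsystem would sit above this, choosing which instrument's rule set handles each part of the composition.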
Computer-implemented method of digital music composition
A computer-implemented method of digital music composition that creates a digital multi-genre musical composition track by downloading a host digital music track of a first genre and two or more separate donor multi-genre musical tracks, and then selectively modulating the instruments and rhythmic patterns of the donor musical tracks. The manipulation adjusts at least one of the intensity, frequency, sound, beat, and rhythm of the rhythmic pattern. The manipulated donor musical tracks are then integrated into the host musical track to create a combined digital multi-genre musical composition track, which can be downloaded, saved in a file, and replayed as needed.
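The manipulate-then-integrate flow can be sketched with rhythmic patterns modeled as step sequences of intensities in [0, 1]: donor patterns are scaled and shifted, then mixed into the host step by step. This data model is an assumption for illustration, not the patent's representation.

```python
def manipulate(pattern, intensity_scale=1.0, rotate=0):
    """Scale the step intensities and rotate (shift) the beat positions."""
    shifted = pattern[-rotate:] + pattern[:-rotate] if rotate else pattern
    return [min(1.0, v * intensity_scale) for v in shifted]

def integrate(host, donors):
    """Mix donor patterns into the host, clamping each step at 1.0."""
    combined = list(host)
    for donor in donors:
        combined = [min(1.0, h + d) for h, d in zip(combined, donor)]
    return combined

host = [1.0, 0.0, 0.5, 0.0]                       # first-genre track
donor = manipulate([0.0, 0.4, 0.0, 0.4],          # donor rhythmic pattern
                   intensity_scale=0.5)
print(integrate(host, [donor]))  # [1.0, 0.2, 0.5, 0.2]
```

Real tracks would carry per-instrument audio or MIDI per step, but the scale/shift/merge structure is the same.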
Method of scoring digital media objects using musical experience descriptors to indicate what, where and when musical events should appear in pieces of digital music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of scoring a selected media object with one or more pieces of digital music. The method uses the system user interface to select one or more musical experience descriptors and then apply the selected musical experience descriptors to the selected digital media object to indicate what, when and how particular musical events should occur in the one or more pieces of digital music to be automatically composed and generated by the automated music composition and generation engine. The generated piece of digital music is then used in musically scoring the selected digital media object.
Method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users
A method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users. The method involves reviewing, selecting and providing one or more musical experience descriptors and time and/or space parameters, to an automated music composition and generation engine operably connected to the system user interface. The automated music composition and generation engine includes a music piece analysis subsystem for automatically examining each piece of composed music that has been generated by said automated music composition and generation engine, comparing the digital piece of composed and generated music against other digital pieces of music composed and generated by said automated music composition and generation system for said system user, and determining whether or not the examined digital piece of composed and generated music is sufficiently unique. Also, the method automatically confirms with the system user that each examined digital piece of composed and generated music satisfies the creative intentions of the system user.
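The uniqueness check can be sketched as a similarity comparison against the user's previously generated pieces: each piece is reduced to its set of pitch trigrams and compared with a Jaccard similarity, and a piece counts as sufficiently unique when every comparison stays under a threshold. The metric and threshold are illustrative assumptions, not the patent's method.

```python
def trigrams(notes):
    """The set of consecutive 3-note pitch patterns in a piece."""
    return {tuple(notes[i:i + 3]) for i in range(len(notes) - 2)}

def similarity(a, b):
    """Jaccard similarity of two pieces' trigram sets (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def is_unique(candidate, previous_pieces, threshold=0.6):
    """True when the candidate is sufficiently different from all
    pieces previously generated for this system user."""
    return all(similarity(candidate, p) < threshold for p in previous_pieces)

earlier = [60, 62, 64, 65, 67, 69]
print(is_unique([60, 62, 64, 65, 67, 69], [earlier]))  # False (a copy)
print(is_unique([60, 64, 67, 72, 67, 64], [earlier]))  # True
```

A production analysis subsystem would compare rhythm, harmony, and structure as well, but the examine-compare-decide loop is the one the abstract describes.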
Autonomous generation of melody
Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is further used to generate a first melody that conforms to the music style and is different from the reference melody. Thus, a melody that matches user emotions and/or environmental information may be automatically created.
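The selection step can be sketched as a lookup from an emotion to a melody feature parameter (here reduced to a scale and a tempo derived from reference melodies of that style), followed by sampling a fresh melody that follows the style without copying any reference. All names and values are assumptions for illustration.

```python
import random

# Melody feature parameters, one per music style of the reference
# melodies (scales and tempi are placeholder values).
MELODY_FEATURES = {
    "calm":      {"scale": [0, 2, 4, 7, 9],        "tempo_bpm": 72},
    "energetic": {"scale": [0, 2, 4, 5, 7, 9, 11], "tempo_bpm": 140},
}

def select_feature(emotion):
    """Use emotion (and/or environment) information to pick a feature."""
    return MELODY_FEATURES[emotion]

def generate_melody(feature, length=8, base_midi=60, seed=None):
    """Sample a new melody from the style's scale (not a reference copy)."""
    rng = random.Random(seed)
    return [base_midi + rng.choice(feature["scale"]) for _ in range(length)]

feature = select_feature("calm")
print(feature["tempo_bpm"], generate_melody(feature, seed=42))
```

The described implementations would use learned generative models rather than uniform sampling; this shows only the feature-selection-then-generation pipeline.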