Patent classifications
G10H2210/115
Automated music composition and generation system employing virtual musical instrument libraries for producing notes contained in the digital pieces of automatically composed music
An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to the musical experience descriptors and time and/or space parameters selected by the system user. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The engine includes: a digital piece creation subsystem and a digital audio sample producing subsystem supported by virtual musical instrument libraries.
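The digital piece creation subsystem described above assembles notes using samples drawn from virtual musical instrument libraries. A minimal sketch of that idea, with invented library contents and function names (the patent does not specify this structure), might look like:

```python
# Hypothetical sketch: a virtual-instrument library maps each note name to a
# sample, and piece creation assembles the samples for the composed notes.
# Strings stand in for audio buffers; all names here are illustrative.
INSTRUMENT_LIBRARY = {
    "piano": {"C4": "piano_C4.wav", "E4": "piano_E4.wav", "G4": "piano_G4.wav"},
    "violin": {"C4": "violin_C4.wav", "E4": "violin_E4.wav"},
}

def create_digital_piece(notes, instrument, library=INSTRUMENT_LIBRARY):
    """Map each composed note to a sample from the chosen instrument's bank."""
    bank = library[instrument]
    return [bank[note] for note in notes if note in bank]
```

A real engine would mix the returned samples into an audio stream; this sketch only shows the note-to-sample lookup that the virtual-instrument libraries support.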
Method of and system for automatically generating music compositions and productions using lyrical input and music experience descriptors
An automated music composition and generation process within an automated music composition and generation system driven by lyrical musical experience descriptors. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to select and provide musical experience descriptors, including lyrics, to the automated music composition and generation engine for processing by said automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the musical experience descriptors and lyrics provided.
Automated music composition and generation system driven by lyrical input
An automated music composition and generation process within an automated music composition and generation system driven by lyrics. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to provide lyrics to the automated music composition and generation engine for processing by the automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the lyrics provided as input. The lyrics are analyzed for vowel formants to generate pitch events, which are used to support the automated music composition process.
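The vowel-formant analysis above can be illustrated with a toy sketch, not the patented method: rough first-formant (F1) frequencies for each vowel are looked up and converted to the nearest MIDI note number. The formant table and function names are assumptions for illustration.

```python
import math

# Illustrative only: approximate first-formant (F1) frequencies in Hz for a
# few vowels; a real system would extract formants from sung or spoken audio.
VOWEL_F1 = {"a": 730, "e": 530, "i": 270, "o": 570, "u": 300}

def freq_to_midi(freq_hz):
    """Nearest MIDI note number for a frequency (A4 = 440 Hz = note 69)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def pitch_events_from_lyrics(lyrics):
    """Emit one pitch event per vowel found in the lyric text."""
    return [freq_to_midi(VOWEL_F1[ch]) for ch in lyrics.lower() if ch in VOWEL_F1]
```

For example, `pitch_events_from_lyrics("la")` yields one pitch event derived from the 730 Hz F1 of "a".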
Automated music composition and generation system employing an instrument selector for automatically selecting virtual instruments from a library of virtual instruments to perform the notes of the composed piece of digital music
An automated music composition and generation system for automatically composing and generating digital pieces of music using an automated music composition and generation engine driven by a set of emotion-type and style-type musical experience descriptors and time and/or space parameters provided by a system user. The automated music composition and generation engine includes an instrument subsystem supporting a library of virtual instruments, wherein each virtual instrument is capable of performing one or more notes of at least a portion of the composed piece of music, in response to the emotion-type and/or style-type musical experience descriptors; an instrument selector subsystem for automatically selecting one or more of virtual instruments from the library, so that each selected virtual instrument performs one or more notes of at least a portion of the composed piece of music; and a digital piece creation subsystem for creating the digital piece of composed music by assembling the notes produced from the virtual instruments selected from the library.
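The instrument selector subsystem described above can be sketched as tag matching: each virtual instrument carries emotion-type and style-type tags, and selection keeps the instruments whose tags intersect the user-supplied descriptors. The tag vocabulary and data layout below are invented for illustration.

```python
# Hypothetical instrument-selector sketch; tags and names are illustrative,
# not drawn from the patent.
VIRTUAL_INSTRUMENTS = [
    {"name": "strings", "tags": {"sad", "cinematic"}},
    {"name": "synth_lead", "tags": {"happy", "pop"}},
    {"name": "piano", "tags": {"sad", "pop", "happy"}},
]

def select_instruments(descriptors, library=VIRTUAL_INSTRUMENTS):
    """Return instruments whose tags overlap the requested descriptors."""
    wanted = set(descriptors)
    return [inst["name"] for inst in library if inst["tags"] & wanted]
```

Each selected instrument would then be handed the notes it is to perform, per the digital piece creation subsystem.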
Automatically managing the musical tastes and preferences of system users based on user feedback and autonomous analysis of music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user. The automated music composition and generation engine includes a user taste generation subsystem for automatically (i) determining the musical tastes and preferences of a system user based on user feedback and autonomous piece analysis, (ii) maintaining a system user profile reflecting the musical tastes and preferences of each system user, and (iii) using the musical taste and preference information to change or modify the musical experience descriptors provided to the system, so as to produce a digital piece of composed music that better reflects the musical tastes and preferences of the system user.
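One simple way to picture the user taste generation subsystem, assuming a weight-per-descriptor profile that the patent does not specify, is to nudge descriptor weights on like/dislike feedback and then bias future descriptor choices toward the highest weights:

```python
# Illustrative taste-profile sketch; the weighting scheme is an assumption.
def update_profile(profile, descriptors, liked, step=0.1):
    """Adjust the user's weight for each descriptor attached to a piece."""
    delta = step if liked else -step
    for d in descriptors:
        profile[d] = profile.get(d, 0.0) + delta
    return profile

def preferred_descriptors(profile, k=2):
    """Pick the user's top-k descriptors by learned weight."""
    return sorted(profile, key=profile.get, reverse=True)[:k]
```

After a few feedback events, `preferred_descriptors` supplies the descriptors used to modify subsequent composition requests.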
SYSTEMS, DEVICES, AND METHODS FOR GENERATING SYMBOL SEQUENCES AND FAMILIES OF SYMBOL SEQUENCES
The present systems, devices, and methods generally relate to generating families of symbol sequences with controllable degree of correlation within and between them using quantum computers, and particularly to the exploitation of this capability to generate families of symbol sequences representing musical events such as, but not limited to, musical notes, musical chords, musical percussion strikes, musical time intervals, musical note intervals, and musical key changes that comprise a musical composition. Quantum random walks on graphs representing allowed transitions between musical events are also employed in some implementations.
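The random walks on graphs of allowed musical-event transitions can be sketched classically (a quantum walk would evolve amplitudes over the graph rather than sampling edges directly, so this is only a loose analogue). The transition graph below is invented for illustration:

```python
import random

# Classical sketch only: walk a graph of allowed transitions between musical
# events (here, note names) to generate a symbol sequence.
TRANSITIONS = {
    "C": ["E", "G"],
    "E": ["G", "C"],
    "G": ["C", "E", "G"],
}

def walk_sequence(start, length, rng=None):
    """Generate a symbol sequence by walking the transition graph."""
    rng = rng or random.Random(0)
    seq, node = [start], start
    for _ in range(length - 1):
        node = rng.choice(TRANSITIONS[node])
        seq.append(node)
    return seq
```

Correlated families of sequences could be produced by reusing seeds or sharing subwalks; the quantum implementations in the source control that correlation directly.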
Pseudo-live sound and music
A method and apparatus for the creation and playback of music and/or sound, so that sound sequences are generated that vary from one playback to another playback. In one embodiment, during composition creation, artist(s) may define how the composition may vary from playback to playback using visually interactive display(s). The artist's definition may be embedded into a composition data set. During playback, a composition data set may be processed by a playback device and/or a playback program, so that each time the composition is played back a unique version may be generated. Variability during playback may include: the variable selection of alternative sound segment(s); variable editing of sound segment(s) during playback processing; variable placement of sound segment(s) during playback processing; the spawning of group(s) of alternative sound segments from initiating sound segment(s); and the combining and/or mixing of alternative sound segments in one or more sound channels. MIDI-like variable compositions and the variable use of sound segments comprised of a timed sequence of MIDI-like commands are also disclosed.
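The variable selection of alternative sound segments can be sketched as follows, assuming a composition data set where each slot lists its alternatives (the structure and names are invented for illustration):

```python
import random

# Sketch of variable playback: each slot in the composition data set offers
# alternative segments, and each playback draws its own selection, so two
# playbacks with different seeds can yield different versions.
COMPOSITION = [
    {"slot": "intro", "alternatives": ["intro_a", "intro_b"]},
    {"slot": "verse", "alternatives": ["verse_a", "verse_b", "verse_c"]},
    {"slot": "outro", "alternatives": ["outro_a"]},
]

def render_playback(composition, seed):
    """Pick one alternative per slot; a new seed can yield a new version."""
    rng = random.Random(seed)
    return [rng.choice(slot["alternatives"]) for slot in composition]
```

Variable editing, placement, spawning, and mixing of segments would layer further per-playback decisions on top of this selection step.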
Audio Techniques for Music Content Generation
Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal relative to time (e.g., an audio signal graph) and a graph of the signal relative to beats (e.g., a beat-relative signal graph). The beat-relative signal graph is invariant to tempo, which allows for tempo-invariant modification of audio parameters of the music content in addition to tempo-variant modifications based on the audio signal graph.
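The tempo invariance above comes down to the standard beats/seconds conversion: a parameter curve keyed to beat positions stays fixed when tempo changes, while its rendering in time stretches or shrinks. A minimal sketch of that conversion (function names assumed):

```python
# Standard tempo conversion between beat positions and wall-clock time.
def beats_to_seconds(beats, bpm):
    """Time in seconds at which a beat position occurs at the given tempo."""
    return beats * 60.0 / bpm

def seconds_to_beats(seconds, bpm):
    """Beat position reached after a number of seconds at the given tempo."""
    return seconds * bpm / 60.0
```

At 120 BPM, beat 4 lands at 2.0 s; at 60 BPM the same beat lands at 4.0 s, yet a parameter curve defined over beats is unchanged, which is the tempo-invariant behavior the beat-relative representation provides.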
Listener-Defined Controls for Music Content Generation
Techniques are disclosed relating to implementing user-created controls to modify music content. A music generator system may be configured to automatically generate output music content by selecting and combining audio tracks based on various parameters. Users may create their own control elements that the music generator system may train (e.g., using AI techniques) to generate output music content according to a user's intended functionality of a user-created control element.
Block-Chain Ledger Based Tracking of Generated Music Content
Techniques are disclosed relating to tracking contributions to composed music content. In some embodiments, a computer system determines playback data for a music content mix, where the playback data indicates characteristics of playback of the music content mix and the music content mix includes a determined combination of multiple audio tracks. In some embodiments, the system records, in an electronic block-chain ledger data structure, information specifying individual playback data for one or more of the multiple audio tracks in the music content mix. The information specifying individual playback data for an individual audio track may include usage data for the individual audio track and signature information associated with the individual audio track.
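The ledger idea above can be sketched as a minimal hash chain (not a full blockchain, and not the patented data structure): each playback record for an audio track links to the hash of the previous record, so altering earlier usage data breaks the chain. Record fields and function names are invented for illustration.

```python
import hashlib
import json

# Minimal hash-chained ledger sketch for per-track playback records.
def append_record(ledger, record):
    """Append a record linked to the previous entry's SHA-256 hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    ledger.append({
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })
    return ledger

def verify(ledger):
    """Recompute every link; True iff no entry was altered."""
    prev = "0" * 64
    for entry in ledger:
        payload = json.dumps(entry["record"], sort_keys=True) + prev
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Usage data and signature information for each track would live inside `record`; the chain only guarantees that recorded entries cannot be silently modified after the fact.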