Patent classifications
G10H2210/341
Controller for producing control signals
A controller, method, system, and computer-readable medium for producing control signals. The controller comprises a pressure sensor, a hinged input mechanism configured to receive input forces and direct them towards the sensor, and a processor. The processor is configured to receive a signal from the pressure sensor indicating that the hinged input mechanism is being depressed or released and, based on the received signal, to determine, during a time interval, a rate of change of pressure detected at the sensor. The processor also generates a control signal associated with the hinged input mechanism, wherein the control signal comprises a velocity characteristic representing the speed at which the hinged input mechanism is depressed or released, and the velocity characteristic is based at least partly on the determined rate of change of pressure. In one example embodiment, the control signal is an audio control signal.
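The rate-of-change-to-velocity mapping described above can be sketched as follows. This is an illustrative sketch, not the patented implementation: the sampling scheme, the 0–127 output range (chosen to resemble a MIDI-style control value), and the `SCALE` tuning constant are all assumptions made for this example.

```python
def velocity_from_pressure(samples, dt):
    """Estimate a velocity characteristic from pressure readings.

    samples: pressure readings taken dt seconds apart during a time interval.
    Returns a bounded control value in [0, 127] (MIDI-like range, chosen
    here only for illustration).
    """
    if len(samples) < 2:
        return 0
    # Average rate of change of pressure across the interval (units/s).
    rate = (samples[-1] - samples[0]) / (dt * (len(samples) - 1))
    # Map the magnitude of the rate onto the bounded control range.
    SCALE = 0.5  # assumed tuning constant
    return max(0, min(127, int(abs(rate) * SCALE)))

# A fast press saturates the velocity; a slow press yields a lower value.
print(velocity_from_pressure([0.0, 20.0, 60.0, 120.0], 0.005))  # 127
print(velocity_from_pressure([0.0, 1.0, 2.0], 0.01))            # 50
```

A release can be handled the same way, since the magnitude of the rate is used regardless of sign.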
Method of and system for spotting digital media objects and event markers using musical experience descriptors to characterize digital music to be automatically composed and generated by an automated music composition and generation engine
An automated music composition and generation system and process for scoring a selected media object or event marker, with one or more pieces of digital music, by spotting the selected media object or event marker with musical experience descriptors selected and applied to the selected media object or event marker by the system user during a scoring process, and using said selected musical experience descriptors to drive an automated music composition and generation engine to automatically compose and generate the one or more pieces of digital music.
METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING MUSIC COMPOSITIONS AND PRODUCTIONS USING LYRICAL INPUT AND MUSIC EXPERIENCE DESCRIPTORS
An automated music composition and generation process within an automated music composition and generation system driven by lyrical musical experience descriptors. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to select and provide musical experience descriptors, including lyrics, to the automated music composition and generation engine for processing by said automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the musical experience descriptors and lyrics provided.
AUTONOMOUS MUSIC COMPOSITION AND PERFORMANCE SYSTEM EMPLOYING REAL-TIME ANALYSIS OF A MUSICAL PERFORMANCE TO AUTOMATICALLY COMPOSE AND PERFORM MUSIC TO ACCOMPANY THE MUSICAL PERFORMANCE
An autonomous music composition and performance system employing an automated music composition and generation engine configured to receive musical signals from a set of real or synthetic musical instruments being played by a group of human musicians. The system buffers and analyzes the musical signals from the set of real or synthetic musical instruments, composes and generates music in real time that augments the music being played by the group of musicians, and/or composes and generates music for subsequent playback, review and consideration by the human musicians.
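The buffer-then-analyze loop described above can be sketched as below. This is a hypothetical illustration: the rolling note window and the pitch-class analysis are stand-ins for whatever signal analysis the engine actually performs.

```python
from collections import Counter, deque

class PerformanceBuffer:
    """Rolling buffer of incoming note events from a live performance."""

    def __init__(self, size=64):
        self.notes = deque(maxlen=size)  # oldest notes drop off automatically

    def add(self, midi_note):
        self.notes.append(midi_note)

    def dominant_pitch_class(self):
        """Most frequent pitch class (0=C .. 11=B) in the current window."""
        counts = Counter(n % 12 for n in self.notes)
        return counts.most_common(1)[0][0] if counts else None

buf = PerformanceBuffer()
for n in [60, 64, 67, 60, 72]:  # C major arpeggio with C emphasized
    buf.add(n)
print(buf.dominant_pitch_class())  # 0 (pitch class C)
```

In a real-time setting, an accompaniment generator would periodically poll such a buffer and compose material consistent with the detected tonal center.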
METHOD OF AUTOMATICALLY CONFIRMING THE UNIQUENESS OF DIGITAL PIECES OF MUSIC PRODUCED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM WHILE SATISFYING THE CREATIVE INTENTIONS OF SYSTEM USERS
A method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users. The method involves reviewing, selecting and providing one or more musical experience descriptors and time and/or space parameters, to an automated music composition and generation engine operably connected to a system user interface. The automated music composition and generation engine includes a music piece analysis subsystem for automatically examining each piece of composed music that has been generated by said automated music composition and generation engine, comparing the digital piece of composed and generated music against other digital pieces of music composed and generated by said automated music composition and generation system for said system user, and determining whether or not the examined digital piece of composed and generated music is sufficiently unique. Also, the method automatically confirms with the system user that each examined digital piece of composed and generated music satisfies the creative intentions of the system user.
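One way such a piece-analysis subsystem could score a new piece against previously generated ones is sketched below. This is not the patent's method: representing a piece as its set of melodic intervals, the Jaccard overlap measure, and the 0.8 threshold are all assumptions made for illustration.

```python
def interval_set(notes):
    """Represent a piece by the set of melodic intervals between notes."""
    return {b - a for a, b in zip(notes, notes[1:])}

def is_sufficiently_unique(new_notes, prior_pieces, threshold=0.8):
    """Reject the new piece if it overlaps too heavily with a prior one."""
    new_ivals = interval_set(new_notes)
    for prior in prior_pieces:
        prior_ivals = interval_set(prior)
        union = new_ivals | prior_ivals
        if union:
            overlap = len(new_ivals & prior_ivals) / len(union)
            if overlap >= threshold:
                return False  # too close to an existing piece
    return True

prior = [[60, 62, 64, 65, 67]]
print(is_sufficiently_unique([60, 62, 64, 65, 67], prior))  # False
print(is_sufficiently_unique([60, 67, 64, 69], prior))      # True
```

A production system would compare many more features (rhythm, harmony, orchestration), but the accept/reject decision has this same shape.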
AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM SUPPORTING AUTOMATED GENERATION OF MUSICAL KERNELS FOR USE IN REPLICATING FUTURE MUSIC COMPOSITIONS AND PRODUCTION ENVIRONMENTS
An automated music composition and generation system provided with a system user interface enabling system users to review, select and provide one or more musical experience descriptors as well as time and/or space parameters, to an automated music composition and generation engine, operably connected to the system user interface. The automated music composition and generation engine includes a musical kernel generation subsystem for automatically analyzing and saving musical kernel elements automatically abstracted from the digital piece of music. The abstracted musical kernel elements distinguish the digital piece of music from any other digital piece of music automatically composed and generated by the automated music composition and generation system, and serve as a music kernel definition of the digital piece of composed music, which can be subsequently used during future automated music composition and generation processes, and in future music production environments, to replicate the digital piece of composed music at a later time, either with complete or incomplete accuracy, as required or desired by the system user.
AUTOMATICALLY MANAGING THE MUSICAL TASTES AND PREFERENCES OF SYSTEM USERS BASED ON USER FEEDBACK AND AUTONOMOUS ANALYSIS OF MUSIC AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM
An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user. The automated music composition and generation engine includes a user taste generation subsystem for automatically (i) determining the musical tastes and preferences of a system user based on user feedback and autonomous piece analysis, (ii) maintaining a system user profile reflecting the musical tastes and preferences of each system user, and (iii) using the musical taste and preference information to change or modify the musical experience descriptors provided to the system to produce a digital piece of composed music that better reflects the musical tastes and preferences of the system user.
AUTOMATICALLY MANAGING THE MUSICAL TASTES AND PREFERENCES OF A POPULATION OF USERS REQUESTING DIGITAL PIECES OF MUSIC AUTOMATICALLY COMPOSED AND GENERATED BY AN AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM
An automated music composition and generation system having an automated music composition and generation engine for receiving, storing and processing musical experience descriptors as well as time and/or space parameters selected by the system user. The automated music composition and generation engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting musical tastes and preferences of each system user; and a population taste aggregation subsystem for automatically aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the automatically generated digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and more accurately and quickly meet future system user requests for automated music compositions.
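The population-level aggregation described above can be sketched as a simple average of per-user descriptor weights. This is a hedged illustration: the descriptor names and the averaging scheme are assumptions, not the patented subsystem.

```python
from collections import defaultdict

def aggregate_tastes(user_profiles):
    """Average each musical-experience-descriptor weight across all users."""
    totals, counts = defaultdict(float), defaultdict(int)
    for profile in user_profiles:
        for descriptor, weight in profile.items():
            totals[descriptor] += weight
            counts[descriptor] += 1
    return {d: totals[d] / counts[d] for d in totals}

profiles = [
    {"uplifting": 1.0, "tempo_fast": 0.5},
    {"uplifting": 0.5, "tempo_fast": 1.0},
]
print(aggregate_tastes(profiles))  # {'uplifting': 0.75, 'tempo_fast': 0.75}
```

The aggregated weights could then bias the default descriptors offered to new users, which is how such a subsystem would "more accurately and quickly meet future system user requests."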
AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM EMPLOYING VIRTUAL MUSICAL INSTRUMENT LIBRARIES FOR PRODUCING NOTES CONTAINED IN THE DIGITAL PIECES OF AUTOMATICALLY COMPOSED MUSIC
An automated music composition and generation system including a system user interface for enabling system users to review and select one or more musical experience descriptors, as well as time and/or space parameters; and an automated music composition and generation engine, operably connected to the system user interface, for receiving, storing and processing musical experience descriptors and time and/or space parameters selected by the system user, so as to automatically compose and generate one or more digital pieces of music in response to the musical experience descriptors and time and/or space parameters selected by the system user. Each digital piece of composed and generated music contains a set of musical notes arranged and performed in the digital piece of music. The automated music composition and generation engine includes: a digital piece creation subsystem for creating and delivering the digital piece of music to the system user interface; and a digital audio sample producing subsystem supported by virtual musical instrument libraries for producing digital audio samples of the set of notes contained in the generated digital piece of composed music.
AUTOMATED MUSIC COMPOSITION AND GENERATION SYSTEM DRIVEN BY LYRICAL INPUT
An automated music composition and generation process within an automated music composition and generation system driven by lyrics. The process involves the system user accessing said automated music composition and generation system, employing an automated music composition and generation engine having a system user interface. The system user interface is used to provide lyrics to the automated music composition and generation engine for processing by the automated music composition and generation engine. The system user initiates the automated music composition and generation engine to compose and generate music based on the lyrics provided as input. The lyrics are analyzed for vowel formants to generate pitch events, which are used to support the automated music composition process.
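The vowel-formant-to-pitch-event idea in this abstract can be sketched as follows. This is an illustration, not the patented analysis: the first-formant (F1) frequency table is approximate, and quantizing each formant to the nearest MIDI note is an assumption made for this example.

```python
import math

# Approximate first-formant (F1) frequencies in Hz for a few vowels.
F1_HZ = {"a": 730, "e": 530, "i": 270, "o": 570, "u": 300}

def pitch_events(lyrics):
    """Map each vowel in the lyrics to the nearest MIDI note number."""
    events = []
    for ch in lyrics.lower():
        if ch in F1_HZ:
            # Standard frequency-to-MIDI conversion (A4 = 440 Hz = note 69).
            midi = round(69 + 12 * math.log2(F1_HZ[ch] / 440.0))
            events.append(midi)
    return events

print(pitch_events("la la lu"))  # [78, 78, 62]
```

Real formant extraction would operate on sung or spoken audio rather than on text, but the resulting pitch-event stream could seed a melody in the same way.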