G10H2240/145

Drumstick controller
10991352 · 2021-04-27 ·

A percussion device includes a drumstick assembly. The drumstick assembly includes a drumstick having a base and a tip end, and a drumstick tip secured to the tip end of the drumstick, the drumstick tip including a sensor. The drumstick, including the base thereof, includes at least one control button, a communication element, and a processor in communication with the at least one control button, the drumstick tip, and the communication element. The processor is configured to receive a signal from the drumstick tip and to generate output to the communication element. The output so generated includes a signal that specifies a sound file selected by operation of the at least one control button.
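The control flow described above can be sketched in a few lines. This is an illustrative model only, not the patented implementation: the sound bank, velocity threshold, and class names are all invented for the example. The `outbox` list stands in for the communication element.

```python
SOUND_FILES = ["snare.wav", "kick.wav", "hihat.wav"]  # assumed sound bank

class DrumstickController:
    def __init__(self, sound_files):
        self.sound_files = sound_files
        self.selected = 0     # index chosen via the control button
        self.outbox = []      # stands in for the communication element

    def press_button(self):
        """Cycle the at-least-one control button through the sound bank."""
        self.selected = (self.selected + 1) % len(self.sound_files)

    def on_tip_sensor(self, velocity):
        """Processor receives the tip-sensor signal and generates output
        that specifies the currently selected sound file."""
        if velocity > 0:
            self.outbox.append({"file": self.sound_files[self.selected],
                                "velocity": velocity})

stick = DrumstickController(SOUND_FILES)
stick.press_button()              # select "kick.wav"
stick.on_tip_sensor(velocity=90)
print(stick.outbox)
```

Note that the button selects only an index; the strike itself carries the velocity, so one selection can serve many strikes.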

Method of digitally performing a music composition using virtual musical instruments having performance logic executing within a virtual musical instrument (VMI) library management system

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g., a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual musical instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual musical instrument has its own set of music-theoretic-state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual musical instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
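The rule-triggering idea in the abstract can be sketched as follows. This is a minimal, assumed model: the state descriptors (`key`, `tempo`, `dynamic`), the rule predicates, and the articulation names are invented for illustration; the patented system's actual descriptor set and rule format are not specified here.

```python
class VirtualInstrument:
    def __init__(self, name, rules):
        self.name = name
        self.rules = rules  # list of (state predicate, articulation) pairs

    def perform(self, note, state):
        """Return the note with the articulation of the first performance
        rule triggered by the music-theoretic state descriptors."""
        for predicate, articulation in self.rules:
            if predicate(state):
                return (note, articulation)
        return (note, "default")

violin = VirtualInstrument("violin", [
    (lambda s: s["dynamic"] == "ff" and s["tempo"] > 140, "spiccato"),
    (lambda s: s["dynamic"] == "pp", "con sordino"),
])

state = {"key": "D minor", "tempo": 160, "dynamic": "ff"}
print(violin.perform("D4", state))
```

The point of the design is that the composition's state, not a human performer, selects which sampled or synthesized rendering of each note is used.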

Systems and methods for visual image audio composition based on user input
11004434 · 2021-05-11 ·

The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user-determined sound database sources.
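One plausible form such a mapping could take is sketched below; the patent's actual method is not detailed in the abstract, so the brightness-to-note scheme and the scale used as the "sound database" are assumptions made for illustration only.

```python
C_MAJOR = ["C4", "D4", "E4", "F4", "G4", "A4", "B4"]  # assumed sound database

def row_to_notes(brightness_row, sound_db):
    """Quantize 0-255 pixel brightness samples from an image into
    indices into a user-determined sound database."""
    step = 256 / len(sound_db)
    return [sound_db[int(b // step)] for b in brightness_row]

print(row_to_notes([0, 128, 255], C_MAJOR))
```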

METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING DIGITAL PERFORMANCES OF MUSIC COMPOSITIONS USING NOTES SELECTED FROM VIRTUAL MUSICAL INSTRUMENTS BASED ON THE MUSIC-THEORETIC STATES OF THE MUSIC COMPOSITIONS

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g., a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual musical instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual musical instrument has its own set of music-theoretic-state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual musical instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.

METHOD OF DIGITALLY PERFORMING A MUSIC COMPOSITION USING VIRTUAL MUSICAL INSTRUMENTS HAVING PERFORMANCE LOGIC EXECUTING WITHIN A VIRTUAL MUSICAL INSTRUMENT (VMI) LIBRARY MANAGEMENT SYSTEM

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g., a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual musical instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual musical instrument has its own set of music-theoretic-state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual musical instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.

METHOD OF AND SYSTEM FOR AUTOMATED MUSICAL ARRANGEMENT AND MUSICAL INSTRUMENT PERFORMANCE STYLE TRANSFORMATION SUPPORTED WITHIN AN AUTOMATED MUSIC PERFORMANCE SYSTEM

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g., a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual musical instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual musical instrument has its own set of music-theoretic-state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual musical instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.

Method of and system for automatically generating digital performances of music compositions using notes selected from virtual musical instruments based on the music-theoretic states of the music compositions

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g., a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual musical instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual musical instrument has its own set of music-theoretic-state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual musical instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.

Devices and Methods for Sharing User Interaction
20210034176 · 2021-02-04 ·

A method, such as a computer-implemented method, of data management, wherein content utilized by a first user can be identified and information about such content can be shared with at least one additional user, such that the at least one additional user can pull the identified content from the content source.
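The data flow described above can be sketched as follows. The key point is that only an identifier is shared, and the recipient pulls the content from the original source. The content-source dictionary, identifier format, and function names are all invented for illustration.

```python
CONTENT_SOURCE = {"track:42": "audio bytes for track 42"}  # stands in for the source

def share(content_id, recipients, inbox):
    """First user shares only information identifying the content."""
    for user in recipients:
        inbox.setdefault(user, []).append(content_id)

def pull(user, inbox):
    """An additional user pulls the identified content from the content source."""
    return [CONTENT_SOURCE[cid] for cid in inbox.get(user, [])]

inbox = {}
share("track:42", ["user_b"], inbox)
print(pull("user_b", inbox))
```

Sharing identifiers rather than copies keeps the source authoritative: if the content changes at the source, every later pull sees the current version.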

TIMBRE CREATION SYSTEM

A timbre creation method, system, and computer program product include performing a timbre analysis of a sound from an input source to generate a digital fingerprint of the sound, performing deep learning to create a patch that matches the digital fingerprint, and generating a second patch for a synthesizer that reproduces a timbre complementing the digital fingerprint, based on the patch.
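The three-stage pipeline can be sketched without the deep-learning component: here a toy spectral "fingerprint" is averaged band energies, a nearest-neighbor lookup stands in for the learned matcher, and the second patch inverts the matched band weights. The fingerprint format, patch representation, and complement rule are all assumptions for illustration.

```python
def fingerprint(samples, bands=4):
    """Toy digital fingerprint: mean absolute amplitude per band."""
    n = len(samples) // bands
    return [sum(abs(x) for x in samples[i*n:(i+1)*n]) / n for i in range(bands)]

def nearest_patch(fp, patch_bank):
    """Stand-in for the learned matcher: closest stored fingerprint wins."""
    return min(patch_bank, key=lambda p: sum((a - b) ** 2 for a, b in zip(p["fp"], fp)))

def complementary(patch):
    """Second synthesizer patch whose band weights complement the match."""
    peak = max(patch["fp"])
    return {"name": patch["name"] + "_complement",
            "fp": [peak - v for v in patch["fp"]]}

bank = [{"name": "bright", "fp": [1, 9]}, {"name": "dark", "fp": [9, 1]}]
match = nearest_patch([2, 8], bank)
print(match["name"], complementary(match)["fp"])
```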

MUSIC GENERATOR
20210027754 · 2021-01-28 ·

Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selection and combination of loops are performed based on the set of composition rules and attributes of loops in the set of loops.
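The two phases in the abstract can be sketched as follows: rules are learned from which loops co-occur in existing content, then new content is assembled by layering loops the rules permit together. The loop names and the pairwise-compatibility rule form are assumptions; the patented embodiments may use richer rules and loop attributes.

```python
from itertools import combinations

def learn_rules(existing_tracks):
    """Composition rules: record every loop pair heard overlapping in time."""
    allowed = set()
    for track in existing_tracks:
        for a, b in combinations(sorted(track), 2):
            allowed.add((a, b))
    return allowed

def generate(loop_pool, rules, length):
    """Greedily layer loops whose pairwise combinations the rules permit."""
    layered = []
    for loop in loop_pool:
        if all(tuple(sorted((loop, other))) in rules for other in layered):
            layered.append(loop)
        if len(layered) == length:
            break
    return layered

rules = learn_rules([{"kick", "bass"}, {"kick", "pad"}, {"bass", "pad"}])
print(generate(["kick", "bass", "vocal", "pad"], rules, 3))
```

Here "vocal" is rejected because no existing track ever combined it with "kick", which is exactly the sense in which the learned rules constrain generation.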