G10H2240/121

Method of combining audio signals

A method for automatically generating an audio signal, the method comprising: receiving a source audio signal; analyzing the source audio signal to identify a musical parameter characteristic thereof; obtaining a supplemental audio signal based on the identified musical parameter characteristic; and combining the source audio signal and the supplemental audio signal to form an extended audio signal.
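The claimed pipeline — analyze, obtain a matching supplement, combine — can be sketched in a few lines. This is a minimal illustration, not the patented method: the "musical parameter" here is a crude zero-crossing pitch estimate, and the "supplemental signal" is simply a synthesized octave harmony; all function names and the sample rate are invented for the example.

```python
import math

RATE = 8000  # samples per second (illustrative, not from the patent)

def sine(freq, seconds, rate=RATE):
    """Generate a mono sine tone as a list of float samples."""
    return [math.sin(2 * math.pi * freq * n / rate)
            for n in range(int(seconds * rate))]

def estimate_frequency(signal, rate=RATE):
    """Crude pitch estimate via zero-crossing counting -- a stand-in for
    'analyzing the source audio signal to identify a musical parameter'."""
    crossings = sum(1 for a, b in zip(signal, signal[1:])
                    if (a < 0 <= b) or (b < 0 <= a))
    return crossings * rate / (2 * len(signal))

def obtain_supplemental(freq, seconds):
    """Hypothetical retrieval step: here we simply synthesize a harmony
    tone one octave above the identified fundamental."""
    return sine(2 * freq, seconds)

def combine(source, supplemental):
    """Mix the two signals sample by sample into the extended signal."""
    return [0.5 * (a + b) for a, b in zip(source, supplemental)]

source = sine(440.0, 0.5)
f0 = estimate_frequency(source)
extended = combine(source, obtain_supplemental(f0, 0.5))
```

A production analyzer would use autocorrelation or spectral methods rather than zero crossings, and the supplement would come from a catalog rather than a synthesizer; the shape of the claim is the same either way.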

System and method for recording and sharing music
09817551 · 2017-11-14

A method includes uploading a plurality of songs to a server; allowing a first user to listen to a selected song from the plurality on an electronic device, the selected song being transmitted across an electronic network from the server; allowing the first user to record a user-generated stem track using the electronic device, the electronic device being located at the user location; without saving the user-generated stem track, transmitting only the user-generated stem track across the electronic network to the server; storing the user-generated stem track on the server; allowing the first user to select a plurality of other individual user-generated stem tracks from the server and combine the selected user-generated stem tracks to form a user-generated cover song; and transmitting an electronic message to other users who generated the other individual user-generated stem tracks, informing them of the combination of their stem tracks to form the cover song.
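The stem-sharing workflow above reduces to a small amount of server-side state: stems keyed by contributor, a mixing step, and a notification list. The sketch below is a hypothetical in-memory simplification (all names invented); the actual system transmits stems over an electronic network to a remote server without a local save.

```python
# Hypothetical in-memory stand-in for the server described in the claim.
server = {"songs": {}, "stems": {}}

def upload_song(title, samples):
    server["songs"][title] = samples

def record_stem(user, stem_id, samples):
    # Per the claim, only the stem itself is transmitted and stored.
    server["stems"][stem_id] = {"user": user, "samples": samples}

def form_cover_song(stem_ids):
    """Mix selected stems into a cover song and list the contributing
    users who should receive a notification message."""
    stems = [server["stems"][s] for s in stem_ids]
    length = min(len(s["samples"]) for s in stems)
    mix = [sum(s["samples"][i] for s in stems) / len(stems)
           for i in range(length)]
    recipients = sorted({s["user"] for s in stems})
    return mix, recipients

upload_song("demo", [0.0, 0.2, 0.4])
record_stem("alice", "guitar-1", [0.2, 0.4, 0.6, 0.8])
record_stem("bob", "drums-1", [0.0, 0.0, 0.2])
cover, notify = form_cover_song(["guitar-1", "drums-1"])
```

Truncating the mix to the shortest stem is one of several reasonable choices; padding shorter stems with silence would be equally valid.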

MUSIC CONTEXT SYSTEM AUDIO TRACK STRUCTURE AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT
20220044663 · 2022-02-10

A system is described that permits identified musical phrases or themes to be synchronized and linked to changing real-world events. The achieved synchronization includes a seamless musical transition—achieved using a timing offset, such as relative advancement of a significant musical “onset,” that is inserted to align with a pre-existing but identified music signature, beat or timebase—between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The system operates to augment an overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment, e.g., during different phases of a cardio workout in a step class, the music rate and intensity increase during sprint periods and decrease during recovery periods.
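The timing offset at the heart of this abstract can be stated as simple arithmetic: to make the incoming phrase's salient onset land on the next beat of the playing material, start the phrase early by the onset's position within the phrase. The sketch below assumes times in seconds and invented names; it is an illustration of the idea, not the patented system.

```python
import math

def transition_start(onset_in_phrase, now, beat_interval):
    """Compute when to start an incoming phrase so that its salient
    musical onset lands on the next beat of the currently playing
    material -- the 'relative advancement' timing offset. All
    arguments are in seconds; the names are hypothetical."""
    next_beat = math.ceil(now / beat_interval) * beat_interval
    return next_beat - onset_in_phrase

# The phrase's key onset sits 0.3 s in; the next beat after t = 1.2 s
# on a 0.5 s grid falls at 1.5 s, so the phrase must start at 1.2 s.
start = transition_start(onset_in_phrase=0.3, now=1.2, beat_interval=0.5)
```

Matching keys, intensities and time signatures across the transition — the rest of the claim — is a much harder selection problem layered on top of this alignment step.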

METHOD AND SYSTEM FOR REALIZING INTELLIGENT MATCHING AND ADDING OF RHYTHMIC TRACKS
20220293134 · 2022-09-15

The present disclosure provides a method and a system for realizing intelligent matching and adding of rhythmic tracks. In one example, a method is provided, including: pre-storing a plurality of music elements; generating a rhythmic track library according to the plurality of music elements, the rhythmic track library including a plurality of rhythmic tracks based on the plurality of music elements; detecting a BPM of original music, and calculating a time interval between every two beats based on the detected BPM of the original music; selecting one or more rhythmic tracks from the rhythmic track library, and assigning a time parameter to the one or more rhythmic tracks based on the time interval so as to match the rhythm of the original music; and adding the one or more rhythmic tracks assigned with the time parameter into the original music so as to combine with the original music and play.
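The beat-interval calculation in this claim is direct: the time between every two beats is 60 seconds divided by the BPM, and a rhythmic track is matched by scaling its onsets onto that grid. A minimal sketch, assuming a hypothetical representation in which a rhythmic track is a list of onsets measured in beats:

```python
def beat_interval_seconds(bpm):
    """Time interval between every two beats: 60 seconds / BPM."""
    return 60.0 / bpm

def assign_time_parameter(onsets_in_beats, bpm):
    """Place a rhythmic track's beat-indexed onsets onto the original
    music's beat grid, per the detected BPM (the list-of-beats track
    representation is an assumption made for this example)."""
    interval = beat_interval_seconds(bpm)
    return [beat * interval for beat in onsets_in_beats]

# At 120 BPM the interval is 0.5 s, so onsets on beats 0..3
# fall at 0.0, 0.5, 1.0 and 1.5 seconds.
times = assign_time_parameter([0, 1, 2, 3], 120)
```

Detecting the BPM of the original music in the first place is the substantive signal-processing step the claim presupposes; onset-strength autocorrelation is the usual approach.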

Media-media augmentation system and method of composing a media product
11114074 · 2021-09-07

A media-content augmentation system includes a processing system that receives input data in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with at least one reference media file, and then splices or fades together selected parts of the media file, thus generating as an output a media product in which transitions between its contextual themes are aligned with selected temporal events in the input data. The temporally-varying events take the form of a beginning and an end in the case of a sustained feature, or a specific point in time for a hit point. A method aligns sections in digital media files with temporally-varying events data to compose a media product. The system augments a sensory experience of a user by dynamically changing and then playing selected media files within the context of the categorized themes input to the processing system.
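The correlation step — categorized event spans mapped to media sections whose metadata matches, with transitions pinned to event boundaries — can be reduced to a lookup over a timeline. This sketch uses deliberately simplified, hypothetical data structures (events as `(start, end, theme)` tuples and a flat theme-to-section map); the patented system works over richer metadata.

```python
def compose_timeline(events, theme_library):
    """Map categorized event spans onto matching media sections so that
    section transitions align with the event boundaries.

    events        -- list of (start, end, theme) tuples, times in seconds
    theme_library -- mapping from a contextual theme to a section id
    Both structures are assumptions made for this illustration."""
    return [(start, end, theme_library[theme])
            for start, end, theme in events
            if theme in theme_library]

timeline = compose_timeline(
    [(0.0, 8.0, "calm"), (8.0, 12.0, "tense"), (12.0, 20.0, "calm")],
    {"calm": "sectionA", "tense": "sectionB"},
)
```

Hit points (a single instant rather than a span) would be represented with `start == end` in this scheme; the splice-or-fade decision at each boundary is a separate rendering concern.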

Auto-generated accompaniment from singing a melody

A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of: acquiring an input singing voice recording (11); estimating a musical key (15b) and a Tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and Tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the step of generating a music accompaniment (23) as a function of the estimated musical key (15b) and Tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) and the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
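The two alignment controls have natural mathematical cores: the tuning control snaps sung pitches toward the estimated key's scale, and the timing control snaps note onsets to the estimated tempo grid. The sketch below simplifies the tuning control to nearest-semitone quantization (ignoring key membership) and uses invented names and parameters; it illustrates the arithmetic, not the disclosed implementation.

```python
import math

A4 = 440.0  # equal-temperament reference pitch in Hz

def tuning_control(freq):
    """Snap a sung frequency to the nearest equal-tempered semitone --
    a simplified stand-in for aligning pitch to the estimated key."""
    n = round(12 * math.log2(freq / A4))  # semitone offset from A4
    return A4 * 2 ** (n / 12)

def timing_control(onset, bpm, division=4):
    """Snap an onset time (seconds) to the nearest subdivision of the
    estimated tempo grid (here `division` grid points per beat)."""
    grid = 60.0 / bpm / division
    return round(onset / grid) * grid
```

A sung note at 445 Hz snaps back to A4, and an onset at 0.27 s lands on the sixteenth-note grid point at 0.25 s for 120 BPM. A full key-aware tuner would restrict the target set to the scale degrees of the estimated key rather than all twelve semitones.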

Method of digitally performing a music composition using virtual musical instruments having performance logic executing within a virtual musical instrument (VMI) library management system

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
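The core mechanism — each virtual instrument carrying its own rule that fires on the composition's current music-theoretic state descriptor — is essentially rule dispatch over a state stream. Here is a hypothetical sketch: the rules, the state keys (`dynamic`, `tension`) and the action vocabulary are all invented for illustration and bear no relation to the patent's actual rule sets.

```python
# Each rule maps a music-theoretic state descriptor to a performance
# action, or to None when the instrument stays silent for that state.
def violin_rule(state):
    return "legato" if state.get("dynamic") == "soft" else "spiccato"

def timpani_rule(state):
    return "roll" if state.get("tension") == "high" else None

VMI_LIBRARY = {"violin": violin_rule, "timpani": timpani_rule}

def perform(states, vmi_library):
    """For each state descriptor in the composition, trigger every
    instrument's performance rule and collect the resulting actions."""
    actions = []
    for state in states:
        for instrument, rule in vmi_library.items():
            action = rule(state)
            if action is not None:
                actions.append((instrument, action))
    return actions

score = perform(
    [{"dynamic": "soft"}, {"dynamic": "loud", "tension": "high"}],
    VMI_LIBRARY,
)
```

The library-management subsystem described in the abstract would sit above this loop, selecting which instrument/rule pairs populate `VMI_LIBRARY` for a given piece.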

METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING DIGITAL PERFORMANCES OF MUSIC COMPOSITIONS USING NOTES SELECTED FROM VIRTUAL MUSICAL INSTRUMENTS BASED ON THE MUSIC-THEORETIC STATES OF THE MUSIC COMPOSITIONS

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.