Patent classifications
G10H2210/086
System for creating, practicing and sharing of musical harmonies
Collaboratively creating musical harmonies includes receiving a user selection of a particular harmony. In response to this selection, a display screen of a computing device displays a plurality of musical note indicators specifying a first harmony part of a musical piece to be performed. Real-time pitch detection determines the pitch of each note voiced by a person, and a graphic indication of the actual pitch sung is displayed in conjunction with the musical note indicators.
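As an illustration of the kind of real-time pitch detection the abstract describes (not the patented implementation), a naive time-domain autocorrelation estimator can recover the fundamental of a voiced note; the sample rate, search range, and function names here are assumptions for the sketch:

```python
import math

def detect_pitch(samples, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono buffer by naive
    autocorrelation: the lag with the strongest self-similarity
    corresponds to one period of the voiced pitch."""
    lag_min = int(sample_rate / fmax)                  # shortest period considered
    lag_max = min(int(sample_rate / fmin), len(samples) - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0

# A synthetic 440 Hz "sung" tone: the estimate lands near 440 Hz,
# quantized by the integer-lag resolution of the search.
sr = 8000
tone = [math.sin(2 * math.pi * 440.0 * t / sr) for t in range(1024)]
pitch = detect_pitch(tone, sr)
```

The estimated pitch would then be drawn against the target note indicator so the singer sees how far off the sung pitch is.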
Method and system for generating an audio or MIDI output file using a harmonic chord map
Techniques are provided for generating an output file. One technique involves the steps of generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks with the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or MIDI-generated musical content.
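A minimal sketch of the selection step, assuming the harmonic chord map is a list of (start-beat, chord) regions and each content block is tagged with the chord it was performed over; the data shapes and names are hypothetical:

```python
def select_blocks(chord_map, blocks):
    """For each region of the harmonic chord map, pick a content block
    whose chord tag matches, producing a placement list that can be
    combined with the input file to form the output file."""
    placements = []
    for start_beat, chord in chord_map:
        match = next((b for b in blocks if b["chord"] == chord), None)
        if match is not None:
            placements.append({"at_beat": start_beat, "block": match["name"]})
    return placements

chord_map = [(0, "C"), (4, "Am"), (8, "F"), (12, "G")]
blocks = [
    {"name": "piano_C.wav", "chord": "C"},
    {"name": "guitar_Am.mid", "chord": "Am"},
    {"name": "bass_F.wav", "chord": "F"},
]
placements = select_blocks(chord_map, blocks)
```

Regions with no matching block (the "G" region here) are simply left to the input file's own content; a fuller system would also rank candidate blocks by style, genre, or mood.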
System, Method and Apparatus for Directing a Presentation of a Musical Score using Artificial Intelligence
Aspects of the subject disclosure may include, for example, receiving content in the form of musical score sheets or other data that includes instructions to play notes on a particular instrument, guidance that is enabled with respect to the content, obtaining new input such as musical scores or other instructions responsive to determining that the guidance is enabled, and obtaining the guidance with respect to a display of the content, where the obtained guidance is based on the input. The instructions may direct the end user to take certain actions in playing the instrument, or physical actions in a marching band context. Artificial intelligence may be used to issue new documents, instructions or guidance. A GPS and drone communications system is also disclosed. Other embodiments are disclosed.
Method of and system for automatically generating digital performances of music compositions using notes selected from virtual musical instruments based on the music-theoretic states of the music compositions
An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
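One way to picture the per-instrument performance rules: each virtual instrument maps music-theoretic state descriptors (key, dynamic, tempo marking, and so on) to a rendering decision, and rules fire when the composition's current state matches their trigger. This is an illustrative sketch, not the engine from the abstract; the descriptor names and rule shapes are assumptions:

```python
def perform(vmi_rules, state):
    """Fire every rule whose trigger matches the composition's current
    music-theoretic state, collecting the articulation choices the
    virtual instrument should apply when rendering its sampled notes."""
    actions = []
    for trigger, action in vmi_rules:
        if all(state.get(key) == value for key, value in trigger.items()):
            actions.append(action)
    return actions

# Hypothetical rule set for a sampled-violin virtual instrument.
violin_rules = [
    ({"dynamic": "pp"}, "use_soft_samples"),
    ({"articulation": "staccato"}, "shorten_note_tails"),
    ({"dynamic": "ff", "tempo": "presto"}, "use_aggressive_attack"),
]
state = {"dynamic": "pp", "articulation": "staccato", "tempo": "adagio"}
actions = perform(violin_rules, state)
```

Because every virtual instrument carries its own rule set, the same state descriptors can produce different renderings per instrument, which is what lets the library selection subsystem swap instruments without rewriting the composition.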
METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
Techniques are provided for generating an output file. One technique involves the steps of generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks with the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or MIDI-generated musical content.
AUTOMATIC TRANSLATION USING DEEP LEARNING
Audio data of an original work is received. Text in the audio data is translated to a target language. The audio data is passed to a first deep learning model to learn voice features in the audio data. The audio data is passed to a second deep learning model to learn audio properties in the audio data. The translated text is synchronized to play, in a synthesized voice, in the position of the original text of the original work. Translated audio data of the original work is created by combining the synchronized translated text in the synthesized voice with the music of the audio data.
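The synchronization step can be pictured as aligning each translated line to the timing of the original line before it is re-voiced and mixed back over the music; the segment structure below is an assumption for illustration:

```python
def sync_translation(segments, translated_lines):
    """Place each translated line at the start/end times of the
    corresponding original line, so the synthesized voice plays in the
    original text's position."""
    return [
        {"start": seg["start"], "end": seg["end"], "text": text}
        for seg, text in zip(segments, translated_lines)
    ]

# Hypothetical timed lyric segments from the original recording.
segments = [
    {"start": 0.0, "end": 2.5, "text": "Bonjour"},
    {"start": 2.5, "end": 5.0, "text": "tout le monde"},
]
aligned = sync_translation(segments, ["Hello", "everyone"])
```

In the described system the synthesized voice would additionally carry the voice features and audio properties learned by the two deep learning models; this sketch covers only the timing alignment.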
MUSIC GENERATION METHOD AND APPARATUS
A method of generating music and a music generation apparatus are disclosed. The music generation apparatus generates a musical score from audio samples, selects, from among a plurality of MIDI samples, a MIDI sample suitable for the musical score based on a position of a note on a time axis, adjusts pitches of notes of each measure of the selected MIDI sample to match the component notes and tonality of each measure of the musical score, and outputs a melody sample in which the pitches of the notes are adjusted.
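The per-measure pitch adjustment can be sketched as snapping each note of the selected MIDI sample to the nearest tone of the measure's key; the major-scale table and the tie-breaking rule are assumptions for the sketch:

```python
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets from the tonic

def snap_to_key(midi_note, tonic, scale=MAJOR_SCALE):
    """Shift a MIDI pitch by the smallest interval that lands it on a
    scale degree of the measure's key (ties resolve toward the lower
    scale degree)."""
    pc = (midi_note - tonic) % 12          # pitch class relative to tonic
    def distance(degree):
        diff = abs(pc - degree)
        return min(diff, 12 - diff)        # wrap around the octave
    target = min(scale, key=distance)
    delta = target - pc
    if delta > 6:
        delta -= 12
    elif delta < -6:
        delta += 12
    return midi_note + delta

# With tonic C4 (60): F#4 (66) snaps to F4 (65); E4 (64) is already in key.
```

A fuller adjustment would also weigh the measure's component notes (for example, preferring chord tones on strong beats), which this pitch-class snap ignores.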
System, method and apparatus for directing a presentation of a musical score via artificial intelligence
Aspects of the subject disclosure may include, for example, receiving content in the form of musical score sheets or other data that includes instructions to play notes on a particular instrument, guidance that is enabled with respect to the content, obtaining new input such as musical scores or other instructions responsive to determining that the guidance is enabled, and obtaining the guidance with respect to a display of the content, where the obtained guidance is based on the input. The instructions may direct the end user to take certain actions in playing the instrument, or physical actions in a marching band context. Artificial intelligence may be used to issue new documents, instructions or guidance. A GPS and drone communications system is also disclosed. Other embodiments are disclosed.
SYSTEM AND METHOD FOR GENERATING MUSICAL SCORE
A method for generating a musical score based on user performance during playing a keyboard instrument may include detecting a status change of a plurality of execution devices of the keyboard instrument. The method may include generating a first signal according to the detected status change. The method may include generating a second signal indicating a plurality of timestamps. The method may include determining a tune of the musical score based on the first signal. The method may include determining a rhythm of the musical score based on the second signal. The method may further include generating the musical score based on the tune and the rhythm of the musical score.
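A compact way to picture how the two signals combine: the first signal supplies which keys changed state (the tune) and the second supplies timestamps (the rhythm), and quantizing press/release times to a beat grid yields score events. The event shape, tempo, and grid resolution below are assumptions:

```python
def events_to_score(events, bpm=120, grid=0.25):
    """Turn (midi_note, press_time, release_time) triples into
    (note, start_beat, duration_beats) score entries, snapping both the
    onset and the duration to the nearest grid subdivision of a beat."""
    beat_seconds = 60.0 / bpm
    score = []
    for note, t_on, t_off in events:
        start = round((t_on / beat_seconds) / grid) * grid
        duration = max(grid,
                       round(((t_off - t_on) / beat_seconds) / grid) * grid)
        score.append((note, start, duration))
    return score

# Two quarter notes played back to back at 120 BPM
# (0.5 s per beat, so each half-second press is one beat long).
events = [(60, 0.0, 0.5), (62, 0.5, 1.0)]
score = events_to_score(events)
```

The `max(grid, ...)` floor keeps a very short tap from quantizing to a zero-length note, a common guard in MIDI-to-score conversion.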
Audio Contribution Identification System and Method
A system for identifying the contribution of a given sound source to a composite audio track, the system comprising: an audio input unit operable to receive an input composite audio track comprising two or more sound sources, including the given sound source; an audio generation unit operable to generate, using a model of a sound source, an approximation of the contribution of the given sound source to the composite audio track; an audio comparison unit operable to compare the generated audio to at least a portion of the composite audio track to determine whether the generated audio provides an approximation of the composite audio track that meets a threshold degree of similarity; and an audio identification unit operable to identify, when the threshold is met, the generated audio as a suitable representation of the contribution of the sound source to the composite audio track.
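The comparison unit's job can be sketched as computing a similarity score between the model's generated audio and the composite track and accepting the generation once it clears a threshold; cosine similarity and the 0.6 threshold here are stand-ins for whatever measure the actual system uses:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length sample buffers."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def identifies_contribution(generated, composite, threshold=0.6):
    """Accept the generated audio as a suitable representation of the
    source's contribution when it is similar enough to the composite."""
    return cosine_similarity(generated, composite) >= threshold

# A source buried under another component still correlates strongly
# with a clean generation of that source.
source = [math.sin(0.1 * i) for i in range(200)]
composite = [math.sin(0.1 * i) + 0.2 * math.cos(1.7 * i) for i in range(200)]
```

In practice the comparison would run per frame or per frequency band rather than over raw waveforms, but the accept/reject structure is the same.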