Patent classifications
G10H2240/121
Music context system audio track structure and method of real-time synchronization of musical content
A system is described that permits identified musical phrases or themes to be synchronized with, and linked to, changing real-world events. The synchronization includes a seamless musical transition between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The transition is achieved using a timing offset, such as the relative advancement of a significant musical onset, inserted to align with a pre-existing but identified music signature, beat or timebase. The system operates to augment the overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating audio themes and then playing them within the context of what is occurring in the surrounding physical environment, e.g. during different phases of a cardio workout in a step class the music rate and intensity increase during sprint periods and decrease during recovery periods.
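The beat-aligned transition described above can be sketched as follows; the function, its name and the fixed-tempo assumption are illustrative, not taken from the patent:

```python
import math

# Illustrative sketch, assuming a fixed tempo: schedule an incoming phrase
# so that its first strong onset lands on the next bar line, advanced by an
# "anacrusis" (pickup) offset so any pickup notes sound before the bar line.

def next_entry_time(now_s: float, bpm: float, beats_per_bar: int,
                    anacrusis_beats: float) -> float:
    """Return the playback time (seconds) at which to start the incoming
    phrase so its main onset aligns with the next bar boundary."""
    beat_s = 60.0 / bpm
    bar_s = beats_per_bar * beat_s
    next_bar = math.ceil(now_s / bar_s) * bar_s  # next bar line at/after now
    return next_bar - anacrusis_beats * beat_s
```

At 120 BPM in 4/4, a phrase with a one-beat pickup requested at t = 10 s would start at 9.5 s so its downbeat lands on the bar line at 10 s.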
MEDIA-MEDIA AUGMENTATION SYSTEM AND METHOD OF COMPOSING A MEDIA PRODUCT
A media-content augmentation system includes a processing system that receives input data in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with at least one reference media file, and then splices or fades together selected parts of the media file, thus generating as an output a media product in which transitions between its contextual themes are aligned with selected temporal events in the input data. The temporally-varying events take the form of a beginning and an end in the case of a sustained feature, or a specific point in time for a hit point. A method aligns sections in digital media files with temporally-varying events data to compose a media product. The system augments a sensory experience of a user by dynamically changing and then playing selected media files within the context of the categorized themes input to the processing system.
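The resolve-correlate-splice pipeline can be sketched minimally; the event-to-theme mapping and the data shapes below are assumptions for illustration, not the patent's own structures:

```python
# Assumed mapping from raw event labels to contextual themes.
EVENT_TO_THEME = {"sprint": "high-energy", "recovery": "calm"}

def compose_edit_list(events, library):
    """Resolve events into themes, then correlate each theme with a media
    file from the library to build an ordered edit list.

    events:  list of (start_s, end_s, event_label) tuples.
    library: dict mapping theme -> media file identifier.
    Returns a list of (start_s, end_s, media_id) tuples.
    """
    edit_list = []
    for start, end, label in events:
        theme = EVENT_TO_THEME.get(label, "neutral")
        media_id = library.get(theme)
        if media_id is not None:  # skip themes with no matching media
            edit_list.append((start, end, media_id))
    return edit_list
```

A real system would additionally crossfade at each boundary rather than cut.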
METHOD OF COMBINING AUDIO SIGNALS
A method for automatically generating an audio signal, the method comprising: receiving a source audio signal; analyzing the source audio signal to identify a musical parameter characteristic thereof; obtaining a supplemental audio signal based on the identified musical parameter characteristic; and combining the source audio signal and the supplemental audio signal to form an extended audio signal.
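A toy version of the claimed steps, with estimated tempo standing in for the identified musical parameter characteristic; the analysis and the combination here are deliberately naive sketches:

```python
# Sketch of the claim's three steps: analyze, obtain a match, combine.

def estimate_bpm(beat_times):
    """Analyze: estimate tempo from consecutive beat timestamps (seconds)."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def pick_supplemental(bpm, catalogue):
    """Obtain: from (clip_id, clip_bpm) pairs, choose the nearest tempo."""
    return min(catalogue, key=lambda c: abs(c[1] - bpm))[0]

def extend(source, supplemental):
    """Combine: append samples; a real system would beat-align and crossfade."""
    return source + supplemental
```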
Data format
A method for constructing an adaptive media file comprising a plurality of audio components configured to be used to form an audio output arranged to have a controllable tempo, the method comprising: providing first audio data associated with a first audio component of the plurality of audio components; setting a playback tempo range of the first audio data; providing second audio data associated with the first audio component; setting a playback tempo range of the second audio data, wherein the tempo range of the second audio data is different from the tempo range of the first audio data; and associating the first audio data, the second audio data and the respective playback tempo ranges.
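The described format can be sketched as a component holding several renditions of the same part, each tagged with its playback tempo range; the names and the range semantics (minimum inclusive, maximum exclusive) are my assumptions:

```python
from dataclasses import dataclass

@dataclass
class Rendition:
    audio: str       # placeholder for the audio data, e.g. a file path
    min_bpm: float   # inclusive lower bound of the playback tempo range
    max_bpm: float   # exclusive upper bound

def select_rendition(component, tempo_bpm):
    """Pick the rendition of a component whose tempo range covers the
    requested playback tempo, or None if no range matches."""
    for r in component:
        if r.min_bpm <= tempo_bpm < r.max_bpm:
            return r
    return None

# One component ("drums") with two renditions covering different tempi.
drums = [Rendition("drums_slow.wav", 60, 100),
         Rendition("drums_fast.wav", 100, 160)]
```

Because the two tempo ranges differ, playback can swap renditions as the controllable tempo moves, instead of time-stretching one recording.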
Music context system and method of real-time synchronization of musical content having regard to musical timing
Due to discrepancies in musical timing signatures, the invention assesses whether a recorded displacement, expressed in terms of beats and fractions, between exit and entry points for a potential musical splice or cut permits a seamless splicing together of different musical sections. Assessment is achieved by establishing a third time base of pulses whose length depends on the lowest common multiple of the fractions within the respective bars of the different sections, with the bars of the respective sections then partitioned into an equal number of fixed-length pulses. A coefficient, the ratio between the pulse counts of the different sections, aligns different time signatures. The coefficient identifies corresponding locations of a cut point, related to a suitable anacrusis, in terms of an aligned bar, beat, quaver and fraction in the differing time signatures, ensuring that the anacrusis timing in one time signature is interchangeable with that in the others.
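A sketch of the shared pulse time base and the coefficient, under my assumption that the pulse length must divide the finest subdivision used in either section; for 4/4 against 6/8 at semiquaver resolution this yields 16 and 12 pulses per bar and a coefficient of 4/3:

```python
from fractions import Fraction
from math import lcm

def pulse_grid(sig_a, sig_b, subdivision=16):
    """Derive a common pulse time base for two time signatures.

    sig_a, sig_b: (beats, unit) pairs, e.g. (4, 4) or (6, 8).
    subdivision:  finest note value considered (16 = semiquaver); an
                  assumed parameter, not from the patent.
    Returns (pulses_per_bar_a, pulses_per_bar_b, coefficient), where the
    coefficient is the ratio between the two sections' pulse counts.
    """
    bar_a = Fraction(sig_a[0], sig_a[1])  # bar length in whole notes
    bar_b = Fraction(sig_b[0], sig_b[1])
    # Pulse length: divides both bar units and the chosen subdivision,
    # so each bar partitions into a whole number of fixed-length pulses.
    pulse = Fraction(1, lcm(sig_a[1], sig_b[1], subdivision))
    pulses_a = int(bar_a / pulse)
    pulses_b = int(bar_b / pulse)
    return pulses_a, pulses_b, Fraction(pulses_a, pulses_b)
```

With the coefficient 4/3, a cut point located at a given pulse in the 6/8 section maps to a corresponding pulse position in the 4/4 section, which is what makes an anacrusis interchangeable across the splice.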
AUTO-GENERATED ACCOMPANIMENT FROM SINGING A MELODY
A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps, in the electronic system, of: acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the step of generating a music accompaniment (23) as a function of the estimated musical key (15b) and tempo (15a) and an arrangement database (22), and mixing the aligned voice recording (20) and the music accompaniment (23) to obtain the song (12). A system, a server and a device are also disclosed.
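The tuning and timing controls can be illustrated with toy versions that snap a sung MIDI pitch to the nearest tone of the estimated key and a note onset to the estimated beat grid; the key is fixed to C major here purely for illustration:

```python
# Pitch classes of the assumed (estimated) key: C major.
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}

def tune(midi_pitch: int) -> int:
    """Tuning control sketch: move the pitch to the nearest in-key note."""
    for d in sorted(range(-6, 7), key=abs):  # try 0, -1, +1, -2, +2, ...
        if (midi_pitch + d) % 12 in C_MAJOR:
            return midi_pitch + d
    return midi_pitch

def quantize(onset_s: float, bpm: float) -> float:
    """Timing control sketch: snap an onset to the nearest beat."""
    beat = 60.0 / bpm
    return round(onset_s / beat) * beat
```

A real implementation would correct pitch continuously and time-stretch audio rather than move discrete note events.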
GENERATING AND MIXING AUDIO ARRANGEMENTS
A request for an audio arrangement having one or more target audio arrangement characteristics is received. One or more target audio attributes are identified based on the one or more target audio arrangement characteristics. First audio data is selected; the first audio data has a first set of audio attributes, which comprises at least some of the identified target audio attributes. Second audio data is selected; the second audio data has a second set of audio attributes, which comprises at least some of the identified target audio attributes. The one or more mixed audio arrangements, and/or data usable to generate them, are output. The one or more mixed audio arrangements are generated by mixing at least the selected first and second audio data using an automated audio mixing procedure.
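Attribute-driven selection can be sketched as a set-overlap ranking; the scoring rule is my simplification, not the patent's procedure:

```python
def select(items, target_attrs):
    """Rank audio items by how many target attributes they share.

    items:        dict mapping item id -> set of audio attributes.
    target_attrs: set of target audio attributes from the request.
    Returns item ids sharing at least one target attribute, best first.
    """
    scored = [(len(attrs & target_attrs), item_id)
              for item_id, attrs in items.items()
              if attrs & target_attrs]          # drop non-matching items
    return [item_id for _, item_id in sorted(scored, reverse=True)]
```

The top-ranked items would then be passed to the automated mixing procedure to produce the mixed arrangement.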
Media-media augmentation system and method of composing a media product
A media-content augmentation system includes a database with a multiplicity of media files and associated metadata. Each media file is mapped to at least one contextual theme defined by beginning and end timings. A processing system is coupled to the database, and an input is coupled to the processing system. The input is in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with selected media files relevant to the themes, and then splices or fades together selected media files to reflect the events as the input varies with time, thus generating as an output a media product in which transitions between media are aligned with the temporally-varying events. The database may contain sections of digital media files. A method aligns sections in digital media files with temporally-varying events data to compose a media product.
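The database mapping described above might look like the following, with each media file's metadata listing themed sections and their begin/end timings; the structure and names are assumed for illustration:

```python
# Assumed metadata shape: file -> list of (theme, begin_s, end_s) sections.
LIBRARY = {
    "film1.mp4": [("tension", 0.0, 42.5), ("calm", 42.5, 90.0)],
    "film2.mp4": [("calm", 10.0, 30.0)],
}

def sections_for_theme(library, theme):
    """Look up every themed section matching the requested theme."""
    return [(f, begin, end)
            for f, sections in library.items()
            for (t, begin, end) in sections
            if t == theme]
```

The processing system would splice or crossfade the returned sections in the order dictated by the temporally-varying events.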