Patent classifications
G10H2240/075
MEDIA-CONTENT AUGMENTATION SYSTEM AND METHOD OF COMPOSING A MEDIA PRODUCT
A media-content augmentation system includes a processing system that receives input data in the form of temporally-varying events data. The processing system resolves the input into one or more categorized contextual themes, correlates the themes with metadata associated with at least one reference media file, and then splices or fades together selected parts of the media file, thereby generating as an output a media product in which transitions between its contextual themes are aligned with selected temporal events in the input data. The temporally-varying events take the form of a beginning and an end in the case of a sustained feature, or a specific point in time for a hit point. A method aligns sections in digital media files with temporally-varying events data to compose a media product. The system augments a sensory experience of a user by dynamically changing and then playing selected media files within the context of the categorized themes input to the processing system.
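A minimal sketch of the splicing step described above, assuming events have already been resolved into themes and sections already carry theme metadata; the `Event`/`Section` names and the first-match selection rule are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float   # when the temporal event occurs in the input data
    theme: str    # categorized contextual theme resolved from the event

@dataclass
class Section:
    name: str
    theme: str    # metadata tag on this part of the reference media file

def compose(events, sections, total_length):
    """Splice sections together so theme transitions align with event times."""
    by_theme = {}
    for s in sections:
        by_theme.setdefault(s.theme, []).append(s)
    timeline = []
    for i, ev in enumerate(events):
        start = ev.time
        end = events[i + 1].time if i + 1 < len(events) else total_length
        match = by_theme.get(ev.theme)
        if match:  # splice in a section whose metadata matches the theme
            timeline.append((start, end, match[0].name))
    return timeline
```

Each output segment starts exactly at an event time, so every transition in the media product lands on a selected temporal event.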
TRANSITIONS BETWEEN MEDIA CONTENT ITEMS
A system of playing media content items determines transitions between pairs of media content items by determining desirable locations in which transitions across the pairs of media content items occur. The system uses a plurality of track features of media content items and determines such track features of each media content item associated with each of transition point candidates, such as beat positions, of that media content item. The system determines similarity in the plurality of track features between the transition point candidates of a first media content item and the transition point candidates for a second media content item being played subsequent to the first media content item. The transition points or portions of the first and second media content items are selected from the transition point candidates for the first and second media content items based on the similarity results.
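The candidate-selection step above can be sketched as a pairwise feature comparison, here using Euclidean distance as the similarity measure (the patent does not specify a metric, so this choice is an assumption); each candidate is a beat position paired with its track-feature vector:

```python
import math

def best_transition(cands_a, cands_b):
    """cands_a / cands_b: lists of (beat_time, feature_vector) transition
    point candidates for the current and next media content items.
    Returns the (time_a, time_b) pair whose track features are most similar."""
    best, best_dist = None, math.inf
    for ta, fa in cands_a:
        for tb, fb in cands_b:
            d = math.dist(fa, fb)  # distance across the plurality of track features
            if d < best_dist:
                best_dist, best = d, (ta, tb)
    return best
```

Exiting the first item at `time_a` and entering the second at `time_b` then places the transition where the two items sound most alike.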
MODULAR AUTOMATED MUSIC PRODUCTION SERVER
A music production system comprises: a computer interface comprising at least one input for receiving an external request for a piece of music and at least one output for transmitting a response to the external request which comprises or indicates a piece of music incorporating first music data; a first music production component configured to process second music data according to at least a first input setting so as to generate the first music data; a second music production component configured to receive via the computer interface an internal request, and provide the second music data based on at least a second input setting denoted by the internal request; and a controller configured to determine in response to the external request the first and second input settings, and instigate the internal request via the computer interface.
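The two-stage pipeline reads as: controller derives both input settings from the external request, the second component produces intermediate music data from the second setting, and the first component processes that into the final music data. A sketch under those assumptions (the settings and their mapping from the request are invented for illustration):

```python
def controller(request):
    """Determine the first and second input settings from the external request."""
    mood = request.get("mood", "neutral")
    first_setting = {"tempo": 90 if mood == "calm" else 130}
    second_setting = {"key": "C"}
    return first_setting, second_setting

def second_component(second_setting):
    """Provide the second music data based on the internal request's setting."""
    return {"notes": ["C4", "E4", "G4"], **second_setting}

def first_component(second_data, first_setting):
    """Process the second music data into the first music data."""
    return {**second_data, **first_setting}

def handle_request(request):
    first_setting, second_setting = controller(request)
    second_data = second_component(second_setting)   # internal request
    return first_component(second_data, first_setting)  # response payload
```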
Technologies for generating a musical fingerprint
Techniques are described herein for generating a music fingerprint representative of a performance style of an individual. One or more characteristics associated with musical data are identified. A score associated with each of the identified one or more characteristics is determined. The music fingerprint is generated based on the determined score for each of the identified one or more characteristics.
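A minimal sketch of the identify-score-aggregate flow; the characteristic names and scoring formulas below are illustrative placeholders, not the patented analysis:

```python
def music_fingerprint(performance):
    """Score each identified characteristic of the musical data;
    the fingerprint is the resulting vector of scores."""
    characteristics = {
        "swing": lambda p: p["offbeat_delay_ms"] / 50.0,
        "dynamics": lambda p: p["velocity_range"] / 127.0,
        "tempo_drift": lambda p: p["tempo_std"] / p["tempo_mean"],
    }
    return {name: round(score(performance), 3)
            for name, score in characteristics.items()}
```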
AUTOMATED MIDI MUSIC COMPOSITION SERVER
A music composition system for composing music segments comprises: a computer interface comprising at least one external input for receiving from an external device a request for a musical composition; a controller configured to determine based on a request received at the external input a plurality of musical parts for the musical composition; and a composition engine configured to generate, for each of the determined musical parts, at least one musical segment in digital musical notation format, the musical segments configured to cooperate musically when performed simultaneously. The computer interface comprises at least one external output configured to output a response to the request, the response comprising or indicating each of the musical segments in digital musical notation format for rendering into audio data at the external device.
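A sketch of the controller/composition-engine split; the part names, the request field, and the idea that sharing a key and bar count is what makes segments cooperate are all illustrative assumptions:

```python
def determine_parts(request):
    """Controller: choose the musical parts for the composition."""
    return ["melody", "bass"] if request.get("style") == "duo" \
        else ["melody", "bass", "drums"]

def generate_segment(part, key, bars):
    """Composition engine: one segment per part, in a notation-like record.
    A shared key and bar count lets the segments cooperate when played together."""
    return {"part": part, "key": key, "bars": bars}

def handle(request):
    parts = determine_parts(request)
    # Each segment goes into the response for rendering at the external device.
    return [generate_segment(p, key="G", bars=8) for p in parts]
```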
Disregarding audio content
One embodiment provides a method, including: receiving, at an information handling device, a user input to play media files associated with a media file type from a playlist comprising a plurality of media files; analyzing, using a processor, the plurality of media files to identify at least one media file not associated with the media file type; disregarding, at least temporarily, based on the analyzing, the at least one media file; and providing, based on the disregarding, output of a media file from the playlist other than the at least temporarily disregarded at least one media file. Other aspects are described and claimed.
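The analyze-disregard-provide sequence amounts to a temporary filter over the playlist; a minimal sketch, assuming each media file is a record with a `type` field:

```python
def filtered_playback(playlist, media_file_type):
    """Disregard, at least temporarily, items not matching the requested type;
    the skipped items stay in the playlist for later playback requests."""
    kept = [m for m in playlist if m["type"] == media_file_type]
    skipped = [m for m in playlist if m["type"] != media_file_type]
    return kept, skipped
```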
Music context system and method of real-time synchronization of musical content having regard to musical timing
Due to discrepancies in musical timing signatures, the invention assesses whether a recorded displacement, expressed in terms of beats and fractions, between exit and entry points for a potential musical splice or cut is compatible with a seamless splice between different musical sections. Assessment is achieved by establishing a third time base of pulses whose length depends upon a lowest common multiple of the fractions within respective bars for the different sections, with the bars of the respective sections then partitioned into an equal number of fixed-length pulses. A coefficient aligns the different time signatures; it is the ratio between the pulse counts of the different sections. The coefficient identifies the corresponding locations of a cut point, related to a suitable anacrusis, in terms of an aligned bar, beat, quaver and fraction in the differing time signatures, and ensures that an anacrusis in one time signature is interchangeable with its counterpart in another.
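The pulse time base and coefficient can be sketched numerically; taking the "lowest common multiple of fractions" to mean the LCM of the time-signature denominators is an interpretive assumption:

```python
from math import lcm

def pulses_per_bar(numerator, denominator, resolution):
    """Partition a bar into fixed-length pulses on the shared time base.
    resolution = LCM of the note-fraction denominators across the sections."""
    return numerator * resolution // denominator

def alignment_coefficient(sig_a, sig_b, resolution):
    """Ratio between the pulse counts of the two sections' bars: maps a pulse
    position in one time signature onto the corresponding position in the other."""
    return pulses_per_bar(*sig_a, resolution) / pulses_per_bar(*sig_b, resolution)

# Example: splicing a 4/4 section against a 6/8 section.
resolution = lcm(4, 8)  # shared pulse resolution = 8
```

With that resolution, a 4/4 bar holds 8 pulses and a 6/8 bar holds 6, so the coefficient 8/6 converts a cut-point location between the two signatures.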
Systems and methods for embedding data in media content
An electronic device determines a first audio event of a first media content item and modifies the first media content item by superimposing a first set of data that corresponds to the first media content item over the first audio event. The first audio event has a first audio profile configured to be presented over a first channel for playback. The first set of data has a second audio profile configured to be presented over the first channel for playback. Playback of the second audio profile is configured to be masked by the first audio profile during playback of the first media content item. The electronic device transmits, to a second electronic device, the modified first media content item.
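A toy sketch of the superimposition step: the data is added at very low amplitude on the same channel during the loud audio event, modelling how the event's audio profile masks the embedded profile. The bit encoding and the fixed carrier amplitude are illustrative assumptions:

```python
def embed(samples, event_start, data_bits, carrier_amplitude=0.01):
    """Superimpose a bit pattern over an audio event on the same channel.
    The small carrier amplitude keeps the data below the masking threshold
    of the louder event during playback."""
    out = list(samples)
    for i, bit in enumerate(data_bits):
        out[event_start + i] += carrier_amplitude if bit else -carrier_amplitude
    return out
```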
METHOD AND SYSTEM FOR CATEGORIZING MUSICAL SOUND ACCORDING TO EMOTIONS
A computer implemented method for analysing sounds, such as audio tracks, and automatically classifying the sounds in a space in which arousal is one axis and valence is another axis. The location of a sound or track in that arousal-valence space is automatically determined using a computer implemented system that analyses, measures or infers values for each of the following base feature parameters: harmonicity, turbulence, rhythmicity, sharpness, volume and linear harmonic cost, or any combination of two or more of those parameters.
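The final mapping onto the arousal-valence plane can be sketched as a weighted combination of the six base feature parameters; the weights below are illustrative placeholders, not the patented model:

```python
def arousal_valence(features):
    """Locate a sound in the arousal-valence space from its base features:
    harmonicity, turbulence, rhythmicity, sharpness, volume and
    linear harmonic cost (all assumed normalised to [0, 1])."""
    arousal = (0.4 * features["rhythmicity"] + 0.3 * features["volume"]
               + 0.2 * features["sharpness"] + 0.1 * features["turbulence"])
    valence = (0.6 * features["harmonicity"]
               - 0.4 * features["linear_harmonic_cost"])
    return arousal, valence
```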