Patent classifications
G10H2210/076
Audio processing techniques for semantic audio recognition and report generation
System, apparatus and method for determining semantic information from audio, where incoming audio is sampled and processed to extract audio features, including temporal, spectral, harmonic and rhythmic features. The extracted audio features are compared to stored audio templates that include ranges and/or values for certain features and are tagged for specific ranges and/or values. The extracted audio features most similar to one or more templates are identified according to the tagged information. The tags are used to determine the semantic audio data, which includes genre, instrumentation, style, acoustical dynamics, and emotive descriptors for the audio signal.
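The template-matching step described above can be sketched in a few lines. This is an illustrative reading of the abstract, not the patented implementation: each template holds value ranges for named features plus semantic tags, and the template whose ranges cover the most extracted features supplies the tags. All names (`match_template`, `TEMPLATES`, the feature names) are assumptions.

```python
# Hypothetical sketch of template matching: extracted feature values are
# checked against per-template ranges, and the tags of the best-matching
# template become the semantic audio data.

TEMPLATES = [
    {"tags": {"genre": "rock", "mood": "energetic"},
     "ranges": {"tempo_bpm": (110, 180), "spectral_centroid": (2000, 4000)}},
    {"tags": {"genre": "ambient", "mood": "calm"},
     "ranges": {"tempo_bpm": (50, 90), "spectral_centroid": (200, 1500)}},
]

def match_template(features, templates=TEMPLATES):
    """Return the tags of the template whose ranges cover the most features."""
    def score(template):
        return sum(
            1 for name, (lo, hi) in template["ranges"].items()
            if name in features and lo <= features[name] <= hi
        )
    best = max(templates, key=score)
    return best["tags"] if score(best) > 0 else {}

tags = match_template({"tempo_bpm": 128.0, "spectral_centroid": 3100.0})
```

A real system would extract the feature values from sampled audio; here they are supplied directly to keep the matching logic in view.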
TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS
A device including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain decomposed signals, a playback unit to start playback of a first output signal obtained from recombining at least first and second decomposed signals at first and second volume levels, respectively, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit is adapted for reducing the first/second volume levels according to first/second transition functions. The device includes an analyzing unit to analyze an audio signal to determine a song part junction between two song parts. The transition time interval of at least one of the transition functions is set such as to include the song part junction.
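The transition functions and the junction-centered transition interval can be illustrated with a minimal numeric sketch. This is an assumed reading of the claim, with invented names (`make_transition`, `interval_around`): the outgoing signal's gain fades from 1 to 0 over the interval, the incoming signal's gain does the reverse, and the interval is placed so the detected song-part junction falls inside it.

```python
# Minimal sketch (not the patented implementation) of volume transition
# functions: the first output signal fades out and the second fades in
# over a transition interval chosen to contain a song-part junction.

def make_transition(start, end):
    """Return fade-out/fade-in gain functions over [start, end] (seconds)."""
    length = end - start
    def fade_out(t):
        if t <= start:
            return 1.0
        if t >= end:
            return 0.0
        return 1.0 - (t - start) / length
    def fade_in(t):
        return 1.0 - fade_out(t)
    return fade_out, fade_in

def interval_around(junction, width=4.0):
    """Center the transition interval on a detected song-part junction."""
    return junction - width / 2, junction + width / 2

start, end = interval_around(junction=60.0)   # junction detected at 60 s
fade_out, fade_in = make_transition(start, end)
```

Linear ramps are used here for brevity; the claim leaves the shape of the transition functions open.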
MUSIC CONTEXT SYSTEM AUDIO TRACK STRUCTURE AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT
A system is described that permits identified musical phrases or themes to be synchronized and linked into changing real-world events. The achieved synchronization includes a seamless musical transition—achieved using a timing offset, such as relative advancement of a significant musical “onset,” that is inserted to align with a pre-existing but identified music signature, beat or timebase—between potentially disparate pre-identified musical phrases having different emotive themes defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The system operates to augment an overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating and then playing audio themes within the context of what is occurring in the surrounding physical environment, e.g. during different phases of a cardio workout in a step class the music rate and intensity increase during sprint periods and decrease during recovery periods.
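The timing-offset idea can be sketched arithmetically: given an identified beat grid (timebase), compute the offset that advances or delays a musical onset so it lands on the nearest beat. The function names (`beat_grid`, `alignment_offset`) and the simple fixed-tempo grid are assumptions made for illustration.

```python
# Illustrative sketch of the timing offset: shift an onset so it
# coincides with the nearest beat of an identified timebase.

def beat_grid(first_beat, bpm):
    """Return a function mapping a time to the nearest beat of the grid."""
    period = 60.0 / bpm
    def nearest_beat(t):
        n = round((t - first_beat) / period)
        return first_beat + n * period
    return nearest_beat

def alignment_offset(onset_time, nearest_beat):
    """Offset to add to the onset so it falls on the beat grid."""
    return nearest_beat(onset_time) - onset_time

nearest = beat_grid(first_beat=0.0, bpm=120.0)   # beats every 0.5 s
offset = alignment_offset(onset_time=10.2, nearest_beat=nearest)   # -0.2 s
```

A negative offset advances the onset, matching the "relative advancement" language in the abstract.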
NETWORK MUSICAL INSTRUMENT
Methods and systems are described that are utilized for remotely controlling a musical instrument. A first digital record comprising musical instrument digital commands from a first electronic instrument for a first item of music is accessed. The first digital record is transmitted over a network using a network interface to a remote, second electronic instrument for playback to a first user. Optionally, video data is streamed to a display device of a user while the first digital record is played back by the second electronic instrument. A key change command is transmitted over the network using the network interface to the second electronic instrument to cause the second electronic instrument to playback the first digital record for the first item of music in accordance with the key change command. The key change command may be transmitted during the streaming of the video data.
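One plausible effect of the key change command on the receiving instrument is a semitone transposition of the recorded note events before playback. The sketch below assumes a simple tuple event format of its own invention; the patent does not specify the wire format.

```python
# Hedged sketch: transpose MIDI note numbers in a digital record by a
# given number of semitones, clamping to the valid 0-127 range.

def apply_key_change(events, semitones):
    """Transpose note-on/note-off events; other events pass through."""
    transposed = []
    for kind, note, velocity, time in events:
        if kind in ("note_on", "note_off"):
            note = max(0, min(127, note + semitones))
        transposed.append((kind, note, velocity, time))
    return transposed

record = [("note_on", 60, 100, 0.0), ("note_off", 60, 0, 0.5)]
up_a_fourth = apply_key_change(record, 5)   # C4 -> F4
```

Clamping keeps transposed notes inside the MIDI note-number range, which is one reasonable policy; a real instrument might instead drop out-of-range notes.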
INFORMATION PROVIDING METHOD AND INFORMATION PROVIDING DEVICE
The information providing method includes: sequentially identifying a performance speed at which a user performs a piece of music; identifying, in the piece of music, a performance position that is performed by the user; setting an adjustment amount in accordance with a temporal variation in the identified performance speed; and providing the user with music information corresponding to a time point that is later, by the adjustment amount, than a time point that corresponds to the performance position identified in the piece of music.
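The adjustment-amount step can be sketched as follows, under the assumption (not stated in the abstract) that a larger temporal variation in performance speed calls for a larger lookahead, so the provided music information stays ahead of an unsteady performer. All names and the linear gain model are illustrative.

```python
# Minimal sketch (assumed logic): look further ahead in the piece when
# the identified performance speed fluctuates more.

def speed_variation(recent_speeds):
    """Spread of recently identified performance speeds (max - min)."""
    return max(recent_speeds) - min(recent_speeds)

def adjustment_amount(recent_speeds, base=1.0, gain=2.0):
    """Adjustment (in beats) grows with the temporal variation in speed."""
    return base + gain * speed_variation(recent_speeds)

def lookup_position(current_position, recent_speeds):
    """Time point, later by the adjustment amount, whose music
    information is provided to the user."""
    return current_position + adjustment_amount(recent_speeds)

steady = lookup_position(32.0, [1.00, 1.01, 0.99])   # near-minimal lookahead
```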
ELECTRONIC MUSICAL INSTRUMENT, CONTROL METHOD FOR ELECTRONIC MUSICAL INSTRUMENT, AND STORAGE MEDIUM
An electronic musical instrument includes a performance operator and at least one processor. In accordance with pitch data associated with the performance operator operated by a user, the at least one processor digitally synthesizes and outputs inferential musical sound data including an inferential performance technique of a player. The inferential performance technique of the player is based on acoustic feature data output by a trained acoustic model obtained by performing machine learning on: a training score data set including training pitch data; and a training performance data set obtained by the player playing a musical instrument. The inferential performance technique is not itself played by the user's operation of the performance operator.
SYSTEMS AND METHODS FOR PROVIDING AUDIO-FILE LOOP-PLAYBACK FUNCTIONALITY
Systems and methods for providing audio-file loop-playback functionality are provided. The system includes a processor that performs a method including: setting a playback loop start-point based on a first selection of a button; setting a loop end-point, associating a loop with an audio file, and entering the loop based on a second selection of the button; and exiting the loop based on a third selection of the button. Associating the loop with the audio file includes adding metadata to the audio file; the metadata associates the loop with a button. The method further includes reentering the loop based on a fourth selection of the button and exiting the loop based on a fifth selection of the button.
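The five button presses above describe a small state machine, which can be sketched as follows. The state names, the `LoopController` class, and the metadata shape are assumptions made for illustration, not the patented design.

```python
# Sketch of the button-driven loop state machine: press 1 arms the loop
# start, press 2 sets the end and enters the loop (writing metadata that
# associates the loop with the button), presses 3/5 exit, press 4 re-enters.

class LoopController:
    def __init__(self):
        self.state = "idle"
        self.loop = None        # (start, end) once defined
        self.metadata = {}      # stand-in for the audio file's metadata

    def press(self, position, button="loop_a"):
        if self.state == "idle" and self.loop is None:
            self._start = position                 # 1st press: loop start
            self.state = "armed"
        elif self.state == "armed":
            self.loop = (self._start, position)    # 2nd press: loop end
            self.metadata[button] = self.loop      # associate loop with button
            self.state = "looping"
        elif self.state == "looping":
            self.state = "idle"                    # 3rd/5th press: exit loop
        else:
            self.state = "looping"                 # 4th press: re-enter loop
        return self.state

ctl = LoopController()
states = [ctl.press(p) for p in (10.0, 14.0, 14.5, 20.0, 21.0)]
```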
Intelligent accompaniment generating system and method of assisting a user to play an instrument in a system
The intelligent accompaniment generating system includes an input module, an analysis module, a generation module and a musical equipment. The input module is configured to receive a musical pattern signal derived from a raw signal and to transmit the musical pattern signal to the analysis module. The analysis module is configured to analyze the musical pattern signal to extract a set of audio features. The generation module is configured to obtain, from the analysis module, playing assistance information having an accompaniment pattern, wherein the accompaniment pattern has at least two parts with different onsets therebetween, and each onset of the at least two parts is generated by an algorithm according to the set of audio features. The musical equipment includes a digital amplifier configured to output an accompaniment signal according to the accompaniment pattern.
METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
Techniques are provided for generating an output file. One technique involves the steps of: generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks and the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or drawn from MIDI-generated musical content.
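The chord-map-driven selection step can be sketched as a lookup: each catalog block is labelled with the chords it fits, and for each bar of the input's chord map the first compatible block is chosen. The data shapes and names (`build_chord_map`, `select_blocks`) are hypothetical.

```python
# Hypothetical sketch of using a harmonic chord map to select content
# blocks: blocks tagged with a bar's chord are eligible for that bar.

def build_chord_map(chords_per_bar):
    """Chord map: bar index -> chord symbol detected in the input file."""
    return dict(enumerate(chords_per_bar))

def select_blocks(chord_map, content_blocks):
    """Choose, per bar, the first catalog block tagged with that bar's chord."""
    arrangement = {}
    for bar, chord in chord_map.items():
        for block in content_blocks:
            if chord in block["chords"]:
                arrangement[bar] = block["name"]
                break
    return arrangement

blocks = [
    {"name": "riff_Am", "chords": {"Am", "C"}},
    {"name": "pad_G",   "chords": {"G"}},
]
arrangement = select_blocks(build_chord_map(["Am", "G", "C"]), blocks)
```

A production system would also weigh style, genre, or mood when several blocks fit a chord; first-match selection keeps the sketch short.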
METHOD, DEVICE AND SOFTWARE FOR APPLYING AN AUDIO EFFECT
The present invention provides a method for processing music audio data, comprising the steps of providing input audio data representing a first piece of music containing a mixture of predetermined musical timbres, decomposing the input audio data to generate at least a first audio track representing a first musical timbre selected from the predetermined musical timbres, and a second audio track representing a second musical timbre selected from the predetermined musical timbres, applying a predetermined first audio effect to the first audio track, applying no audio effect or a predetermined second audio effect, which is different from the first audio effect, to the second audio track, and obtaining recombined audio data by recombining the first audio track with the second audio track.
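The claimed pipeline (decompose, apply an effect to one timbre track only, recombine) reduces to simple per-sample arithmetic once the decomposed tracks exist. In this sketch the stems are given directly and the "effect" is a plain gain; real decomposition would use source separation, and the effect could be anything. All names and sample values are illustrative.

```python
# Minimal numeric sketch of the claimed steps: apply an effect (here a
# simple gain, standing in for any audio effect) to the first decomposed
# track only, leave the second untouched, then recombine by summation.

def apply_gain(track, gain):
    """Stand-in for the predetermined first audio effect."""
    return [sample * gain for sample in track]

def recombine(*tracks):
    """Sum the tracks sample-by-sample into recombined audio data."""
    return [sum(samples) for samples in zip(*tracks)]

vocals = [0.2, -0.1, 0.3]    # first decomposed timbre (stems assumed given)
drums  = [0.1,  0.4, -0.2]   # second decomposed timbre

processed = recombine(apply_gain(vocals, 0.5), drums)  # effect on vocals only
```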