Patent classifications
G10H2250/641
SYSTEMS AND METHODS FOR PROVIDING AUDIO-FILE LOOP-PLAYBACK FUNCTIONALITY
Systems and methods for providing audio-file loop-playback functionality are provided. The system includes a processor that performs a method including setting a playback loop start-point based on a first selection of a button; setting a loop end-point, associating a loop with an audio file, and entering into the loop based on a second selection of the button; and exiting the loop based on a third selection of the button. Associating the loop with the audio file includes adding metadata to the audio file. The metadata associates the loop with the button. The method includes reentering the loop based on a fourth selection of the button and exiting the loop based on a fifth selection of the button.
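The single-button workflow described in this abstract can be pictured as a small state machine: the first press records a start-point, the second records the end-point and enters the loop, and later presses toggle exit and reentry. This is a minimal illustrative sketch, not the patented implementation; all names are assumptions.

```python
# Sketch of the one-button loop workflow: press 1 sets the start-point,
# press 2 sets the end-point and enters the loop, presses 3+ toggle
# exit/reentry. Positions are playback times in seconds.

class LoopController:
    def __init__(self):
        self.start = None
        self.end = None
        self.looping = False

    def press(self, position):
        """Handle a button press at the given playback position."""
        if self.start is None:        # first press: set loop start-point
            self.start = position
        elif self.end is None:        # second press: set end-point, enter loop
            self.end = position
            self.looping = True
        else:                         # later presses: toggle exit/reenter
            self.looping = not self.looping

ctrl = LoopController()
ctrl.press(1.0)    # first press: start-point
ctrl.press(5.0)    # second press: end-point, loop entered
ctrl.press(7.0)    # third press: exit the loop
ctrl.press(9.0)    # fourth press: reenter
```

In a real player, the loop points and their button association would be written into the audio file's metadata, as the abstract describes.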
Synthesized percussion pedal and docking station
Methods, Apparatus, and a System (collectively a “platform”) for facilitating, enabling, or enhancing creation, control, and playback of digital audio loops or parts are disclosed herein. The platform may include playing back MIDI song segments. The MIDI song segments may comprise a MIDI sequence that is looped a predetermined number of times. The platform may include transitioning to another MIDI song segment automatically after a predetermined number of loops, or transitioning in response to a command. The platform may include changing the number of loops during playback of a song segment in response to a command. The platform may relate to enabling automatic generation of song segments during a performance. The platform may include automatically selecting MIDI sequences to enhance playback. The platform may include other features pertaining to enhancing or enabling digital music creation or composition.
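The segment-looping behaviour described above can be sketched as a generator that repeats each MIDI segment a configured number of times and then transitions to the next; because the loop count is re-read on every pass, it could be changed mid-playback. Segment names and the data structure are illustrative assumptions, not from the patent.

```python
# Sketch of automatic segment transitions: each segment loops a set number
# of times, then playback moves on to the next segment.

def play_song(segments, loop_counts):
    """Yield (segment_name, repetition) events for each segment in order.

    segments: ordered list of segment names.
    loop_counts: dict mapping segment name to its number of loops; it is
    re-read each pass, so a command could change it during playback.
    """
    for seg in segments:
        rep = 0
        while rep < loop_counts[seg]:
            rep += 1
            yield seg, rep

events = list(play_song(["verse", "chorus"], {"verse": 2, "chorus": 3}))
```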
Electronic device and method for outputting sounds
A method for outputting a sound in an electronic device is provided. The method includes generating a loop module corresponding to a loop element; displaying the generated loop module; and outputting a sound included in the displayed loop module.
Musical instrument effects processor
A method in accord with certain implementations involves, at a data interface of a musical instrument effects processor, receiving an extracted characteristic of an audible sound that is captured at a microphone; transferring the extracted characteristic to a digital signal processor residing in the musical instrument effects processor; receiving input signals at an input to the musical instrument effects processor; at the digital signal processor of the musical instrument effects processor, modifying the received input signals using the extracted characteristic to create an electronic audio effect; and outputting the modified input signals as an output signal from the musical instrument effects processor. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
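The signal path in this abstract (microphone-derived characteristic in, instrument signal in, modified signal out) can be illustrated with a trivial DSP stage. Here the extracted characteristic is assumed to be a gain value that modulates the input samples; that choice of effect, and all names, are assumptions for illustration only.

```python
# Minimal sketch of the DSP stage: an extracted characteristic of the
# captured sound (here, a gain value) modifies the instrument input signal.

def apply_effect(input_samples, extracted_gain):
    """Scale each input sample by the characteristic extracted at the mic."""
    return [s * extracted_gain for s in input_samples]

out = apply_effect([0.5, -0.25, 1.0], extracted_gain=0.5)
```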
Method, device and software for controlling transport of audio data
A method for processing music audio data, including providing input audio data representing a first piece of music comprising a mixture of musical timbres. The method also includes decomposing the input audio data to generate at least first-timbre decomposed data representing a first timbre selected from the musical timbres of the first piece of music, and second-timbre decomposed data representing a second timbre selected from the musical timbres of the first piece of music. The method also includes applying a transport control to obtain transport controlled first-timbre decomposed data. The method also includes recombining audio data obtained from the transport controlled first-timbre decomposed data with audio data obtained from the second-timbre decomposed data to obtain recombined audio data.
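The decompose / transport-control / recombine flow above can be sketched end to end. The "decomposition" below is a placeholder split rather than real source separation, and the transport control is reversed playback (a rewind-style control) so both stems stay the same length for per-sample recombination; all of this is assumed for illustration.

```python
# Sketch of the claimed flow: split a mix into two timbre stems, apply a
# transport control to the first stem only, then recombine the stems.

def decompose(mix):
    """Placeholder decomposition: first stem is half the mix, second the rest."""
    first = [s * 0.5 for s in mix]
    second = [m - f for m, f in zip(mix, first)]
    return first, second

def transport_reverse(stem):
    """A simple transport control: play the stem backwards."""
    return stem[::-1]

def recombine(a, b):
    """Sum the two stems sample by sample."""
    return [x + y for x, y in zip(a, b)]

mix = [1.0, 2.0, 3.0, 4.0]
first_timbre, second_timbre = decompose(mix)
recombined = recombine(transport_reverse(first_timbre), second_timbre)
```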
APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR COMPENSATING FOR LATENCY IN MUSICAL COLLABORATION
An apparatus, method, and computer-readable storage medium that compensate for latency in a musical collaboration. The method includes setting a tempo for a first client device to follow, receiving a musical piece from the first client device, transmitting the musical piece to a second client device, and instructing the second client device, via an instruction transmitted along with the musical piece, to delay playback of the musical piece a predetermined amount of time to compensate for latency in the musical collaboration, the predetermined amount of time being associated with a measure or a fraction of a measure.
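Since the delay is tied to a measure (or fraction of one) at the set tempo, it follows directly from beats per minute and the time signature. A minimal sketch of that computation, with all parameter names assumed:

```python
# Delay equal to a measure, or a fraction of a measure, at the set tempo.

def playback_delay_seconds(bpm, beats_per_measure=4, measure_fraction=1.0):
    """Return the playback delay the second client should apply."""
    seconds_per_beat = 60.0 / bpm
    return seconds_per_beat * beats_per_measure * measure_fraction

# At 120 BPM in 4/4, one measure lasts 2 s, so half a measure is 1 s.
delay = playback_delay_seconds(bpm=120, beats_per_measure=4, measure_fraction=0.5)
```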
PERFORMANCE INFORMATION PROCESSING DEVICE AND METHOD
Performance information of a music performance executed by a user is received and temporarily stored into a buffer for each given time period. The performance information is recorded into a recording section in response to a recording instruction by the user. Second performance information having a definite time period is reproduced repeatedly, and the user ad-libs a desired musical performance while listening to the repeatedly reproduced tones of the second performance information. The given time period is set to coincide with the definite time period of the second performance information. Temporarily-stored performance information for the given time period is recorded in one of a plurality of recording tracks. In response to a plurality of the user's recording instructions, a plurality of different segments of performance information for the given time period are recorded into respective ones of the recording tracks, and these different segments are reproduced repeatedly in synchronized fashion.
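The buffering scheme above can be sketched simply: performance events accumulate in a buffer whose period matches the backing phrase's loop length, and each record instruction commits the buffer snapshot to the next free track for synchronized replay. Class and method names are illustrative assumptions.

```python
# Sketch of loop-synchronized overdub recording: a rolling buffer is
# committed to the next recording track on each record instruction.

class LoopRecorder:
    def __init__(self, num_tracks=4):
        self.buffer = []                  # temporary store for the period
        self.tracks = [None] * num_tracks
        self._next = 0

    def note(self, event):
        """Temporarily store one piece of performance information."""
        self.buffer.append(event)

    def record(self):
        """Commit the buffered segment to the next recording track."""
        self.tracks[self._next] = list(self.buffer)
        self._next += 1
        self.buffer.clear()

rec = LoopRecorder()
rec.note("C4"); rec.note("E4")
rec.record()            # first recording instruction -> track 0
rec.note("G2")
rec.record()            # second instruction -> track 1
```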
Digital control of the sound effects of a musical instrument
The object of the present invention concerns a control device (100) for a generation module (GM) of sound effects (EF_A, EF_B) of a musical instrument (MI), the device comprising computer software configured for: capturing, using a digital camera (10), at least one digital image (I) comprising at least one portion of the user's (U) face; processing the at least one image (I) to define expression data (D_EX_i, i being a positive integer) containing information relating to facial expressions (EX_a, EX_b) of the user (U); and analyzing the expression data (D_EX_i) using a predefined first database (DB1) to determine sound-effect data (D_EF_j, j being a positive integer) containing information relating to at least one sound effect (EF_A, EF_B) corresponding to the facial expression (EX_a, EX_b) of the user (U).
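Once expression data has been extracted from the camera image, the final step is a lookup in the predefined database DB1 mapping expressions to sound effects. A hedged sketch of that step, with placeholder expression labels and effect names:

```python
# Sketch of the DB1 lookup: detected facial-expression data is mapped to
# sound-effect data. The entries here are illustrative placeholders.

DB1 = {
    "smile": "chorus",
    "frown": "distortion",
}

def effect_for_expression(expression_data):
    """Return the sound-effect datum for a detected facial expression."""
    return DB1.get(expression_data)

fx = effect_for_expression("frown")
```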
Systems and methods for selecting an audio track by performing a gesture on a track-list image
Systems and methods for selecting an audio track by performing a gesture on a track-list image are provided. The system includes a processor that performs a method including displaying the audio-track list, detecting a contact with the touchscreen display at a location corresponding to the audio track, detecting a continuous movement of the contact in a direction, detecting a length of the continuous movement, and selecting the audio track if the continuous movement has a length longer than a threshold length. The method includes shifting text associated with the audio track based on the length and direction of the continuous movement. The method includes determining that the selection is a command to queue the audio track for playback or add it to a preparation track list. This determination may be based on the direction of the continuous movement.
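The gesture logic described above reduces to a threshold test on drag length plus a direction test that picks the action. In this sketch the threshold value, pixel units, and action names ("queue" vs. "prepare") are assumptions, not taken from the patent.

```python
# Sketch of swipe-based track selection: a drag selects the track only when
# it exceeds a threshold length; its direction chooses the action.

def interpret_gesture(dx, threshold=80):
    """dx: horizontal drag distance in pixels (positive = rightward)."""
    if abs(dx) <= threshold:
        return None                       # too short: no selection
    return "queue" if dx > 0 else "prepare"

a = interpret_gesture(120)    # long rightward swipe: queue for playback
b = interpret_gesture(-150)   # long leftward swipe: preparation track list
c = interpret_gesture(30)     # below threshold: no selection
```

In a full UI, the same `dx` would also drive the text shift on the track row as the abstract describes.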
Memory device, waveform data editing method
Provided are a memory device and a waveform data editing method and editing program thereof. Waveform data obtained by sampling a musical sound is acquired, and the difference between the frequency of the nth harmonic of the waveform data and the resonance frequency of the nth harmonic of a resonance sound generation circuit is calculated. If the difference is 1 Hz or more, a 20 Hz-wide band of the frequency spectrum centered on the frequency of the nth harmonic is clipped out. The calculated difference is then reduced for the clipped waveform, and the adjusted waveform is combined with the original waveform from which it was clipped to edit the waveform data. Thus, in the waveform data, the difference between the harmonic frequencies and the resonance characteristic is eliminated, resonance is facilitated, and beating of the sound is prevented.
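The editing criterion above can be sketched as a per-harmonic comparison: flag harmonic n for correction whenever the sample's harmonic frequency and the resonance circuit's differ by 1 Hz or more (the 20 Hz band around that harmonic would then be clipped and shifted). The frequencies below are illustrative.

```python
# Sketch of the 1 Hz criterion: compare each harmonic frequency of the
# sampled waveform with the resonance circuit's and flag mismatches.

def harmonics_needing_correction(wave_harmonics, resonance_harmonics):
    """Return harmonic numbers whose frequency difference is >= 1 Hz."""
    flagged = []
    for n, (fw, fr) in enumerate(zip(wave_harmonics, resonance_harmonics), 1):
        if abs(fw - fr) >= 1.0:
            flagged.append(n)
    return flagged

flagged = harmonics_needing_correction(
    [440.0, 881.5, 1320.2],   # measured harmonic frequencies of the sample
    [440.0, 880.0, 1320.0],   # resonance-circuit harmonic frequencies
)
```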