Patent classifications
G10H7/00
Musical instrument effects processor
A method in accord with certain implementations involves, at a data interface of a musical instrument effects processor, receiving an extracted characteristic of an audible sound captured at a microphone; transferring the extracted characteristic to a digital signal processor residing in the musical instrument effects processor; receiving input signals at an input to the musical instrument effects processor; at the digital signal processor, modifying the received input signals using the extracted characteristic to create an electronic audio effect; and outputting the modified input signals as an output signal from the musical instrument effects processor. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in it.
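The processing step can be sketched as follows, assuming for illustration that the extracted characteristic is an amplitude envelope applied to the instrument's input signal (the abstract does not specify which characteristic is extracted, so the envelope and the function names here are hypothetical):

```python
import numpy as np

def process(input_signal: np.ndarray, extracted_envelope: np.ndarray) -> np.ndarray:
    """Modify an instrument's input signal using a characteristic
    (here, an amplitude envelope) extracted from a microphone capture."""
    # Stretch the envelope to the length of the input block
    # (a hypothetical resampling choice).
    idx = np.linspace(0, len(extracted_envelope) - 1, num=len(input_signal))
    envelope = np.interp(idx, np.arange(len(extracted_envelope)), extracted_envelope)
    # Apply the characteristic to create the audio effect.
    return input_signal * envelope

# Usage: a constant test block shaped by a short captured fade-in envelope.
block = np.ones(8)
env = np.array([0.0, 1.0])
out = process(block, env)
```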
Recommending audio sample combinations
A recommendation of at least one of multiple audio samples or sets of audio samples to combine with a particular audio sample or set of audio samples is automatically generated. The recommendation is generated by determining the rhythmic compatibility as well as the harmonic compatibility of the particular audio sample or set of samples with each of the multiple audio samples or sets of audio samples. For each of the multiple audio samples or sets of audio samples, a compatibility rating is generated based on the rhythmic compatibility and the harmonic compatibility of the audio sample or set of audio samples with the particular audio sample or set of audio samples. At least one of the multiple audio samples or sets of audio samples is presented by a computing device as a recommendation to combine with the particular audio sample or set of audio samples.
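The rating-and-ranking step described above can be sketched as a weighted combination of the two compatibility scores; the equal weighting and the score names below are assumptions, not taken from the abstract:

```python
def compatibility_rating(rhythmic: float, harmonic: float, w_rhythm: float = 0.5) -> float:
    """Combine rhythmic and harmonic compatibility scores (each in [0, 1])
    into a single rating; the weighting here is a hypothetical choice."""
    return w_rhythm * rhythmic + (1.0 - w_rhythm) * harmonic

def recommend(candidates):
    """Rank candidate samples by their combined compatibility rating with
    the particular sample; scores are assumed precomputed per candidate."""
    rated = [(compatibility_rating(c["rhythmic"], c["harmonic"]), c["name"])
             for c in candidates]
    rated.sort(reverse=True)
    return [name for _, name in rated]

# Usage: two candidate loops scored against a particular sample.
candidates = [
    {"name": "loop_a", "rhythmic": 0.9, "harmonic": 0.4},
    {"name": "loop_b", "rhythmic": 0.6, "harmonic": 0.8},
]
ranking = recommend(candidates)
```

The top-ranked entries would then be presented as the recommendation to combine with the particular sample.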
Modular platform for creation and manipulation of audio and musical signals
A platform for audio and electronic music applications where the electronics are implemented as modules, and the modules mount in a cabinet with a common power supply and infrastructure. The platform addresses problems in electrical, mechanical, usability, and power distribution areas, and is suitable for guitar effects, synthesizers, studio equipment, and DJ gear.
Recording method, recording system, recording program storage medium, acoustic processing method, and acoustic processing device
A recording method includes acquiring acoustic data representing a sound from each of a plurality of portable terminal devices, each of which includes a recording unit configured to generate its piece of acoustic data. The recording method also includes executing synchronization processing to synchronize the respective pieces of acoustic data and executing mixing processing to mix the pieces of acoustic data for which the synchronization processing has been executed.
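The synchronization and mixing steps can be sketched with cross-correlation alignment followed by averaging; the abstract does not specify how synchronization is performed, so correlation-based lag estimation is an assumption here:

```python
import numpy as np

def synchronize(reference: np.ndarray, track: np.ndarray) -> np.ndarray:
    """Align a recording to a reference by the lag that maximizes their
    cross-correlation (a simple stand-in for the synchronization step)."""
    corr = np.correlate(track, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)
    return np.roll(track, -lag)

def mix(tracks):
    """Average the synchronized recordings into a single mix."""
    return np.mean(np.stack(tracks), axis=0)

# Usage: the same impulse captured by two devices, one a sample late.
ref = np.array([0.0, 0.0, 1.0, 0.0])
late = np.roll(ref, 1)
mixed = mix([ref, synchronize(ref, late)])
```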
ELECTRONIC PERCUSSION CONTROLLER, INSTRUMENT AND METHOD
An electronic percussion instrument controller includes a selection input device, a setting input device and a processor. The selection input device is configured to select an instrument, which defines a tone corresponding to a musical performance input device. The setting input device is configured to selectively set a tone lock for the musical performance input device. The processor is programmed to maintain the set tone of a musical performance input device for which the tone lock has been set by the setting input device.
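The tone-lock behavior can be sketched as follows: locked pads keep their assigned tone when a new instrument (kit) is selected. The class and field names are illustrative, not taken from the abstract:

```python
class PercussionController:
    """Sketch of the tone-lock behavior of the described controller."""

    def __init__(self, pads):
        self.tones = dict(pads)   # pad name -> currently assigned tone
        self.locked = set()       # pads for which the tone lock is set

    def set_lock(self, pad, locked=True):
        # Models the setting input device.
        (self.locked.add if locked else self.locked.discard)(pad)

    def select_instrument(self, kit):
        # Models the selection input device: apply the kit's tones,
        # except to pads whose tone is locked.
        for pad, tone in kit.items():
            if pad not in self.locked:
                self.tones[pad] = tone

# Usage: the snare's tone survives selecting a new kit because it is locked.
ctrl = PercussionController({"snare": "acoustic", "kick": "acoustic"})
ctrl.set_lock("snare")
ctrl.select_instrument({"snare": "electronic", "kick": "electronic"})
```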
Digital control of the sound effects of a musical instrument
The present invention concerns a control device (100) for a generation module (GM) of sound effects (EF_A, EF_B) of a musical instrument (MI), the device comprising computer software configured for: capturing, using a digital camera (10), at least one digital image (I) comprising at least a portion of the user's (U) face; processing the at least one image (I) to define expression data (D_EX_i, i being a positive integer) containing information relating to facial expressions (EX_a, EX_b) of the user (U); and analyzing the expression data (D_EX_i) using a predefined first database (DB1) to determine sound-effect data (D_EF_j, j being a positive integer) containing information relating to at least one sound effect (EF_A, EF_B) corresponding to the facial expression (EX_a, EX_b) of the user (U).
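The database lookup from expression data to sound-effect data can be sketched as a mapping; the expression labels and effect parameters below are hypothetical placeholders for the contents of the first database (DB1):

```python
# Hypothetical contents of the first database (DB1): each recognized
# facial expression maps to sound-effect data (effect name + parameters).
DB1 = {
    "raised_eyebrows": {"effect": "wah", "depth": 0.8},
    "closed_eyes": {"effect": "reverb", "mix": 0.5},
}

def expression_to_effects(expression_data):
    """Determine the sound-effect data for each detected expression;
    expressions absent from DB1 produce no effect change."""
    return [DB1[e] for e in expression_data if e in DB1]

# Usage: one recognized expression and one unknown expression.
effects = expression_to_effects(["closed_eyes", "smile"])
```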
Virtual instrument playing scheme
Technologies are generally described for a virtual instrument playing system. In some examples, a virtual instrument playing system may include: a sensor data receiving unit configured to receive first sensor data and second sensor data of a first user; a sound event prediction unit configured to detect a sound event of the first user and to predict a sound generation timing corresponding to the sound event based at least in part on the first sensor data; an instrument identification unit configured to identify, from one or more virtual instruments, a virtual instrument corresponding to the sound event based at least in part on the second sensor data; a sound data generation unit configured to generate sound data of the first user for the identified virtual instrument based at least in part on the sound generation timing; and a video data generation unit configured to generate video data of the first user for the identified virtual instrument based at least in part on the second sensor data.
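The sound-event prediction step can be sketched by extrapolating motion-sensor samples to estimate when a strike will occur; linear extrapolation of a stick's height toward a strike plane is an assumption here, since the abstract does not specify the prediction method:

```python
def predict_hit_time(positions, times, threshold=0.0):
    """Predict when a downward motion will cross the strike plane
    (position <= threshold) by linear extrapolation of the last two
    motion-sensor samples; a simple stand-in for event prediction."""
    (t0, p0), (t1, p1) = (times[-2], positions[-2]), (times[-1], positions[-1])
    velocity = (p1 - p0) / (t1 - t0)
    if velocity >= 0:
        return None  # not moving toward the strike plane; no event predicted
    return t1 + (threshold - p1) / velocity

# Usage: stick height 0.2 m then 0.1 m at t = 0.00 s and 0.01 s,
# so the strike plane at height 0 is reached one sample interval later.
t_hit = predict_hit_time([0.2, 0.1], [0.00, 0.01])
```

Predicting the timing ahead of the actual strike gives the system time to select the virtual instrument (from the second sensor data) and schedule sound and video generation.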