Patent classifications
G10H2210/095
DYNAMIC MUSIC MODIFICATION
Music may be generated by electronically applying one or more functions that change a compositional nature of a musical input in a first tonality to generate a musical output in a second tonality in response to an event in a videogame. Data corresponding to the output melody may be recorded in a recording medium.
Techniques for controlling the expressive behavior of virtual instruments and related systems and methods
Techniques for automatically controlling the expressive behavior of a virtual musical instrument by analyzing an audio recording of a live musician are provided. In some embodiments, an audio recording may be analyzed at various points along the timeline of the recording to derive corresponding values of a parameter that is in some way representative of the musical expression of the live musician. Values of control parameters that control one or more aspects of the audio playback of a virtual instrument may then be generated based on the determined values of the expression parameter. Values of control parameters may be provided to a sample library to control how a digital score selects and/or plays back samples from the library, and/or values of the control parameters may be stored with the digital score for subsequent playback.
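The analysis described above can be illustrated with a minimal sketch. Here RMS loudness per frame stands in for the expression parameter, and a simple peak-normalized scaling maps it into a MIDI-style control range; both the proxy and the mapping are assumptions for illustration, not the patented method.

```python
import math

def expression_curve(samples, frame_size=1024):
    """RMS level per frame, used here as a stand-in expression parameter."""
    curve = []
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        curve.append(math.sqrt(sum(x * x for x in frame) / frame_size))
    return curve

def to_control(values, lo=0, hi=127):
    """Scale expression values into a control-parameter range for a sample library."""
    peak = max(values) or 1.0
    return [round(lo + (hi - lo) * v / peak) for v in values]
```

The control values could then be streamed to a sample library at playback time or stored alongside the digital score, as the abstract describes.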
ELECTRONIC WIND INSTRUMENT, ELECTRONIC WIND INSTRUMENT CONTROLLING METHOD AND STORAGE MEDIUM WHICH STORES PROGRAM THEREIN
An electronic wind instrument includes a tonguing sensor which detects tonguing, a breath sensor which detects a breath value, a loudspeaker which outputs a musical sound and a processor which controls the musical sound, in which the processor acquires a tonguing value which depends on a tonguing time which is the time which has elapsed after start of the tonguing which is detected by the tongue sensor, decides a silencing effect value which indicates a degree of volume reduction depending on the tonguing value, acquires the breath value which depends on a magnitude of a breath sensor signal which indicates a result of detection by the breath sensor and makes the loudspeaker emit the musical sound whose volume which depends on the breath value is reduced depending on the silencing effect value.
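A toy model of the control flow described above, with an assumed linear decay for the silencing effect (the abstract does not specify the decay function, so the constants and shape here are illustrative only):

```python
def silencing_effect(tonguing_time, max_effect=1.0, decay=0.01):
    """Degree of volume reduction; strongest right after tonguing starts,
    decaying as the elapsed tonguing time grows."""
    return max(0.0, max_effect - decay * tonguing_time)

def output_volume(breath_value, tonguing_time):
    """Breath-driven volume, attenuated by the silencing effect value."""
    return breath_value * (1.0 - silencing_effect(tonguing_time))
```

With these assumed constants, a note is fully silenced at the instant tonguing is detected and reaches the full breath-driven volume once the effect has decayed.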
DYNAMIC MUSIC MODIFICATION
A method for electronic music generation comprising: electronically applying one or more functions that change one or more compositional elements of a musical input in a first tonality (or other musical representation) to generate a musical output in a second tonality (or other musical representation); and recording data corresponding to the musical output in a recording medium, or rendering the musical transformations to a reproduction medium such as an amplifier and speakers or headphones.
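One simple function of the kind described above is a scale-degree mapping between two tonalities. The sketch below is an assumed illustration (the patent does not disclose this particular mapping): each MIDI note on the source scale is moved to the same degree of the target scale, and chromatic notes fall back to plain transposition.

```python
MAJOR = [0, 2, 4, 5, 7, 9, 11]
NATURAL_MINOR = [0, 2, 3, 5, 7, 8, 10]

def change_tonality(melody, src_root, src_scale, dst_root, dst_scale):
    """Map each MIDI note from the source scale to the same degree of the target scale."""
    out = []
    for note in melody:
        octave, pc = divmod(note - src_root, 12)
        if pc in src_scale:
            degree = src_scale.index(pc)
            out.append(dst_root + 12 * octave + dst_scale[degree])
        else:
            # Chromatic (non-scale) note: fall back to plain transposition.
            out.append(note + (dst_root - src_root))
    return out
```

For example, a C major triad mapped onto C natural minor lowers the third while leaving the root and fifth in place.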
Method and system for performing musical score
Disclosed is a method for performing a musical score. The method comprises generating a plurality of score maps using at least one of an electronic representation of the musical score and event-based notations for at least one musical note in the score, wherein each score map corresponds to a single performance characteristic of the score and contains a plurality of events related to that characteristic, and wherein each of the plurality of score maps is processed by a processing block to generate a plurality of playback characteristic maps.
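The score-map idea can be sketched as one timestamped event list per performance characteristic, each processed into a playback-characteristic map. All names, characteristics, and the normalization step below are assumptions for illustration:

```python
def build_score_maps(notes):
    """One event list per single performance characteristic."""
    return {
        "dynamics": [(n["time"], n["velocity"]) for n in notes],
        "articulation": [(n["time"], n["duration"]) for n in notes],
    }

def process(score_maps):
    """Turn each score map into a playback characteristic map
    (here, simply peak-normalized values per characteristic)."""
    playback = {}
    for name, events in score_maps.items():
        peak = max(v for _, v in events)
        playback[name] = [(t, v / peak) for t, v in events]
    return playback

notes = [
    {"time": 0.0, "pitch": 60, "velocity": 80, "duration": 1.0},
    {"time": 1.0, "pitch": 62, "velocity": 100, "duration": 0.5},
]
```

Keeping each characteristic in its own map lets each processing block operate on a single concern, which matches the one-characteristic-per-map structure the abstract describes.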
Generation and transmission of musical performance data
Systems and methods are generally described for generation and transmission of musical performance data. In some examples, the musical input device may generate a first command encoding a first musical event. In various further examples, the musical input device may generate a first message corresponding to the first command, the first message encoding a first acoustic attribute type of the first musical event. In some examples, the musical input device may generate a second message corresponding to the first command, the second message encoding a second acoustic attribute type of the first musical event. In various examples, the musical input device may generate timestamp data denoting a time of an occurrence of the first musical event. In some examples, the musical input device may send the timestamp data, the first command, the first message and the second message to a computing device.
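The command/message split described above might look like the following sketch, where one musical event yields a command plus one message per acoustic attribute type and a timestamp. The attribute names and dictionary wire format are assumptions; the abstract does not specify an encoding.

```python
import time

def encode_event(pitch, velocity, timestamp=None):
    """Encode one musical event as a command plus per-attribute messages."""
    command = {"event": "note_on"}                             # first command
    msg_pitch = {"attribute": "pitch", "value": pitch}         # first acoustic attribute type
    msg_velocity = {"attribute": "velocity", "value": velocity}  # second acoustic attribute type
    if timestamp is None:
        timestamp = time.time()                                # time of the event's occurrence
    return timestamp, command, msg_pitch, msg_velocity
```

The four pieces would then be sent together to the receiving computing device, which can reorder or align events using the timestamp data.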
Electronic musical instrument, musical sound generating method, and storage medium
An electronic musical instrument includes a plurality of keys that respectively specify different pitches when operated, a memory, and a sound processor. In response to a current operation of a current key (one of the plurality of keys), the sound processor retrieves the information stored in the memory for a previous operation, if any, of a previous key, which is either the same as the current key or another one of the plurality of keys, and performs prescribed processing on the beginning part of the waveform data generated for the current operation in accordance with that retrieved information, so as to generate processed waveform data in response to the current operation of the current key. The resulting processed waveform data can be configured to better mimic an artist's performance of the original instrument.
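One plausible form of the "prescribed processing" is shaping the attack of the current note based on how recently the previous key was released, to mimic legato playing. This is an assumed interpretation; the threshold and fade shape below are illustrative only:

```python
def shape_attack(waveform, gap_since_release, fade_len=64):
    """Soften the onset of the current note's waveform when the previous
    note ended very recently (near-legato); otherwise keep the full attack."""
    n = fade_len if gap_since_release < 0.05 else 0
    out = list(waveform)
    for i in range(min(n, len(out))):
        out[i] *= i / n   # linear fade-in over the beginning part
    return out
```

A detached note passes through unchanged, while a near-legato note starts from silence and ramps up over the fade length.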
Electronic wind instrument capable of performing a tonguing process
An electronic wind instrument is provided with a processor (CPU 5) and plural touch sensors (a detecting unit 12s and detecting units 13s) disposed along a first direction. A first output variable da/dt is obtained, representing the variation per unit time of the output value from a first sensor (detecting unit 12s), the one among the plural touch sensors disposed closest to a first end (the tip side) in the first direction. A second output variable dS/dt is obtained, representing the variation per unit time of the sum of the output values from second sensors (detecting units 13s) disposed between a second end (the heel side) and the first sensor in the first direction. The processor judges, based on the first output variable da/dt and the second output variable dS/dt, whether a tonguing process should be performed.
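The judgement based on the two output variables might be sketched as a threshold comparison: a fast rise at the tip sensor with little simultaneous change across the remaining sensors suggests tonguing rather than a fingering change. The thresholds and decision rule here are assumptions, not the patented criterion:

```python
def should_tongue(da_dt, dS_dt, tip_thresh=0.5, sum_thresh=0.2):
    """Judge whether a tonguing process should be performed, given the
    tip-sensor rate of change (da/dt) and the summed rate of change of
    the remaining sensors (dS/dt)."""
    return da_dt > tip_thresh and abs(dS_dt) < sum_thresh
```

Comparing both variables distinguishes a deliberate tip contact from broad touches that move many sensors at once.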