Patent classifications
G10H2240/145
SOUND PROCESSING COMPONENT AND STRING INSTRUMENT EMPLOYING COMPONENT
A sound processing component applied to a string instrument for providing multiple playing forms and a MIDI function. An acquisition module acquires vibration information of multiple strings and outputs analog signals, and a first amplification and filter module amplifies and filters the analog signals. A first conversion module converts the analog signals into digital signals, and a processing module identifies playing information in the digital signals, converts the digital signals to MIDI data based on the playing information, converts the MIDI data to audio data, and adds audio effects to the audio data. A second conversion module converts the audio data with the audio effects into analog audio signals, and a second amplification and filter module amplifies and filters the analog audio signals and transmits the filtered analog audio signals to a loudspeaker. The string instrument is also provided.
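The pitch-identification step at the heart of such a string-to-MIDI chain can be sketched as follows. This is an illustrative autocorrelation-based approach, not the patent's disclosed implementation, and the function name is an assumption:

```python
import numpy as np

def estimate_midi_note(samples, sample_rate):
    """Estimate the fundamental frequency of a digitized string
    vibration via autocorrelation and map it to a MIDI note number."""
    samples = samples - np.mean(samples)
    corr = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    d = np.diff(corr)
    start = int(np.argmax(d > 0))      # skip past the zero-lag peak
    peak = start + int(np.argmax(corr[start:]))
    f0 = sample_rate / peak
    return int(round(69 + 12 * np.log2(f0 / 440.0)))  # MIDI 69 = A4 = 440 Hz

# Synthetic 440 Hz "string" sampled at 44.1 kHz
sr = 44100
t = np.arange(0, 0.05, 1 / sr)
sig = np.sin(2 * np.pi * 440 * t)
print(estimate_midi_note(sig, sr))  # 69 (A4)
```

The resulting note number, with onset and velocity information, is what a processing module would package as MIDI data.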
System and method for AI controlled song construction
According to an embodiment, there is provided a system and method for automatically generating a complete music work from a partially completed work provided by a user. One approach uses an artificial intelligence (AI) engine that is trained by creating incomplete works from a database of complete works and then instructing the AI to complete the incomplete works. A comparison is made between the completed works and the originals to determine the effectiveness of the training process. After the AI is trained, it is applied to the user's incomplete work to produce a final music item.
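The described train-by-truncation loop can be illustrated with a toy model; a first-order Markov chain stands in for the AI engine purely as a sketch, since the patent does not specify the model, and all names here are assumptions:

```python
from collections import Counter, defaultdict

def make_incomplete(work, keep_fraction=0.5):
    """Truncate a complete work to create a training/inference prompt."""
    return work[:max(1, int(len(work) * keep_fraction))]

class MarkovCompleter:
    """Toy stand-in for the AI engine: a first-order Markov model over
    note sequences (the patent's engine would be far richer)."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, works):
        for work in works:
            for a, b in zip(work, work[1:]):
                self.transitions[a][b] += 1

    def complete(self, partial, target_len):
        seq = list(partial)
        while len(seq) < target_len and seq[-1] in self.transitions:
            seq.append(self.transitions[seq[-1]].most_common(1)[0][0])
        return seq

corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60]] * 3   # toy "database"
model = MarkovCompleter()
model.train(corpus)
partial = make_incomplete(corpus[0], 0.4)              # [60, 62, 64]
completed = model.complete(partial, len(corpus[0]))
# Effectiveness check: fraction of positions matching the original
accuracy = sum(a == b for a, b in zip(completed, corpus[0])) / len(corpus[0])
```

The final `accuracy` comparison mirrors the step of comparing completed works against the originals to gauge the training process.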
Timbre creation system
A timbre creation method, system, and computer program product include performing a timbre analysis of a sound from an input source to generate a digital fingerprint of the sound, performing deep learning to create a patch that matches the digital fingerprint, and generating a second patch for a synthesizer which reproduces a timbre that complements the digital fingerprint based on the patch.
SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user determined sound database sources.
System and method for a networked virtual musical instrument
A system and method for operating and performing a remotely networked virtual musical instrument. A client transmits musical control data over the network to a remote server, which hosts a digital music engine and digitally sampled virtual musical instruments. In return, the client consumes, synchronizes, and mixes the server's combined playback stream of the fully expressive and interactive musical performance from the network with zero audible latency.
Electronic musical instrument and electronic musical instrument system
Provided is an electronic musical instrument. The electronic musical instrument is configured to generate an internal acoustic signal; generate a sound generation instruction signal; output the sound generation instruction signal to an external sound source configured to generate an external acoustic signal; switch a first state in which the external acoustic signal is generated by the external sound source in response to the sound generation instruction signal, to a second state in which the internal acoustic signal is generated in response to the sound generation instruction signal; and, when the first state is switched to the second state, set a mode of the internal acoustic signal such that the internal acoustic signal is generated with a mode having a predetermined degree of similarity to a mode of the external acoustic signal generated in the first state.
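The first-to-second state switch, in which the internal acoustic signal's mode is set to resemble the external one, can be sketched as a minimal state holder. Here "a predetermined degree of similarity" is simplified to copying the mode outright, and all names and mode fields are assumptions:

```python
class Instrument:
    """Minimal sketch of the two-state sound-source switching."""
    def __init__(self):
        self.use_external = True                        # first state
        self.external_mode = {"timbre": "piano", "volume": 0.8}
        self.internal_mode = {"timbre": "organ", "volume": 0.5}

    def switch_to_internal(self):
        # On switching, set the internal mode to resemble the external
        # mode (here simplified to an outright copy).
        self.internal_mode = dict(self.external_mode)
        self.use_external = False                       # second state

    def note_on(self, pitch):
        mode = self.external_mode if self.use_external else self.internal_mode
        source = "external" if self.use_external else "internal"
        return (source, mode["timbre"], pitch)

inst = Instrument()
before = inst.note_on(60)        # ('external', 'piano', 60)
inst.switch_to_internal()
after = inst.note_on(60)         # ('internal', 'piano', 60)
```

After the switch, the same sound generation instruction produces an internal signal whose mode matches what the external source had been producing.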
Stick Controller
A stick device that includes a base and a tip end, with a tip secured to the tip end of the stick, the stick tip including a sensor. The stick, at the base thereof, includes at least one control button, a communication element, and a processor in communication with the at least one control button, the stick tip, and the communication element. The processor is configured to receive a signal from the stick tip and to generate output to the communication element. The output so generated includes a signal that specifies a sound file selected by operation of the at least one control button.
Method and system for simulating musical phrase
Disclosed is a method for simulating a musical phrase, wherein the musical phrase includes a sequence of notes, the method comprising generating timbral fingerprints associated with a sample library that comprises recordings of a plurality of notes in a plurality of intensities, using a linear predictive coding technique, wherein the timbral fingerprints relate to the plurality of intensities of the plurality of notes; determining an origin intensity for the musical phrase, wherein the origin intensity is one intensity selected from amongst the plurality of intensities; and simulating each note in the sequence of notes, by morphing a recording of each note in the origin intensity according to timbral fingerprints of the note in the plurality of intensities, for simulating the musical phrase.
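The fingerprint-and-morph idea can be sketched with a plain autocorrelation-method LPC and linear interpolation between intensity fingerprints. This is a simplification for illustration, not the disclosed method, and all names are assumptions:

```python
import numpy as np

def lpc_fingerprint(frame, order=8):
    """LPC coefficients of one recording frame, treated here as a
    crude timbral fingerprint (autocorrelation method)."""
    frame = frame * np.hanning(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:][:order + 1]
    # Yule-Walker equations R a = r, with R the Toeplitz autocorrelation matrix
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def morph(fp_origin, fp_target, weight):
    """Morph between intensity fingerprints by linear interpolation,
    a simplification of the morphing step."""
    return (1 - weight) * fp_origin + weight * fp_target

rng = np.random.default_rng(0)
sr = 22050
t = np.arange(0, 0.05, 1 / sr)
# A soft sine-like note and a loud, brighter square-like note at 220 Hz
soft = lpc_fingerprint(np.sin(2 * np.pi * 220 * t)
                       + 0.01 * rng.standard_normal(len(t)))
loud = lpc_fingerprint(np.sign(np.sin(2 * np.pi * 220 * t))
                       + 0.01 * rng.standard_normal(len(t)))
blended = morph(soft, loud, 0.5)   # halfway between the two intensities
```

In the patent's terms, `soft` would play the role of the origin-intensity recording's fingerprint, and morphing toward the other intensities' fingerprints simulates the note at the desired intensity.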
Generative composition using form atom heuristics
A processor-based method of producing a generative musical composition is disclosed herein. The method includes the step of receiving a briefing narrative which describes a musical journey by referencing a plurality of emotional descriptions related to a plurality of musical sections. The generative musical composition is assembled with regard to the briefing narrative through the selection and concatenation of Form Atoms whose tags align with the emotional descriptions related to the musical sections. The Form Atoms, which have a compositional nature aligned with the emotional descriptions and self-contained constructional properties representative of the historical corpus of music, are then selected and substituted into the generative composition. The method further involves the step of generating the musical composition by mapping musical transitions between selectively chosen Form Atoms to reflect pre-established transitions between Form Atoms and groups of Form Atoms that have been identified to have similar tags but different constructional properties.
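The tag-matching selection step can be sketched as a greedy overlap score over a small atom library. The structure of a Form Atom here is an assumption for illustration, not the patent's schema:

```python
from dataclasses import dataclass

@dataclass
class FormAtom:
    """Illustrative Form Atom: a tagged, self-contained musical
    section (fields are assumptions, not the patent's schema)."""
    name: str
    tags: frozenset

def assemble(briefing, library):
    """For each emotional description in the briefing narrative, pick
    the library atom with the largest tag overlap and concatenate."""
    return [max(library, key=lambda a: len(a.tags & tags)).name
            for tags in briefing]

library = [
    FormAtom("intro_calm", frozenset({"calm", "sparse"})),
    FormAtom("build_tense", frozenset({"tense", "rising"})),
    FormAtom("climax_joy", frozenset({"joyful", "dense"})),
]
briefing = [{"calm"}, {"tense", "rising"}, {"joyful", "dense"}]
print(assemble(briefing, library))
# ['intro_calm', 'build_tense', 'climax_joy']
```

A full implementation would additionally map transitions between the chosen atoms, which this sketch omits.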
Method of and system for automated musical arrangement and musical instrument performance style transformation supported within an automated music performance system
An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.