Patent classifications
G10H2240/145
MODULAR AUTOMATED MUSIC PRODUCTION SERVER
A music production system comprises: a computer interface comprising at least one input for receiving an external request for a piece of music and at least one output for transmitting a response to the external request which comprises or indicates a piece of music incorporating first music data; a first music production component configured to process second music data according to at least a first input setting so as to generate the first music data; a second music production component configured to receive via the computer interface an internal request, and provide the second music data based on at least a second input setting denoted by the internal request; and a controller configured to determine in response to the external request the first and second input settings, and instigate the internal request via the computer interface.
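The request flow in this abstract can be sketched as code. This is a minimal illustration, not the patented system: the class names, the `derive_settings` mapping, and the transposition-based "processing" are all assumptions made for the example.

```python
# Hypothetical sketch of the abstract's request flow. All names
# (Controller, FirstComponent, etc.) are illustrative assumptions.

class SecondComponent:
    """Provides intermediate ("second") music data from a setting."""
    def provide(self, setting):
        # Stand-in: a short note sequence offset by the second setting.
        return [60 + setting, 62 + setting, 64 + setting]

class FirstComponent:
    """Processes second music data into first music data."""
    def process(self, second_data, setting):
        # Stand-in: transpose by the first input setting.
        return [note + setting for note in second_data]

class Controller:
    def __init__(self, first, second):
        self.first, self.second = first, second

    def derive_settings(self, request):
        # Stand-in mapping from an external request to the two settings.
        return request.get("transpose", 0), request.get("key_offset", 0)

    def handle(self, request):
        first_setting, second_setting = self.derive_settings(request)
        # "Internal request" to the second component.
        second_data = self.second.provide(second_setting)
        first_data = self.first.process(second_data, first_setting)
        return {"music": first_data}

server = Controller(FirstComponent(), SecondComponent())
response = server.handle({"transpose": 12, "key_offset": 2})
print(response)  # {'music': [74, 76, 78]}
```

The point of the structure is the indirection: the external caller only sees the controller's response, while the controller chains the two production components internally.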
AUTOMATED MIDI MUSIC COMPOSITION SERVER
A music composition system for composing music segments comprises: a computer interface comprising at least one external input for receiving from an external device a request for a musical composition; a controller configured to determine, based on a request received at the external input, a plurality of musical parts for the musical composition; and a composition engine configured to generate, for each of the determined musical parts, at least one musical segment in digital musical notation format, the musical segments configured to cooperate musically when performed simultaneously. The computer interface comprises at least one external output configured to output a response to the request, the response comprising or indicating each of the musical segments in digital musical notation format for rendering into audio data at the external device.
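One simple way segments for different parts can "cooperate musically" is to draw them all from a shared chord. The sketch below assumes this; the part names, chord, and note-list representation are illustrative, not taken from the patent.

```python
# Illustrative sketch (not the patented engine): every part's segment is
# built from the same chord, so the segments fit together harmonically.

C_MAJOR = [60, 64, 67]  # MIDI pitches C4, E4, G4

def compose_segments(parts, chord=C_MAJOR, length=4):
    """Return one note-list per part, all drawn from a shared chord."""
    segments = {}
    for i, part in enumerate(parts):
        root = chord[i % len(chord)]
        if part == "bass":
            # Bass holds the root an octave down.
            notes = [root - 12] * length
        else:
            # Other parts arpeggiate the shared chord from an offset.
            notes = [chord[(i + step) % len(chord)] for step in range(length)]
        segments[part] = notes
    return segments

segments = compose_segments(["bass", "chords", "melody"])
print(segments["bass"])  # [48, 48, 48, 48]
```

Each value in `segments` corresponds to one "musical segment in digital musical notation format" that an external device could render to audio.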
Electronic musical instrument and electronic musical instrument system
Provided is an electronic musical instrument. The electronic musical instrument is configured to: generate an internal acoustic signal; generate a sound generation instruction signal; output the sound generation instruction signal to an external sound source configured to generate an external acoustic signal; switch a first state, in which the external acoustic signal is generated by the external sound source in response to the sound generation instruction signal, to a second state, in which the internal acoustic signal is generated in response to the sound generation instruction signal; and, when the first state is switched to the second state, control the volume of the internal acoustic signal such that the state relating to the volume of sound generation based on the internal acoustic signal approaches the state relating to the volume of sound generation based on the external acoustic signal.
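The volume-matching behaviour described above can be sketched with a simple smoothing update: after the switch, the internal volume is repeatedly moved toward the level the external source was producing. The exponential-smoothing form, the `alpha` factor, and the variable names are assumptions for illustration.

```python
# Minimal sketch of volume matching on source switch-over: the internal
# signal's volume converges toward the external source's last volume.
# Smoothing scheme and parameters are assumptions, not from the patent.

def approach_volume(internal_vol, external_vol, alpha=0.5, steps=8):
    """Move internal_vol toward external_vol by exponential smoothing."""
    history = [internal_vol]
    for _ in range(steps):
        internal_vol += alpha * (external_vol - internal_vol)
        history.append(round(internal_vol, 4))
    return history

# Switching from the external source (volume 0.8) to the internal
# source (volume 0.2): the internal volume ramps up toward 0.8,
# avoiding an abrupt loudness jump.
print(approach_volume(0.2, 0.8))
```

The practical motivation is audible continuity: without such control, swapping sound sources mid-performance would produce a sudden change in loudness.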
SYSTEM AND METHOD FOR AI CONTROLLED SONG CONSTRUCTION
According to an embodiment, there is provided a system and method for automatically generating a complete music work from a partially completed work provided by a user. One approach uses an artificial intelligence (AI) engine that is trained by creating incomplete works from a database of complete works and then instructing the AI to complete the incomplete works. A comparison is made between the completed works and the originals to determine the effectiveness of the training process. After the AI is trained, it is applied to the user's incomplete work to produce a final music item.
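The training scheme described above (truncate complete works, have the model complete them, compare completions with the originals) can be sketched as a runnable loop. The "model" below is a trivial stand-in so the example executes; all function names, the truncation ratio, and the accuracy metric are assumptions, not the patented AI engine.

```python
# Sketch of the described training/evaluation loop with a trivial
# stand-in model. Everything here is illustrative.

import random

def make_incomplete(work, keep=0.5):
    """Truncate a complete work to simulate a partially finished one."""
    return work[: max(1, int(len(work) * keep))]

def naive_model(partial, target_len):
    """Stand-in AI: repeat the last element until the target length."""
    return partial + [partial[-1]] * (target_len - len(partial))

def accuracy(completed, original):
    """Fraction of positions the completion got right."""
    hits = sum(c == o for c, o in zip(completed, original))
    return hits / len(original)

random.seed(0)
database = [[random.randint(60, 72) for _ in range(8)] for _ in range(5)]

scores = []
for work in database:
    partial = make_incomplete(work)
    completed = naive_model(partial, len(work))
    scores.append(accuracy(completed, work))
print(sum(scores) / len(scores))  # average score over the database
```

In the abstract's terms, the comparison step (`accuracy` here) is what "determines the effectiveness of the training process" before the trained model is applied to a user's incomplete work.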
METHOD AND SYSTEM FOR AI CONTROLLED LOOP BASED SONG CONSTRUCTION
According to an embodiment, there is provided a system and method for automatic, AI-controlled, loop-based song construction. It provides and benefits from a machine-learning AI in an audio loop selection engine for the generation of a song structure and for the selection of fitting audio loops from a database of audio loops. In one embodiment, the instant method provides a music generation process that utilizes an AI system, trained and validated on a music item database, to complete the creation of a music item given an incomplete song that was started but not finished by a user.
Devices and methods for sharing user interaction
A method, such as a computer-implemented method, of data management, wherein content utilized by a first user can be identified and information about such content can be shared with at least one additional user such that the at least one additional user can pull the identified content from the content source.
TECHNIQUES FOR CONTROLLING THE EXPRESSIVE BEHAVIOR OF VIRTUAL INSTRUMENTS AND RELATED SYSTEMS AND METHODS
Techniques for automatically controlling the expressive behavior of a virtual musical instrument by analyzing an audio recording of a live musician are provided. In some embodiments, an audio recording may be analyzed at various points along the timeline of the recording to derive corresponding values of a parameter that is in some way representative of the musical expression of the live musician. Values of control parameters that control one or more aspects of the audio playback of a virtual instrument may then be generated based on the determined values of the expression parameter. Values of control parameters may be provided to a sample library to control how a digital score selects and/or plays back samples from the library, and/or values of the control parameters may be stored with the digital score for subsequent playback.
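The analysis step described above can be sketched concretely: sample a loudness value at points along the recording's timeline and map it to a control-parameter value (for instance, a 0-127 dynamics controller as in MIDI). The window size and the linear mapping are assumptions made for the example, not the patented technique.

```python
# Illustrative sketch: derive an expression/control curve from audio
# samples by windowed RMS. Window size and mapping are assumptions.

def rms(samples):
    """Root-mean-square level of a window of samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def expression_curve(audio, window=4, max_rms=1.0):
    """One control value (0-127) per analysis window along the timeline."""
    values = []
    for start in range(0, len(audio) - window + 1, window):
        level = rms(audio[start:start + window])
        values.append(round(min(level / max_rms, 1.0) * 127))
    return values

# A quiet passage followed by a loud one yields rising control values:
audio = [0.1, -0.1, 0.1, -0.1, 0.8, -0.8, 0.8, -0.8]
print(expression_curve(audio))  # [13, 102]
```

Values like these could then steer sample selection or playback dynamics in a virtual instrument, or be stored alongside the digital score for later playback, as the abstract describes.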
Music mash up collectable card game
An intuitive music composition game platform with various modes of operation in a single-reader system, and a music mash up collectable card game and method using cards with tags and unique identifications. A single reading platform includes a cover for card storage and for supporting a smart-device platform for reading many Near Field Communication (NFC) embedded cards, with stacking features and colored-light-indicated input lane selection for user identification and selection. Various game modes include individual, studio mix, and party mode gameplay features in music mash up collectable card games, which may be used together with or independently of accessory devices as controllers, or smart-device user interfaces with Bluetooth or Wi-Fi for communicating user selection and operation.
SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user-determined sound database sources.
Music generator
Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining loops is performed based on the set of composition rules and attributes of loops in the set of loops.
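The two phases in this abstract can be sketched as code: first, learn which loop categories co-occur in existing content; second, use those counts as composition rules to select loops to layer together. The attribute scheme (loop categories), the pair-count rule format, and the greedy selection are assumptions for illustration, not the patented rule set.

```python
# Sketch of rule learning + rule-based loop selection. The category
# labels and pair-count "rules" are illustrative assumptions.

from collections import Counter
from itertools import combinations

def learn_rules(existing_tracks):
    """Count how often pairs of loop categories are layered together."""
    pair_counts = Counter()
    for track in existing_tracks:  # each track: set of layered categories
        for a, b in combinations(sorted(track), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def select_loops(seed, candidates, rules, k=2):
    """Greedily pick the k candidates most compatible with the seed."""
    def score(category):
        return rules[tuple(sorted((seed, category)))]
    return sorted(candidates, key=score, reverse=True)[:k]

tracks = [
    {"drums", "bass", "pad"},
    {"drums", "bass", "lead"},
    {"drums", "pad"},
]
rules = learn_rules(tracks)
print(select_loops("drums", ["bass", "pad", "lead"], rules))  # ['bass', 'pad']
```

The selected loops would then be combined so that they overlap in time, which is the generation step the abstract describes after the rules have been derived.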