Patent classifications
G10H2240/305
Method of composing a piece of digital music using musical experience descriptors to indicate what, when and how musical events should appear in the piece of digital music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of composing a piece of digital music using musical experience descriptors to indicate what, when and how particular musical events should occur in the piece of digital music to be automatically composed and generated. The method uses the system user interface to select one or more musical experience descriptors and apply them along a timeline representation of the piece of digital music to be automatically composed and generated by the automated music composition and generation engine.
PERFORMANCE SYSTEM, PERFORMANCE MODE SETTING METHOD, AND PERFORMANCE MODE SETTING DEVICE
A performance system can include a primary unit and a secondary unit. The secondary unit is provided for an electronic musical instrument that can be set to a first mode enabling performance by a single individual and a second mode enabling performance by a plurality of individuals, has connecting portions enabling connection of one or a plurality of receivers having at least a receiving function, and is able to communicate with the primary unit. The performance system can also include a determining portion for determining whether or not a plurality of the receivers is connected to the connecting portions, and a transmitting portion for transmitting, to the primary unit, a notification that the second mode has been set, upon determining that a plurality of the receivers has been connected to the connecting portions. The secondary unit can include a setting portion for setting the electronic musical instrument to the second mode upon a determination that a plurality of receivers has been connected to the connecting portions.
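The secondary unit's behavior can be summarized as: count the receivers attached to the connecting portions, set the instrument to the second (multi-performer) mode when more than one is present, and notify the primary unit. A minimal sketch of that logic follows; all class, constant, and message names are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the secondary unit's mode-setting logic: a
# determining portion (receiver count), a setting portion (mode switch),
# and a transmitting portion (notification to the primary unit).

SINGLE_MODE = 1   # first mode: performance by a single individual
MULTI_MODE = 2    # second mode: performance by a plurality of individuals

class SecondaryUnit:
    def __init__(self):
        self.receivers = []          # receivers plugged into connecting portions
        self.mode = SINGLE_MODE
        self.notifications = []      # messages sent to the primary unit

    def connect_receiver(self, receiver_id):
        self.receivers.append(receiver_id)
        self._update_mode()

    def _update_mode(self):
        # Determining portion: is a plurality of receivers connected?
        if len(self.receivers) > 1 and self.mode != MULTI_MODE:
            # Setting portion: switch the instrument to the second mode.
            self.mode = MULTI_MODE
            # Transmitting portion: notify the primary unit.
            self.notifications.append("second-mode-set")

unit = SecondaryUnit()
unit.connect_receiver("rx-A")
unit.connect_receiver("rx-B")
print(unit.mode, unit.notifications)  # 2 ['second-mode-set']
```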
WIRELESS COMMUNICATION DEVICE AND WIRELESS COMMUNICATION METHOD
A wireless communication device and a wireless communication method are provided. When a master mode is set in a mode memory, a mode setting notification is transmitted to a different wireless communication device. When a mode setting permission notification for the mode setting notification is received from the different wireless communication device, the master mode is settled as a communication mode, and the master mode is set in the mode memory. When a slave mode is set in the mode memory and when a mode setting notification is received from the different wireless communication device, the slave mode is settled as the communication mode, the slave mode is set in the mode memory, and a mode setting permission notification is transmitted to the different wireless communication device. Thus, the communication mode can be automatically set in a pair of wireless communication devices.
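The handshake described above can be sketched as a small exchange: a device holding master mode in its mode memory announces it; a device in slave mode that receives the announcement settles slave mode and replies with a permission notification, upon which the first device settles master mode. The sketch below is illustrative only; message names and the pairing loop are assumptions, not the patent's wire protocol.

```python
# Hedged sketch of automatic master/slave mode settlement between a pair
# of wireless communication devices.

class WirelessDevice:
    def __init__(self, provisional_mode):
        self.mode_memory = provisional_mode   # "master" or "slave"
        self.settled = False                  # mode settled as communication mode?
        self.outbox = []                      # messages queued for the peer

    def start(self):
        # A device with master mode set transmits a mode setting notification.
        if self.mode_memory == "master":
            self.outbox.append("mode-setting-notification")

    def receive(self, message):
        if message == "mode-setting-notification" and self.mode_memory == "slave":
            # Settle slave mode and permit the peer to settle master mode.
            self.settled = True
            self.outbox.append("mode-setting-permission")
        elif message == "mode-setting-permission" and self.mode_memory == "master":
            self.settled = True

def pair(a, b):
    # Deliver each device's queued messages to the other until both settle.
    a.start(); b.start()
    while not (a.settled and b.settled):
        for src, dst in ((a, b), (b, a)):
            while src.outbox:
                dst.receive(src.outbox.pop(0))

m, s = WirelessDevice("master"), WirelessDevice("slave")
pair(m, s)
print(m.settled, s.settled)  # True True
```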
WIRELESS COMMUNICATION DEVICE, WIRELESS COMMUNICATION METHOD, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
A wireless communication device and a wireless communication method capable of curbing latency even when performing wireless communication at predetermined time intervals are provided. MIDI data is transmitted and received by communication (B), performed through a wireless module during an intermission between communications (A) performed at predetermined time intervals through the same wireless module. This improves the effective communication speed of the wireless link. Therefore, compared to a case in which wireless communication is performed by communication (A) alone, MIDI data can be transmitted and received more quickly, and the occurrence of latency can be curbed.
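The scheduling idea above can be illustrated with a toy model: periodic communications (A) occupy the radio at fixed intervals, and MIDI transfers (B) are slotted into the intermissions between them, so a MIDI event departs as soon as the current A slot ends rather than waiting for the next A interval. The interval, duration, and function names below are illustrative assumptions.

```python
# Hedged sketch: MIDI events sent in communication (B) during the
# intermissions between periodic communications (A).

A_INTERVAL = 10.0     # ms between starts of periodic communications (A)
A_DURATION = 2.0      # ms each communication (A) occupies the radio

def schedule(midi_event_times):
    """Assign each MIDI event a transmit time in the nearest intermission."""
    timeline = []
    for t in midi_event_times:
        # Start of the A slot covering or preceding time t.
        slot_start = (t // A_INTERVAL) * A_INTERVAL
        busy_until = slot_start + A_DURATION
        # Transmit in communication (B) as soon as the radio is free.
        timeline.append(max(t, busy_until))
    return timeline

# An event arriving at 1.0 ms is sent at 2.0 ms, when the A slot ends,
# instead of being deferred to the next A communication at 10.0 ms.
print(schedule([1.0, 5.0]))  # [2.0, 5.0]
```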
Method of scoring digital media objects using musical experience descriptors to indicate what, where and when musical events should appear in pieces of digital music automatically composed and generated by an automated music composition and generation system
An automated music composition and generation system having a system user interface operably connected to an automated music composition and generation engine, and supporting a method of scoring a selected media object with one or more pieces of digital music. The method uses the system user interface to select one or more musical experience descriptors and then apply the selected musical experience descriptors to the selected digital media object to indicate what, when and how particular musical events should occur in the one or more pieces of digital music to be automatically composed and generated by the automated music composition and generation engine. The generated piece of digital music is then used in musically scoring the selected digital media object.
Method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users
A method of automatically confirming the uniqueness of digital pieces of music produced by an automated music composition and generation system while satisfying the creative intentions of system users. The method involves reviewing, selecting and providing one or more musical experience descriptors and time and/or space parameters to an automated music composition and generation engine operably connected to the system user interface. The automated music composition and generation engine includes a music piece analysis subsystem for automatically examining each piece of composed music that has been generated by said automated music composition and generation engine, comparing the digital piece of composed and generated music against other digital pieces of music composed and generated by said automated music composition and generation system for said system user, and determining whether or not the examined digital piece of composed and generated music is sufficiently unique. The method also automatically confirms with the system user that each examined digital piece of composed and generated music satisfies the creative intentions of the system user.
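The analysis step above, comparing a newly generated piece against the user's prior pieces and deciding whether it is sufficiently unique, can be sketched as follows. The abstract discloses no specific similarity metric, so the positional note-match fraction and the 0.8 threshold here are assumptions chosen purely for illustration.

```python
# Illustrative sketch of a music piece analysis subsystem's uniqueness
# check over note sequences (MIDI note numbers).

def similarity(piece_a, piece_b):
    """Fraction of positions at which two note sequences agree."""
    n = min(len(piece_a), len(piece_b))
    if n == 0:
        return 0.0
    matches = sum(1 for a, b in zip(piece_a, piece_b) if a == b)
    return matches / n

def is_sufficiently_unique(new_piece, prior_pieces, threshold=0.8):
    # Unique only if no prior piece for this user exceeds the threshold.
    return all(similarity(new_piece, p) < threshold for p in prior_pieces)

prior = [[60, 62, 64, 65], [60, 60, 67, 67]]
print(is_sufficiently_unique([60, 62, 64, 65], prior))  # False: duplicates a prior piece
print(is_sufficiently_unique([72, 74, 76, 77], prior))  # True
```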
Automated music composition and generation system supporting automated generation of musical kernels for use in replicating future music compositions and production environments
An automated music composition and generation system provided with a system user interface enabling system users to review, select and provide one or more musical experience descriptors as well as time and/or space parameters, to an automated music composition and generation engine, operably connected to the system user interface. The automated music composition and generation engine includes a musical kernel generation subsystem for automatically analyzing and saving musical kernel elements automatically abstracted from the digital piece of music. The abstracted musical kernel elements distinguish the digital piece of music from any other digital piece of music automatically composed and generated by the automated music composition and generation system, and serve as a music kernel definition of the digital piece of composed music, which can be subsequently used during future automated music composition and generation processes, and in future music production environments, to replicate the digital piece of composed music at a later time, either with complete or incomplete accuracy, as required or desired by the system user.
AUTOMATIC ORCHESTRATION OF A MIDI FILE
An electronic device segments first and second MIDI files into pluralities of source segments and target segments, respectively. For each of a plurality of consecutive pairs of first and second target segments, the electronic device identifies a first source segment corresponding to the first target segment of the consecutive pair and a second source segment corresponding to the second target segment of the consecutive pair, where the first and second source segments are identified by determining that they are harmonically conformant to the corresponding first and second target segments, and that a transition between the first and second source segments is graphically conformant to a transition between a consecutive pair of source segments. The electronic device generates a third MIDI file using the identified first and second source segments for each of the plurality of consecutive pairs of first and second target segments.
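A toy version of the matching step can clarify the pairing structure: represent each segment by its pitch-class set, call a source segment "harmonically conformant" to a target segment when their pitch-class sets intersect, and pick, for each consecutive pair of target segments, a consecutive pair of source segments whose members conform. Both the conformance test and the greedy search below are simplifications assumed for illustration; the patent's conformance criteria are not disclosed in the abstract.

```python
# Hedged sketch: matching consecutive source-segment pairs to consecutive
# target-segment pairs by a simplified harmonic-conformance test.

def pitch_classes(segment):
    return {note % 12 for note in segment}

def harmonically_conformant(source, target):
    # Simplified criterion: the segments share at least one pitch class.
    return bool(pitch_classes(source) & pitch_classes(target))

def match_pairs(source_segments, target_segments):
    """For each consecutive target pair, pick a conformant consecutive source pair."""
    chosen = []
    for t1, t2 in zip(target_segments, target_segments[1:]):
        for s1, s2 in zip(source_segments, source_segments[1:]):
            if harmonically_conformant(s1, t1) and harmonically_conformant(s2, t2):
                chosen.append((s1, s2))
                break
    return chosen

sources = [[60, 64, 67], [62, 65, 69], [59, 62, 67]]   # C, Dm, G triads
targets = [[48, 52], [50, 57], [55, 59]]
print(match_pairs(sources, targets))
```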
SOUND EFFECT SYNTHESIS
Disclosed herein is a sound synthesis system for generating a user defined synthesised sound effect, the system comprising: a receiver of user defined inputs for defining a sound effect; a generator of control parameters in dependence on the received user defined inputs; a plurality of sound effect objects, wherein each sound effect object is arranged to generate a different class of sound and each sound effect object comprises a sound synthesis model arranged to generate a sound in dependence on one or more of the control parameters; a plurality of audio effect objects, wherein each audio effect object is arranged to receive a sound from one or more sound effect objects and/or one or more other audio effect objects, process the received sound in dependence on one or more of the control parameters and output the processed sound; a scene creation function arranged to receive sound output from one or more sound effect objects and/or audio effect objects and to generate a synthesised sound effect in dependence on the received sound; and an audio routing function arranged to determine the arrangement of audio effect objects, sound effect objects and scene creation function such that one or more sounds received by the scene creation function are dependent on the audio routing function; wherein the determined arrangement of audio effect objects, sound effect objects and the scene creation function by the audio routing function is dependent on the user defined inputs.
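The architecture above is essentially a routing graph: sound effect objects generate audio from control parameters, audio effect objects process received audio, and an audio routing function wires them into an arrangement ending at the scene creation function, with the wiring driven by user-defined inputs. A minimal sketch under assumed names, using trivial sample buffers in place of real synthesis models:

```python
# Hedged sketch of the object routing described above. The "rain" sound
# and reverb are stand-ins; real sound effect objects would wrap proper
# sound synthesis models.

def rain_sound(params):
    # Sound effect object: generates one class of sound from control parameters.
    return [params["intensity"]] * 4          # toy constant-level buffer

def reverb(buffer, params):
    # Audio effect object: processes received sound per control parameters.
    return [x * params["wet"] for x in buffer]

def scene_creation(buffers):
    # Scene creation function: mix received sounds into one synthesised effect.
    return [sum(samples) for samples in zip(*buffers)]

def audio_routing(user_inputs):
    # Routing function: the arrangement of objects depends on user inputs.
    params = {"intensity": user_inputs["rain"], "wet": 0.5}
    sound = rain_sound(params)
    if user_inputs.get("reverb"):
        sound = reverb(sound, params)         # route through the effect object
    return scene_creation([sound])

print(audio_routing({"rain": 2.0, "reverb": True}))  # [1.0, 1.0, 1.0, 1.0]
```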
Self-produced music apparatus and method
An application operating on a smartphone that records a musician's performance, either vocal or instrumental, in combination with pre-recorded music. The combination allows for auto-tuning of the recording, compression, equalization, added reverb, latency correction and audio quantization of the rhythm, in addition to music enhancement features such as vocal spread, DeEsser, vocal doubler, vocal harmonizer, tape saturation, pitch correction, flanger, phaser, auto pan, vibrato, tremolo, rotary, ring modulator, metalizer, expander, noise gate, wah, vocal leveling, tape stop, half speed, LoFi, and stutter. Once combined, the song is transmitted to social media and/or to an online store for sale. The user can also make a video with the song. Additional marketing features, such as song competitions or music reviews and ratings, are also provided.