Patent classifications
G10H2240/056
System and method for generating an audio file
The present invention relates to a computer-implemented system and method for generating an audio output file. The method includes using one or more processors to perform steps of: receiving audio tracks, each audio track created according to audio parameters; separating each audio track into at least one selectable audio block, each audio block including audio content from a musical instrument involved in creating the audio track; assigning a unique identifier to each audio block; using the unique identifiers to select audio blocks; and generating the audio output by combining the audio blocks. The present invention prevents the reuse of the same combination of audio blocks in the generation of audio output, ensuring that the audio output files generated are sufficiently unique. Also provided are audio file recording, editing and mixing modules enabling a user to have full creative control over mix and other parameters to modify the generated audio file as desired.
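The uniqueness guarantee above can be sketched as follows — an illustrative Python fragment, not taken from the patent, assuming blocks are identified by string IDs and a combination is an unordered set of them:

```python
import hashlib
from itertools import combinations

def combination_id(block_ids):
    """Stable identifier for an unordered combination of audio-block IDs."""
    return hashlib.sha256("|".join(sorted(block_ids)).encode()).hexdigest()

class OutputGenerator:
    """Tracks which block combinations have already been rendered, so each
    generated output file uses a combination not seen before."""

    def __init__(self):
        self.used = set()

    def select_blocks(self, candidate_ids, k):
        """Return the first unused k-block combination, or None if exhausted."""
        for combo in combinations(sorted(candidate_ids), k):
            cid = combination_id(combo)
            if cid not in self.used:
                self.used.add(cid)
                return list(combo)
        return None
```

Hashing the sorted ID list makes the check order-independent, so "bass+drums" and "drums+bass" count as the same combination.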
GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES
A method of generating music contents from input music contents that includes development of models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in the sound repository. Then, models in the form of source code are sent to a melody generator. Firstly, the generator is set with specific parameters using a controller conforming to MIDI standards and supplemented with composition characteristics read from the user preference database. Next, the contents are sent to automatic generation based on artificial intelligence algorithms and the digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered and the rendered tracks are mixed into the final music record. Next, the composition and its record are verified by the critic module using algorithms based on neural networks.
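The generate-then-verify loop described above (melody generator feeding a critic module) can be sketched roughly as follows — a toy Python stand-in in which a random-note generator replaces the AI generation and a simple leap-size rule replaces the neural-network critic; all parameter names are illustrative:

```python
import random

def generate_score(params, rng):
    """Stand-in melody generator: random MIDI notes within a configured range."""
    lo, hi, length = params["lo"], params["hi"], params["length"]
    return [rng.randint(lo, hi) for _ in range(length)]

def critic(score, max_leap=5):
    """Stand-in for the neural-network critic: reject melodies with wide leaps."""
    return all(abs(a - b) <= max_leap for a, b in zip(score, score[1:]))

def compose(params, seed=0, max_tries=200):
    """Generate candidate scores until one passes the critic, as in the
    generator/critic pipeline described above."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        score = generate_score(params, rng)
        if critic(score):
            return score
    return None
```

The real system would render and mix the accepted score's instrument tracks; this sketch stops at the digital score.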
Modular approach to large string array electronic musical instruments such as specialized harps, zithers, sympathetic string arrays, partch kithara and harmonic cannon
A modular approach to large string array electronic musical instruments such as specialized harps, zithers, sympathetic string arrays, the Harry Partch Kithara, the Harry Partch Harmonic Cannon, and other large string array electronic musical instruments is presented. A mounting frame is used to interchangeably secure a plurality of musical instrument modules, each comprising a plurality of strings configured to vibrate and create electronic signals. An electronic interface is configured to transmit electrical signals from the plurality of musical instrument modules to an external system. The electronic interface can be configured to provide a multichannel output. The arrangement can further comprise either or both of at least one audio mixer and at least one signal processor.
Method and apparatus for an adaptive and interactive teaching of playing a musical instrument
A method for online music learning of playing a musical instrument, which may be a string instrument, a woodwind instrument, a brass instrument, a percussion instrument, or vocal (singing), is described. A client device, such as a smartphone or a tablet, notifies a person, such as visually by a display or audibly by a sounder, of a sequence of musical symbols that may be part of a musical piece to be played on the musical instrument. The pace or tempo of the notified sequence is adapted according to a stored skill level value of the person and the pace or tempo associated with the musical piece. The client device monitors, using a microphone in the client device, the errors in the playing of the sequence, updates the stored skill level value accordingly using predefined criteria, and accordingly changes the arrangement, pace and/or tempo of the next sequence of musical symbols.
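The skill-driven tempo adaptation might look like the following minimal Python sketch; the thresholds, step size, and half-speed floor are illustrative assumptions, not values from the patent:

```python
def update_skill(skill, error_rate, step=0.1):
    """Raise the stored skill level (0.0-1.0) after clean runs and lower it
    after error-prone ones. Thresholds here are illustrative only."""
    if error_rate < 0.05:
        return min(1.0, skill + step)
    if error_rate > 0.20:
        return max(0.0, skill - step)
    return skill

def adapted_tempo(piece_bpm, skill, min_fraction=0.5):
    """Scale the piece's own tempo by skill: beginners practice at half
    speed, experts at the full tempo associated with the piece."""
    return piece_bpm * (min_fraction + (1.0 - min_fraction) * skill)
```

After each monitored sequence, the client would call `update_skill` with the measured error rate and feed the result into `adapted_tempo` for the next sequence.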
SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform steps of synchronizing the first audio or MIDI file with the video file, marking an event in the video file at a point on a timeline, detecting a first musical key for the event, retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file and is tagged with a second musical key, and the second musical key is relevant to the first musical key, and placing the musical stinger or swell at the point of the timeline marked for the event.
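One plausible reading of "the second musical key is relevant to the first musical key" is harmonic closeness — the same key, its dominant, its subdominant, or its relative major/minor. The patent does not fix the relation; the following Python sketch (keys as pitch-class/mode pairs, all names illustrative) shows the retrieval step under that assumption:

```python
def relevant_keys(tonic_pc, mode):
    """Keys treated as relevant to an event key: itself, dominant,
    subdominant, and relative major/minor (an assumed definition)."""
    keys = {(tonic_pc, mode),
            ((tonic_pc + 7) % 12, mode),    # dominant
            ((tonic_pc + 5) % 12, mode)}    # subdominant
    if mode == "major":
        keys.add(((tonic_pc + 9) % 12, "minor"))  # relative minor
    else:
        keys.add(((tonic_pc + 3) % 12, "major"))  # relative major
    return keys

def pick_stinger(library, event_key):
    """Return the first stinger/swell tagged with a key relevant to the
    key detected for the marked event, or None if the library has none."""
    wanted = relevant_keys(*event_key)
    for entry in library:
        if entry["key"] in wanted:
            return entry
    return None
```

The selected entry would then be placed at the marked point on the video timeline.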
EMULATING A VIRTUAL INSTRUMENT FROM A CONTINUOUS MOVEMENT VIA A MIDI PROTOCOL
The present invention relates to methods and systems for creating a sound effect out of a continuous movement, in particular by detecting a continuous movement through a force sensor in a device. A method is shown for creating a sound effect out of a continuous movement. The method comprises a step of providing a first device, whereby the device is adapted to detect continuous movement and a no-movement state. The method further comprises the step of defining at least one first parameter of movement, in particular a first axis of movement of said continuous movement. A further step comprises assigning at least one first MIDI channel to the first axis of movement. A base-line value is defined for the no-movement state, and along that first axis of movement a range of values relative to said base-line value is defined. This range of values relative to said base-line value is reflective of a continuous movement along that first axis of movement. A sound effect is then output relative to the detected continuous movement. One aspect or additional embodiment of the present invention comprises the step of defining at least one first parameter of movement, whereby said first parameter of movement is an angular range in one axis X, Y, Z of an orientation in space of the first device (99.1) adapted to detect continuous movement (A.1) and a no-movement state.
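The baseline/range mapping onto a MIDI channel can be sketched as follows — an illustrative Python fragment, assuming the movement value is mapped to a 7-bit MIDI Control Change value with the no-movement baseline centered at 64:

```python
def movement_to_cc(reading, baseline, value_range):
    """Map a force-sensor reading, relative to the no-movement base-line
    value, onto a 0-127 MIDI continuous-controller value (~64 at rest)."""
    fraction = (reading - baseline) / value_range
    fraction = max(-1.0, min(1.0, fraction))   # clamp to the defined range
    return round((fraction + 1.0) / 2.0 * 127)

def cc_message(channel, controller, value):
    """Raw MIDI Control Change bytes for the channel assigned to this axis."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])
```

Each defined axis of movement would get its own channel (or controller number), with one such mapping per axis.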
Crowd-sourced technique for pitch track generation
Digital signal processing and machine learning techniques can be employed in a vocal capture and performance social network to computationally generate vocal pitch tracks from a collection of vocal performances captured against a common temporal baseline such as a backing track or an original performance by a popularizing artist. In this way, crowd-sourced pitch tracks may be generated and distributed for use in subsequent karaoke-style vocal audio captures or other applications. Large numbers of performances of a song can be used to generate a pitch track. Computationally determined pitch tracks from individual audio signal encodings of the crowd-sourced vocal performance set are aggregated and processed as an observation sequence of a trained Hidden Markov Model (HMM) or other statistical model to produce an output pitch track.
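The aggregation step can be illustrated with a deliberately simple Python stand-in — a frame-wise majority vote over per-performance pitch estimates. This omits the patent's HMM decoding, which would additionally smooth the sequence across frames:

```python
from collections import Counter

def aggregate_pitch_tracks(tracks):
    """Frame-wise majority vote over MIDI pitch estimates from many captured
    performances sharing a common temporal baseline. Each track is a list of
    per-frame pitch numbers (None = unvoiced frame)."""
    n_frames = min(len(t) for t in tracks)
    merged = []
    for i in range(n_frames):
        votes = Counter(t[i] for t in tracks if t[i] is not None)
        merged.append(votes.most_common(1)[0][0] if votes else None)
    return merged
```

In the patented approach, these per-frame aggregates would instead be fed to a trained HMM as an observation sequence, so that transition probabilities suppress single-frame outliers.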
NETWORK MUSICAL INSTRUMENT
Methods and systems are described that are utilized for remotely controlling a musical instrument. A first digital record comprising musical instrument digital commands from a first electronic instrument for a first item of music is accessed. The first digital record is transmitted over a network using a network interface to a remote, second electronic instrument for playback to a first user. Optionally, video data is streamed to a display device of a user while the first digital record is played back by the second electronic instrument. A key change command is transmitted over the network using the network interface to the second electronic instrument to cause the second electronic instrument to playback the first digital record for the first item of music in accordance with the key change command. The key change command may be transmitted during the streaming of the video data.
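The key-change playback can be sketched as a transposition pass over the digital record — an illustrative Python fragment assuming the record is a list of 3-byte MIDI channel messages:

```python
def apply_key_change(record, semitones):
    """Apply a key change command to a digital record of musical instrument
    digital commands: shift note-on/note-off note numbers by the requested
    number of semitones; pass other MIDI messages through unchanged."""
    out = []
    for status, data1, data2 in record:
        if status & 0xF0 in (0x80, 0x90):                # note-off / note-on
            data1 = max(0, min(127, data1 + semitones))  # transposed note
        out.append((status, data1, data2))
    return out
```

The second electronic instrument would play back the transposed record while the video data continues streaming.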
METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
Techniques are provided for generating an output file. One technique involves the steps of generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks and the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or consist of MIDI-generated musical content.
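The chord-map-driven selection can be sketched as follows — an illustrative Python fragment assuming the harmonic chord map is a list of (start_beat, chord_symbol) segments and each content block is tagged with the chord it was performed over; cycling through available takes keeps repeated chords from reusing the same block:

```python
def select_from_chord_map(chord_map, blocks):
    """chord_map: list of (start_beat, chord_symbol) segments for the input file.
    blocks: chord_symbol -> list of content-block IDs tagged with that chord.
    Returns one block per segment, cycling through takes of the same chord."""
    selection, next_take = [], {}
    for start_beat, chord in chord_map:
        takes = blocks.get(chord, [])
        if not takes:
            selection.append((start_beat, None))  # no matching content block
            continue
        i = next_take.get(chord, 0)
        selection.append((start_beat, takes[i % len(takes)]))
        next_take[chord] = i + 1
    return selection
```

The final step would mix the selected blocks against the input file at their segment start times to produce the output file.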