Patent classifications
G10H2240/021
Editing of MIDI files
AUTOMATIC PREPARATION OF A NEW MIDI FILE
The present disclosure relates to a method of automatically preparing a MIDI file based on a target MIDI file comprising respective note information about each of a plurality of target notes and a source MIDI file comprising respective note information about each of a plurality of source notes. Each note information comprises pitch information defining a pitch of the note. The method comprises ranking the plurality of target notes based on the pitch of each target note. The method also comprises, for each of the ranked target notes, removing the pitch information from the note information of the target note. The method also comprises, for each of the ranked target notes, replacing the removed pitch information with pitch information of a corresponding source note, whereby the target note has the same pitch as the corresponding source note, forming a plurality of new notes of a new MIDI file.
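The ranking and pitch-substitution steps above might be sketched as follows. The dictionary note representation, the low-to-high ranking order, and the rank-based correspondence between target and source notes are illustrative assumptions; the abstract leaves these details open.

```python
def transfer_pitches(target_notes, source_notes):
    """Rank the target notes by pitch, then replace each target note's
    pitch information with that of its corresponding source note,
    forming the new notes of a new MIDI file."""
    # Rank target notes from lowest to highest pitch (illustrative choice).
    ranked = sorted(range(len(target_notes)), key=lambda i: target_notes[i]["pitch"])
    new_notes = [dict(n) for n in target_notes]  # keep the other note information
    for rank, i in enumerate(ranked):
        # Assumption: the rank-th target note corresponds to the rank-th
        # source note; the patent leaves the mapping abstract.
        src = source_notes[rank % len(source_notes)]
        new_notes[i]["pitch"] = src["pitch"]
    return new_notes
```

Timing, velocity, and any other note information of the target notes are preserved; only the pitch is taken from the source, which matches the abstract's "whereby the target note has the same pitch as the corresponding source note".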
AUTOMATIC ORCHESTRATION OF A MIDI FILE
An electronic device segments first and second MIDI files into a plurality of source segments and a plurality of target segments. For each of a plurality of consecutive pairs of first and second target segments, the electronic device identifies a first source segment corresponding to the first target segment of the consecutive pair and a second source segment corresponding to the second target segment of the consecutive pair, where the first and second source segments are identified by determining that the first and second source segments are harmonically conformant to the corresponding first and second target segments, and by determining that a transition between the first and second source segments is conformant to a transition between the corresponding consecutive pair of first and second target segments. The electronic device generates a third MIDI file using the identified first and second source segments for each of the plurality of consecutive pairs of first and second target segments.
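One plausible reading of "harmonically conformant" is overlap of pitch-class content between segments; the sketch below uses that reading. The Jaccard-similarity threshold, the note-list segment representation, and the omission of the transition-conformance test are all assumptions for illustration.

```python
def pitch_classes(segment):
    """Pitch-class set of a segment given as a list of MIDI note numbers."""
    return {n % 12 for n in segment}

def harmonically_conformant(source_seg, target_seg, threshold=0.5):
    """Illustrative test: segments conform when their pitch-class sets
    overlap by at least `threshold` (Jaccard similarity)."""
    a, b = pitch_classes(source_seg), pitch_classes(target_seg)
    if not a and not b:
        return True
    return len(a & b) / len(a | b) >= threshold

def pick_source_pair(source_segments, t1, t2, threshold=0.5):
    """Scan consecutive source-segment pairs for one whose members are
    harmonically conformant to the target pair t1, t2 (the transition
    conformance check of the abstract is omitted in this sketch)."""
    for s1, s2 in zip(source_segments, source_segments[1:]):
        if harmonically_conformant(s1, t1, threshold) and \
           harmonically_conformant(s2, t2, threshold):
            return s1, s2
    return None
```

The generated third MIDI file would then be assembled by concatenating the source pair returned for each consecutive pair of target segments.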
MICROTONAL MUSICAL INSTRUMENT INTERFACE DEVICE
A microtonal musical instrument interface device between one or more Musical Instrument Digital Interface (MIDI) controllers and one or more musical instruments comprises a housing and a plurality of potentiometers on a surface of the housing. The potentiometers comprise twelve tuning potentiometers constructed and arranged to correspond to notes of a musical scale, each tuning potentiometer for tuning one of the notes; an offset potentiometer for globally tuning all of the notes by a same amount; and a range potentiometer for setting a maximum tuning range of the tuning potentiometers. A microprocessor in the housing modifies a MIDI data stream received from the one or more MIDI controllers for output to the one or more musical instruments according to the positions of the potentiometers.
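The microprocessor's stream modification might look like the sketch below, which maps the twelve tuning-knob positions plus the offset and range knobs to a per-note pitch-bend value. The knob scaling, the +/-200 cent bend range, and the (note, velocity, bend) event framing are assumptions, not details from the abstract.

```python
def tuning_offset_cents(note, tuning, offset_cents, range_cents):
    """Tuning offset for a MIDI note: the knob position (-1.0..1.0) of the
    note's pitch class, scaled by the range knob, plus the global offset
    knob. `tuning` is a list of twelve knob positions, one per note of
    the scale."""
    return tuning[note % 12] * range_cents + offset_cents

def retune_stream(events, tuning, offset_cents=0.0, range_cents=50.0):
    """Rewrite (note, velocity) events into (note, velocity, bend) events,
    where `bend` is a 14-bit pitch-bend value (8192 = no bend), assuming
    the receiving instrument's bend range is set to +/-200 cents."""
    out = []
    for note, velocity in events:
        cents = tuning_offset_cents(note, tuning, offset_cents, range_cents)
        bend = 8192 + round(cents / 200.0 * 8192)
        out.append((note, velocity, max(0, min(16383, bend))))
    return out
```

In practice the device would have to interleave the bend messages on per-note channels so that simultaneous notes can carry different tunings; that channel management is omitted here.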
PLAGIARISM RISK DETECTOR AND INTERFACE
Methods, systems and computer program products are provided for testing a lead sheet for plagiarism. A test lead sheet having a plurality of passages is received at a plagiarism detector. A set of annotations describing a level of plagiarism of a plurality of elements (e.g., chord sequences, subsequences, melodic fragments (i.e., notes), rhythm, harmony, etc.) of the test lead sheet in relation to preexisting lead sheets is generated and output via an output device.
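A minimal stand-in for the detector's per-passage plagiarism level could compare melodic interval n-grams against a corpus of preexisting lead sheets, as sketched below. The n-gram length, the interval representation, and the overlap-fraction score are illustrative assumptions; the abstract does not specify how the level of plagiarism is computed.

```python
def note_ngrams(notes, n=4):
    """All length-n pitch-interval subsequences of a melody (intervals
    rather than absolute pitches, so transpositions still match)."""
    intervals = [b - a for a, b in zip(notes, notes[1:])]
    return {tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1)}

def annotate_passages(test_passages, corpus, n=4):
    """For each passage of the test lead sheet, report the fraction of its
    interval n-grams that also occur in the preexisting corpus -- an
    illustrative stand-in for the claimed level of plagiarism."""
    corpus_grams = set()
    for melody in corpus:
        corpus_grams |= note_ngrams(melody, n)
    annotations = []
    for passage in test_passages:
        grams = note_ngrams(passage, n)
        level = len(grams & corpus_grams) / len(grams) if grams else 0.0
        annotations.append(level)
    return annotations
```

The claimed system annotates further element types (chord sequences, rhythm, harmony); each could be handled analogously with its own n-gram alphabet.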
EDITING OF AUDIO FILES
This disclosure relates to editing an audio file of a time stream having a plurality of tones T. The stream is cut at a first time point of the stream, producing a first cut A cutting the stream into a first stream and a second stream, whereby each tone which extends across the first cut is cut into a first part Ta which is in the first stream and a second part Tb which is in the second stream. For each of the tones extending across the first cut, a respective memory space is allocated to each of the first part and the second part, each of the memory spaces storing an original state of the tone. The first stream is concatenated with a further stream, the concatenating comprising adjusting the first part of one of the tones based on the information stored in the memory space allocated to said first part.
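The cutting step might be sketched as follows: tones crossing the cut point are split into parts Ta and Tb, and each part is given a memory space holding the original (uncut) state of the tone. The dictionary tone representation and field names are illustrative assumptions.

```python
def cut_stream(tones, cut_time):
    """Cut a stream of tones at cut_time into a first and a second stream.
    Each tone is a dict with 'pitch', 'start', 'end'. A tone crossing the
    cut is split into a first part Ta and a second part Tb, and each part
    keeps a copy of the tone's original state (the allocated memory space)."""
    first, second = [], []
    for tone in tones:
        if tone["end"] <= cut_time:
            first.append(tone)
        elif tone["start"] >= cut_time:
            second.append(tone)
        else:
            original = dict(tone)  # memory space: original state of the tone
            ta = {**tone, "end": cut_time, "original": original}
            tb = {**tone, "start": cut_time, "original": original}
            first.append(ta)
            second.append(tb)
    return first, second
```

When the first stream is later concatenated with a further stream, the stored original state lets the editor adjust the cut part Ta, for example by restoring its original duration or envelope.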
EDITING OF MIDI FILES
The present disclosure relates to a method of editing an audio stream (S) having at least one tone (T1) extending over time in said stream. The method comprises cutting the stream at a first time point of the stream, producing a first cut (A) having a left cutting end (A.sub.L) and a right cutting end (A.sub.R); allocating a respective memory cell to each of the cutting ends; in each of the memory cells, storing information about the tone; and, for one of the cutting ends, concatenating the cutting end with a further stream cutting end which also has an allocated memory cell with information stored therein about any tones extending to said further cutting end. The concatenating comprises using the information stored in the memory cells for adjusting any of the tones extending to the cutting ends.
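The concatenation step, which uses the information stored in the memory cells at the cutting ends to adjust the tones, might be sketched as below. Here the stored information is the tone's original state; when both halves of a cut tone meet at the concatenated ends, the original tone is restored. The matching rule and tone representation are illustrative assumptions.

```python
def concatenate_ends(left_stream, right_stream):
    """Concatenate two previously cut streams. When a tone part in the
    left stream and a tone part in the right stream carry matching stored
    information (memory cells holding the same original tone), the parts
    are re-joined and restored from that information; all other tones
    pass through unchanged."""
    out = []
    right = list(right_stream)
    for tone in left_stream:
        info = tone.get("original")
        match = next((t for t in right if t.get("original") == info), None) if info else None
        if match is not None:
            right.remove(match)
            out.append(dict(info))  # adjust: restore the original uncut tone
        else:
            out.append(dict(tone))
    out.extend(dict(t) for t in right)
    return out
```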
INFORMATION PROCESSING METHOD
An information processing method according to the present invention includes providing, to a learner that has undergone learning relating to a specific performance tendency, first musical piece information representing the contents of a musical piece and performance information relating to a past performance prior to one unit period within the musical piece, and generating, with the learner, performance information for the one unit period that is based on the specific tendency.
Electronic musical instrument, electronic musical instrument control method, and storage medium
An electronic musical instrument includes a display, a memory, and at least one processor. The memory is configured to store a plurality of song data items. Each of the plurality of song data items includes a plurality of event data items and the plurality of song data items does not include size information of each of the plurality of event data items. The at least one processor is configured to read at least one song data item from among the plurality of song data items, add an identifier to each of the plurality of event data items of the read at least one song data item, calculate size information for each of the plurality of event data items, associate the size information calculated for each of the plurality of event data items with the corresponding identifier, display a content of a first event data item, refer to the associated size information when the content of the first event data item is displayed on the display and a content of a second event data item is not displayed on the display, and display, in accordance with the associated size information referred to, the content of the second event data item on the display, instead of displaying the content of the first event data item.
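Since the stored song data carries no per-event size field, the processor must compute sizes on load and associate them with assigned identifiers; the lookup then lets the display page from one event's content to the next. A minimal sketch, assuming events are modeled as raw byte strings and identifiers are sequential integers (both assumptions, not details from the abstract):

```python
def index_song_data(song_data):
    """Build an identifier -> size table for a song's event data items.
    Each event data item is modeled as a bytes object; the stored song
    data includes no size information, so sizes are calculated here and
    associated with the corresponding identifier."""
    index = {}
    for identifier, event in enumerate(song_data):
        index[identifier] = len(event)
    return index

def next_event_offset(index, current_identifier, base_offset=0):
    """Offset of the event after `current_identifier`, obtained by summing
    the recorded sizes -- the reference the display step performs when it
    replaces the first event's content with the second's."""
    return base_offset + sum(index[i] for i in range(current_identifier + 1))
```

This mirrors the claimed flow: read the song data, add identifiers, calculate and associate sizes, then refer to the associated size information when switching the displayed event.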
Musical attribution in a two-dimensional digital representation
Musical attribution is performed in a two-dimensional (2D) digital representation. A piece of music representing a musical score is inputted. An abstracted representation of blanks of the score, called a digital audio canvas, is produced. Interactive, dynamic attribution is performed by a user to bring to life the musical score of abstracted blanks. Instrumentation selection, relative volume, scale selection, and score tempo are all musical attributes that are conveyed to the score of abstracted blanks. The score of the digital audio canvas is played back using the attributed blanks. The playback of the score is enabled by selecting appropriate abstracted blanks. The appropriate abstracted blanks are included among other blanks for increased educational and enjoyment value. The modified score is converted back into the format of the original inputted piece of music.