Method and system for generating musical notations
11798522 · 2023-10-24
Abstract
Disclosed is a computer-implemented method and system for generating musical notations. The method comprises receiving, via a first input module of a user interface, a musical note, receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of an arrangement context providing information about an event for the musical note, a pitch context providing information about a pitch for the musical note, and an expression context providing information about one or more articulations for the musical note, and generating a notation output based on the entered musical note and the added one or more parameters associated therewith.
Claims
1. A computer-implemented method for generating notations, the method comprising: receiving, via a first input module of a user interface, a musical note; receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic type for the musical note and an expression curve for the musical note, wherein, the articulation map for the musical note provides a relative position as a percentage indicating an absolute position of the musical note, the dynamic type for the musical note indicates a type of dynamic applied over the duration of the musical note, the expression curve for the musical note indicates a container of points representing values of an action force associated with the musical note; and generating, via a processor arrangement, a notation output based on the entered musical note and the added one or more parameters associated therewith.
2. The method according to claim 1, wherein, in the arrangement context, the duration for the musical note indicates a time duration of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, and the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer.
3. The method according to claim 1, wherein, in the pitch context, the pitch class for the musical note indicates a value from a range including C, C#, D, D#, E, F, F#, G, G#, A, A#, B for the musical note, the octave for the musical note indicates an integer number representing an octave of the musical note, and the pitch curve for the musical note indicates a container of points representing a change of the pitch of the musical note over duration thereof.
4. The method according to claim 1, wherein the one or more articulations comprise: dynamic change articulations providing instructions for changing the dynamic level for the musical note, duration change articulations providing instructions for changing the duration of the musical note, or relation change articulations providing instructions to impose additional context on a relationship between two or more musical notes.
5. The method according to claim 4 further comprising receiving, via a third input module of the user interface, individual profiles for each of the one or more articulations for the musical note, wherein the individual profiles comprise one or more of: a genre of the musical note, an instrument of the musical note, a given era of the musical note, a given author of the musical note.
6. The method according to claim 5, wherein an expression conveyed by each of the one or more articulations for the musical note depends on the defined individual profile therefor.
7. The method according to claim 1, wherein a pause as the musical note is represented as a RestEvent having the one or more parameters associated therewith, including the arrangement context with the duration, the timestamp and the voice layer index for the pause as the musical note.
8. The method according to claim 1, wherein the one or more articulations comprise single-note articulations including one or more of: Standard, Staccato, Staccatissimo, Tenuto, Marcato, Accent, SoftAccent, LaissezVibrer, Subito, FadeIn, FadeOut, Harmonic, Mute, Open, Pizzicato, SnapPizzicato, RandomPizzicato, UpBow, DownBow, Detache, Martele, Jete, ColLegno, SulPont, SulTasto, GhostNote, CrossNote, CircleNote, TriangleNote, DiamondNote, Fall, QuickFall, Doit, Plop, Scoop, Bend, SlideOutDown, SlideOutUp, SlideInAbove, SlideInBelow, VolumeSwell, Distortion, Overdrive, Slap, Pop.
9. The method according to claim 1, wherein the one or more articulations comprise multi-note articulations including one or more of: DiscreteGlissando, ContinuousGlissando, Legato, Pedal, Arpeggio, ArpeggioUp, ArpeggioDown, ArpeggioStraightUp, ArpeggioStraightDown, Vibrato, WideVibrato, MoltoVibrato, SenzaVibrato, Tremolo8th, Tremolo16th, Tremolo32nd, Tremolo64th, Trill, TrillBaroque, UpperMordent, LowerMordent, UpperMordentBaroque, LowerMordentBaroque, PrallMordent, MordentWithUpperPrefix, UpMordent, DownMordent, Tremblement, UpPrall, PrallUp, PrallDown, LinePrall, Slide, Turn, InvertedTurn, PreAppoggiatura, PostAppoggiatura, Acciaccatura, TremoloBar.
10. A system for generating notations, the system comprising: a user interface; a first input module to receive, via the user interface, a musical note; a second input module to receive, via the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic level for the musical note and an expression curve for the musical note, wherein, the articulation map for the musical note provides a relative position as a percentage indicating an absolute position of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer; and a processing arrangement configured to generate a notation output based on the entered musical note and the added one or more parameters associated therewith.
11. The system according to claim 10, wherein, in the arrangement context, the duration for the musical note indicates a time duration of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, and the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer.
12. The system according to claim 10, wherein, in the pitch context, the pitch class for the musical note indicates a value from a range including C, C#, D, D#, E, F, F#, G, G#, A, A#, B for the musical note, the octave for the musical note indicates an integer number representing an octave of the musical note, and the pitch curve for the musical note indicates a container of points representing a change of the pitch of the musical note over duration thereof.
13. The system according to claim 10, wherein the one or more articulations comprise: dynamic change articulations providing instructions for changing the dynamic level for the musical note, duration change articulations providing instructions for changing the duration of the musical note, or relation change articulations providing instructions to impose additional context on a relationship between two or more musical notes.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) One or more embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:
DETAILED DESCRIPTION OF THE DRAWINGS
(14) At a step 102, the method 100 comprises receiving, via a first input module of a user interface, a musical note. The musical note(s) may be entered by a user via the first input module, which is configured to allow the user to enter the musical note to be translated or notated by the method 100. The musical note may be received from a musical scoring program/software or from a musical instrument (e.g., a keyboard or a guitar). In some embodiments, the received musical note may simply indicate that a note is being played, without any other data associated with it.
(15) At a step 104, the method 100 further comprises receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic type for the musical note and an expression curve for the musical note.
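The three parameter contexts described above can be sketched as a simple data model. The following is an illustrative sketch only; the class and field names are assumptions introduced for this example and are not taken from the disclosure:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ArrangementContext:
    # Event information for the note: duration, absolute position, voice layer.
    duration: float          # time duration of the musical note
    timestamp: float         # absolute position of the musical note
    voice_layer_index: int   # placement of the note (or a rest) in a voice layer

@dataclass
class PitchContext:
    # Pitch information: pitch class, octave, and a pitch curve over the duration.
    pitch_class: str                                         # one of C, C#, D, ..., B
    octave: int                                              # integer octave number
    pitch_curve: List[Tuple[float, float]] = field(default_factory=list)

@dataclass
class ExpressionContext:
    # Expression information: articulations, dynamic type, and an expression curve.
    articulation_map: dict = field(default_factory=dict)     # articulation -> relative position (%)
    dynamic_type: Optional[str] = None                       # dynamic applied over the duration
    expression_curve: List[Tuple[float, float]] = field(default_factory=list)  # action-force points

@dataclass
class NoteEvent:
    arrangement: ArrangementContext
    pitch: Optional[PitchContext] = None        # None models a rest (cf. RestEvent in claim 7)
    expression: Optional[ExpressionContext] = None
```

For instance, a middle-C quarter note at the start of the piece could be expressed as `NoteEvent(ArrangementContext(1.0, 0.0, 0), PitchContext("C", 4))`, while omitting the pitch context yields a rest in the same voice layer.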
(16) And, at a step 106, the method further comprises generating a notation output, via a processor arrangement, based on the entered musical note and the added one or more parameters associated therewith. Upon addition of one or more parameters via the second input module by the user, the method 100 further comprises generating the notation output based on the one or more parameters.
(17) The steps 102, 104, and 106 are only illustrative and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein.
(18) It should be understood that in some embodiments, the system and method described herein are not associated with the MIDI protocol. The embodiments described herein may function as a replacement for the MIDI protocol. However, the output of the embodiments described herein may be converted to the MIDI protocol for devices that are only compatible with MIDI.
(19) The generated notation output described herein may be converted to MIDI by removing information that is beyond the scope of conventional MIDI devices.
(20) For example, to convert the output of the embodiments described herein to MIDI, durations may be converted to simple Note On/Note Off events. Furthermore, a combination of the pitch and octave contexts may be converted to a MIDI pitch designation (e.g., C2, D4, etc.). The velocity measurement described in the present specification records the velocity of a note at a far higher resolution than the MIDI protocol, so it may be converted down to the MIDI velocity range (0-127). Furthermore, (i) rest events and (ii) articulations, such as staccato, pizzicato, arco, mute, palm mute, sul ponticello, snare off, flutter tongue, etc., would be discarded because the MIDI protocol does not support articulations or rest events. In the case of two notes that are tied together, these would be converted to a single Note On/Note Off pair, and slurs/phrase marks would also be discarded because the MIDI protocol does not support them.
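The conversion rules above can be sketched as follows. This is a minimal illustration, not the disclosed implementation; the function names and the dictionary-based note representation are assumptions, and the conventional mapping C4 = MIDI note 60 is assumed:

```python
# Semitone offsets of the twelve pitch classes within an octave.
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def to_midi_pitch(pitch_class: str, octave: int) -> int:
    """Combine the pitch and octave contexts into one MIDI note number (C4 = 60)."""
    return (octave + 1) * 12 + PITCH_CLASSES.index(pitch_class)

def to_midi_velocity(force: float, max_force: float = 1.0) -> int:
    """Scale a high-resolution action-force value down to MIDI's 0-127 velocity range."""
    return max(0, min(127, round(force / max_force * 127)))

def to_midi_events(notes):
    """Convert note dicts into (time, message, pitch, velocity) tuples.

    Rests and articulations are discarded, because the MIDI protocol does not
    support them; each duration becomes a Note On/Note Off pair.
    """
    events = []
    for n in notes:
        if n.get("rest"):   # MIDI has no rest events: discard
            continue
        pitch = to_midi_pitch(n["pitch_class"], n["octave"])
        velocity = to_midi_velocity(n.get("force", 1.0))
        events.append((n["timestamp"], "note_on", pitch, velocity))
        events.append((n["timestamp"] + n["duration"], "note_off", pitch, 0))
    return sorted(events, key=lambda e: e[0])
```

Under these assumptions, a one-beat C4 note at timestamp 0 followed by a rest yields exactly one Note On/Note Off pair, with the rest silently dropped.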
(29) For illustrative purposes, and to aid in understanding features of the specification, an example will now be introduced. This example is not intended to limit the scope of the claims.
(30) This idea of 'multiple articulation' contexts is not known in the art and is not available when using the MIDI protocol. Moreover, an articulation may mean different things when combined with various other articulations. For example, a staccato within a phrase mark may be articulated differently depending on the assigned instrument: when played on a violin, a staccato note has a very specific sound; on a piano, a different sound again; and on a guitar, it is meaningless and should be ignored.
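The instrument-dependent interpretation of an articulation described above can be sketched as a lookup of per-instrument rendering rules. The rule table and shortening factors below are hypothetical values chosen for illustration, not figures from the disclosure:

```python
# Hypothetical rendering rules: how a staccato is realized depends on the
# instrument profile assigned to the articulation.
STACCATO_RULES = {
    "violin": 0.5,   # play the note at 50% of its written duration
    "piano": 0.3,    # shorten more aggressively
    "guitar": None,  # staccato is meaningless for guitar: ignore it
}

def apply_staccato(duration: float, instrument: str) -> float:
    """Return the played duration of a staccato note for a given instrument profile."""
    factor = STACCATO_RULES.get(instrument)
    if factor is None:   # articulation not applicable (or unknown instrument)
        return duration
    return duration * factor
```

The same pattern extends to the individual profiles of claim 5 (genre, era, author), each of which could select a different rule table for the same articulation symbol.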
(31) This written description uses examples to disclose multiple embodiments, including the preferred embodiments, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. Aspects from the various embodiments described, as well as other known equivalents for each such aspects, can be mixed and matched by one of ordinary skill in the art to construct additional embodiments and techniques in accordance with principles of this application.
(32) Those in the art will appreciate that various adaptations and modifications of the above-described embodiments can be configured without departing from the scope and spirit of the claims. Therefore, it is to be understood that the claims may be practiced other than as specifically described herein.