Method and system for generating musical notations

11798522 · 2023-10-24

    Abstract

    Disclosed is a computer-implemented method and system for generating musical notations. The method comprises receiving, via a first input module of a user interface, a musical note, receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of an arrangement context providing information about an event for the musical note, a pitch context providing information about a pitch for the musical note, and an expression context providing information about one or more articulations for the musical note, and generating a notation output based on the entered musical note and the added one or more parameters associated therewith.

    Claims

    1. A computer-implemented method for generating notations, the method comprising: receiving, via a first input module of a user interface, a musical note; receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic type for the musical note and an expression curve for the musical note, wherein, the articulation map for the musical note provides a relative position as a percentage indicating an absolute position of the musical note, the dynamic type for the musical note indicates a type of dynamic applied over the duration of the musical note, the expression curve for the musical note indicates a container of points representing values of an action force associated with the musical note; and generating, via a processor arrangement, a notation output based on the entered musical note and the added one or more parameters associated therewith.

    2. The method according to claim 1, wherein, in the arrangement context, the duration for the musical note indicates a time duration of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, and the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer.

    3. The method according to claim 1, wherein, in the pitch context, the pitch class for the musical note indicates a value from a range including C, C#, D, D#, E, F, F#, G, G#, A, A#, B for the musical note, the octave for the musical note indicates an integer number representing an octave of the musical note, and the pitch curve for the musical note indicates a container of points representing a change of the pitch of the musical note over duration thereof.

    4. The method according to claim 1, wherein the one or more articulations comprise: dynamic change articulations providing instructions for changing the dynamic level for the musical note, duration change articulations providing instructions for changing the duration of the musical note, or relation change articulations providing instructions to impose additional context on a relationship between two or more musical notes.

    5. The method according to claim 4 further comprising receiving, via a third input module of the user interface, individual profiles for each of the one or more articulations for the musical note, wherein the individual profiles comprise one or more of: a genre of the musical note, an instrument of the musical note, a given era of the musical note, a given author of the musical note.

    6. The method according to claim 5, wherein an expression conveyed by each of the one or more articulations for the musical note depends on the defined individual profile therefor.

    7. The method according to claim 1, wherein a pause as the musical note is represented as a RestEvent having the one or more parameters associated therewith, including the arrangement context with the duration, the timestamp and the voice layer index for the pause as the musical note.

    8. The method according to claim 1, wherein the one or more articulations comprise single-note articulations including one or more of: Standard, Staccato, Staccatissimo, Tenuto, Marcato, Accent, SoftAccent, LaissezVibrer, Subito, FadeIn, FadeOut, Harmonic, Mute, Open, Pizzicato, SnapPizzicato, RandomPizzicato, UpBow, DownBow, Detache, Martele, Jete, ColLegno, SulPont, SulTasto, GhostNote, CrossNote, CircleNote, TriangleNote, DiamondNote, Fall, QuickFall, Doit, Plop, Scoop, Bend, SlideOutDown, SlideOutUp, SlideInAbove, SlideInBelow, VolumeSwell, Distortion, Overdrive, Slap, Pop.

    9. The method according to claim 1, wherein the one or more articulations comprise multi-note articulations including one or more of: DiscreteGlissando, ContinuousGlissando, Legato, Pedal, Arpeggio, ArpeggioUp, ArpeggioDown, ArpeggioStraightUp, ArpeggioStraightDown, Vibrato, WideVibrato, MoltoVibrato, SenzaVibrato, Tremolo8th, Tremolo16th, Tremolo32nd, Tremolo64th, Trill, TrillBaroque, UpperMordent, LowerMordent, UpperMordentBaroque, LowerMordentBaroque, PrallMordent, MordentWithUpperPrefix, UpMordent, DownMordent, Tremblement, UpPrall, PrallUp, PrallDown, LinePrall, Slide, Turn, InvertedTurn, PreAppoggiatura, PostAppoggiatura, Acciaccatura, TremoloBar.

    10. A system for generating notations, the system comprising: a user interface; a first input module to receive, via the user interface, a musical note; a second input module to receive, via the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic level for the musical note and an expression curve for the musical note, wherein, the articulation map for the musical note provides a relative position as a percentage indicating an absolute position of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer; and a processing arrangement configured to generate a notation output based on the entered musical note and the added one or more parameters associated therewith.

    11. The system according to claim 10, wherein, in the arrangement context, the duration for the musical note indicates a time duration of the musical note, the timestamp for the musical note indicates an absolute position of the musical note, and the voice layer index for the musical note provides a value from a range of indexes indicating a placement of the musical note in a voice layer, or a rest in the voice layer.

    12. The system according to claim 10, wherein, in the pitch context, the pitch class for the musical note indicates a value from a range including C, C#, D, D#, E, F, F#, G, G#, A, A#, B for the musical note, the octave for the musical note indicates an integer number representing an octave of the musical note, and the pitch curve for the musical note indicates a container of points representing a change of the pitch of the musical note over duration thereof.

    13. The system according to claim 10, wherein the one or more articulations comprise: dynamic change articulations providing instructions for changing the dynamic level for the musical note, duration change articulations providing instructions for changing the duration of the musical note, or relation change articulations providing instructions to impose additional context on a relationship between two or more musical notes.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    (1) One or more embodiments of the present disclosure will now be described, by way of example only, with reference to the following diagrams wherein:

    (2) FIG. 1 is an illustration of a flowchart listing steps involved in a computer-implemented method 100 for generating notations, in accordance with an embodiment of the present disclosure;

    (3) FIG. 2 is an illustration of a block diagram of a system 200 for generating notations, in accordance with another embodiment of the present disclosure;

    (4) FIG. 3 is an illustration of an exemplary depiction of a musical note using the one or more parameters, in accordance with an embodiment of the present disclosure;

    (5) FIG. 4 is an exemplary depiction of a musical note being translated into an arrangement context, in accordance with an embodiment of the present disclosure;

    (6) FIG. 5 is an exemplary depiction of a musical note being translated into a pitch context, in accordance with an embodiment of the present disclosure;

    (7) FIG. 6 is an exemplary depiction of a musical note being translated into an expression context, in accordance with an embodiment of the present disclosure;

    (8) FIG. 7A is an exemplary depiction of a musical note with a sforzando dynamic applied therein, in accordance with an embodiment of the present disclosure;

    (9) FIG. 7B is an exemplary depiction of the musical note being translated into an expression context, wherein the expression context comprises an articulation map, in accordance with another embodiment of the present disclosure;

    (10) FIG. 8 is an exemplary depiction of a complete translation of a musical note via the method of FIG. 1 or system of FIG. 2, in accordance with one or more embodiments of the present disclosure;

    (11) FIG. 9 is an exemplary depiction of a passage of a musical score in accordance with an embodiment of the present disclosure;

    (12) FIG. 9A is an exemplary depiction of a passage of a musical score in accordance with an embodiment of the present disclosure.

    DETAILED DESCRIPTION OF THE DRAWINGS

    (13) Referring to FIG. 1, illustrated is a flowchart listing steps involved in a computer-implemented method 100 for generating notations, in accordance with an embodiment of the present disclosure. As shown, the method 100 comprises steps 102, 104, and 106.

    (14) At a step 102, the method 100 comprises receiving, via a first input module of a user interface, a musical note. The musical note may be entered by a user via the first input module, which is configured to allow the user to enter the musical note to be translated or notated by the method 100. The musical note may be received from a musical scoring program/software or from a musical instrument (e.g., a keyboard or a guitar). In some embodiments, the received musical note may indicate only that a note is being played, without any other data associated with the note.

    (15) At a step 104, the method 100 further comprises receiving, via a second input module of the user interface, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of: an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic type for the musical note and an expression curve for the musical note.
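
    For illustration only, the three contexts described above may be pictured as simple data containers. The following is a minimal Python sketch; all class and field names (ArrangementContext, PitchContext, ExpressionContext, NoteEvent, and so on) are assumptions made for this example and are not prescribed by the present disclosure.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical containers mirroring the three contexts of step 104.

@dataclass
class ArrangementContext:
    timestamp_ms: int        # absolute position of the note
    duration_ms: int         # time duration of the note
    voice_layer_index: int   # placement in a voice layer (or a rest)

@dataclass
class PitchContext:
    pitch_class: str         # one of C, C#, D, ... B
    octave: int              # integer octave number
    # (time, pitch) points describing a change of pitch over the duration
    pitch_curve: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class ExpressionContext:
    # articulation name -> (start %, end %) relative positions
    articulation_map: dict[str, tuple[float, float]] = field(default_factory=dict)
    dynamic_type: str = "natural"   # dynamic applied over the note's duration
    # (time, action force) points forming the expression curve
    expression_curve: list[tuple[float, float]] = field(default_factory=list)

@dataclass
class NoteEvent:
    arrangement: ArrangementContext
    pitch: Optional[PitchContext] = None        # absent for rest events
    expression: Optional[ExpressionContext] = None
```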

    (16) At a step 106, the method 100 further comprises generating, via a processor arrangement, a notation output based on the entered musical note and the added one or more parameters associated therewith. That is, once the user has added the one or more parameters via the second input module, the notation output is generated therefrom.

    (17) The steps 102, 104, and 106 are only illustrative and other alternatives can also be provided where one or more steps are added, one or more steps are removed, or one or more steps are provided in a different sequence without departing from the scope of the claims herein.

    (18) It should be understood that in some embodiments, the system and method described herein are not associated with the MIDI protocol. The embodiments described herein may function as a replacement for the MIDI protocol. However, the output of the embodiments described herein may be converted to the MIDI protocol for devices that are only compatible with MIDI.

    (19) The generated notation output described herein may be converted to MIDI by removing information that is beyond the scope of conventional MIDI devices.

    (20) For example, to convert the protocol associated with the embodiments described herein, durations may be converted to simple Note On/Note Off events. Furthermore, a combination of the pitch and octave contexts may be converted to a MIDI pitch class (e.g., C2, D4, etc.). The velocity measurement described in the present specification records the velocity of a note at a far higher resolution than the MIDI protocol, so it may be converted down to the MIDI velocity range (0-127). Furthermore, (i) rest events and (ii) articulations, such as staccato, pizzicato, arco, mute, palm mute, sul ponticello, snare off, flutter tongue, etc., would be discarded because the MIDI protocol does not support articulations or rest events. Two notes that are tied together would be converted to a single “Note On” and “Note Off” event pair, and slurs/phrase marks would also be discarded because the MIDI protocol does not represent them.
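
    A minimal sketch of this down-conversion is given below. It assumes the NoteEvent-style dataclasses sketched earlier; the helper names (to_midi_pitch, to_midi_velocity, to_midi_events) are illustrative and do not belong to any real MIDI library binding.

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def to_midi_pitch(pitch_class: str, octave: int) -> int:
    # Common MIDI convention: C-1 is note number 0, so C4 maps to 60.
    return (octave + 1) * 12 + PITCH_CLASSES.index(pitch_class)

def to_midi_velocity(force: float) -> int:
    # Down-sample a high-resolution action force (assumed 0.0-1.0 here)
    # to the 0-127 velocity range of the MIDI protocol.
    return max(0, min(127, round(force * 127)))

def to_midi_events(note_events):
    # Durations become paired Note On/Note Off events; rest events and
    # articulations are discarded because MIDI cannot represent them.
    events = []
    for note in note_events:
        if note.pitch is None:  # a rest event has no MIDI equivalent
            continue
        pitch = to_midi_pitch(note.pitch.pitch_class, note.pitch.octave)
        curve = note.expression.expression_curve if note.expression else []
        velocity = to_midi_velocity(curve[0][1]) if curve else 64
        on = note.arrangement.timestamp_ms
        events.append((on, "note_on", pitch, velocity))
        events.append((on + note.arrangement.duration_ms, "note_off", pitch, 0))
    return sorted(events, key=lambda e: e[0])
```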

    (21) Referring to FIG. 2, illustrated is a block diagram of a system 200 for generating notations, in accordance with another embodiment of the present disclosure. As shown, the system 200 comprises a user interface 202, a first input module 204, a second input module 206, and a processing arrangement 208. Herein, the first input module 204 may be configured to receive, via the user interface 202, a musical note. The second input module 206 may be configured to receive, via the user interface 202, one or more parameters to be associated with the musical note, wherein the one or more parameters comprise at least one of an arrangement context providing information about an event for the musical note including at least one of a duration for the musical note, a timestamp for the musical note and a voice layer index for the musical note, a pitch context providing information about a pitch for the musical note including at least one of a pitch class for the musical note, an octave for the musical note and a pitch curve for the musical note, and an expression context providing information about one or more articulations for the musical note including at least one of an articulation map for the musical note, a dynamic level for the musical note and an expression curve for the musical note. For example, the first input module 204 enables a user to enter the musical note and the second input module 206 enables the user to modify or add the one or more parameters associated therewith. The system 200 further comprises the processing arrangement 208 configured to generate a notation output based on the entered musical note and the added one or more parameters associated therewith.

    (22) Referring to FIG. 3, illustrated is an exemplary depiction of a musical note using the one or more parameters 300, in accordance with one or more embodiments of the present disclosure. As shown, the exemplary musical note is depicted using the one or more parameters 300 added by the user via the second input module 206 of the user interface 202, i.e., the musical note may be translated using the one or more parameters 300 for further processing and analysis thereof. Herein, the one or more parameters 300 comprise at least an arrangement context 302, wherein the arrangement context 302 comprises a timestamp 302A, a duration 302B and a voice layer index 302C. Further, the one or more parameters 300 comprise a pitch context 304, wherein the pitch context 304 comprises a pitch class 304A, an octave 304B, and a pitch curve 304C. Furthermore, the one or more parameters 300 comprise an expression context 306, wherein the expression context 306 comprises an articulation map 306A, a dynamic type 306B, and an expression curve 306C. Collectively, the arrangement context 302, the pitch context 304, and the expression context 306 enable the method 100 or the system 200 to generate accurate and effective notations. In some embodiments, the pitch context 304 may comprise (i) a pitch level and (ii) a pitch curve. The pitch level may include an integral value of the pitch, which may be a product of the “pitch class” and the “octave”. This may provide for various pitch deviations that do not exist in the common 12-tone equal temperament tonal system. Furthermore, an inverse transformation of a pitch level may be used to determine a pitch class and an octave with a remaining amount of “tuning” (if any). The pitch curve determines the pitch as a function of time and, as such, may comprise a container of points representing a change of the pitch level of the musical note over a duration of time.
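
    The pitch-level idea and its inverse transformation may be sketched as follows. The 12-steps-per-octave layout and the function names are assumptions of this example; as noted above, deviations outside 12-tone equal temperament are carried in the fractional “tuning” component.

```python
PITCH_CLASSES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_level(pitch_class: str, octave: int, tuning: float = 0.0) -> float:
    # Combine octave and pitch class into a single value; tuning is a
    # fractional deviation from the equal-tempered step.
    return octave * 12 + PITCH_CLASSES.index(pitch_class) + tuning

def from_pitch_level(level: float) -> tuple[str, int, float]:
    # Inverse transformation: recover pitch class, octave, and the
    # remaining amount of tuning, if any.
    whole, tuning = divmod(level, 1.0)
    octave, step = divmod(int(whole), 12)
    return PITCH_CLASSES[step], octave, tuning

assert from_pitch_level(pitch_level("E", 5)) == ("E", 5, 0.0)
```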

    (23) Referring to FIG. 4, illustrated is an exemplary depiction of a musical note 400 being translated into the arrangement context 302, in accordance with an embodiment of the present disclosure. As shown, the musical note 400 comprises a stave and five distinct events or notes that are required to be translated into corresponding arrangement contexts, i.e., the five distinct events of the musical note 400 are represented by the arrangement context 302 further comprising inherent arrangement contexts 402A to 402E. The first musical note is represented as a first arrangement context 402A comprising a timestamp=0 s, a duration=500 ms, and a voice layer index=0. The second musical note is represented as a second arrangement context 402B comprising a timestamp=500 ms, a duration=500 ms, and a voice layer index=0. The third musical note is represented as a third arrangement context 402C comprising a timestamp=1000 ms, a duration=250 ms, and a voice layer index=0. The fourth musical note is represented as a fourth arrangement context 402D comprising a timestamp=1250 ms, a duration=250 ms, and a voice layer index=0. The fifth musical note is represented as a fifth arrangement context 402E comprising a timestamp=1500 ms, a duration=500 ms, and a voice layer index=0.
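
    Reusing the ArrangementContext container sketched earlier (an assumption of this example, not the patent's own encoding), the five contexts of FIG. 4 may be written out directly from the values above:

```python
# The five arrangement contexts 402A to 402E of FIG. 4.
contexts = [
    ArrangementContext(timestamp_ms=0,    duration_ms=500, voice_layer_index=0),  # 402A
    ArrangementContext(timestamp_ms=500,  duration_ms=500, voice_layer_index=0),  # 402B
    ArrangementContext(timestamp_ms=1000, duration_ms=250, voice_layer_index=0),  # 402C
    ArrangementContext(timestamp_ms=1250, duration_ms=250, voice_layer_index=0),  # 402D
    ArrangementContext(timestamp_ms=1500, duration_ms=500, voice_layer_index=0),  # 402E
]
```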

    (24) Referring to FIG. 5, illustrated is an exemplary depiction of a musical note 500 being translated into the pitch context 304, in accordance with an embodiment of the present disclosure. As shown, the musical note 500 comprises two distinct events or notes that are required to be translated into corresponding pitch contexts, i.e., the two distinct events of the musical note 500 are represented by the pitch context 304 further comprising inherent pitch contexts 504A and 504B. The first musical note is represented by the first pitch context 504A, wherein the first pitch context 504A comprises the pitch class=E, the octave=5, and the pitch curve 506A. The second musical note is represented by the second pitch context 504B, wherein the second pitch context 504B comprises the pitch class=C, the octave=5, and the pitch curve 506B.

    (25) Referring to FIG. 6, illustrated is an exemplary depiction of a musical note 600 being translated into the expression context 306, in accordance with an embodiment of the present disclosure. As shown, the musical note 600 comprises three distinct events or notes that are required to be translated into the corresponding expression context 306, i.e., the three distinct events of the musical note 600 are represented by the expression context 306 further comprising inherent expression contexts 606A to 606C. The first musical note is represented as a first expression context 606A, wherein the first expression context 606A comprises an articulation map (not shown), a dynamic type=‘mp’, and an expression curve 604A. The second musical note is represented as a second expression context 606B, wherein the second expression context 606B comprises an articulation map (not shown), a dynamic type=‘mf’, and an expression curve 604B. The third musical note is represented as a third expression context 606C, wherein the third expression context 606C comprises an articulation map (not shown), a dynamic type=‘mf’, and an expression curve 604C.

    (26) Referring to FIG. 7A, illustrated is an exemplary depiction of a musical note 700 with a sforzando (sfz) dynamic applied therein, in accordance with some embodiments of the present disclosure. As shown, the musical note 700 comprises three distinct events or notes that are required to be translated into the expression context 306, i.e., the three events are translated into corresponding expression contexts, with each event or note marked with a “Staccato” articulation, and wherein the second note of the musical note 700 has the sforzando (or “subito forzando”) dynamic applied thereto, which indicates that the player should suddenly play with force. The first musical note is represented as a first expression context 706A, wherein the first expression context 706A comprises an articulation map (not shown), a dynamic type=‘natural’, and an expression curve 704A; the third musical note is represented as a third expression context 706C, which is similar to the first expression context 706A. However, the second musical note is represented as a second expression context 706B, wherein the second expression context 706B comprises an articulation map (not shown), a dynamic type=‘mp’, and an expression curve 704B. In this case, the expression curve 704B is short, with a sudden “attack” phase followed by a gradual “release” phase over the duration of the note.
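
    An illustrative construction of such a curve is sketched below: a container of (relative time, action force) points with a sudden attack and a gradual, linear release. The particular force values are assumptions of this sketch, not values taken from FIG. 7A.

```python
def sforzando_curve(steps: int = 8) -> list[tuple[float, float]]:
    # Sudden attack: full force at the onset of the note.
    curve = [(0.0, 1.0)]
    # Gradual release: force decays linearly over the note's duration.
    for i in range(1, steps + 1):
        t = i / steps
        curve.append((t, 1.0 - 0.7 * t))
    return curve
```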

    (27) Referring to FIG. 7B, illustrated is an exemplary depiction of the musical note 700 being translated into the expression context 306, wherein the expression context 306 comprises an articulation map 702, in accordance with one or more embodiments of the present disclosure. As shown, the articulation map 702 describes the distribution of the one or more articulations; since all performance instructions are applicable to a single note, i.e., the second note of the musical note 700, the timestamp and duration of each particular articulation match those of the corresponding note.
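
    A possible encoding of such an articulation map is sketched below: each articulation is stored with its coverage of the note as relative positions in percent. Because every performance instruction here applies to the whole of the single note, each span is 0-100%. The dictionary layout and names are assumptions of this example.

```python
# Illustrative articulation map for the second note of FIG. 7B.
articulation_map = {
    "Staccato": (0.0, 100.0),   # spans the full duration of the note
    "Sforzando": (0.0, 100.0),  # the sudden-force dynamic of FIG. 7A
}

def is_active(articulations: dict, name: str, position_pct: float) -> bool:
    # True if the named articulation covers the given relative position.
    start, end = articulations[name]
    return start <= position_pct <= end
```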

    (28) Referring to FIG. 8, illustrated is an exemplary depiction of a complete translation of a musical note 800 via the method 100 or system 200, in accordance with one or more embodiments of the present disclosure. As shown, the musical note 800 comprises seven distinct events, i.e., six note events and a rest event. The musical note 800 is expressed or translated in terms of the one or more parameters 300, wherein each of the six note events comprises a respective arrangement context 402X, pitch context 504X, and expression context 606X, where X indicates the position of an event within the musical note 800, and wherein the rest event comprises only the arrangement context 402E associated therewith. The first event of the musical note 800, i.e., the first note event, is expressed by the first arrangement context 402A comprising the timestamp=0 s, duration=500 ms, and voice layer index=0, the first pitch context 504A comprising the pitch class=‘F’, the octave=5, and the pitch curve 506A, and the first expression context 606A comprising the articulation map (not shown), the dynamic type, and the expression curve 604A. Similarly, the second event of the musical note 800, i.e., the second note event, is expressed by the second arrangement context 402B comprising the timestamp=0 s, duration=500 ms, and voice layer index=0, the second pitch context 504B comprising the pitch class=‘D’, the octave=5, and the pitch curve 506B, and the second expression context 606B comprising the articulation map (not shown), the dynamic type, and the expression curve 604B. Such a process is followed for each of the events in the musical note 800 except for the rest event, i.e., the fifth event of the musical note, wherein only a fifth arrangement context 402E is used for expression of the rest event, the fifth arrangement context 402E comprising the timestamp=750 ms, the duration=250 ms and the voice layer index=0.
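
    For illustration, the first note event and the rest event of FIG. 8 may be assembled as below, reusing the dataclasses sketched earlier (an assumption of this example). Per claim 7 and the passage above, the rest event carries only an arrangement context; the dynamic type of the first note is left at its default because the passage does not specify it.

```python
# First note event of FIG. 8 (402A / 504A / 606A).
first_note_event = NoteEvent(
    arrangement=ArrangementContext(timestamp_ms=0, duration_ms=500, voice_layer_index=0),
    pitch=PitchContext(pitch_class="F", octave=5),
    expression=ExpressionContext(),  # articulation map and curve omitted here
)

# Fifth event of FIG. 8: a rest, with no pitch or expression context (402E).
rest_event = NoteEvent(
    arrangement=ArrangementContext(timestamp_ms=750, duration_ms=250, voice_layer_index=0),
)
```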

    (29) For illustrative purposes, and to aid in understanding features of the specification, an example will now be introduced. This example is not intended to limit the scope of the claims. In some embodiments, and referring now to FIG. 9 and FIG. 9A, an example of a music score is illustrated. FIG. 9 and FIG. 9A illustrate a slur. When encoding the slur in FIG. 9, a sampler may determine that the first note occupies 25% of the overall duration of the slur. This information allows the sampler to define a phrasing behavior for the slur. Furthermore, a third quarter note 901 “knows” that it occupies 50-75% of the slur's duration, as well as 0-100% of the duration of an accent 902.
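
    The percentage bookkeeping of this example may be sketched as follows. The helper function is an assumption of this sketch; the numbers reproduce the FIG. 9 example of four equal quarter notes under one slur, where the third note occupies 50-75% of the slur's duration.

```python
def coverage_pct(note_start: int, note_end: int,
                 span_start: int, span_end: int) -> tuple[float, float]:
    # Relative position of a note inside an articulation span, in percent.
    length = span_end - span_start
    return (100 * (note_start - span_start) / length,
            100 * (note_end - span_start) / length)

# Third quarter note of a 2000 ms slur starting at 0 ms: 50-75% of the slur.
print(coverage_pct(1000, 1500, 0, 2000))  # (50.0, 75.0)
```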

    (30) This idea of ‘multiple articulation’ contexts is not known in the art and is not available when using a MIDI protocol. Moreover, articulations may mean different things when combined with various other articulations. For example, a staccato within a phrase mark may be articulated differently depending on the assigned instrument. When played on a violin, a staccato note has a very specific sound; on a piano, it produces a different sound again; on a guitar, it is meaningless and should be ignored.

    (31) This written description uses examples to disclose multiple embodiments, including the preferred embodiments, and also to enable any person skilled in the art to practice the invention, including making and using any devices or systems and performing any incorporated methods. Aspects from the various embodiments described, as well as other known equivalents for each such aspect, can be mixed and matched by one of ordinary skill in the art to construct additional embodiments and techniques in accordance with the principles of this application.

    (32) Those skilled in the art will appreciate that various adaptations and modifications of the above-described embodiments can be configured without departing from the scope and spirit of the claims. Therefore, it is to be understood that the claims may be practiced other than as specifically described herein.