SYNTHETIC MUSICAL INSTRUMENT WITH TOUCH DYNAMICS AND/OR EXPRESSIVENESS CONTROL
20170011724 · 2017-01-12
Inventors
- Perry R. Cook (Jacksonville, OR)
- Jeannie Yang (San Francisco, CA)
- Yar Woo (San Francisco, CA, US)
- John Shimmin (San Francisco, CA)
- Randal Leistikow (Denver, CO)
- Michael Berger (Redwood City, CA, US)
- Jeff Smith (San Francisco, CA, US)
CPC classification
G10H2210/225 · G10H2220/355 · G10H1/368 · G10H1/0016 · G10H2220/395 · G10H2220/241 · G10H2210/201 · G10H2210/091 (all PHYSICS)
Abstract
Notwithstanding practical limitations imposed by mobile device platforms and applications, truly captivating musical instruments may be synthesized in ways that allow musically expressive performances to be captured and rendered in real-time. Synthetic musical instruments that provide a game, grading or instructional mode are described in which one or more qualities of a user's performance are assessed relative to a musical score. By providing a range of modes (from score-assisted to fully user-expressive), user interactions with synthetic musical instruments are made more engaging and tend to capture user interest over generally longer periods of time. Synthetic musical instruments are described in which force dynamics of user gestures (such as finger contact forces applied to a multi-touch sensitive display or surface and/or the temporal extent and applied pressure of sustained contact thereon) are captured and drive the digital synthesis in ways that enhance expressiveness of user performances.
Claims
1. A method comprising: using a portable computing device as a synthetic musical instrument; presenting a user of the synthetic musical instrument with visual cues on a multi-touch sensitive display of the portable computing device, the presented visual cues indicative of temporally sequenced note selections in accord with a musical score; capturing note sounding gestures indicated by the user based on finger contacts with the multi-touch sensitive display, wherein individual ones of the captured note sounding gestures are characterized, at least in part, based on position and dynamics of finger contact with the multi-touch sensitive display; and audibly rendering a performance on the portable computing device in real-time correspondence with the captured note sounding gestures, including the finger contact dynamics thereof.
2. The method of claim 1, wherein the finger contact dynamics include a characterization of finger contact force applied to the multi-touch sensitive display; and wherein the characterization of finger contact force is used as at least a contributing indicator for velocity with which a corresponding note is sounded in the audibly rendered performance.
3. The method of claim 2, further comprising: for member notes of a chord sounded in the audibly rendered performance, applying a generally uniform velocity based on the characterization of at least one corresponding finger contact force.
4. The method of claim 2, further comprising: for member notes of a chord sounded in the audibly rendered performance, applying individual velocities based, at least in part, on characterizations of respective finger contact forces.
5. The method of claim 2, wherein the finger contact force is characterized at the portable computing device based on sensitivity of the multi-touch sensitive display itself to a range of applied force magnitudes.
6. The method of claim 5, wherein the characterization of finger contact force includes a remapping from a multi-touch sensitive display contact force data domain to a mapped range of note velocities for the synthetic musical instrument.
7. The method of claim 6, wherein the synthetic musical instrument includes a piano or keyboard; and wherein the remapping is in accord with a normalized half-sigmoidal-type mapping function.
8. The method of claim 2, wherein the finger contact force is characterized at the portable computing device based on accelerometer data associable with the finger contact.
9. The method of claim 2, wherein the finger contact dynamics further include both onset and release of a finger contact; and wherein a temporal extent of the finger contact, from onset to release, is used as at least a contributing indicator for sustaining of a corresponding note sounded in the audibly rendered performance.
10. The method of claim 2, wherein the finger contact dynamics further include aftertouch dynamics used as at least a contributing indicator for vibrato or bend of a corresponding note sounded in the audibly rendered performance.
11. The method of claim 1, wherein the musical score encodes a temporal sequencing of note selections together with corresponding dynamics, the method further comprising: for at least a subset of the captured note sounding gestures, computing effective note sounding dynamics based, for a given note sounding gesture, on both: the score-coded dynamics for the corresponding note selection; and user-expressed dynamics of finger contact with the multi-touch sensitive display; and audibly rendering the performance on the portable computing device in real-time correspondence with the captured note sounding gestures based on the computed effective note sounding dynamics.
12. The method of claim 11, further comprising: computing the effective note sounding dynamics as a function that includes a weighted sum of the score-coded and user-expressed dynamics.
13. The method of claim 12, wherein the weighted sum includes an approximately 25% contribution in accord with score-coded note velocities and an approximately 75% contribution in accord with user-expressed note sounding velocity characterized based on finger contact forces applied to the multi-touch sensitive display.
14. The method of claim 11, further comprising: varying comparative contributions of score-coded dynamics and user-expressed dynamics to the computed effective note sounding dynamics based on a user interface control.
15. The method of claim 14, wherein the user interface control is provided at least in part, using a slider, knob or selector visually presented on the multi-touch sensitive display; and wherein the user interface control provides either or both of: a predetermined set of values for the comparative contributions and an effectively continuous variation of the comparative contributions.
16. The method of claim 14, further comprising: dynamically varying the comparative contributions.
17. The method of claim 11, further comprising: based on the musical score, dynamically varying during a course of the performance comparative contributions of score-coded dynamics and user-expressed dynamics to the computed effective note sounding dynamics.
18. The method of claim 11, further comprising: computing the effective note sounding dynamics as a function that modulates score-coded note velocities based on characterization of user-expressed finger contact forces applied to the multi-touch sensitive display in connection with the particular note sounding gestures.
19. The method of claim 1, further comprising: determining correspondence of respective captured note sounding gestures with the visual cues; and grading the user's performance based on the determined correspondences.
20. The method of claim 19, further comprising: presenting the user with visual cues indicative of score-coded note velocities, wherein the determined correspondences include correspondence of score-coded note velocities with note velocities actually expressed by the user's note sounding gestures.
21. The method of claim 19, wherein the determined correspondences include a measure of correspondence of finger contact dynamics for a particular note sounding gesture with visually cued note velocity.
22. The method of claim 1, wherein the presented visual cues traverse at least a portion of the multi-touch sensitive display toward a sounding zone.
23. The method of claim 1, wherein the synthetic musical instrument is a piano or keyboard, and wherein the visual cues travel across the multi-touch sensitive display and represent, in one dimension of the multi-touch sensitive display, desired key contacts in accordance with notes of the score and, in a second dimension generally orthogonal to the first, temporal sequencing of the desired key contacts.
24. The method of claim 1, wherein the synthetic musical instrument is a string instrument, and wherein the visual cues code, in one dimension of the multi-touch sensitive display, desired contact with corresponding ones of the strings in accordance with the score and, in a second dimension generally orthogonal to the first, temporal sequencing of the desired contacts paced in accord with the current value of the target tempo.
25. The method of claim 24, wherein the captured note sounding gestures are indicative of both string excitation and pitch selection for the excited string.
26. The method of claim 19, further comprising: presenting on the multi-touch sensitive display a lesson plan of exercises, wherein the captured note selection gestures correspond to performance by the user of a particular one of the exercises; and advancing the user to a next exercise of the lesson plan based on a grading of the user's performance of the particular exercise.
27. The method of claim 1, wherein the portable computing device includes a communications interface, the method further comprising, transmitting an encoded stream of the note sounding gestures via the communications interface for rendering of the performance on a remote device.
28. The method of claim 1, wherein the audible rendering includes: modeling acoustic response for one of a piano, a guitar, a violin, a viola, a cello and a double bass; and driving the modeled acoustic response with inputs corresponding to the captured note sounding gestures and, for at least some of the captured note sounding gestures, a combination of score-coded and user-expressed dynamics.
29. The method of claim 1, wherein the portable computing device is selected from the group of: a compute pad; a personal digital assistant or book reader; and a mobile phone or media player.
30. The method of claim 27, further comprising: geocoding the transmitted gesture stream; and displaying a geographic origin for, and in correspondence with audible rendering of, another user's performance encoded as another stream of note sounding gestures received via the communications interface directly or indirectly from a remote device.
31. An apparatus comprising: a portable computing device having a multi-touch display interface; and machine readable code executable on the portable computing device to implement a synthetic musical instrument, the machine readable code including instructions executable to present a user of the synthetic musical instrument with visual cues on a multi-touch sensitive display of the portable computing device, the presented visual cues indicative of temporally sequenced note selections in accord with a musical score, wherein the musical score further encodes dynamics for at least some of the note selections; and the machine readable code further executable to (i) capture note sounding gestures indicated by the user based on finger contacts with the multi-touch sensitive display, wherein individual ones of the captured note sounding gestures are characterized, at least in part, based on position and dynamics of finger contact with the multi-touch sensitive display and (ii) for at least a subset of the captured note sounding gestures, to compute effective note sounding dynamics based, for a given note sounding gesture, on both the score-coded dynamics for the corresponding note selection and user-expressed dynamics of finger contact with the multi-touch sensitive display.
32. The apparatus of claim 31, further comprising: machine readable code executable on the portable computing device to audibly render the performance on the portable computing device in real-time correspondence with the captured note sounding gestures based on the computed effective note sounding dynamics.
33. The apparatus of claim 31, embodied as one or more of a compute pad, a handheld mobile device, a mobile phone, a personal digital assistant, a smart phone, a media player and a book reader.
34. A computer program product encoded in media and including instructions executable to implement a synthetic musical instrument on a portable computing device having a multi-touch display interface, the computer program product encoding and comprising: instructions executable on the portable computing device to present a user of the synthetic musical instrument with visual cues on the multi-touch sensitive display of the portable computing device, the presented visual cues indicative of temporally sequenced note selections in accord with a musical score, wherein the musical score further encodes dynamics for at least some of the note selections; and instructions executable on the portable computing device to (i) capture note sounding gestures indicated by the user based on finger contacts with the multi-touch sensitive display, wherein individual ones of the captured note sounding gestures are characterized, at least in part, based on position and dynamics of finger contact with the multi-touch sensitive display and (ii) for at least a subset of the captured note sounding gestures, to compute effective note sounding dynamics based, for a given note sounding gesture, on both the score-coded dynamics for the corresponding note selection and user-expressed dynamics of finger contact with the multi-touch sensitive display.
35. The computer program product of claim 34, further encoding and comprising: instructions executable on the portable computing device to audibly render the performance on the portable computing device in real-time correspondence with the captured note sounding gestures based on the computed effective note sounding dynamics.
36. The computer program product of claim 34, wherein the media are readable by the portable computing device or readable incident to a computer program product conveying transmission to the portable computing device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0055] The present invention is illustrated by way of example and not limitation with reference to the accompanying figures, in which like references generally indicate similar elements or features.
[0063] Skilled artisans will appreciate that elements or features in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions or prominence of some of the illustrated elements or features may be exaggerated relative to other elements or features in an effort to help to improve understanding of embodiments of the present invention.
DESCRIPTION
[0064] Many aspects of the design and operation of a synthetic musical instrument with touch dynamics and/or expressiveness control will be understood based on the description herein of certain exemplary piano- or keyboard-type implementations and teaching examples. Nonetheless, it will be understood and appreciated based on the present disclosure that variations and adaptations for other instruments are contemplated. Portable computing device implementations and deployments typical of social music applications for iOS and Android devices are emphasized for purposes of concreteness. Score or tablature user interface conventions popularized in the Magic Piano, Magic Fiddle, Magic Guitar, Leaf Trombone: World Stage and Ocarina 2 applications (available from Smule Inc.) are likewise emphasized.
[0065] While these synthetic keyboard-type, string and even wind instruments and application software implementations provide a concrete and helpful descriptive framework in which to describe aspects of the invented techniques, it will be understood that Applicant's techniques and innovations are not necessarily limited to such instrument types or to the particular user interface designs or conventions (including e.g., musical score presentations, note sounding gestures, visual cuing, sounding zone depictions, etc.) implemented therein. Indeed, persons of ordinary skill in the art having benefit of the present disclosure will appreciate a wide range of variations and adaptations as well as the broad range of applications and implementations consistent with the examples now more completely described.
Exemplary Synthetic Piano-Type Application
[0071] Just as early and late sounding of cued notes are potentially expressive, so too can be finger contact dynamics, at least in embodiments of a synthetic musical instrument implemented on a portable computing device capable of registering variations in finger contact forces applied to a multi-touch sensitive display. More specifically, measured or estimated magnitudes of finger contact forces applied in the course of the key strike gestures described above are captured as user expression of keyed note velocity and/or after-touch key pressure. Persons of skill in the art having benefit of the present disclosure will appreciate that, in certain embodiments, visual cuing symbologies such as those illustrated in the drawings may additionally be indicative of score-coded note velocities to be expressed by the user.
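By way of illustration only, the normalized half-sigmoidal-type remapping of contact force to note velocity recited in claim 7 could be realized along the following lines. This is a minimal sketch in Python; the full-scale force value, curve steepness and function names are assumptions chosen for illustration and are not taken from the disclosure.

    import math

    FORCE_MAX = 1.0   # assumed full-scale reading from the force- or pressure-sensitive display
    STEEPNESS = 6.0   # assumed curve steepness; larger values compress mid-range forces

    def _half_sigmoid(x, k):
        # Upper half of a logistic curve, shifted and rescaled so that 0 -> 0 and 1 -> 1.
        raw = 1.0 / (1.0 + math.exp(-k * x)) - 0.5
        full = 1.0 / (1.0 + math.exp(-k)) - 0.5
        return raw / full

    def force_to_velocity(force):
        # Remap an applied finger contact force in [0, FORCE_MAX] to a note velocity in [1, 127].
        f = max(0.0, min(force, FORCE_MAX)) / FORCE_MAX
        return max(1, round(127 * _half_sigmoid(f, STEEPNESS)))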
[0073] In general, the audible rendering can include synthesis of tones, overtones, harmonics, perturbations, amplitudes and other performance characteristics based on the captured gesture stream. In some cases, rendering of the performance includes audible rendering by converting to acoustic energy a signal synthesized from the gesture stream encoding (e.g., by driving a speaker). In some cases, the audible rendering is on the very device on which the musical performance is captured. In some cases, the gesture stream encoding is conveyed to a remote device, whereupon audible rendering converts a synthesized signal to acoustic energy.
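One possible encoding of a captured note sounding gesture, suitable for driving local synthesis or for transmission to a remote device as an encoded gesture stream, is sketched below. The field names are assumptions chosen to mirror the dynamics described herein (onset, release, lateral position, contact force, aftertouch); they are not prescribed by the disclosure.

    import json
    from dataclasses import dataclass, asdict

    @dataclass
    class NoteGesture:
        onset_ms: int         # time at which finger contact began
        release_ms: int       # time at which finger contact ended (sustain = release - onset)
        key_index: int        # laterally coded note position on the multi-touch sensitive display
        contact_force: float  # measured or estimated finger contact force, normalized 0.0..1.0
        aftertouch: float     # post-onset pressure, usable for vibrato or bend

    def encode_gesture_stream(gestures):
        # Serialize a captured gesture stream, e.g., for rendering of the performance on a remote device.
        return json.dumps([asdict(g) for g in gestures])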
[0074] The digital synthesis (554) of a piano (or other keyboard-type percussion instrument) allows the user musician to control an actual expressive model using multi-sensor interactions (e.g., finger strikes at laterally coded note positions on screen, perhaps with sustain or damping gestures expressed by particular finger travel or via an orientation- or accelerometer-type sensor 517) as inputs. In a portable computing device 501 embodiment that provides a force- or pressure-sensitive multi-touch sensitive display, or which is configured to generate similar accelerometer-based data, key strike forces are captured as an additional component of user expression. Note that digital synthesis (554) is, at least for full synthesis modes, driven by the user musician's note sounding gestures, rather than by mere tap-triggered release of the next score-coded note. In this way, the user is actually causing the sound and controlling the timing, velocity, sustain, decay, pitch, quality and other characteristics of notes (including chords) sounded. A variety of computational techniques may be employed, as will be appreciated by persons of ordinary skill in the art; exemplary techniques include wavetable and FM synthesis.
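As a minimal sketch of the FM option, the following Python fragment renders a single note whose loudness and brightness track the captured note velocity, illustrating how the synthesis is driven by the user's gesture rather than by a fixed score-coded event. The sample rate, operator ratio, envelope and other constants are assumptions for illustration only.

    import numpy as np

    SAMPLE_RATE = 44100

    def fm_note(freq_hz, velocity, duration_s=1.0):
        # Two-operator FM tone; velocity (1..127) scales both amplitude and modulation depth.
        t = np.linspace(0.0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
        v = velocity / 127.0
        mod_index = 2.0 * v                                   # harder strikes sound brighter
        modulator = np.sin(2 * np.pi * (2.0 * freq_hz) * t)   # assumed 2:1 modulator/carrier ratio
        carrier = np.sin(2 * np.pi * freq_hz * t + mod_index * modulator)
        envelope = np.exp(-3.0 * t)                           # simple percussive decay
        return v * envelope * carrier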
[0075] Wavetable or FM synthesis is generally a computationally efficient and attractive digital synthesis implementation for piano-type musical instruments such as those described and used herein as primary teaching examples. However, and particularly for adaptations of the present techniques to syntheses of certain types of multi-string instruments (e.g., unfretted multi-string instruments such as violins, violas, cellos and double bass), physical modeling may provide a livelier, more expressive synthesis that is responsive (in ways similar to physical analogs) to the continuous and expressively variable excitation of constituent strings. For a discussion of digital synthesis techniques that may be suitable in other synthetic instruments, see generally, commonly-owned U.S. Pat. No. 8,772,621, which is incorporated by reference herein.
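For the string-instrument adaptations mentioned above, a Karplus-Strong plucked-string loop is one very simple example of the physical-modeling style of synthesis (bowed-string models suitable for violin-family instruments are considerably more involved). The sketch below is illustrative only; the excitation amplitude stands in for the captured string-excitation gesture, and all constants are assumptions.

    import numpy as np

    SAMPLE_RATE = 44100

    def plucked_string(freq_hz, excitation, duration_s=1.5, damping=0.996):
        # Karplus-Strong: a noise burst circulates in a delay line with a low-pass
        # averaging filter in the feedback loop; `excitation` (0..1) scales the initial energy.
        period = max(2, int(SAMPLE_RATE / freq_hz))
        delay_line = excitation * (2.0 * np.random.rand(period) - 1.0)
        out = np.empty(int(SAMPLE_RATE * duration_s))
        for n in range(out.size):
            out[n] = delay_line[n % period]
            averaged = 0.5 * (delay_line[n % period] + delay_line[(n + 1) % period])
            delay_line[n % period] = damping * averaged
        return out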
[0077] In general, musical scores in storage 556 may be included with a distribution of the synthetic musical instrument or may be retrieved on demand by a user via a communications interface, e.g., as an in-app purchase. Generally, scores may be encoded in accord with any suitable coding scheme, such as in accord with well-known musical instrument digital interface (MIDI-) or open sound control (OSC-) type standards, file/message formats and protocols (e.g., standard MIDI [.mid or .smf] formats; extensible music file [XMF] formats; extensible MIDI [.xmi] formats; RIFF-based MIDI [.rmi] formats; extended RMID formats, etc.). Formats may be augmented or annotated to indicate operative windows for adaptive tempo management and/or musical phrase boundaries or key notes.
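As an illustrative sketch only, a standard MIDI file could be reduced to the temporally sequenced note selections and score-coded velocities used to drive the visual cues, for example as follows. The use of the mido package here is an assumption; any MIDI- or OSC-capable parser could serve.

    import mido  # assumed third-party MIDI parser; not named in the disclosure

    def load_score(path):
        # Return a list of (onset_seconds, note_number, score_coded_velocity) tuples.
        score, clock = [], 0.0
        for msg in mido.MidiFile(path):   # iteration yields delta times in seconds, merged across tracks
            clock += msg.time
            if msg.type == 'note_on' and msg.velocity > 0:
                score.append((clock, msg.note, msg.velocity))
        return score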
Performance Grading, Evaluation or Scoring
[0080] In some embodiments and game-play modes, note soundings by a user-musician are scored or credited to a grade if the selections, timings, velocities, and/or after-touch key pressures expressed in the form of captured note sounding gestures correspond to visually-cued aspects of the musical score. Thus, grading of a user's expressed performance (653) will be understood as follows: [0081] A) with respect to individually cued notes, notes struck in horizontal (lateral) alignment with the horizontal screen position of the visual note cue (i.e., tapping the screen on top of the note) are credited as proper note selections, [0082] B) likewise with respect to individually cued notes, chords, and members of cued chords, applied finger contact forces are evaluated for at least relative correspondence with cued note velocities and after-touch key pressure, and [0083] C) with respect to both chords and individually cued notes, the notes (or constituent notes) struck between the time they vertically enter the horizontal highlighted scoring region (or sounding zone) and the time they leave the region are likewise credited (in accord with a current tempo). Notes struck before or after the region are not credited, but may nonetheless contribute to a speeding up or slowing down of the current tempo in cases or embodiments that optionally provide adaptive tempo.
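A simplified crediting function following criteria A), B) and C) above is sketched below for illustration. The tolerances, point values and field names are assumptions, not values taken from the disclosure.

    def credit_note(cue, strike, lateral_tol_px=20.0, velocity_tol=24):
        # `cue` and `strike` are assumed dicts:
        #   cue:    {'x_px', 'zone_enter_ms', 'zone_exit_ms', 'velocity'}
        #   strike: {'x_px', 'time_ms', 'velocity'}
        points = 0
        # (A) lateral alignment with the visually cued note position
        if abs(strike['x_px'] - cue['x_px']) <= lateral_tol_px:
            points += 1
            # (C) struck while the cue traverses the highlighted sounding zone
            if cue['zone_enter_ms'] <= strike['time_ms'] <= cue['zone_exit_ms']:
                points += 1
                # (B) expressed velocity at least roughly corresponds to the cued velocity
                if abs(strike['velocity'] - cue['velocity']) <= velocity_tol:
                    points += 1
        return points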
[0084] In this manner, songs that are longer and have more notes will yield potentially higher scores, or at least the opportunity therefor. The music itself becomes a difficulty metric for the performance: some songs will be easier (and contain fewer notes, simpler sequences and pacings, etc.), while others will be harder (and may contain more notes, more difficult note/chord sequences, varied note velocities, after-touch key pressures, paces, etc.). Users can compete for top scores on a song-by-song basis, so variations in difficulty across songs are not a concern.
Expressiveness
[0085] A flexible performance grading system will generally allow users to create expressive musical performances. As will be appreciated by many a musician, successful and pleasing musical performances are generally not contingent upon performing to precisely-specified note velocities or to an absolute and strict single tempo. Instead, variations in expressed note velocities and tempo are commonly (and desirably) used as intentional musical artifacts by performers, emphasizing and deemphasizing certain notes, chords or members of a chord, embellishing with note sustains or variations in after-touch key pressures, speeding up or slowing down phrases, etc., to add emphasis. These modulations in tempo (onsets and sustains) as well as note velocity and/or after-touch (or post-onset key pressure) can all contribute to expressiveness. Accordingly, in the synthetic piano implementations described herein, we aim to allow users to be expressive while remaining, generally speaking, rhythmically and otherwise consistent with the musical score.
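As a minimal sketch of the weighted blending of score-coded and user-expressed dynamics recited in claims 11-16, an effective note velocity could be computed as follows. The default 25%/75% split mirrors the example of claim 13; the function and parameter names are illustrative assumptions.

    def effective_velocity(score_velocity, user_velocity, user_weight=0.75):
        # Weighted sum of score-coded and user-expressed velocities.  `user_weight`
        # could be set by a slider, knob or selector on the multi-touch sensitive
        # display, or varied dynamically over the course of the performance.
        w = max(0.0, min(user_weight, 1.0))
        blended = (1.0 - w) * score_velocity + w * user_velocity
        return max(1, min(127, round(blended)))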
OTHER EMBODIMENTS
[0087] While the invention(s) is (are) described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the invention(s) is not limited to them. Many variations, modifications, additions, and improvements are possible. For example, while a synthetic piano implementation has been used as an illustrative example, variations on the techniques described herein for other synthetic musical instruments such as string instruments (e.g., guitars, violins, etc.) and wind instruments (e.g., trombones) will be appreciated. Furthermore, while certain illustrative processing techniques have been described in the context of certain illustrative applications, persons of ordinary skill in the art will recognize that it is straightforward to modify the described techniques to accommodate other suitable signal processing techniques and effects.
[0088] Embodiments in accordance with the present invention may take the form of, and/or be provided as, a computer program product encoded in a machine-readable medium as instruction sequences and other functional constructs of software, which may in turn be executed in a computational system (such as an iPhone handheld, mobile device or portable computing device) to perform methods described herein. In general, a machine readable medium can include tangible articles that encode information in a form (e.g., as applications, source or object code, functionally descriptive information, etc.) readable by a machine (e.g., a computer, computational facilities of a mobile device or portable computing device, etc.) as well as tangible storage incident to transmission of the information. A machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., disks and/or tape storage); optical storage medium (e.g., CD-ROM, DVD, etc.); magneto-optical storage medium; read only memory (ROM); random access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or other types of medium suitable for storing electronic instructions, operation sequences, functionally descriptive information encodings, etc.
[0089] In general, plural instances may be provided for components, operations or structures described herein as a single instance. Boundaries between various components, operations and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention(s). In general, structures and functionality presented as separate components in the exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionality presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the invention(s).