G10H2220/121

GESTURE-ENABLED INTERFACES, SYSTEMS, METHODS, AND APPLICATIONS FOR GENERATING DIGITAL MUSIC COMPOSITIONS

This disclosure is directed to systems, methods, apparatuses, and techniques that utilize enhanced gesture-based input mechanisms to facilitate rapid creation and editing of digital music compositions. These technologies can be specially designed and configured to optimize creation, editing, and/or sharing of digital music compositions on mobile electronic devices that include capacitive sensing mechanisms. The technologies include multi-gesture functionalities that enable users to view and access various notation customization features in a compact space of a mobile device display. Additionally, the technologies encompass improved data storage models that enable underlying notation data to be accessed in multiple operational modes, and permit frequencies or pitches of notations to be accurately generated and incorporated into audio signals.
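The abstract's final point, that notation frequencies or pitches can be accurately generated and incorporated into audio signals, follows the standard equal-temperament relationship. A minimal sketch in Python (the note-to-frequency formula is standard; the `render_tone` helper and its parameters are illustrative assumptions, not taken from the disclosure):

```python
import math

def note_to_frequency(midi_note: int, a4_hz: float = 440.0) -> float:
    """Equal-temperament frequency for a MIDI note number (A4 = note 69)."""
    return a4_hz * 2.0 ** ((midi_note - 69) / 12.0)

def render_tone(midi_note: int, duration_s: float, sample_rate: int = 44100):
    """Generate raw sine-wave samples at the notation's pitch, suitable
    for incorporation into an audio signal."""
    freq = note_to_frequency(midi_note)
    n_samples = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq * i / sample_rate)
            for i in range(n_samples)]
```

For example, middle C (note 60) maps to roughly 261.63 Hz, and raising a note by an octave (12 semitones) doubles its frequency.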

System and method for generating musical score

A method for generating a musical score based on user performance during playing a keyboard instrument may include detecting a status change of a plurality of execution devices of the keyboard instrument. The method may include generating a first signal according to the detected status change. The method may include generating a second signal indicating a plurality of timestamps. The method may include determining a tune of the musical score based on the first signal. The method may include determining a rhythm of the musical score based on the second signal. The method may further include generating the musical score based on the tune and the rhythm of the musical score.
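The two-signal decomposition described above can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the `KeyEvent` structure, field names, and the use of inter-onset intervals for rhythm are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class KeyEvent:
    key: int          # index of the execution device (key) that changed status
    pressed: bool     # True on key-down (first signal: status change)
    timestamp: float  # seconds (second signal: timing)

def transcribe(events):
    """Derive a tune (pitch sequence) from key status changes and a
    rhythm (inter-onset intervals) from the timestamps."""
    onsets = [e for e in events if e.pressed]
    tune = [e.key for e in onsets]
    rhythm = [b.timestamp - a.timestamp for a, b in zip(onsets, onsets[1:])]
    return tune, rhythm
```

A musical score would then be assembled by quantizing the rhythm values against a beat grid and pairing them with the tune.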

METHOD AND SYSTEM FOR AUTOMATIC MUSIC TRANSCRIPTION AND SIMPLIFICATION
20230099808 · 2023-03-30

Provided are systems and methods for transforming a digital score file into one or more of a plurality of levels of simplified visualization outputs. Methods of the present invention may be computer implemented. Systems of the present invention may include at least one display device, a non-transitory memory having instructions embedded thereon, and a processor in communication with the non-transitory memory and the at least one display device. Systems and methods of the present invention may be configured to receive at least one digital score file, upon which one or more simplification rules are executed, resulting in at least one simplified visualization output. Simplification rules may include, but are not limited to, song length, tempo adjustment, tie, rhythm, harmonic rhythm, and chord. One or more simplified visualization outputs are then provided.
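The rule-pipeline architecture described above can be sketched as a sequence of score transformations. The two rules below (a tempo-adjustment rule and a tie-merging rule) and the dictionary score representation are hypothetical examples, not the patent's actual rule set:

```python
def rule_halve_tempo_if_fast(score):
    """Hypothetical tempo-adjustment rule: slow very fast pieces down."""
    if score["tempo_bpm"] > 140:
        score = dict(score, tempo_bpm=score["tempo_bpm"] // 2)
    return score

def rule_merge_ties(score):
    """Hypothetical tie rule: merge tied notes into one longer note."""
    merged = []
    for note in score["notes"]:
        if merged and note.get("tied_from_prev"):
            merged[-1] = dict(merged[-1],
                              beats=merged[-1]["beats"] + note["beats"])
        else:
            merged.append(dict(note))
    return dict(score, notes=merged)

def simplify(score, rules):
    """Execute a pipeline of simplification rules on a digital score file,
    yielding a simplified visualization output."""
    for rule in rules:
        score = rule(score)
    return score
```

Different difficulty levels would correspond to different rule subsets applied to the same input score.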

MULTIDIMENSIONAL GESTURES FOR MUSIC CREATION APPLICATIONS
20230032765 · 2023-02-02

A graphical user interface for music creation applications, such as score notation applications and digital audio workstations, includes multi-dimensional gestures. To enter a sound event into a musical project, a user uses an input device to select and drag a desired sound event in one or more dimensions. The relative position or rate of movement along a given dimension defines the value of the sound event parameter allocated to that dimension. The sound event is entered into the project when the selection is released. The user inputs the gesture using a pointing device such as a mouse, a stylus with a touch screen, or a finger on a touch screen. Stylus dimensions mapped to sound event parameters may include horizontal and vertical stylus tip positions, vertical and horizontal tilt of the stylus, and stylus tip pressure. Sound event parameters controlled by the gestures may include diatonic pitch, chromatic inflection, and duration.
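The dimension-to-parameter allocation described above can be sketched as a simple mapping function. The specific allocation here (drag distance to duration, vertical position to diatonic pitch, stylus tilt to chromatic inflection) is one illustrative assumption among the combinations the abstract permits:

```python
def gesture_to_event(drag):
    """Map gesture dimensions to sound event parameters.
    Assumed mapping: y -> diatonic pitch, tilt -> chromatic inflection,
    x (drag distance in beats) -> duration, quantized to sixteenths."""
    diatonic = ["C", "D", "E", "F", "G", "A", "B"]
    pitch = diatonic[int(drag["y"]) % 7]
    tilt = drag.get("tilt", 0)
    inflection = "#" if tilt > 0 else ("b" if tilt < 0 else "")
    duration_beats = max(0.25, round(drag["x"] * 4) / 4)
    return {"pitch": pitch + inflection, "beats": duration_beats}
```

On release of the selection, the resulting event dictionary would be committed to the project.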

VIRTUAL-MUSICAL-INSTRUMENT-BASED AUDIO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

A virtual-musical-instrument-based audio processing method is provided. In the method, a video is played. A virtual musical instrument is displayed in the video when the virtual musical instrument is matched with at least one musical instrument graphic element in the video. Played audio of the virtual musical instrument is outputted according to interactions with the at least one musical instrument graphic element matched with the virtual musical instrument in the video. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated.

AUTOMATIC MUSIC DOCUMENT DISPLAYING ON PERFORMING MUSIC
20230067175 · 2023-03-02

A user interface presents structural musical information in a score so that both the start and end points of each jump in the score are visible simultaneously. Each jump is presented in a manner that allows the user to select, during performance, which of several alternatives to follow when approaching a decision point such as a repeat in the song.
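The jump-and-decision-point structure described above can be modeled as a table of conditional jumps over measure numbers. This is an illustrative data model, not the patent's; the `(target, max_passes)` encoding is an assumption that captures a simple repeat:

```python
def next_measure(current, pass_counts, jumps):
    """Resolve the next measure, honoring repeat jumps.
    `jumps` maps a measure number to (target, max_passes): the jump is
    taken until it has been taken max_passes times, then play falls
    through to the following measure. `pass_counts` is mutated to track
    how often each jump has fired."""
    if current in jumps:
        target, max_passes = jumps[current]
        taken = pass_counts.get(current, 0)
        if taken < max_passes:
            pass_counts[current] = taken + 1
            return target
    return current + 1
```

Because both the jump source (`current`) and its target are explicit in the table, a UI can render the start and end of every jump at once and let the performer override the default choice at a decision point.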

AI Tool to Improve Music Performance
20230154446 · 2023-05-18

Disclosed embodiments include systems and methods to teach and analyze a student's progress in learning to play a musical instrument, sing, or perform other musical endeavors. Embodiments include the production of an AI score, or music AI score, which may be an extraction of performance parameters such as a student's tone, speed, rhythm, pitch, loudness, and other metrics. A music AI score may also track changes in measured performance while playing a piece of music. Such changes within a piece of music, or over time across various pieces of music, can be valuable in a student's self-assessment or in tailoring a music teacher's approach to the particular student. A music or singing AI score can help a student select and prioritize their music repertoire, focus performance efforts, optimize their time schedule, improve their appreciation for music, and improve the overall quality of the music performance and the instructor relationship.
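An extraction of performance parameters into a single AI score might look like the toy computation below. The metrics (pitch accuracy, rhythm steadiness) and the 60/40 weighting are purely illustrative assumptions; the disclosure does not specify a formula.

```python
def music_ai_score(reference_pitches, performance):
    """Toy AI score: combine pitch accuracy against a reference with
    rhythm steadiness (low variation in inter-onset intervals) into a
    0-100 value. `performance` is a list of (pitch, onset_seconds) pairs
    with at least three notes."""
    played = [pitch for pitch, _ in performance]
    correct = sum(r == p for r, p in zip(reference_pitches, played))
    pitch_acc = correct / len(reference_pitches)

    onsets = [t for _, t in performance]
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    mean_ioi = sum(iois) / len(iois)
    deviation = sum(abs(x - mean_ioi) for x in iois) / (len(iois) * mean_ioi)
    steadiness = 1.0 - min(1.0, deviation)

    return round(100 * (0.6 * pitch_acc + 0.4 * steadiness), 1)
```

Tracking this value across practice sessions would give the longitudinal view of progress that the abstract describes.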

SYSTEMS AND METHODS FOR TRANSPOSING SPOKEN OR TEXTUAL INPUT TO MUSIC
20230197058 · 2023-06-22

Described herein are real-time musical translation devices (RETM) and methods of use thereof. Exemplary uses of RETMs include optimizing the understanding and/or recall of an input message for a user and improving a cognitive process in a user.
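One way to transpose textual input to music, sketched below, is a deterministic letter-to-scale-degree mapping so that the same message always yields the same melodic pattern (a property that could aid recall). This mapping is entirely hypothetical; the abstract does not disclose how an RETM performs the translation.

```python
def transpose_text_to_melody(text, scale=(60, 62, 64, 65, 67, 69, 71)):
    """Hypothetical mapping: each letter is assigned a degree of a
    major scale (MIDI note numbers, C major by default), so a textual
    message becomes a repeatable melody."""
    melody = []
    for ch in text.lower():
        if ch.isalpha():
            melody.append(scale[(ord(ch) - ord("a")) % len(scale)])
    return melody
```

Non-letter characters are skipped, so punctuation and spacing do not perturb the melodic pattern.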

AUTOMATIC PERFORMANCE SYSTEM, AUTOMATIC PERFORMANCE METHOD, AND SIGN ACTION LEARNING METHOD
20170337910 · 2017-11-23

An automatic performance system includes a sign detector configured to detect a sign action of a performer performing a musical piece, a performance analyzer configured to sequentially estimate a performance position in the musical piece by analyzing an acoustic signal representing the performed sound in parallel with the performance, and a performance controller configured to control an automatic performance device so that its automatic performance of the musical piece is synchronized with the sign action detected by the sign detector and with the progress of the performance position estimated by the performance analyzer.
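The synchronization between the estimated performance position and the automatic performance device can be sketched as a simple feedback loop. The proportional-control formulation, gain value, and rate clamp below are illustrative assumptions, not the patent's control law:

```python
def playback_rate(estimated_pos, machine_pos, base_rate=1.0, gain=0.5):
    """Adjust the automatic-performance playback rate so the machine part
    converges on the performer's estimated score position.
    Positions are in beats; a positive error means the performer is ahead,
    so the machine speeds up. The rate is clamped to stay positive."""
    error = estimated_pos - machine_pos
    return max(0.1, base_rate + gain * error)
```

A detected sign action (for example, a cue to begin) would reset or gate this loop, while the analyzer's position estimates drive it continuously during the piece.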