Patent classifications
G10H2220/241
Apparatus and method of sound modulation using touch screen with pressure sensor
Disclosed are an apparatus and a method of generating a sound by using a touch screen. A sound modulation apparatus according to the present invention includes: a sensor information input unit configured to receive sensing information on the position at which a user's touch input is applied to the screen and on the pressure of that touch input; and a sound modulation unit configured to set the tone frequency and volume of the sound to be output, setting the tone frequency according to the position at which the touch input is applied and setting the volume according to the magnitude of the pressure of the touch input.
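The claimed mapping (touch position selects the tone frequency, touch pressure sets the volume) could be sketched as follows. The function name, normalized inputs, and frequency range are illustrative assumptions, not details taken from the patent.

```python
def touch_to_tone(x_norm: float, pressure_norm: float,
                  f_min: float = 110.0, f_max: float = 880.0):
    """Map a normalized touch position (0..1) to a tone frequency and a
    normalized touch pressure (0..1) to an output volume."""
    # Position along the screen selects the pitch linearly.
    frequency = f_min + (f_max - f_min) * max(0.0, min(1.0, x_norm))
    # Pressure magnitude drives the volume, clamped to a valid range.
    volume = max(0.0, min(1.0, pressure_norm))
    return frequency, volume
```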
Electronic device and method for reproducing sound in the electronic device
An electronic device and a method for reproducing a sound in the electronic device are provided. The electronic device includes a touchscreen displaying a keyboard having a plurality of keys and a plurality of sound source buttons corresponding respectively to a plurality of different sound sources, a processor connected electrically to the touchscreen, and a memory connected electrically to the processor, wherein the memory stores instructions that are executed to cause the processor to perform control such that when an input to at least one key among the plurality of keys is received, the sound source corresponding to at least one sound source button selected among the plurality of sound source buttons is reproduced as a sound corresponding to the received input.
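The described control flow (a selected sound-source button determines which sound is reproduced when a key is pressed) might be sketched like this. The class, sample tables, and method names are invented placeholders, not part of the patent disclosure.

```python
class VirtualKeyboard:
    """Sketch: keys plus selectable sound sources, as in the abstract."""

    def __init__(self, sound_sources: dict):
        # sound_sources maps a source name to a {key: sample} table.
        self.sound_sources = sound_sources
        self.selected = next(iter(sound_sources))  # default source button

    def select_source(self, name: str):
        # A tap on a sound source button changes the active source.
        if name in self.sound_sources:
            self.selected = name

    def press_key(self, key: str):
        # Reproduce the selected source's sound for the pressed key.
        return self.sound_sources[self.selected].get(key)
```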
User interfaces for virtual instruments
Embodiments of the present disclosure can provide systems, methods, and computer-readable media for implementing user interfaces for interacting with a virtual instrument. For example, a first touch input indicating a string location of a plurality of string locations within a note selection area may be received. Audio output corresponding to the string location may be presented on a speaker based at least in part on the first touch input. A second touch input corresponding to an ornamental interface element of the user interface may be received. In response to the first and second touch inputs, a series of two or more audio outputs may be presented on the speaker according to a predetermined pattern.
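One way to picture the ornament behavior: a second touch on an ornamental element expands the note selected by the first touch into a short predetermined series. The mordent-like pattern of semitone offsets below is an illustrative assumption, not the patent's pattern.

```python
def ornament_notes(base_note: int, pattern=(0, 2, 0, -1, 0)):
    """Return the sequence of MIDI notes produced when an ornament
    (here a hypothetical mordent-like figure) is applied to the note
    selected at the string location."""
    return [base_note + offset for offset in pattern]
```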
ERGONOMIC ELECTRONIC MUSICAL INSTRUMENT WITH PSEUDO-STRINGS
An ergonomic, portable, electronic, string-like instrument that utilizes a string-like interface. The string-like interface is tactile for sightless playability and capable of advanced input such as force and pressure sensitivity. The string-like interface functions to select a note, trigger a selected note, or select and play a note on the instrument or an external peripheral. The instrument is played using the techniques of multiple stringed instruments, and its ergonomics allow the user to hold and handle the device in a manner consistent with playing techniques familiar to musicians of multiple instruments. It is internally or externally powered and connects directly to industry-standard musical hardware such as MIDI devices, amplifiers and multi-track recorders.
Overlay for Touchscreen Piano Keyboard
The present invention relates to an overlay for a touchscreen piano keyboard implemented on an iPad or similar touchscreen device. It includes a screen-covering sheet whose top surface is pitted with hollows, such that it blocks activation of the touchscreen piano keys when lightly pressed but not when pressed more firmly, thereby emulating the pressing of physical piano keys.
Playback, recording, and analysis of music scales via software configuration
Playback, recording, and analysis of music scales via software configuration. In an embodiment, a graphical user interface is generated with staff and keyboard canvases, visually representing a music staff and keyboard, respectively, along with a scale input, parameter input(s), and a play input. In response to selection of a scale, the staff canvas is updated to visually represent the notes in the scale. In response to selection of a musical parameter, the staff canvas and/or keyboard canvas are updated to reflect the musical parameter. In response to selection of the play input, a soundtrack of the scale is output, while simultaneously highlighting the note being played on the staff canvas and the associated key on the keyboard canvas.
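A minimal sketch, assuming standard interval patterns, of how a selected scale could be expanded into the notes that the staff canvas displays and the playback loop highlights. The function and table names are hypothetical.

```python
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Semitone offsets from the root for two common scales (assumed here).
SCALE_INTERVALS = {
    "major": [0, 2, 4, 5, 7, 9, 11],
    "natural_minor": [0, 2, 3, 5, 7, 8, 10],
}

def scale_notes(root: str, scale: str):
    """Expand a root note and scale name into the scale's note names."""
    start = NOTE_NAMES.index(root)
    return [NOTE_NAMES[(start + i) % 12] for i in SCALE_INTERVALS[scale]]
```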
PLAYBACK, RECORDING, AND ANALYSIS OF MUSIC SCALES VIA SOFTWARE CONFIGURATION
Playback, recording, and analysis of music scales via software configuration. In an embodiment, a graphical user interface is generated with staff and keyboard canvases, visually representing a music staff and keyboard, respectively, along with a scale input, parameter input(s), and a play input. In response to selection of a scale, the staff canvas is updated to visually represent the notes in the scale. In response to selection of a musical parameter, the staff canvas and/or keyboard canvas are updated to reflect the musical parameter. In response to selection of the play input, a soundtrack of the scale is output, while simultaneously highlighting the note being played on the staff canvas and the associated key on the keyboard canvas.
SEPARATE ISOLATED AND RESONANCE SAMPLES FOR A VIRTUAL INSTRUMENT
A virtual instrument can manage separate static and dynamic samples for various notes that can be played by the virtual instrument. In some cases, the static samples correspond to resonance sounds recorded for an instrument and are the same for every note. However, the dynamic samples may correspond to isolated sounds that are recorded for each variation of a note that can be played. In response to a user's selection of a note on a user interface of the virtual instrument, the virtual instrument may determine a rule for layering the various static and dynamic samples for playback.
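One possible layering rule of the kind described, in which the note-specific dynamic (isolated) sample is mixed with the shared static (resonance) sample at playback. The mixing gain and padding behavior are illustrative assumptions.

```python
def layer_samples(isolated, resonance, resonance_gain=0.3):
    """Mix a note-specific isolated sample with the shared resonance
    sample, padding the shorter buffer with silence before summing."""
    n = max(len(isolated), len(resonance))
    iso = list(isolated) + [0.0] * (n - len(isolated))
    res = list(resonance) + [0.0] * (n - len(resonance))
    # Sum the dynamic layer with an attenuated static (resonance) layer.
    return [a + resonance_gain * b for a, b in zip(iso, res)]
```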
USER INTERFACES FOR VIRTUAL INSTRUMENTS
Embodiments of the present disclosure can provide systems, methods, and computer-readable media for implementing user interfaces for interacting with a virtual instrument. For example, a first touch input indicating a string location of a plurality of string locations within a note selection area may be received. Audio output corresponding to the string location may be presented on a speaker based at least in part on the first touch input. A second touch input corresponding to an ornamental interface element of the user interface may be received. In response to the first and second touch inputs, a series of two or more audio outputs may be presented on the speaker according to a predetermined pattern.
Synthetic musical instrument with touch dynamics and/or expressiveness control
Notwithstanding practical limitations imposed by mobile device platforms and applications, truly captivating musical instruments may be synthesized in ways that allow musically expressive performances to be captured and rendered in real-time. Synthetic musical instruments that provide a game, grading or instructional mode are described in which one or more qualities of a user's performance are assessed relative to a musical score. By providing a range of modes (from score-assisted to fully user-expressive), user interactions with synthetic musical instruments are made more engaging and tend to capture user interest over generally longer periods of time. Synthetic musical instruments are described in which force dynamics of user gestures (such as finger contact forces applied to a multi-touch sensitive display or surface and/or the temporal extent and applied pressure of sustained contact thereon) are captured and drive the digital synthesis in ways that enhance expressiveness of user performances.
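The force-dynamics idea above can be pictured as a mapping from gesture measurements to synthesis parameters. The specific curves below (a compressive loudness exponent, a pressure-opened brightness control, a capped sustain term) are illustrative assumptions, not the patent's method.

```python
def gesture_to_synth_params(force: float, duration_s: float):
    """Map normalized contact force (0..1) and sustained-contact duration
    in seconds to hypothetical synthesis parameters."""
    force = max(0.0, min(1.0, force))
    amplitude = force ** 0.6               # compressive loudness curve
    brightness = 0.2 + 0.8 * force         # harder touches open the filter
    sustain = min(1.0, duration_s / 2.0)   # cap the sustain contribution
    return {"amplitude": amplitude, "brightness": brightness,
            "sustain": sustain}
```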