G10H2210/251

ELECTRONIC MUSICAL INSTRUMENT AND CONTROL METHOD
20190096374 · 2019-03-28

An electronic musical instrument including a musical instrument body which is supported by a first finger of an instrument player's hand and at least one finger other than the first finger; scale keys which are provided at positions contacted by the fingers other than the first finger on one surface of the musical instrument body, each of which specifies a scale of a musical sound; a touchpad which is provided in an area contacted by the first finger on another surface of the musical instrument body and includes a sensor having a planar detection area for detecting a contact position of the first finger; and a processor which controls emission of the musical sound whose scale has been specified by the scale key in accordance with the contact position of the first finger detected by the touchpad.
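The abstract above describes a touchpad whose planar contact position modulates how a key-selected note is emitted. A minimal sketch of that idea, assuming (hypothetically) that the vertical axis maps to loudness and the horizontal axis to pitch bend; the function names, sensor dimensions, and parameter ranges are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: map a thumb's (x, y) contact position on a planar
# touch sensor to note-emission parameters for a key-selected pitch.
# Dimensions and ranges are assumptions, not from the patent.

def emission_params(x, y, width=100, height=40):
    """Map a contact position to (MIDI volume 0-127, pitch bend -1.0..1.0)."""
    volume = round(127 * min(max(y / height, 0.0), 1.0))  # vertical -> loudness
    bend = 2.0 * min(max(x / width, 0.0), 1.0) - 1.0      # horizontal -> bend
    return volume, bend

def play_note(scale_key_pitch, contact):
    """Combine the scale key's pitch with touchpad-derived parameters."""
    volume, bend = emission_params(*contact)
    return {"pitch": scale_key_pitch, "volume": volume, "bend": bend}
```

A processor loop would call `play_note` whenever a scale key is pressed, re-reading the touchpad position to vary the emission continuously.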

ACOUSTIC GUITAR USER INTERFACE
20190066642 · 2019-02-28

An acoustic guitar is provided that includes a neck and a body. The acoustic guitar also includes a user interface module including an audio effect module configured to implement one or more audio effects, and one or more effect controllers, with each effect controller being configured to set a level of a corresponding audio effect implemented by the audio effect module. The user interface module further includes at least one input blend controller and a voice controller configured to allow a user to select a patch from a plurality of available patches, with each patch of the plurality of available patches comprising a configuration of one or more audio effects set at various levels to arrive at a desired effect template.
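The "patch" concept in this abstract is a named configuration of several effects, each set at a level, selectable by a voice controller. A minimal data-structure sketch, with effect names and levels that are purely illustrative assumptions:

```python
# Hypothetical sketch of the "patch" idea: each patch bundles several
# audio effects with preset levels; a voice controller selects one.
# Patch names, effect names, and levels are illustrative assumptions.

PATCHES = {
    "warm_hall": {"reverb": 0.7, "chorus": 0.2, "compression": 0.4},
    "bright_stage": {"reverb": 0.3, "delay": 0.5, "eq_treble": 0.8},
}

def select_patch(name, patches=PATCHES):
    """Return the effect configuration for the chosen patch."""
    if name not in patches:
        raise KeyError(f"unknown patch: {name}")
    return dict(patches[name])  # copy so live tweaks don't mutate the preset
```

The per-effect controllers described in the abstract would then adjust individual levels within the returned configuration.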

Dynamic Audio Signal Processing System
20180357992 · 2018-12-13

A dynamic audio signal processing system for dynamically and continuously applying effects to an audio signal in real-time based on detected audio signal attributes. The dynamic audio signal processing system generally includes a signal processor including an input for receiving an audio signal, such as from an instrument. The signal processor may include an input analyzer adapted to detect one or more attributes of the audio signal continuously and in real-time. The signal processor may also include one or more signal conditioners in parallel with the input analyzer, each of the signal conditioners being adapted to dynamically apply one or more effects to the audio signal based on the one or more attributes detected by the input analyzer. The signal processor may include a selector for selecting which, if any, of the signal conditioners is to apply effects to the audio signal based on the detected attributes.
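The analyzer/selector/conditioner arrangement above can be sketched in a few lines. This is a minimal sketch assuming RMS level as the single detected attribute and two toy conditioners (a soft limiter and make-up gain); the thresholds and effects are assumptions, not the patented design:

```python
# Minimal sketch: an input analyzer measures the block, and a selector
# routes it through whichever conditioner's predicate matches.
# Attribute choice (RMS) and thresholds are assumptions.
import math

def analyze(block):
    """Return attributes of an audio block (here: just RMS level)."""
    rms = math.sqrt(sum(s * s for s in block) / len(block))
    return {"rms": rms}

CONDITIONERS = [
    # (predicate on attributes, per-sample effect)
    (lambda a: a["rms"] > 0.5, lambda s: max(-0.5, min(0.5, s))),  # soft limiter
    (lambda a: a["rms"] <= 0.5, lambda s: s * 1.2),                # make-up gain
]

def process(block):
    """Selector: pick the first conditioner whose predicate matches."""
    attrs = analyze(block)
    for predicate, effect in CONDITIONERS:
        if predicate(attrs):
            return [effect(s) for s in block]
    return block  # the selector may apply no conditioner at all
```

A real-time implementation would run this per buffer, so the applied effect changes continuously as the signal's attributes change.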

ACOUSTIC GUITAR USER INTERFACE
20180315403 · 2018-11-01

An acoustic guitar is provided that includes a neck and a body. The acoustic guitar also includes a user interface module including an audio effect module configured to implement one or more audio effects, and one or more effect controllers, with each effect controller being configured to set a level of a corresponding audio effect implemented by the audio effect module. The user interface module further includes at least one input blend controller and a voice controller configured to allow a user to select a patch from a plurality of available patches, with each patch of the plurality of available patches comprising a configuration of one or more audio effects set at various levels to arrive at a desired effect template.

Acoustic guitar user interface
10115379 · 2018-10-30

An acoustic guitar is provided that includes a neck and a body. The acoustic guitar also includes a user interface module including an audio effect module configured to implement one or more audio effects, and one or more effect controllers, with each effect controller being configured to set a level of a corresponding audio effect implemented by the audio effect module. The user interface module further includes at least one input blend controller and a voice controller configured to allow a user to select a patch from a plurality of available patches, with each patch of the plurality of available patches comprising a configuration of one or more audio effects set at various levels to arrive at a desired effect template.

AUDIO DATA PROCESSING METHOD AND DEVICE
20180247629 · 2018-08-30

The present disclosure provides an audio data processing method performed at an electronic apparatus, the method including: obtaining a corresponding lyric file according to audio data to be processed; dividing the audio data according to a sentence in the lyric file to obtain an audio data segment; extracting data corresponding to an end syllable in the audio data segment; and performing harmonic processing on the data corresponding to the end syllable. In addition, an audio data processing device matching the method is further provided. The audio data processing method and device can prevent a harmonic sound effect from being applied to the entire audio data over its entire duration, thereby improving the authenticity of harmonic simulation.
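The pipeline in this abstract (split by lyric sentence, isolate the end syllable, harmonize only that tail) can be sketched as follows. This is an illustrative sketch only: the boundary format, the fixed tail fraction standing in for end-syllable detection, and the attenuated-copy "harmonic" are all assumptions:

```python
# Illustrative sketch of the described pipeline. The timestamp format,
# tail fraction, and harmonizing step are assumptions, not the patent's method.

def split_by_lyrics(samples, boundaries):
    """boundaries: sample indices where each lyric sentence ends."""
    segments, start = [], 0
    for end in boundaries:
        segments.append(samples[start:end])
        start = end
    return segments

def end_syllable(segment, tail_fraction=0.25):
    """Take the last part of a sentence segment as the end syllable."""
    n = max(1, int(len(segment) * tail_fraction))
    return segment[-n:]

def harmonize(tail, gain=0.5):
    """Crude stand-in for harmonic processing: mix in an attenuated copy."""
    return [s + gain * s for s in tail]
```

Because only each segment's tail is harmonized, the effect never covers the whole recording, which is the stated point of the method.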

CONTINUOUS PITCH-CORRECTED VOCAL CAPTURE DEVICE COOPERATIVE WITH CONTENT SERVER FOR BACKING TRACK MIX

Techniques have been developed to facilitate (1) the capture and pitch correction of vocal performances on handheld or other portable computing devices and (2) the mixing of such pitch-corrected vocal performances with backing tracks for audible rendering on targets that include such portable computing devices as well as desktops, workstations, gaming stations, and even telephony targets. Implementations of the described techniques employ signal processing techniques and allocations of system functionality that are suitable given the generally limited capabilities of such handheld or portable computing devices and that facilitate efficient encoding and communication of the pitch-corrected vocal performances (or precursors or derivatives thereof) via wireless and/or wired bandwidth-limited networks for rendering on portable computing devices or other targets.

CONTINUOUS SCORE-CODED PITCH CORRECTION

Vocal musical performances may be captured and continuously pitch-corrected at a mobile device for mixing and rendering with backing tracks in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured in the context of a karaoke-style presentation of lyrics in correspondence with audible renderings of a backing track. Such performances can be pitch-corrected in real-time at the mobile device in accord with pitch correction settings. In some cases, such pitch correction settings code a particular key or scale for the vocal performance or for portions thereof. In some cases, pitch correction settings include a score-coded melody sequence of note targets supplied with, or for association with, the lyrics and/or backing track. In some cases, pitch correction settings are dynamically variable based on gestures captured at a user interface.
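The score-coded correction described above amounts to snapping each detected pitch to the nearest of the note targets coded for that moment in the score. A minimal sketch under assumed data shapes (MIDI note numbers, one target list per analysis frame), not the actual implementation:

```python
# Minimal sketch of score-coded pitch correction. Data shapes
# (fractional MIDI pitches, per-frame target lists) are assumptions.

def correct_pitch(detected, targets):
    """Snap a detected pitch to the closest score-coded target note."""
    return min(targets, key=lambda t: abs(t - detected))

def correct_performance(frames, score):
    """frames: detected pitch per frame; score: allowed targets per frame."""
    return [correct_pitch(p, t) for p, t in zip(frames, score)]
```

Key- or scale-based correction is the special case where every frame shares the same target list (the pitch classes of the coded key).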

Methods, Devices and Computer Program Products for Interactive Musical Improvisation Guidance
20180144732 · 2018-05-24 · ·

A method and a device provide a user with the ability to freely improvise or play around with a selection of different chords and to receive visual guidance assisting the improvisation of melody, while also providing accompaniment consistent with the selection of chords. Sound files consistent with a user selection of a chord and/or a dynamic level are selected from an audio library and played, while the user is given visual cues on a user interface keyboard assisting the user to select notes that are consistent with the chord selected for the accompaniment.
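The visual-guidance step above reduces to computing which keyboard notes are consistent with the selected chord so the UI can highlight them. A hypothetical sketch using standard triad formulas; the chord vocabulary and key range are illustrative assumptions:

```python
# Hypothetical sketch of chord-consistent key highlighting.
# Chord formulas are standard; names and ranges are assumptions.

CHORD_INTERVALS = {"maj": (0, 4, 7), "min": (0, 3, 7), "dom7": (0, 4, 7, 10)}

def chord_tones(root, quality):
    """Pitch classes (0-11) belonging to the chord."""
    return {(root + i) % 12 for i in CHORD_INTERVALS[quality]}

def highlighted_keys(root, quality, low=60, high=72):
    """MIDI keys in [low, high] that the UI would light up as chord tones."""
    tones = chord_tones(root, quality)
    return [k for k in range(low, high + 1) if k % 12 in tones]
```

A fuller guidance scheme might highlight scale tones in a second color, but the chord-tone set is the core of the cue.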

System and method of generating music from electrical activity data
09968305 · 2018-05-15

The Plant Choir system comprises a software program and hardware that measure the electrical activity of a person, plant, or animal and translate those readings into music on a computing device. The system gathers electrical activity data using electrodermal activity (EDA) measurement devices. The EDA readings of the individual subjects are translated via the software into musical melodies in real time. The individual subject melodies are combined to create interesting harmonies similar to a choir. The music is rendered using a MIDI (Musical Instrument Digital Interface) programming interface of the computer operating system. The software allows the user to select program options and set music and program parameters. Variations in the EDA signal are interpreted as music. Each subject connected to the system is assigned a musical voice, and the voices are combined to create multi-part harmonies similar to a choir.
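The EDA-to-melody translation described above can be sketched as quantizing each reading into a note of a scale, one voice per subject. The value range, the choice of C major, and the mapping are assumptions for illustration, not the patented method:

```python
# Illustrative sketch of an EDA-to-MIDI mapping, one voice per subject.
# Input range, scale, and mapping are assumptions, not the patent's method.

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of a C major scale

def eda_to_midi(reading, lo=0.0, hi=10.0, base_note=60):
    """Map an EDA value in [lo, hi] to a MIDI note in one octave of C major."""
    frac = min(max((reading - lo) / (hi - lo), 0.0), 1.0)
    degree = min(int(frac * len(C_MAJOR)), len(C_MAJOR) - 1)
    return base_note + C_MAJOR[degree]

def choir(readings):
    """One note per subject; sounded together they form the 'choir' chord."""
    return [eda_to_midi(r) for r in readings]
```

Sending each subject's stream of notes on its own MIDI channel would give the multi-part, choir-like texture the abstract describes.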