Patent classifications
G10H2210/525
DYNAMIC MUSIC MODIFICATION
In response to an event in a videogame, music may be generated by electronically applying one or more functions that change a compositional nature of a musical input in a first tonality to generate a musical output in a second tonality. Data corresponding to the musical output may be recorded on a recording medium.
System and method for creating a sensory experience by merging biometric data with user-provided content
Systems and methods are provided for using a common “vocabulary,” predefined or dynamically generated based on user-provided content, to transform biometric and/or neurometric data collected from one or more people into a coherent audio and/or visual result. One method comprises receiving a first incoming signal from a bio-generated data sensing device worn by a first user; determining a first set of output values based on the first incoming signal, a common vocabulary comprising a list of possible output values, and a parameter file comprising a set of instructions for applying the common vocabulary to the first incoming signal to derive the first set of output values; generating a first output array comprising the first set of output values; and providing the first output array to an output delivery system configured to render the first output array as a first audio and/or visual output.
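The abstract's pipeline (incoming bio-signal, a vocabulary of possible output values, and a parameter file governing how the vocabulary is applied) can be sketched as below. The function name, the bucketing scheme, and the parameter-file contents are illustrative assumptions, not the patent's actual implementation.

```python
# Illustrative sketch: map a bio-generated signal to entries of a common
# "vocabulary". All names and the quantization scheme are assumptions.

def derive_output_values(signal, vocabulary, params):
    """Quantize each sample of `signal` into an entry of `vocabulary`.

    `params` plays the role of the abstract's parameter file: here it just
    supplies the expected signal range used for normalization.
    """
    lo, hi = params["min"], params["max"]
    out = []
    for sample in signal:
        # Clamp, normalize to [0, 1), then pick the matching vocabulary entry.
        frac = (min(max(sample, lo), hi - 1e-9) - lo) / (hi - lo)
        out.append(vocabulary[int(frac * len(vocabulary))])
    return out

# Example: a heart-rate trace rendered as pentatonic MIDI pitches.
vocabulary = [60, 62, 64, 67, 69]         # C major pentatonic
params = {"min": 50.0, "max": 120.0}      # plausible BPM range
print(derive_output_values([55, 72, 110], vocabulary, params))  # → [60, 62, 69]
```

The resulting output array would then be handed to an output delivery system for audio or visual rendering, as the abstract describes.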
Multidimensional gestures for music creation applications
A graphical user interface for music creation applications, such as score notation applications and digital audio workstations, includes multi-dimensional gestures. To enter a sound event into a musical project, a user uses an input device to select and drag a desired sound event in one or more dimensions. The relative position or rate of movement along a given dimension defines a value of a sound event parameter allocated to the given dimension. The sound event is entered into the project when the selection is released. The user inputs the gesture using a pointing device such as a mouse, stylus with a touch screen, or finger on a touch screen. Stylus dimensions mapped to sound event parameters may include horizontal and vertical stylus tip positions, vertical and horizontal tilt of the stylus, and stylus tip pressure. Sound event parameters controlled by the gestures may include diatonic pitch, chromatic inflection, and duration.
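The dimension-to-parameter mapping described above can be sketched as follows. The specific assignments (vertical position to diatonic pitch, tilt to chromatic inflection, pressure to duration) and all thresholds and ranges are assumptions chosen for illustration:

```python
# Illustrative sketch of mapping stylus-gesture dimensions to sound event
# parameters; the dimension assignments and ranges are assumptions.

DIATONIC = ["C", "D", "E", "F", "G", "A", "B"]

def gesture_to_event(y_pos, x_tilt, pressure, staff_height=70.0):
    """Translate one released gesture into a sound event.

    y_pos:    vertical tip position in pixels within the staff area
    x_tilt:   horizontal tilt in degrees; here tilt selects an accidental
    pressure: normalized 0..1; here it scales duration in beats
    """
    step = int(y_pos * len(DIATONIC) / staff_height) % len(DIATONIC)
    accidental = "#" if x_tilt > 15 else ("b" if x_tilt < -15 else "")
    duration = round(0.25 + pressure * 3.75, 2)   # 16th note up to whole note
    return {"pitch": DIATONIC[step] + accidental, "duration": duration}

# A tilted, light-pressure gesture one third of the way down the staff:
print(gesture_to_event(y_pos=20.0, x_tilt=20.0, pressure=0.2))
# → {'pitch': 'E#', 'duration': 1.0}
```

On release of the selection, an event dictionary like this would be committed to the project.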
Electronic musical instrument
Systems and methods are directed to generating music. In one example, an electronic musical instrument includes a first handheld unit. The electronic musical instrument further includes a second handheld unit, the second handheld unit being communicatively coupled to the first handheld unit. The first handheld unit includes a plurality of input controls configured to indicate a selection of a note of a musical scale. The second handheld unit is configured to initiate output of the selected note.
COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING AESTHETIC CHORD PROGRESSIONS AND KEY MODULATIONS IN MUSICAL COMPOSITIONS
Computer-based systems, devices, and methods for automatically generating aesthetic chord progressions and key modulations in musical compositions are described. Known harmonic relationships are expanded upon to produce a much richer set of harmonic transition probability models compared to conventional music theory, and these models are leveraged by a computer-based musical composition system to generate new musical compositions and variations of existing musical compositions. Techniques for enabling a computer-based musical composition system to automatically determine when to introduce a key modulation, what key to modulate to, and what chord progression(s) to use within the new key are all described.
Solfaphone
A 128-note MIDI-range monophonic musical keyboard instrument (100) includes an octave keypad (106) with eleven keys arranged in an analog clock-face format for octave selection with the thumb of one hand, and a pitch keypad (108) with twelve pitch keys similarly disposed in a clock-face arrangement around a central omnivalent thirteenth key (128), enabling the nondisjointed sounding of nonadjacent notes with the thumb of the other hand. Spatial manipulation of the device, such as tilting and jabbing, can switch octaves and activate other functions, enabling one-handed operation and overcoming small-screen space limitations. Aside from producing typical electronic piano or synthesizer sounds, the device can sing in a human voice an extended monosyllabic solfege covering all twelve pitch families of the common chromatic 12-tone even-tempered scale. A pictograph-based music notation (156) mirrors the circular geometry of the pitch and octave keyboards and facilitates the intuitive reading and playing of a melody.
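An extended monosyllabic solfege over all twelve pitch classes, as the abstract describes, amounts to one syllable per pitch class of the even-tempered scale. The syllable list below is a common ascending chromatic-solfege variant chosen for illustration; the patent's actual syllables may differ:

```python
# Sketch of an "extended monosyllabic solfege": one syllable per pitch class
# of the 12-tone even-tempered scale. The syllable list is an assumption.

SYLLABLES = ["do", "di", "re", "ri", "mi", "fa",
             "fi", "sol", "si", "la", "li", "ti"]

def solfege(midi_note):
    """Name a MIDI note with its chromatic solfege syllable and octave."""
    return f"{SYLLABLES[midi_note % 12]}{midi_note // 12 - 1}"

print(solfege(60))  # middle C → "do4"
print(solfege(61))  # C sharp → "di4"
```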
ELECTRONIC WIND INSTRUMENT AND KEY OPERATION DETECTION METHOD
An electronic wind instrument and key operation detection method are provided. The electronic wind instrument includes an instrument body and a plurality of keys which have an operation surface operated by a player's finger and are provided on an external surface of the instrument body. Among the plurality of keys, at least two keys disposed to sandwich or surround a predetermined region comprise restriction parts formed on the operation surfaces. The restriction parts restrict escape of the player's finger from between the at least two keys having the restriction parts formed thereon.
DYNAMIC MUSIC MODIFICATION
A method for electronic music generation comprising: electronically applying one or more functions that change one or more compositional elements of a musical input in a first tonality or other musical representation to generate a musical output in a second tonality or other musical representation; and recording data corresponding to the musical output in a recording medium, or rendering the musical transformation to a reproductive medium such as an amplifier with speakers or headphones.
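One minimal instance of a function that changes a compositional element from a first tonality to a second is a transposition between keys. The semitone lookup below covers only natural-note keys and is an illustrative assumption:

```python
# Minimal sketch of one tonality-changing function: transposing a melody
# from a first key to a second. The key table is an illustrative assumption.

KEY_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def transpose(melody, from_key, to_key):
    """Shift every MIDI note by the interval between the two keys."""
    shift = KEY_OFFSETS[to_key] - KEY_OFFSETS[from_key]
    return [note + shift for note in melody]

# A C-major motif re-rendered in G major, e.g. when a game event fires.
print(transpose([60, 64, 67], "C", "G"))  # → [67, 71, 74]
```

The transposed note list could then be written to a recording medium or rendered directly to audio output.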
METHOD FOR MUSIC COMPOSITION EMBODYING A SYSTEM FOR TEACHING THE SAME
A computer-implemented method for music composition embodying a system for teaching the same is provided. Each composition is retrievably stored as music data selectively broken apart into different elemental portions for generating and teaching the composition of hundreds of thousands of additional songs or accompaniments for over 40 musical instruments and all types of human voices. The present invention is adapted to allow the melodies of the additional songs or accompaniments to change range automatically in order to accommodate the note range of the selected instrument or human voice.