Patent classifications
G10H1/38
VIRTUAL TUTORIALS FOR MUSICAL INSTRUMENTS WITH FINGER TRACKING IN AUGMENTED REALITY
Systems, devices, media, and methods are described for presenting a tutorial in augmented reality on the display of a smart eyewear device. The system includes a marker registration utility for setting a marker on a musical instrument, a localization utility for locating the eyewear device relative to the marker location and the instrument, a virtual object rendering utility for presenting a series of virtual tutorial objects on the display near one or more actuators on the instrument, and a hand tracking utility for tracking the performer's finger locations in real time during playback of a song file. A high-definition video camera captures sequences of frames of video data. The series of virtual tutorial objects, in one example, includes graphical elements presented on a virtual scroll that appears to move toward the instrument at a speed correlated with the song tempo. The hand tracking utility calculates a set of expected fingertip coordinates based on a detected hand shape and a library of hand poses and landmarks.
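The virtual-scroll behavior described above (tutorial markers approaching the instrument at a speed correlated with the song tempo) can be sketched as a simple timing-to-position mapping. This is an illustrative sketch only; the function and parameter names (`scroll_position`, `px_per_beat`) are assumptions, not terms from the patent.

```python
# Hypothetical sketch: a tutorial marker approaches the actuator strike
# line at a speed proportional to the song tempo, reaching zero distance
# exactly when the note is due. px_per_beat is an assumed display scale.

def scroll_position(note_time_s: float, now_s: float,
                    tempo_bpm: float, px_per_beat: float = 40.0) -> float:
    """Distance (pixels) of a note marker from the strike line."""
    beats_remaining = (note_time_s - now_s) * tempo_bpm / 60.0
    return beats_remaining * px_per_beat

# A marker one beat away at 120 BPM sits 40 px from the strike line.
print(scroll_position(1.0, 0.5, 120.0))  # 40.0
```

Doubling the tempo doubles the apparent scroll speed, which is the correlation the abstract describes.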
MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM
A music generation device includes: an acquisition unit that acquires first stream data and second stream data different from the first stream data; an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment, based on a change in the first stream data; a melody generation unit that generates melody information, which is music data indicating a melody, based on a change in the second stream data; a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information; and an output unit that outputs the generated musical piece information.
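The pipeline in this abstract (accompaniment from one stream, melody from another, melody adjusted to the accompaniment key, then combined) can be sketched as follows. The key-adjustment rule shown here (raise out-of-key notes to the nearest in-key pitch) is a simplifying assumption for illustration, not the patent's method.

```python
# Hedged sketch of the melody-adjustment and combining steps. Pitch
# classes of C major stand in for the "key of the accompaniment".

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes (C=0 ... B=11)

def adjust_to_key(melody: list[int], key_pcs: set[int]) -> list[int]:
    """Raise any out-of-key MIDI note to the nearest in-key pitch above."""
    out = []
    for note in melody:
        while note % 12 not in key_pcs:
            note += 1
        out.append(note)
    return out

def combine(accompaniment: list[int], melody: list[int]) -> dict:
    """Combine the two parts into one musical-piece record."""
    return {"accompaniment": accompaniment, "melody": melody}

piece = combine([48, 52, 55], adjust_to_key([60, 61, 66], C_MAJOR))
# 61 (C#) is raised to 62 (D); 66 (F#) is raised to 67 (G).
```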
SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
Systems and methods are disclosed for generating color sets based on the musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process that analyzes musical pitch data associated with musical input to determine the pitch intervals included in the music. Pitch interval angles associated with the pitch intervals are applied to a tuned hue index to identify hue notes, ordered within the index, which are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets that are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from the musical interval angles associated with the received musical input.
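The interval-to-angle mapping above can be sketched by treating the twelve semitones as 360 degrees of a hue circle, so an interval of n semitones becomes a hue interval of n × 30 degrees. The 30-degree factor and the function names are illustrative assumptions, not values from the patent.

```python
# Hedged sketch: pitch intervals map to hue interval angles on a hue
# circle (12 semitones = 360 degrees, an assumed tuning).

def interval_to_hue_angle(semitones: int) -> float:
    """Map a pitch interval to a hue interval angle in degrees."""
    return (semitones % 12) * 30.0

def color_set(root_hue: float, intervals: list[int]) -> list[float]:
    """Build a chord-like set of hue angles from a root hue."""
    return [(root_hue + interval_to_hue_angle(i)) % 360.0 for i in intervals]

# A "major triad" of hues on a red root (0 deg): root, +4, +7 semitones.
print(color_set(0.0, [0, 4, 7]))  # [0.0, 120.0, 210.0]
```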
ELECTRONIC DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND METHOD THEREFOR
In an electronic device for an electronic musical instrument, a determination grace period, during which a plurality of user operations on the instrument are determined to be performed simultaneously, is set for a first section of a song based on data included in that section. During playback of the accompaniment for the first section, the automatic accompaniment advances from the first section to the next (second) section when a user operation is detected outside the determination grace period, and does not advance when the user operation is detected within the determination grace period.
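The advance/no-advance decision reduces to a single interval test. The sketch below is an illustrative reading of the rule; the time-based interface is an assumption.

```python
# Hedged sketch: advance the accompaniment to the next section only when
# the user operation falls outside the section's determination grace
# period (times in seconds are an assumed representation).

def should_advance(op_time_s: float, grace_start_s: float,
                   grace_end_s: float) -> bool:
    """True if the operation is outside the grace period, so the
    accompaniment advances to the next section."""
    return not (grace_start_s <= op_time_s <= grace_end_s)

print(should_advance(5.0, 1.0, 2.0))  # True  -> advance
print(should_advance(1.5, 1.0, 2.0))  # False -> stay in section
```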
MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
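The two mapping stages in this abstract (visual events select stored music parts, then rules translate the generated music data into visual effects) can be sketched with two lookup tables. The table contents and names are hypothetical examples of the heuristics and rules the abstract mentions.

```python
# Hedged sketch of the two-stage mapping: event -> music part -> visual
# effect. Both stages key off the same detected event, which is why the
# music and visuals stay synchronized.

PART_LIBRARY = {"fast_motion": "drums", "bright_scene": "lead_synth"}
EFFECT_RULES = {"drums": "pulse_flash", "lead_synth": "color_sweep"}

def generate(events: list[str]) -> list[tuple[str, str]]:
    out = []
    for event in events:
        part = PART_LIBRARY.get(event, "pad")    # music from visual event
        effect = EFFECT_RULES.get(part, "none")  # visuals from music data
        out.append((part, effect))
    return out

print(generate(["fast_motion", "bright_scene"]))
# [('drums', 'pulse_flash'), ('lead_synth', 'color_sweep')]
```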
METHOD, INFORMATION PROCESSING APPARATUS AND PERFORMANCE EVALUATION SYSTEM
There is provided a method for a computer that includes: evaluating rapid chord playing and/or tonality based on the performance actions of a performance; and instructing, during the performance, an outputter to output a reaction sound corresponding to the evaluation.
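One plausible reading of "evaluating rapid chord playing" is checking whether the notes of a chord are struck closely enough together. The sketch below is that reading only; the onset-window criterion and the reaction-sound names are assumptions, not the patent's evaluation method.

```python
# Hedged sketch: a chord counts as cleanly (rapidly) played when all note
# onsets fall within a short window; a reaction sound mirrors the result.

def evaluate_rapid_chords(onsets_s: list[float],
                          window_s: float = 0.05) -> bool:
    """True when all chord-note onsets land within window_s seconds."""
    return bool(onsets_s) and max(onsets_s) - min(onsets_s) <= window_s

def reaction_sound(ok: bool) -> str:
    """Pick a reaction sound corresponding to the evaluation."""
    return "applause" if ok else "none"

print(reaction_sound(evaluate_rapid_chords([0.00, 0.01, 0.02])))  # applause
```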
Apparatus, method, and computer-readable medium for generating musical pieces
An apparatus, method, and computer-readable storage medium that generate a harmonized musical piece. The method includes: receiving a chord selection including a musical key and a scale selection; generating, within a digital audio work session, a chord progression sequence based on the received chord selection; in response to a detected chord selection change, modifying the chord progression sequence to include a chord progression corresponding to the change; setting the chord progression sequence as a master sequence; in response to detecting a second progression sequence within the digital audio work session, transmitting an identifier to the second progression sequence to set it as a slave sequence, and establishing a synchronized communication link between the master and slave sequences such that changes made in the master sequence are automatically effectuated in the slave sequence; and combining the master sequence and the slave sequence to form a composed musical piece.
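The master/slave synchronization link can be sketched as a sequence object that pushes every chord change to its registered slaves. The class and method names are illustrative; the patent does not specify this interface.

```python
# Hedged sketch: a master chord-progression sequence propagates changes
# to linked slave sequences, so edits are effectuated in both.

class ChordSequence:
    def __init__(self, chords: list[str]):
        self.chords = list(chords)
        self.slaves: list["ChordSequence"] = []

    def link(self, slave: "ChordSequence") -> None:
        """Register a slave and bring it in sync with the master."""
        self.slaves.append(slave)
        slave.chords = list(self.chords)

    def set_chord(self, i: int, chord: str) -> None:
        """Change a chord; the change is mirrored in every slave."""
        self.chords[i] = chord
        for s in self.slaves:
            s.chords[i] = chord

master = ChordSequence(["C", "F", "G", "C"])
slave = ChordSequence([])
master.link(slave)
master.set_chord(1, "Dm")
print(slave.chords)  # ['C', 'Dm', 'G', 'C']
```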
CHORD-PLAYING INPUT DEVICE, ELECTRONIC MUSICAL INSTRUMENT, AND CHORD-PLAYING INPUT PROGRAM
An electronic musical instrument includes: a chord-playing input device; a sounding part that emits chord sounds; and a display part that displays an image. The chord-playing input device includes: a chord designating button group assigned with chords; a chord changing button group assigned with change methods for changing an assignment state for the chord designating button group; a sounding information generating part that generates sounding information for making the sounding part emit the sound of the chord corresponding to operation for the chord designating button group and chord changing button group; and a display information generating part that generates display information for making the display part display a plurality of chord images and change method images in the same arrangement order as those of the chord designating button group and chord changing button group.
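The interaction between the two button groups (chord-changing buttons swap the assignment state of the chord-designating buttons) can be sketched as a table lookup. The assignment states and chord names below are hypothetical examples.

```python
# Hedged sketch: a chord-changing button selects an assignment state, and
# a chord-designating button then sounds the chord assigned at its index.
# The display would show the same tables in the same arrangement order.

ASSIGNMENTS = {
    "default": ["C", "F", "G", "Am"],
    "minor":   ["Cm", "Fm", "Gm", "Ab"],
}

def press(change_state: str, chord_button_index: int) -> str:
    """Chord sounded for a chord-button press under the given state."""
    return ASSIGNMENTS[change_state][chord_button_index]

print(press("minor", 0))    # Cm
print(press("default", 2))  # G
```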
Systems and Methods for Acoustic Simulation
Systems and methods for acoustic simulation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for simulating acoustic responses, including obtaining a digital model of an object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.
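The conflation step above groups vibrational modes into "chords" so that one time-domain solve per chord can later be deconflated into per-mode sound fields; that deconflation is only possible if the modes within a chord are distinguishable, e.g. well separated in frequency. The greedy grouping and the separation ratio below are illustrative assumptions, not the patent's criterion.

```python
# Hedged sketch of mode conflation: pack sorted mode frequencies into
# chords in which each frequency is at least min_ratio times the
# previous one, so the modes in a chord remain separable.

def conflate(mode_freqs_hz: list[float],
             min_ratio: float = 2.0) -> list[list[float]]:
    """Greedily group mode frequencies into well-separated chords."""
    chords: list[list[float]] = []
    for f in sorted(mode_freqs_hz):
        for chord in chords:
            if f >= min_ratio * chord[-1]:  # far enough from chord's top
                chord.append(f)
                break
        else:
            chords.append([f])              # start a new chord
    return chords

print(conflate([100.0, 150.0, 220.0, 500.0]))
# [[100.0, 220.0, 500.0], [150.0]]
```

Four frequencies collapse into two chords here, so only two time-domain solves would be needed instead of four.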