Patent classifications
G10H2220/455
MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
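As a rough illustration of the two-stage pipeline this abstract describes, the hypothetical Python sketch below assigns music parts to detected visual events by heuristic and then derives visual effects from the musical attributes; every name, threshold, and rule in it is invented, not taken from the patent.

```python
# Hypothetical sketch: detected visual events/metrics select music parts, and
# rules then translate the generated music back into visual effects.
from dataclasses import dataclass

@dataclass
class VisualEvent:
    time: float       # seconds into the input video
    size: float       # 0..1, relative size of the detected object/gesture
    speed: float      # 0..1, relative motion speed

# A toy "library of stored music data": part name -> musical attributes.
PART_LIBRARY = {
    "bass": {"register": "low",  "velocity": 96},
    "lead": {"register": "high", "velocity": 80},
    "pad":  {"register": "mid",  "velocity": 48},
}

def assign_part(event: VisualEvent) -> str:
    """Heuristic: large slow objects -> bass, small fast ones -> lead."""
    if event.size > 0.6:
        return "bass"
    return "lead" if event.speed > 0.5 else "pad"

def visual_effect_for(part: str) -> str:
    """Rule mapping a musical attribute back to a visual attribute."""
    rules = {"low": "slow pulsing glow", "mid": "soft haze", "high": "sharp flash"}
    return rules[PART_LIBRARY[part]["register"]]

events = [VisualEvent(0.0, 0.8, 0.2), VisualEvent(1.5, 0.2, 0.9)]
for ev in events:
    part = assign_part(ev)
    print(f"t={ev.time:.1f}s  part={part:4s}  effect={visual_effect_for(part)}")
```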
MUSIC RECORDING AND COLLABORATION PLATFORM
Methods, systems and non-transitory computer-readable mediums for remote audio project collaboration. The method includes generating a first version of an audio project file including a reference track. The method also includes receiving a first audio track from a first user computing device. The first audio track is synced to the reference track. The method further includes generating a second version of the audio project file by adding the first audio track to the audio project file. The method also includes receiving a second audio track from a second user computing device. The second audio track is synced to the reference track. The second user computing device is remotely located from the first user computing device. The method further includes generating a third version of the audio project file by adding the second audio track to the audio project file.
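A minimal sketch of the claimed versioning flow, assuming the project file can be modeled as an append-only structure where each accepted, reference-synced track produces a new version; the function and field names are invented:

```python
# Each accepted track yields a new version of the audio project file.
import copy

def new_version(project: dict, track: dict) -> dict:
    """Return the next version of the project with `track` added."""
    nxt = copy.deepcopy(project)
    nxt["version"] += 1
    # In the claims, every incoming track is synced to the reference track.
    track = {**track, "synced_to": project["reference_track"]}
    nxt["tracks"].append(track)
    return nxt

v1 = {"version": 1, "reference_track": "click_120bpm.wav", "tracks": []}
v2 = new_version(v1, {"name": "guitar.wav", "from": "device_A"})
v3 = new_version(v2, {"name": "vocals.wav", "from": "device_B"})  # remote device
print(v3["version"], [t["name"] for t in v3["tracks"]])
```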
Systems and Methods for Acoustic Simulation
Systems and methods for acoustic simulation in accordance with embodiments of the invention are illustrated. One embodiment includes a method for simulating acoustic responses, including obtaining a digital model of an object, calculating a plurality of vibrational modes of the object, conflating the plurality of vibrational modes into a plurality of chords, where each chord includes a subset of the plurality of vibrational modes, calculating, for each chord, a chord sound field in the time domain, where the chord sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with the subset of the plurality of vibrational modes, deconflating each chord sound field into a plurality of modal sound fields, where each modal sound field describes acoustic pressure surrounding the object when the object oscillates in accordance with a single vibrational mode, and storing each modal sound field in a far-field acoustic transfer (FFAT) map.
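One plausible reading of the conflate/deconflate steps is that modes with well-separated frequencies can share a single time-domain "chord" simulation and later be split apart by narrowband filtering; the sketch below illustrates that reading with an invented grouping rule and filter bandwidth, not the patent's actual method.

```python
import numpy as np

def conflate(mode_freqs, min_sep_hz=200.0):
    """Greedily pack modes into chords so members stay >= min_sep_hz apart."""
    chords = []
    for f in sorted(mode_freqs):
        for chord in chords:
            if all(abs(f - g) >= min_sep_hz for g in chord):
                chord.append(f)
                break
        else:
            chords.append([f])
    return chords

def deconflate(signal, fs, freq, bw_hz=50.0):
    """Recover one mode's contribution from a chord signal via FFT masking."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec[np.abs(freqs - freq) > bw_hz / 2] = 0.0
    return np.fft.irfft(spec, len(signal))

fs = 44100
t = np.arange(fs) / fs
modes = [440.0, 523.0, 880.0]
chords = conflate(modes)                      # -> [[440.0, 880.0], [523.0]]
chord_field = sum(np.sin(2 * np.pi * f * t) for f in chords[0])
mode_field = deconflate(chord_field, fs, 440.0)  # 440 Hz component alone
print(chords, float(np.max(np.abs(mode_field))))
```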
MUSIC INFORMATION GENERATING DEVICE, MUSIC INFORMATION GENERATING METHOD, AND RECORDING MEDIUM
A music information generating device including: a block color name recognizing part that determines the block color name of a block by identifying which of a plurality of preset ranges of color attribute values the block's representative color falls within, each range corresponding to a color name, where the color names are (or will be) made to correspond to stored sound-source names; and a diagram music-score generating unit that selects, based on a preset criterion, a designated block from among the blocks arranged in a lattice, and thereby generates a diagram music score.
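A hedged sketch of the color-name step: a block's representative color is classified by which preset hue range it falls in, and each color name is bound to a stored sound-source name. The ranges and bindings below are made up for illustration.

```python
import colorsys

HUE_RANGES = [            # (low_deg, high_deg, color_name)
    (0,   30,  "red"),
    (30,  90,  "yellow"),
    (90,  180, "green"),
    (180, 270, "blue"),
    (270, 360, "purple"),
]
SOUND_SOURCES = {"red": "drum", "yellow": "piano", "green": "flute",
                 "blue": "bass", "purple": "strings"}

def block_color_name(rgb):
    """Classify an (r, g, b) representative color by its hue range."""
    h, _, _ = colorsys.rgb_to_hsv(*[c / 255.0 for c in rgb])
    deg = h * 360.0
    for lo, hi, name in HUE_RANGES:
        if lo <= deg < hi:
            return name
    return "red"  # hue 360 wraps back to red

rgb = (40, 90, 220)                     # representative color of one block
name = block_color_name(rgb)
print(name, "->", SOUND_SOURCES[name])  # blue -> bass
```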
INTERACTIVE MOVEMENT AUDIO ENGINE
A method for generating an audio output is described. Image inputs of interactive movements by a user captured by an image sensor are received. The interactive movements are mapped to a sequence of audio element identifiers. The sequence of audio element identifiers is processed to generate a musical sequence by performing music theory rule enforcement on the sequence of audio element identifiers. An audio output that represents the musical sequence is generated.
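Assuming "music theory rule enforcement" means something like snapping a movement-derived note sequence to a scale (the abstract does not say), a toy version might look like this; the movement-to-identifier mapping is likewise invented.

```python
C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes allowed in C major

def movement_to_note(x: float) -> int:
    """Map a normalized hand height (0..1) to a MIDI note, C3..C5."""
    return 48 + round(x * 24)

def enforce_scale(notes, scale=C_MAJOR):
    """Lower any out-of-scale note by semitones until it fits the scale."""
    fixed = []
    for n in notes:
        while n % 12 not in scale:
            n -= 1
        fixed.append(n)
    return fixed

heights = [0.1, 0.35, 0.36, 0.8]           # image-derived movement metrics
raw = [movement_to_note(h) for h in heights]
print(raw, "->", enforce_scale(raw))
```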
Re-timing a video sequence to an audio sequence based on motion and audio beat detection
Embodiments are disclosed for re-timing a video sequence to an audio sequence based on the detection of motion beats in the video sequence and audio beats in the audio sequence. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving a first input, the first input including a video sequence, detecting motion beats in the video sequence, receiving a second input, the second input including an audio sequence, detecting audio beats in the audio sequence, modifying the video sequence by matching the detected motion beats in the video sequence to the detected audio beats in the audio sequence, and outputting the modified video sequence.
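Under an assumed representation where motion beats and audio beats are lists of timestamps, the matching step could be sketched as a piecewise-linear time warp of the video's frame timestamps; this is an illustration, not the disclosed algorithm.

```python
import numpy as np

def retime(frame_times, motion_beats, audio_beats):
    """Piecewise-linearly warp frame timestamps so beats align."""
    n = min(len(motion_beats), len(audio_beats))
    src = np.array([frame_times[0]] + motion_beats[:n] + [frame_times[-1]])
    dst = np.array([frame_times[0]] + audio_beats[:n] + [frame_times[-1]])
    return np.interp(frame_times, src, dst)

frames = np.arange(0.0, 4.01, 0.5)     # frame timestamps, seconds
motion = [1.0, 3.0]                    # detected motion beats
audio = [1.2, 2.8]                     # detected audio beats
print(np.round(retime(frames, motion, audio), 2))
```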
SYSTEMS AND METHODS FOR MUSIC SIMULATION VIA MOTION SENSING
The present disclosure relates to systems, methods, and devices for music simulation. The methods may include determining one or more simulation actions based on data acquired by at least one sensor. The methods may further include determining, based on at least one of the one or more simulation actions and a mapping relationship between simulation actions and corresponding musical instruments, a simulation musical instrument that matches the one or more simulation actions. The methods may further include determining, based on the one or more simulation actions, one or more first features associated with the simulation musical instrument. The methods may further include playing music based on the one or more first features.
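An illustrative sketch of the action-to-instrument mapping; the action labels, the mapping table, and the "first features" chosen here are all assumptions.

```python
ACTION_TO_INSTRUMENT = {
    "strum":  "guitar",
    "strike": "drum",
    "bow":    "violin",
}

def simulate(action: str, intensity: float, position: float):
    """Map a sensed simulation action to an instrument and its features."""
    instrument = ACTION_TO_INSTRUMENT.get(action)
    if instrument is None:
        return None
    # "First features" derived from the action, e.g. loudness and pitch area.
    features = {"velocity": int(40 + 87 * intensity),
                "pitch_zone": "high" if position > 0.5 else "low"}
    return instrument, features

print(simulate("strum", intensity=0.7, position=0.3))
# -> ('guitar', {'velocity': 100, 'pitch_zone': 'low'})
```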
Computer vision and mapping for audio applications
Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. A camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
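A minimal sketch of the column-wise playback over a global state map, with invented color-to-sample bindings standing in for the detected token color values:

```python
COLOR_TO_SAMPLE = {"R": "kick.wav", "G": "snare.wav", "B": "hat.wav"}

# Global state map: rows are instrument lanes, columns are time steps.
state_map = [
    ["R",  None, "R",  None],
    [None, "G",  None, "G"],
    ["B",  "B",  "B",  "B"],
]

def play(sample: str):
    print("play", sample, end="  ")  # stand-in for actual audio output

for col in range(len(state_map[0])):          # each column, in order
    for row in state_map:                     # every marked row in the column
        if row[col] is not None:
            play(COLOR_TO_SAMPLE[row[col]])
    print("| step", col)
```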
Broad spectrum audio device designed to accelerate the maturation of stringed instruments
The present invention comprises a device and process designed to accelerate the maturation of stringed musical instruments, composed of, but not limited to, a broad-spectrum audio generator coupled with one or more fasteners via one or more armatures dimensioned to allow easy installation, secure attachment, and easy removal from the stringed musical instrument.
METHODS AND APPARATUS TO USE PREDICTED ACTIONS IN VIRTUAL REALITY ENVIRONMENTS
Methods and apparatus to use predicted actions in VR environments are disclosed. An example method includes predicting a predicted time of a predicted virtual contact of a virtual reality controller with a virtual musical instrument, determining, based on at least one parameter of the predicted virtual contact, a characteristic of a virtual sound the musical instrument would make in response to the virtual contact, and initiating production of the sound before the predicted time of the virtual contact of the controller with the musical instrument.
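The core idea, extrapolating controller motion to predict contact and starting audio early enough to hide output latency, can be sketched as follows; the latency figure and loudness mapping are invented for illustration.

```python
AUDIO_LATENCY = 0.030  # seconds the audio pipeline needs before sound is heard

def predict_contact(distance_m: float, speed_mps: float):
    """Time until contact and a loudness from the predicted impact speed."""
    if speed_mps <= 0:
        return None
    time_to_contact = distance_m / speed_mps
    velocity_param = min(127, int(speed_mps * 40))  # impact speed -> loudness
    return time_to_contact, velocity_param

t_hit, vel = predict_contact(distance_m=0.15, speed_mps=2.5)
start_in = max(0.0, t_hit - AUDIO_LATENCY)  # begin producing sound early
print(f"contact in {t_hit*1000:.0f} ms; start sound in {start_in*1000:.0f} ms "
      f"at velocity {vel}")
```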