Patent classifications
G10H1/0041
Method and system for accelerated decomposing of audio data using intermediate data
A method for processing audio data comprises providing song identification data that identifies a particular song from among a plurality of songs, or a particular position within a particular song, and loading intermediate data associated with the song identification data from a storage medium or from a remote device. The method also comprises obtaining input audio data representing audio signals of the song identified by the song identification data. The audio signals comprise a mixture of different musical timbres, including at least a first musical timbre and a second musical timbre different from the first musical timbre. The method further comprises combining the input audio data and the intermediate data with one another to obtain output audio data. The output audio data represent audio signals of the first musical timbre separated from the second musical timbre.
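The core idea — performing the expensive decomposition offline and combining a cached intermediate result with the input at playback time — can be sketched as below. The choice of a time-frequency mask as the intermediate data, and all names, are illustrative assumptions, not details from the patent.

```python
import numpy as np

def separate_with_cached_mask(mixture_spec: np.ndarray,
                              cached_mask: np.ndarray) -> np.ndarray:
    """Combine input audio (here a magnitude spectrogram) with
    precomputed intermediate data (a soft separation mask) to
    isolate the first timbre from the mixture. The costly
    decomposition happened offline; only a multiply remains."""
    return mixture_spec * cached_mask

# Hypothetical usage: a 4-bin x 3-frame spectrogram and a mask
# that keeps only the lower two frequency bins (e.g. a bass stem).
spec = np.ones((4, 3))
mask = np.array([[1.0], [1.0], [0.0], [0.0]]) * np.ones((4, 3))
stem = separate_with_cached_mask(spec, mask)
```

In practice the intermediate data would be keyed by the song identification data and fetched from local storage or a remote device before this step.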
COMPUTER VISION AND MAPPING FOR AUDIO APPLICATIONS
Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. A camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
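The column-by-column playback over the global state map can be sketched as follows; the color names, sample file names, and data layout are assumptions for illustration only.

```python
# Hypothetical mapping from token color values to instrument samples.
COLOR_TO_SAMPLE = {"red": "drum.wav", "blue": "piano.wav"}

def playback_order(global_state_map, n_cols, n_rows):
    """For each column, in order, collect the sample for every
    row whose field holds a recognized token color value."""
    played = []
    for col in range(n_cols):
        column_notes = []
        for row in range(n_rows):
            color = global_state_map.get((col, row))
            if color in COLOR_TO_SAMPLE:
                column_notes.append(COLOR_TO_SAMPLE[color])
        played.append(column_notes)
    return played

# A 2x2 note matrix: red token at column 0 row 0, blue at column 1 row 1.
state = {(0, 0): "red", (1, 1): "blue"}
print(playback_order(state, 2, 2))  # [['drum.wav'], ['piano.wav']]
```

A real implementation would trigger audio output per column on a timer rather than returning a list, but the ordering logic is the same.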
Song Recording Method, Audio Correction Method, and Electronic Device
A method includes displaying, by an electronic device, a first interface, where the first interface includes a recording button used to record a first song; obtaining, by the electronic device, an accompaniment of the first song and feature information of an a cappella performance by the original singer; starting to record an a cappella performance sung by the user; and displaying, by the electronic device, guidance information on a second interface based on the feature information of the original singer's a cappella performance, where the guidance information guides one or more of breathing and vibrato during the user's singing.
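One plausible reading of the guidance step is that cues derived from the original singer's feature information are surfaced as the recording position approaches them. The sketch below assumes a timestamped cue list; the field names and look-ahead window are invented for illustration.

```python
def guidance_at(position_s, original_features):
    """Return guidance cues (e.g. breathing or vibrato prompts)
    whose timestamps fall within a short window ahead of the
    current recording position. Field names are illustrative."""
    WINDOW = 1.0  # seconds of look-ahead before each cue
    return [f["cue"] for f in original_features
            if position_s <= f["time"] < position_s + WINDOW]

# Feature information extracted from the original singer's a cappella.
features = [{"time": 12.4, "cue": "breathe"},
            {"time": 12.9, "cue": "vibrato"},
            {"time": 20.0, "cue": "breathe"}]
print(guidance_at(12.0, features))  # ['breathe', 'vibrato']
```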
INTERACTIVE PERFORMANCE SYSTEM AND METHOD
A method of processing textual and graphical feedback from a virtual audience of a performance, in which the textual feedback is monetized to compensate the performers and the graphical feedback is audially represented to provide them with sonic cues.
Method and system for AI controlled loop based song construction
According to an embodiment, there is provided a system and method for automatic AI-controlled, loop-based song construction. It employs a machine-learning AI within an audio loop selection engine to generate a song structure and to select fitting audio loops from a database of audio loops. In one embodiment, the method provides a music generation process that utilizes an AI system, trained and validated on a music item database, to complete the creation of a music item given an incomplete song that was started but not finished by a user.
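The loop selection step can be sketched with a simple compatibility score standing in for the trained model; the scoring heuristic, field names, and loop database layout are all assumptions for illustration.

```python
def select_fitting_loop(song_context, loop_database):
    """Stand-in for the trained selection model: rank candidate
    loops by a tempo/key compatibility score and return the best
    fit for the current song context. Field names are illustrative."""
    def score(loop):
        tempo_fit = -abs(loop["bpm"] - song_context["bpm"])   # closer tempo is better
        key_fit = 10 if loop["key"] == song_context["key"] else 0
        return tempo_fit + key_fit
    return max(loop_database, key=score)

loops = [{"name": "bass_a", "bpm": 120, "key": "Am"},
         {"name": "bass_b", "bpm": 98,  "key": "C"}]
best = select_fitting_loop({"bpm": 121, "key": "Am"}, loops)
print(best["name"])  # bass_a
```

In the patented system this ranking would come from a model trained and validated on the music item database rather than a hand-written score.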
SYSTEM AND METHOD FOR IMPROVING AND ADJUSTING PCM DIGITAL SIGNALS TO PROVIDE HEALTH BENEFITS TO LISTENERS
The present invention is a system and method for processing and adjusting PCM digital audio signals using specific reverberation and equalization settings that have been determined, through bio-signal testing, to potentially improve certain physical health parameter measurements. The system includes a source of audio signals producing an analog audio signal as input; an analog-to-digital converter converting the analog audio signal to digital; and a digital signal processor, having a computer processor and memory or circuitry, for processing the input audio signal using equalization, reverberation, and volume settings measured to produce an audio output signal with a more beneficial health response on human physiological functions than an unprocessed PCM digital signal or a baseline measurement without any audio signal, as measured by at least one bio-sensor attached to at least one listener. As a result, the present invention improves at least one physiological function of the listener, as measured using bio-sensors in the Avatar health testing bio-sensor measuring system.
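The processing stage — applying a volume, equalization, and reverberation setting to a block of PCM samples — can be sketched minimally as below. The comb-filter reverb and the specific parameter values are placeholders for illustration; the patent's bio-signal-tested presets are not disclosed here.

```python
import numpy as np

def process_pcm(samples, volume=0.8, reverb_delay=4, reverb_gain=0.3):
    """Apply an illustrative volume setting and a comb-filter
    reverberation to a block of PCM samples. Parameter values
    are placeholders, not the patent's tested presets."""
    out = samples.astype(float) * volume
    for i in range(reverb_delay, len(out)):
        out[i] += reverb_gain * out[i - reverb_delay]  # delayed feedback echo
    return out

pcm = np.zeros(8)
pcm[0] = 1.0  # unit impulse to expose the echo response
out = process_pcm(pcm)
```

A production chain would add an equalization filter stage before the reverb; it is omitted to keep the sketch short.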
NON-TRANSITORY COMPUTER READABLE MEDIUM STORING ELECTRONIC MUSICAL INSTRUMENT PROGRAM, METHOD FOR MUSICAL SOUND GENERATION PROCESS AND ELECTRONIC MUSICAL INSTRUMENT
An electronic musical instrument, method for a musical sound generation process and a non-transitory computer readable medium that stores an electronic musical instrument program are provided. The program causes a computer provided with a storage part to execute a musical sound generation process using sound data. The program causes the computer to execute:
acquiring, from the storage part, first sound data and first user identification information indicating a user who has acquired the first sound data from a distribution server; acquiring second user identification information indicating a user who causes the musical sound generation process to be executed using the first sound data; determining whether or not the first user identification information matches the second user identification information; and inhibiting execution of the musical sound generation process using the first sound data in a case when the first user identification information does not match the second user identification information.
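The gating logic described above reduces to a user-identity check before sound generation; the exception type and function names below are illustrative, not from the patent.

```python
class UnauthorizedSoundDataError(Exception):
    """Raised when sound data is used by a user other than the
    one who acquired it from the distribution server."""

def generate_sound(first_user_id, second_user_id, sound_data):
    """Execute the musical sound generation process only when the
    acquiring user matches the requesting user; otherwise inhibit it."""
    if first_user_id != second_user_id:
        raise UnauthorizedSoundDataError("user identification mismatch")
    return f"playing {sound_data}"

print(generate_sound("user42", "user42", "piano_tone.bin"))  # playing piano_tone.bin
```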
Generative composition with texture groups
A computer-implemented method of generating a musical composition containing a plurality of musical texture groups is disclosed. The method includes assembling musical texture groups from musical instrument components and associating with each group a tag expressing an emotional, textural connotation. The instrument components have musical textural classifiers selected from a set of pre-defined textural classifiers, such that different instrument components may have different subsets of the pre-defined textural classifiers. The textural classifiers within a texture group possess either no musical feature attribute or a single musical feature attribute, and any number of musical accompaniment attributes. The method then generates at least one chord scheme according to a narrative brief, to provide an emotional connotation to a series of events, the chord scheme being generated by selecting and assembling Form Atoms. The final step applies a texture to the chord scheme to generate the musical composition reflecting the narrative brief.
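The texture-group assembly step can be sketched as a simple data model: components carry classifiers drawn from a pre-defined set, and a group binds valid components to an emotional tag. All names and the classifier set are assumptions for illustration.

```python
# Illustrative pre-defined textural classifier set (an assumption).
PREDEFINED_CLASSIFIERS = {"staccato", "legato", "arpeggiated", "sustained"}

def assemble_texture_group(components, tag):
    """Group instrument components under an emotional tag, keeping
    only components whose classifiers all come from the
    pre-defined textural classifier set."""
    valid = [c for c in components
             if set(c["classifiers"]) <= PREDEFINED_CLASSIFIERS]
    return {"tag": tag, "components": valid}

components = [{"name": "strings", "classifiers": ["legato", "sustained"]},
              {"name": "glitch",  "classifiers": ["granular"]}]
group = assemble_texture_group(components, tag="melancholy")
print([c["name"] for c in group["components"]])  # ['strings']
```

The feature/accompaniment attribute constraint from the abstract would add a further validation pass on each group, omitted here for brevity.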
FORM ATOM HEURISTICS AND GENERATIVE COMPOSITION
Disclosed is a Form Atom defined by self-contained constructional properties that represent a historical corpus of music and are contained within the Form Atom's metadata. The Form Atom has a generative set of heuristics to support generation of a set of chords in a chord scheme, or of many different sets of chords. The generated chords are spaced out within a defined window of musical time by chord spacer heuristics. The Form Atom has a tag describing its compositional heuristics. A chord list of the Form Atom is provided in the local tonic and defines branching structures that may be used to generate different chords from the local tonic. A progression descriptor is combined with a form function such that the Form Atom musically expresses a question, an answer, and a statement. A meta-map of a chord scheme for a musical section is created from the metadata.
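A minimal sketch of the two ideas above — a chord list held in the local tonic, and a chord spacer that distributes chords across a window of musical time — might look as follows; the data layout, degree mapping, and even spacing are assumptions, not the patent's heuristics.

```python
# Illustrative Form Atom: a chord list given in the local tonic as
# scale degrees, plus a time window (all field names are assumed).
form_atom = {
    "tag": "question",
    "chord_degrees": [1, 4, 5],  # I-IV-V relative to the local tonic
    "window_beats": 8,
}

def realize_chord_scheme(atom, tonic_pitch_class):
    """Transpose the atom's local-tonic degrees into an absolute key
    and space the chords evenly across the defined window of
    musical time (a stand-in for the chord spacer heuristics)."""
    degree_to_semitones = {1: 0, 4: 5, 5: 7}
    chords = [(tonic_pitch_class + degree_to_semitones[d]) % 12
              for d in atom["chord_degrees"]]
    spacing = atom["window_beats"] / len(chords)
    return [(i * spacing, pc) for i, pc in enumerate(chords)]

# Realize in C (pitch class 0): roots C, F, G spaced over 8 beats.
scheme = realize_chord_scheme(form_atom, 0)
```

The branching structures and progression descriptors of the actual Form Atom would replace the fixed degree list with conditional choices per position.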