Patent classifications
G10H2210/086
METHOD AND SYSTEM FOR GENERATING AN AUDIO OR MIDI OUTPUT FILE USING A HARMONIC CHORD MAP
Techniques are provided for generating an output file. One technique involves the steps of generating audio or MIDI content blocks from one or more musical performances; receiving an input file having audio or MIDI music content; generating a harmonic chord map for the input file; using the harmonic chord map to automatically select a subset of the audio or MIDI content blocks; and generating the output file by combining the selected subset of content blocks and the input file. This technique may enable the creation of unique and new musical accompaniments by re-purposing audio or MIDI content from back catalogs and/or out-takes of musical works. The new arrangement may be provided in multiple music styles, genres, or moods and may contain performances from multiple musical instruments, which may be pre-recorded from live instrument performances and/or consist of MIDI-generated musical content.
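The chord-map-driven selection step above can be sketched roughly as follows. The data shapes are assumptions for illustration only: the chord map is taken as a list of (start beat, chord) pairs and each content block is assumed to carry a chord tag, which is not necessarily how the patented system represents them.

```python
# Hypothetical sketch: select content blocks whose chord tags match a
# harmonic chord map extracted from the input file. Data shapes are
# illustrative, not taken from the patent.

def select_blocks(chord_map, content_blocks):
    """chord_map: list of (start_beat, chord) pairs for the input file.
    content_blocks: list of dicts with a 'chord' tag and a 'clip' payload.
    Returns one matching block per chord-map entry (None if no match)."""
    by_chord = {}
    for block in content_blocks:
        by_chord.setdefault(block["chord"], []).append(block)
    selection = []
    for start, chord in chord_map:
        candidates = by_chord.get(chord, [])
        selection.append((start, candidates[0] if candidates else None))
    return selection

chord_map = [(0, "C"), (4, "Am"), (8, "F"), (12, "G")]
blocks = [
    {"chord": "C", "clip": "piano_C.wav"},
    {"chord": "F", "clip": "guitar_F.wav"},
    {"chord": "G", "clip": "bass_G.mid"},
]
picked = select_blocks(chord_map, blocks)
```

A real system would rank candidates (by style, mood, or instrumentation) rather than take the first match, and would then time-stretch and mix the chosen clips with the input file.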
MAPPING CHARACTERISTICS OF MUSIC INTO A VISUAL DISPLAY
A method and system for visualizing music using a perceptually conformal mapping system are provided. A music source file is input into a processor configured to carry out a series of steps on audio cues identified within the music and ultimately generate a simultaneous visual representation on a display device. The series of steps include application of one or more perceptually conformal mapping systems that essentially induce a synesthetic experience in which a person can experience music both acoustically and visually at the same time. The device extracts cues from the music that are designed to specifically capture fundamentals of human appreciation, maps them into visual cues, then presents those visual cues synchronized with the source music.
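One simple form such a perceptually motivated mapping could take is pitch class to hue and loudness to brightness. The function below is an assumed stand-in, not the patent's actual mapping system.

```python
import colorsys

# Illustrative audio-to-visual cue mapping: pitch class -> hue on the
# colour wheel, loudness -> brightness. The mapping and its parameters
# are assumptions for illustration.

def cue_to_rgb(midi_note, loudness):
    """Map a MIDI note number and a 0..1 loudness to an (r, g, b) tuple."""
    hue = (midi_note % 12) / 12.0          # 12 pitch classes span the wheel
    value = max(0.0, min(1.0, loudness))   # clamp loudness into brightness
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    return (round(r, 3), round(g, 3), round(b, 3))

frame = cue_to_rgb(60, 0.8)  # middle C at 80% loudness
```

Rendering one such colour frame per analysis window, synchronized with playback, would give the kind of simultaneous acoustic-and-visual presentation the abstract describes.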
Auto-generated accompaniment from singing a melody
A method for processing a voice signal by an electronic system to create a song is disclosed. The method comprises the steps in the electronic system of acquiring an input singing voice recording (11); estimating a musical key (15b) and a tempo (15a) from the singing voice recording (11); defining a tuning control (16) and a timing control (17) able to align the singing voice recording (11) with the estimated musical key (15b) and tempo (15a); and applying the tuning control (16) and the timing control (17) to the singing voice recording (11) so that an aligned voice recording (20) is obtained. Next, the method comprises the steps of generating a music accompaniment (23) as a function of the estimated musical key (15b), the tempo (15a), and an arrangement database (22), and mixing the aligned voice recording (20) with the music accompaniment (23) to obtain the song (12). A system, a server, and a device are also disclosed.
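The tempo-estimation and timing-control steps can be sketched as follows, assuming note onsets (in seconds) have already been detected in the singing voice recording. The median inter-onset interval stands in for the patent's tempo estimator, and snapping onsets to a beat grid stands in for the timing control (17).

```python
# Minimal sketch of tempo estimation and timing alignment from detected
# vocal onsets. Both functions are assumed simplifications.

def estimate_tempo(onsets):
    """Estimate tempo in BPM from a sorted list of onset times (seconds)."""
    intervals = sorted(b - a for a, b in zip(onsets, onsets[1:]))
    median_ioi = intervals[len(intervals) // 2]
    return 60.0 / median_ioi

def timing_control(onsets, bpm):
    """Snap each onset to the nearest beat of the estimated grid."""
    beat = 60.0 / bpm
    return [round(t / beat) * beat for t in onsets]

onsets = [0.0, 0.52, 0.98, 1.51, 2.02]     # slightly uneven singing
bpm = estimate_tempo(onsets)
aligned = timing_control(onsets, bpm)
```

The aligned onsets come out evenly spaced on the estimated beat, which is what lets a generated accompaniment at the same tempo lock to the voice.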
INFORMATION PROCESSING METHOD AND INFORMATION PROCESSING SYSTEM
An information processing system includes at least one memory storing a program and at least one processor. The at least one processor implements the program to input a piece of sound source data representative of a sound source, a piece of style data representative of a performance style, and synthesis data representative of sounding conditions into a synthesis model generated by machine learning, and to generate, using the synthesis model, feature data representative of acoustic features of a target sound of the sound source to be generated in the performance style and according to the sounding conditions, and to generate an audio signal corresponding to the target sound using the generated feature data.
Method and apparatus for performing melody detection
A method for performing melody detection comprises interpreting the global perceptual effect of all the sounds at once, to determine the melody actually perceived by the human ear, and providing a music sheet or a text printout including a time sequence of single notes describing that melody.
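A toy formulation of this idea: for each analysis frame, candidate pitches carry a salience score; the most salient pitch per frame is taken as the perceived melody, and consecutive repeats are merged into notes. This is an assumed formulation, not the patent's detector.

```python
# Toy sketch of extracting a single perceived melody line from
# polyphonic frames. Each frame maps candidate MIDI pitches to a
# salience score; the formulation is an assumption for illustration.

def detect_melody(frames):
    """frames: list of {midi_pitch: salience}. Returns (pitch, n_frames)
    note tuples with consecutive equal pitches merged."""
    melody = [max(f, key=f.get) for f in frames if f]
    notes = []
    for pitch in melody:
        if notes and notes[-1][0] == pitch:
            notes[-1] = (pitch, notes[-1][1] + 1)   # extend duration
        else:
            notes.append((pitch, 1))                # start a new note
    return notes

frames = [{60: 0.9, 64: 0.4}, {60: 0.8, 67: 0.5}, {62: 0.7}, {62: 0.6}]
melody_notes = detect_melody(frames)
```

The resulting time sequence of single notes is exactly the form needed for the music sheet or text printout the abstract mentions.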
DISPLAY CONTROL METHOD, DISPLAY CONTROL DEVICE, AND PROGRAM
A display control method causes a display device to display a processing image in accordance with synthesis data that specify a synthesized sound generated by sound synthesis and a sound effect added to the synthesized sound. In the processing image, a first image representing a note corresponding to the synthesized sound and a second image representing the sound effect are arranged in an area in which a pitch axis and a time axis are set.
Detecting vibrato bar technique for string instruments
Detecting vibrato bar technique for a string instrument can include analyzing, using a processor, a note signal of the string instrument to detect a selected instrumental technique from a plurality of instrumental techniques, analyzing, using the processor, a noise signal of the string instrument to detect a change in frequency of the noise signal, and generating, using the processor, a vibrato bar event responsive to detecting the selected instrumental technique and the change in frequency of the noise signal.
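The event-generation logic described above combines two conditions: a selected technique detected on the note signal, and a frequency change detected on the noise signal. A hedged sketch of that combination, with placeholder detectors and an assumed threshold:

```python
# Hedged sketch of vibrato-bar event generation: emit an event only when
# (a) the note-signal analyzer reports the selected technique and
# (b) the noise signal's frequency track changes beyond a threshold.
# Labels, threshold, and event format are placeholders.

def vibrato_bar_event(detected_technique, noise_freqs, threshold_hz=5.0):
    """detected_technique: label from the note-signal analyzer.
    noise_freqs: frequency track (Hz) of the noise signal over time."""
    selected = detected_technique == "vibrato_bar_candidate"
    freq_change = max(noise_freqs) - min(noise_freqs) if noise_freqs else 0.0
    if selected and freq_change > threshold_hz:
        return {"event": "vibrato_bar", "depth_hz": freq_change}
    return None

event = vibrato_bar_event("vibrato_bar_candidate", [110.0, 104.0, 112.0])
```

Requiring both conditions is what distinguishes a vibrato-bar gesture from, say, a finger vibrato that moves the note frequency without the characteristic change in the noise signal.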
METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING DIGITAL PERFORMANCES OF MUSIC COMPOSITIONS USING NOTES SELECTED FROM VIRTUAL MUSICAL INSTRUMENTS BASED ON THE MUSIC-THEORETIC STATES OF THE MUSIC COMPOSITIONS
An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next-generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems, and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state-responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
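The rule-triggering idea can be sketched as a per-instrument rule table keyed by a state descriptor. Here the descriptor is reduced to the current chord function; the table contents and names are assumptions, not the patented rule system.

```python
# Illustrative sketch of state-responsive performance rules: each
# virtual instrument holds a rule table keyed by a music-theoretic
# state descriptor (simplified here to chord function). All names and
# note choices are assumptions for illustration.

VMI_RULES = {
    "piano": {"tonic": ["C3", "E3", "G3"], "dominant": ["G3", "B3", "D4"]},
    "bass":  {"tonic": ["C2"],             "dominant": ["G2"]},
}

def perform(state_descriptors, instrument):
    """Map a sequence of state descriptors to note lists for one
    instrument; unknown states trigger no notes."""
    rules = VMI_RULES[instrument]
    return [rules.get(state, []) for state in state_descriptors]

piano_part = perform(["tonic", "dominant", "tonic"], "piano")
```

In a full system the state descriptors would be far richer (key, meter, tension, section, and so on), and each triggered rule would select actual sampled or synthesized notes from the VMI library.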
Method and apparatus for generating digital score file of song, and storage medium
A method and an information processing apparatus to generate a digital score file of a song are described. The information processing apparatus includes processing circuitry. The processing circuitry is configured to obtain a candidate audio file satisfying a first condition from audio files of unaccompanied singing of the song without instrumental accompaniment. The processing circuitry is configured to divide the candidate audio file into valid audio segments based on timing information of the song, and extract pieces of music note information from the valid audio segments. Each of the pieces of music note information includes at least one data set of a music note in the song. The data set includes an onset time, a duration, and a music note value of the music note. The processing circuitry is configured to generate the digital score file based on the pieces of music note information.
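The note-information data set described above (onset time, duration, note value per music note) can be serialized into a digital score file along these lines. The JSON layout is an assumed stand-in for the patent's score format, and the segment tuples are assumed to come from an earlier onset/pitch extraction stage.

```python
import json

# Sketch of building a digital score from extracted note information:
# each note carries an onset time, a duration, and a note value. The
# JSON layout is an assumption, not the patented format.

def build_score(segments):
    """segments: list of (onset_sec, offset_sec, midi_note) tuples from
    the valid audio segments. Returns a JSON digital score string."""
    notes = [
        {"onset": on, "duration": round(off - on, 3), "note": midi}
        for on, off, midi in segments
    ]
    return json.dumps({"notes": notes})

score = build_score([(0.0, 0.5, 60), (0.5, 1.25, 62)])
```

Filtering candidate audio files and segmenting by the song's timing information would happen upstream of this step; the score builder only needs the per-note data sets.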