G10H2210/066

METHOD AND APPARATUS FOR MAKING MUSIC SELECTION BASED ON ACOUSTIC FEATURES
20170330540 · 2017-11-16 ·

A method of making an audio music selection and creating a mixtape, comprising importing song files from a song repository; sorting and filtering the song files based on selection criteria; and creating the mixtape from the sorted and filtered song files. The sorting and filtering of the song files comprise: spectrally analyzing each song file to extract its low-level acoustic feature parameters; determining, from the low-level acoustic feature parameter values, the high-level acoustic feature parameters of the analyzed song file; determining a similarity score for each analyzed song file by comparing its acoustic feature parameter values against desired acoustic feature parameter values determined from the selection criteria; sorting the analyzed song files according to their similarity scores; and filtering out the analyzed song files with similarity scores lower than a filter threshold.
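The abstract does not specify the similarity metric or the feature set; the sketch below assumes cosine similarity over hypothetical per-song feature vectors (the song names, vectors, and threshold are all illustrative, not from the patent):

```python
import math

def similarity_score(song_features, desired_features):
    """Cosine similarity between a song's acoustic feature vector and
    the desired feature vector derived from the selection criteria."""
    dot = sum(a * b for a, b in zip(song_features, desired_features))
    norm_s = math.sqrt(sum(a * a for a in song_features))
    norm_d = math.sqrt(sum(b * b for b in desired_features))
    return dot / (norm_s * norm_d)

def build_mixtape(songs, desired, threshold):
    """Sort songs by similarity to the desired features, then filter
    out those whose score falls below the threshold."""
    scored = [(similarity_score(feats, desired), name)
              for name, feats in songs]
    scored.sort(reverse=True)  # highest similarity first
    return [name for score, name in scored if score >= threshold]

# illustrative two-dimensional feature vectors
playlist = build_mixtape([("a", [0.9, 0.1]), ("b", [0.1, 0.9])],
                         [1.0, 0.0], 0.5)
```

Any distance or similarity measure over the extracted parameters would fit the claimed structure; cosine similarity is just a common, scale-invariant choice.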

Musical instrument effects processor
09812106 · 2017-11-07 ·

A method in accord with certain implementations involves, at a data interface of a musical instrument effects processor, receiving an extracted characteristic of an audible sound captured at a microphone; transferring the extracted characteristic to a digital signal processor residing in the musical instrument effects processor; receiving input signals at an input of the musical instrument effects processor; at the digital signal processor, modifying the received input signals using the extracted characteristic to create an electronic audio effect; and outputting the modified input signals as an output signal from the musical instrument effects processor. This abstract is not to be considered limiting, since other embodiments may deviate from the features described in this abstract.
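The abstract leaves the "extracted characteristic" open; one plausible instance is an amplitude envelope taken from the captured sound and imposed on the instrument's input signal. The functions and window size below are illustrative assumptions, not the patent's implementation:

```python
def extract_envelope(captured, window=4):
    """Extracted characteristic: a coarse amplitude envelope of the
    audible sound captured at the microphone (peak per window)."""
    return [max(abs(s) for s in captured[i:i + window])
            for i in range(0, len(captured), window)]

def apply_effect(input_signal, envelope):
    """Modify the instrument input signal by scaling each sample with
    the envelope value for the corresponding position."""
    out = []
    for i, s in enumerate(input_signal):
        gain = envelope[(i * len(envelope)) // len(input_signal)]
        out.append(s * gain)
    return out
```

In a real effects processor both steps would run on the DSP over streaming buffers; the list-based version only shows the data flow from captured characteristic to modified output.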

Method, device and software for controlling transport of audio data
11488568 · 2022-11-01 ·

A method for processing music audio data, including providing input audio data representing a first piece of music comprising a mixture of musical timbres. The method also includes decomposing the input audio data to generate at least first-timbre decomposed data representing a first timbre selected from the musical timbres of the first piece of music, and second-timbre decomposed data representing a second timbre selected from the musical timbres of the first piece of music. The method also includes applying a transport control to obtain transport controlled first-timbre decomposed data. The method also includes recombining audio data obtained from the transport controlled first-timbre decomposed data with audio data obtained from the second-timbre decomposed data to obtain recombined audio data.
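The timbre decomposition itself would be done by a source-separation model, which is beyond a short sketch; assuming the stems are already obtained, the transport-control and recombination steps can be illustrated as follows (the naive resampling used as the transport control is an assumption, as is treating audio as plain sample lists):

```python
def transport_control(samples, rate=0.5):
    """Apply a transport control to one decomposed timbre stem; here,
    naive resampling that changes playback speed (rate < 1 slows)."""
    out, pos = [], 0.0
    while pos < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

def recombine(stem_a, stem_b):
    """Mix two stems back into one signal, padding the shorter
    stem with silence so the lengths match."""
    n = max(len(stem_a), len(stem_b))
    a = stem_a + [0.0] * (n - len(stem_a))
    b = stem_b + [0.0] * (n - len(stem_b))
    return [x + y for x, y in zip(a, b)]
```

The key point of the claim is that the transport control touches only the first-timbre stem before recombination, so, for example, one instrument can be slowed or looped while the rest of the mix plays on unchanged.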

Music modeling

A computer implemented method is provided for generating a prediction of a next musical note by a computer having at least a processor and a memory. A computer processor system is also provided for generating a prediction of a next musical note. The method includes storing sequential musical notes in the memory. The method further includes generating, by the processor, the prediction of the next musical note based upon a music model and the sequential musical notes stored in the memory. The method also includes updating, by the processor, the music model based upon the prediction of the next musical note and the actual next musical note. The method additionally includes resetting, by the processor, the memory at fixed time intervals.
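The abstract does not name the music model; a minimal concrete instance of the store/predict/update/reset loop is a bigram (last-note) count model, sketched below with an illustrative class name and reset interval:

```python
from collections import defaultdict

class NoteModel:
    """Bigram music model: predicts the next note from the last stored
    note, updates its counts from actual notes, and resets the note
    memory at a fixed interval of observations."""

    def __init__(self, reset_interval=16):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.memory = []          # sequential notes stored in memory
        self.reset_interval = reset_interval
        self.steps = 0

    def predict(self):
        """Most frequent successor of the last note in memory."""
        if not self.memory:
            return None
        successors = self.counts[self.memory[-1]]
        return max(successors, key=successors.get) if successors else None

    def observe(self, actual):
        """Update the model with the actual next note, then reset the
        memory at fixed intervals as the claim describes."""
        if self.memory:
            self.counts[self.memory[-1]][actual] += 1
        self.memory.append(actual)
        self.steps += 1
        if self.steps % self.reset_interval == 0:
            self.memory = []
```

Note that resetting clears only the short-term note memory, not the learned counts, which matches the claim's separation of "memory" from "music model".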

Unsupervised singing voice conversion with pitch adversarial network
11257480 · 2022-02-22 ·

A method, a computer readable medium, and a computer system are provided for singing voice conversion. Data corresponding to a singing voice is received. One or more features and pitch data are extracted from the received data using one or more adversarial neural networks. One or more audio samples are generated based on the extracted pitch data and the one or more features.
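The adversarial networks themselves cannot be reproduced in a short sketch; the pitch-extraction step they operate on can, though, using plain autocorrelation as a stand-in (the function, search range, and synthetic test tone below are all assumptions, not the patent's method):

```python
import math

def estimate_pitch(samples, sample_rate, min_hz=50, max_hz=1000):
    """Estimate the fundamental frequency by autocorrelation: find
    the lag with maximal self-similarity in the plausible range."""
    best_lag, best_corr = None, 0.0
    lo = int(sample_rate / max_hz)
    hi = int(sample_rate / min_hz)
    for lag in range(lo, min(hi, len(samples) // 2)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag if best_lag else 0.0

# synthetic 220 Hz sine tone at an 8 kHz sample rate
sr = 8000
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(2000)]
```

In the claimed system this kind of pitch track, together with learned voice features, would condition the generator that synthesizes the converted audio samples.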

Communicating data with audible harmonies
09755764 · 2017-09-05 ·

In some implementations, a process for communicating data over audio is performed. In one aspect, one or more ordered sequences of audio attribute values that are selected based on a musical relationship between the audio attribute values and associated with data values may be played by a first device and received by a second device. This technique may allow for sound-based communications to take place between devices that listeners may find pleasant.
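A minimal sketch of the idea: map each data symbol to one tone of a consonant set, so the transmitted sequence stays inside a musical relationship. The 2-bit alphabet and the C-major arpeggio below are illustrative choices, not taken from the patent:

```python
# Tones of a C-major arpeggio (C4 E4 G4 C5): the symbol alphabet is
# chosen so that any transmission sounds consonant to a listener.
SYMBOL_TONES = [261.63, 329.63, 392.00, 523.25]

def encode(data_bytes):
    """Encode each byte as four 2-bit symbols, each selecting one
    musically related tone."""
    tones = []
    for b in data_bytes:
        for shift in (6, 4, 2, 0):
            tones.append(SYMBOL_TONES[(b >> shift) & 0b11])
    return tones

def decode(tones):
    """Recover the bytes from a received ordered tone sequence."""
    out = []
    for i in range(0, len(tones), 4):
        b = 0
        for t in tones[i:i + 4]:
            b = (b << 2) | SYMBOL_TONES.index(t)
        out.append(b)
    return bytes(out)
```

A real receiver would detect the tones by frequency analysis of the microphone signal rather than by exact lookup, but the symbol-to-tone mapping is the part that makes the channel sound pleasant.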

CONTEXT-DEPENDENT PIANO MUSIC TRANSCRIPTION WITH CONVOLUTIONAL SPARSE CODING
20170243571 · 2017-08-24 ·

The present disclosure presents a novel approach to automatic transcription of piano music in a context-dependent setting. Embodiments described herein may employ an efficient algorithm for convolutional sparse coding to approximate a music waveform as a summation of piano note waveforms convolved with associated temporal activations. The piano note waveforms may be pre-recorded for the particular piano that is to be transcribed and may optionally be pre-recorded in the specific environment where the performance is to take place. During transcription, the note waveforms are held fixed while the associated temporal activations are estimated and post-processed to obtain the pitch and onset transcription. Experiments have shown that embodiments of the disclosure significantly outperform state-of-the-art music transcription methods trained in the same context-dependent setting, in both transcription accuracy and time precision, in various scenarios including synthetic, anechoic, noisy, and reverberant environments.
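The signal model underlying the approach is simple to state in code: the waveform is approximated as a sum of note templates convolved with sparse activation trains. The sketch below shows only this forward model (the sparse-coding step that estimates the activations is the hard part and is omitted):

```python
import numpy as np

def reconstruct(templates, activations, length):
    """CSC signal model: approximate a waveform as the sum of
    pre-recorded note templates convolved with their temporal
    activations, truncated to the signal length."""
    signal = np.zeros(length)
    for template, activation in zip(templates, activations):
        signal += np.convolve(activation, template)[:length]
    return signal

# one illustrative note template, activated once at sample 1
approx = reconstruct([np.array([1.0, 0.5])],
                     [np.array([0.0, 1.0, 0.0, 0.0])], 4)
```

Transcription then amounts to reading pitches and onsets off the estimated activation trains: a nonzero spike in the activation for a given template marks an onset of that note.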

Automatic transcription of musical content and real-time musical accompaniment

Various embodiments provide techniques for generating real-time musical accompaniment for musical content included in an audio signal. A real-time musical accompaniment system receives the audio signal via an audio input device. The system extracts, from the audio signal, musical information characterizing at least a portion of the musical content. The system generates accompaniment information that has at least one of a rhythmic relationship and a harmonic relationship with the extracted musical information. The system generates an output audio signal that is complementary to the extracted musical information. The system transmits, substantially immediately after receiving the audio signal, the output audio signal to an audio output device.
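As one toy instance of a harmonic relationship, an accompaniment note can be chosen a diatonic third above each detected melody pitch. The fixed C-major key and the snapping rule below are assumptions for illustration; the patent covers relationships far more general than this:

```python
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the assumed key

def harmony_note(midi_pitch):
    """Generate a complementary note a diatonic third above the
    detected melody pitch (a simple harmonic relationship)."""
    pc = midi_pitch % 12
    if pc not in C_MAJOR:
        # snap out-of-key pitches to the nearest scale degree
        pc = min(C_MAJOR, key=lambda d: abs(d - pc))
    degree = C_MAJOR.index(pc)
    third_pc = C_MAJOR[(degree + 2) % 7]
    return midi_pitch + (third_pc - pc) % 12
```

A real-time system would run this per detected onset and synthesize the resulting pitch "substantially immediately," so the accompaniment tracks the live performance.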

AUDIO PROCESSING TECHNIQUES FOR SEMANTIC AUDIO RECOGNITION AND REPORT GENERATION
20220036869 · 2022-02-03 ·

Example methods, apparatus and articles of manufacture to determine semantic information for audio are disclosed. Example apparatus disclosed herein are to: process an audio signal obtained by a media device to determine values of a plurality of features that are characteristic of the audio signal; compare the values of the plurality of features to a first template having corresponding first ranges of the plurality of features to determine a first score, the first template being associated with first semantic information; compare the values of the plurality of features to a second template having corresponding second ranges of the plurality of features to determine a second score, the second template being associated with second semantic information; and associate the audio signal with at least one of the first semantic information or the second semantic information based on the first score and the second score.
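The template comparison can be sketched directly from the claim: score each template by how many feature values fall inside its per-feature ranges, then pick the semantic label of the best-scoring template. The scoring rule (fraction of in-range features) and the labels below are illustrative assumptions:

```python
def template_score(feature_values, template_ranges):
    """Score = fraction of features whose values fall inside the
    template's (low, high) range for that feature."""
    hits = sum(1 for v, (lo, hi) in zip(feature_values, template_ranges)
               if lo <= v <= hi)
    return hits / len(template_ranges)

def classify(feature_values, templates):
    """Associate the audio with the semantic information of the
    best-scoring template (templates: label -> list of ranges)."""
    return max(templates,
               key=lambda label: template_score(feature_values,
                                                templates[label]))

# illustrative two-feature templates for two semantic labels
TEMPLATES = {"speech": [(0.0, 0.3), (0.5, 1.0)],
             "music":  [(0.4, 1.0), (0.0, 0.4)]}
```

With graded (e.g., distance-weighted) scores instead of a hard in/out test, the same structure also supports associating an audio signal with more than one semantic label.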

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
11430419 · 2022-08-30 ·

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and time and/or space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting the musical tastes and preferences of each system user; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the automated music composition and generation engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.
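The aggregation and descriptor-modification steps can be sketched with per-user preference vectors over descriptor names; the averaging scheme, blend weight, and descriptor names below are illustrative assumptions rather than the patent's subsystems:

```python
def aggregate_tastes(user_profiles):
    """Population taste aggregation: average each user's preference
    weight per musical experience descriptor across all profiles."""
    keys = {k for profile in user_profiles for k in profile}
    n = len(user_profiles)
    return {k: sum(p.get(k, 0.0) for p in user_profiles) / n
            for k in keys}

def adjust_descriptors(descriptors, population_profile, weight=0.5):
    """Blend the user-selected descriptor weights toward the
    aggregated population tastes before composition."""
    keys = set(descriptors) | set(population_profile)
    return {k: (1 - weight) * descriptors.get(k, 0.0)
               + weight * population_profile.get(k, 0.0)
            for k in keys}
```

The blended descriptor weights would then be fed to the composition engine, so each generated piece reflects both the requesting user's selections and the population's aggregated tastes.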