G10H2250/005

SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
20230230565 · 2023-07-20 ·

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).

ESTIMATION MODEL CONSTRUCTION METHOD, PERFORMANCE ANALYSIS METHOD, ESTIMATION MODEL CONSTRUCTION DEVICE, AND PERFORMANCE ANALYSIS DEVICE
20220383842 · 2022-12-01 ·

An estimation model construction method realized by a computer includes preparing a plurality of training data and constructing, by machine learning using the plurality of training data, an estimation model. The training data include first training data, comprising first feature amount data that represent a first feature amount of a performance sound of a musical instrument and first onset data that represent a pitch at which an onset exists, and second training data, comprising second feature amount data that represent a second feature amount of sound generated by a sound source of a type different from the musical instrument and second onset data that represent that an onset does not exist. The constructed estimation model estimates, from feature amount data that represent a feature amount of a performance sound of the musical instrument, estimated onset data that represent a pitch at which an onset exists.
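The training setup described above (positive examples from the instrument labelled as containing an onset, negative examples from a different type of sound source labelled as containing no onset) can be illustrated with a minimal sketch. The logistic-regression model, the synthetic feature vectors, and all names below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# First training data: feature amounts of instrument performance sounds,
# labelled as onset present (1).
X_pos = rng.normal(loc=1.0, size=(200, 8))
y_pos = np.ones(200)

# Second training data: feature amounts from a sound source of a different
# type, labelled as onset absent (0).
X_neg = rng.normal(loc=-1.0, size=(200, 8))
y_neg = np.zeros(200)

X = np.vstack([X_pos, X_neg])
y = np.concatenate([y_pos, y_neg])

# Stand-in estimation model: logistic regression trained by gradient descent.
w = np.zeros(8)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def estimate_onset(features):
    """Return the estimated probability that an onset exists for a
    feature-amount vector."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))
```

A real system would emit per-pitch onset data rather than a single probability; this sketch only shows how the two classes of training data shape the decision boundary.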

Systems, devices, and methods for musical catalog amplification services
11615772 · 2023-03-28 ·

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).

CONTROLLABLE MUSIC GENERATION
20230147185 · 2023-05-11 ·

The present disclosure describes techniques for controllable music generation. The techniques comprise extracting latent vectors from unlabelled data, the unlabelled data comprising a plurality of music note sequences, the plurality of music note sequences indicating a plurality of pieces of music; clustering the latent vectors into a plurality of classes corresponding to a plurality of music styles; generating a plurality of labelled latent vectors corresponding to the plurality of music styles, each of the plurality of labelled latent vectors comprising information indicating features of a corresponding music style; and generating a first music note sequence indicating a first piece of music in a particular music style among the plurality of music styles based at least in part on a particular labelled latent vector among the plurality of labelled latent vectors, the particular labelled latent vector corresponding to the particular music style.
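The clustering step, which groups latent vectors extracted from unlabelled note sequences into classes that serve as style labels, can be sketched with a toy k-means over synthetic latents. The two-blob data and all names below are illustrative assumptions; the patent does not specify the clustering algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent vectors extracted (e.g. by an encoder) from unlabelled note
# sequences; here drawn from two synthetic "style" distributions.
latents = np.vstack([
    rng.normal(loc=-2.0, size=(50, 4)),
    rng.normal(loc=+2.0, size=(50, 4)),
])

def kmeans(x, k, iters=20):
    """Cluster latent vectors into k classes (music styles)."""
    centroids = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
        centroids = np.array([
            x[labels == j].mean(0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

labels, style_vectors = kmeans(latents, k=2)
# Each centroid acts as a labelled latent vector for one music style;
# a decoder would condition note-sequence generation on it.
```

The generation step itself (decoding a note sequence conditioned on a labelled latent vector) is not sketched here, since the abstract leaves the decoder architecture open.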

AUTOMATED GENERATION OF AUDIO TRACKS
20230197042 · 2023-06-22 ·

Conventionally, significant time and effort are required to construct audio tracks. Disclosed embodiments enable automated generation of audio tracks using templates that associate sound generator(s) with template section(s). Each template enables a model to automatically generate unique audio tracks in which the sections and/or sounds are probabilistically determined. Certain embodiments introduce additional variability into the automated generation of audio tracks. In addition, the model may generate the audio tracks note by note, to ensure that no copyrights are infringed.
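A hypothetical sketch of the template mechanism, in which each section is associated with candidate sound generators and each instantiation is probabilistically determined. The section names, generator names, and function names are all illustrative, not from the patent:

```python
import random

random.seed(42)

# A template associates each template section with candidate sound
# generators; the concrete choice per track is probabilistic.
template = {
    "intro":  ["pad", "piano"],
    "verse":  ["piano", "guitar", "synth"],
    "chorus": ["synth", "strings"],
    "outro":  ["pad"],
}

def generate_track(template):
    """Probabilistically instantiate one unique track from the template."""
    track = []
    for section, generators in template.items():
        chosen = random.choice(generators)
        # Note-by-note generation would go here; this sketch only
        # records which generator was selected for the section.
        track.append((section, chosen))
    return track

track = generate_track(template)
```

Repeated calls yield different section/generator combinations, which is the source of uniqueness the abstract describes.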

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

A method of generating music content from input music content includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in the sound repository. The models, in the form of source code, are then sent to a melody generator. First, the generator is set with specific parameters using a controller conforming to MIDI standards, supplemented with composition characteristics read from the user preference database. The content is then passed to automatic generation based on artificial intelligence algorithms, and the digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its record are verified by the critic module using algorithms based on neural networks.
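The pipeline outlined above (rules, parameterized melody generation, a digital score, per-instrument rendering, mixing, and a critic check) might be sketched as follows. The scale, the single "max leap" rule, and the plain rule-checking critic are simplified stand-ins; in particular, the patent's neural-network critic is replaced here by a deterministic check:

```python
import random

random.seed(7)

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI pitch numbers

def generate_score(rules, n_notes=16):
    """Melody generator: produce a digital score under composition rules."""
    score = [random.choice(SCALE)]
    for _ in range(n_notes - 1):
        candidates = [p for p in SCALE
                      if abs(p - score[-1]) <= rules["max_leap"]]
        score.append(random.choice(candidates))
    return score

def render(score, instrument):
    """Render one instrument track (here: trivial (pitch, instrument) events)."""
    return [(pitch, instrument) for pitch in score]

def mix(tracks):
    """Mix rendered tracks into the final record (here: concatenate events)."""
    return [event for track in tracks for event in track]

def critic(score, rules):
    """Critic stand-in: verify the composition respects the rules."""
    return all(abs(a - b) <= rules["max_leap"]
               for a, b in zip(score, score[1:]))

rules = {"max_leap": 4}  # illustrative composition rule
score = generate_score(rules)
record = mix([render(score, "piano"), render(score, "bass")])
```

The structure mirrors the abstract's flow: rules parameterize generation, a score is produced, tracks are rendered and mixed, and the result is verified before release.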

Music modeling

A computer-implemented method is provided for generating a prediction of a next musical note by a computer having at least a processor and a memory; a computer processor system for the same purpose is also provided. The method includes storing sequential musical notes in the memory and generating, by the processor, the prediction of the next musical note based upon a music model and the sequential musical notes stored in the memory. The method further includes updating, by the processor, the music model based upon the prediction of the next musical note and the actual next musical note, and resetting, by the processor, the memory at fixed time intervals.
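A minimal sketch of the described loop: predict the next note from the stored notes, update the model against the actual note, and reset the memory at a fixed interval. The bigram count model is an illustrative stand-in for the music model, which the abstract leaves unspecified:

```python
from collections import defaultdict

class MusicModel:
    """Toy next-note predictor with online updates and periodic
    memory reset, mirroring the method steps in the abstract."""

    def __init__(self, reset_interval=8):
        self.counts = defaultdict(lambda: defaultdict(int))  # bigram counts
        self.memory = []          # sequential musical notes
        self.reset_interval = reset_interval
        self.steps = 0

    def predict(self):
        """Generate the prediction of the next note from the last
        stored note; None if nothing is stored or known."""
        if not self.memory:
            return None
        nxt = self.counts[self.memory[-1]]
        return max(nxt, key=nxt.get) if nxt else None

    def observe(self, actual):
        """Update the model from the actual next note, store the note
        in memory, and reset memory at the fixed interval."""
        if self.memory:
            self.counts[self.memory[-1]][actual] += 1
        self.memory.append(actual)
        self.steps += 1
        if self.steps % self.reset_interval == 0:
            self.memory = []  # periodic memory reset

model = MusicModel(reset_interval=8)
for note in ["C", "E", "G", "C", "E", "G", "C", "E"]:
    model.observe(note)
```

After eight notes the memory is cleared, so prediction restarts once a new note arrives, while the learned counts persist across resets.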

MXTZ (Music Exponentially Transformed Through Time)
20220139361 · 2022-05-05 ·

Music instrument and information digital mute system, for capture, processing, and conversion of sound from analog to digital signals using central processing unit (CPU) microcontroller, Bluetooth and Wi-Fi microcontrollers, sound image localization filter, global positioning and geographic information systems, universal serial bus (USB) module, and battery. Mute body is positioned in close proximity to bell or horn, body and/or voice at proximal end, and/or configured to occlude sound source. Acoustically designed inner chamber within mute body captures acoustical variations of air pressure. Microphone is positioned at distal end of mute, and CB is positioned at proximal end of mute, with digital signal processor (DSP) that captures, processes, converts, and transmits digital sound and data. CPU manages and controls components and modules of CB. Bluetooth and Wi-Fi microcontrollers are configured to receive and send signals to and from other technological devices, components, and systems (cellphones, tablets, computers, earplugs, smart televisions, servers, and cloud platforms). USB module is configured to receive and send digital signals, supply power to CB, and charge and recharge battery.

INFORMATION PROCESSING SYSTEM, ELECTRONIC MUSICAL INSTRUMENT, AND INFORMATION PROCESSING METHOD
20230351989 · 2023-11-02 ·

An information processing system includes at least one memory configured to store instructions and at least one processor configured to implement the instructions to acquire first audio data indicative of audio of a target piece of music, and to cause a trained model to output first timbre data indicative of a timbre appropriate for the target piece of music by inputting input data, including the first audio data, into the trained model. The trained model is trained to learn a relationship between second audio data indicative of audio and second timbre data indicative of a timbre for each reference piece of a plurality of reference pieces of music.
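The audio-to-timbre relationship learned from reference pieces can be illustrated with a toy stand-in: a least-squares fit from synthetic audio-feature vectors (the "second audio data") to timbre vectors (the "second timbre data"), used to estimate timbre data for a new target piece. The linear model and every name below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference pieces: pairs of (audio data features, timbre data). The
# ground-truth relationship here is a synthetic linear map.
true_map = rng.normal(size=(6, 3))
audio_refs = rng.normal(size=(40, 6))   # second audio data (40 pieces)
timbre_refs = audio_refs @ true_map     # second timbre data

# "Trained model": least-squares fit of the audio-to-timbre relationship.
learned_map, *_ = np.linalg.lstsq(audio_refs, timbre_refs, rcond=None)

def estimate_timbre(first_audio_data):
    """Output first timbre data appropriate for the target piece."""
    return first_audio_data @ learned_map

target = rng.normal(size=6)   # first audio data of the target piece
timbre = estimate_timbre(target)
```

Because the synthetic data is noiseless and full rank, the fit recovers the underlying map exactly; a real system would learn a nonlinear relationship from recorded audio.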

Music Enhancement Systems
20230343312 · 2023-10-26 ·

In implementations of music enhancement systems, a computing device implements an enhancement system to receive input data describing a recorded acoustic waveform of a musical instrument. The recorded acoustic waveform is represented as an input mel spectrogram. The enhancement system generates an enhanced mel spectrogram by processing the input mel spectrogram using a first machine learning model trained on a first type of training data to generate enhanced mel spectrograms based on input mel spectrograms. An acoustic waveform of the musical instrument is generated by processing the enhanced mel spectrogram using a second machine learning model trained on a second type of training data to generate acoustic waveforms based on mel spectrograms. The generated acoustic waveform of the musical instrument does not include an acoustic artifact that is included in the recorded acoustic waveform of the musical instrument.
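The two-stage pipeline (a first model mapping an input mel spectrogram to an enhanced mel spectrogram, then a second model synthesizing a waveform from it) can be sketched with trivial stand-ins for both models: simple thresholding for the enhancement model and per-bin sinusoidal synthesis for the vocoder. Nothing below reflects the actual trained models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Input mel spectrogram of a recorded instrument: one clean harmonic
# band plus a broadband acoustic artifact (e.g. background hiss).
n_mels, n_frames = 16, 32
clean = np.zeros((n_mels, n_frames))
clean[4, :] = 1.0                          # the instrument's energy
artifact = 0.2 * rng.random((n_mels, n_frames))
input_mel = clean + artifact

def enhance(mel, threshold=0.25):
    """First model (stand-in): suppress low-energy artifact bins."""
    return np.where(mel >= threshold, mel, 0.0)

def vocoder(mel):
    """Second model (stand-in): synthesize a waveform by summing one
    sinusoid per active mel bin, weighted by its mean energy."""
    t = np.arange(n_frames * 64) / 8000.0
    wave = np.zeros_like(t)
    for m in range(n_mels):
        energy = mel[m].mean()
        if energy > 0:
            wave += energy * np.sin(2 * np.pi * 110.0 * (m + 1) * t)
    return wave

enhanced_mel = enhance(input_mel)
waveform = vocoder(enhanced_mel)
```

In this sketch the artifact bins fall below the threshold and are removed before synthesis, so the output waveform carries only the instrument band, mirroring the claim that the generated waveform excludes the recorded artifact.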