G10H2210/115

Cognitive music engine using unsupervised learning

A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector, which is provided as input to a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net comprises a plurality of computing layers, each composed of a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher-level hidden layer in the unsupervised neural net. The output vector is used to create an output musical piece.
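The pipeline the abstract describes (perturb an input feature vector, feed it through stacked unsupervised layers, read an output vector from a higher hidden layer) can be sketched roughly as follows. The dimensions, the Gaussian noise model, and the sigmoid units are illustrative assumptions, not the patented method.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb(features, noise_scale=0.05):
    """Perturb the musical feature vector with small Gaussian noise (assumed)."""
    return features + rng.normal(0.0, noise_scale, size=features.shape)

def forward(visible, weights, biases):
    """One visible-to-hidden pass of a simple unsupervised net (sigmoid units)."""
    return 1.0 / (1.0 + np.exp(-(visible @ weights + biases)))

# Hypothetical 8-dimensional musical feature vector (e.g. pitch-class weights).
features = rng.random(8)
visible = perturb(features)

# Two stacked layers; the output vector is read from the higher hidden layer.
w1, b1 = rng.normal(size=(8, 6)), np.zeros(6)
w2, b2 = rng.normal(size=(6, 4)), np.zeros(4)
output_vector = forward(forward(visible, w1, b1), w2, b2)
```

The random weights stand in for whatever the trained net would contain; the abstract does not specify the training procedure.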

ELECTRONIC MUSICAL INSTRUMENTS, METHOD AND STORAGE MEDIA THEREFOR
20220406282 · 2022-12-22

An electronic musical instrument includes: a performance controller; and at least one processor, configured to perform the following: instructing sound generation of a first musical tone in response to a first operation on the performance controller; in response to a second operation on the performance controller during the sound generation of the first musical tone, obtaining a first amplitude value of the first musical tone at a time of the second operation, and obtaining a second amplitude value at which a second musical tone is to be sound-produced in response to the second operation on the performance controller; acquiring a parameter value for determining at least one of pitch, timbre, and volume of the second musical tone based on a ratio of the first amplitude value to the second amplitude value; and instructing sound generation of the second musical tone in accordance with the acquired parameter value.
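The core computation here, deriving a parameter for the second tone from the ratio of the first tone's decayed amplitude to the second tone's target amplitude, can be sketched as below. Using the ratio as a volume scaling, and clamping it at 1.0, are illustrative choices; the abstract only says the parameter is "based on" the ratio.

```python
def second_tone_parameter(first_amplitude, second_amplitude, base_value=1.0):
    """Derive a parameter (here: a volume scaling) for the second tone from
    the ratio of the first tone's amplitude at the time of the second key
    press to the amplitude at which the second tone is to be produced."""
    ratio = first_amplitude / second_amplitude
    return base_value * min(ratio, 1.0)  # clamp is an assumed design choice

# E.g. the first tone has decayed to 0.3 when the second key (target 0.6) is struck:
param = second_tone_parameter(0.3, 0.6)  # ratio 0.5 → parameter 0.5
```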

MUSIC GENERATION DEVICE, MUSIC GENERATION METHOD, AND RECORDING MEDIUM
20230053899 · 2023-02-23

A music generation device includes: an acquisition unit that acquires first stream data and second stream data different from the first stream data; an accompaniment generation unit that generates accompaniment information, which is music data indicating an accompaniment, based on a change in the first stream data; a melody generation unit that generates melody information, which is music data indicating a melody, based on a change in the second stream data; a melody adjustment unit that adjusts the melody information in accordance with a key of the accompaniment indicated by the generated accompaniment information; a music combining unit that combines the accompaniment information and the adjusted melody information to generate musical piece information; and an output unit that outputs the generated musical piece information.
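The melody adjustment unit's job, forcing melody notes into the key of the generated accompaniment, can be sketched with MIDI note numbers. Restricting to major keys and moving out-of-key notes down to the nearest in-key pitch are assumptions for brevity; the abstract does not specify the adjustment rule.

```python
# Major-scale pitch classes relative to the root (assumed; major keys only).
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]
KEY_ROOT = {"C": 0, "D": 2, "F": 5, "G": 7, "A": 9}

def scale_pitch_classes(key):
    """Pitch classes (0-11) belonging to the given major key."""
    root = KEY_ROOT[key]
    return {(root + step) % 12 for step in MAJOR_STEPS}

def adjust_melody(melody_midi, key):
    """Move each out-of-key MIDI note down to the nearest in-key pitch."""
    allowed = scale_pitch_classes(key)
    adjusted = []
    for note in melody_midi:
        while note % 12 not in allowed:
            note -= 1
        adjusted.append(note)
    return adjusted

adjust_melody([60, 61, 66], "C")  # C stays, C# → C, F# → F
```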

METHODS AND SYSTEMS FOR FACILITATING GENERATING MUSIC IN REAL-TIME USING PROGRESSIVE PARAMETERS
20230114371 · 2023-04-13

The invention generates progressive music in real-time for video games using random, seeded random, and manually input variables to affect melody, phrase length, harmonic chords and complexity, and percussive accompaniment. As the game is played, variables may be passed in that change the music to increase or decrease complexity and tension levels and to interpolate between styles. The generated music then progresses from stable, simple, and consonant to more tense, dissonant, and complex melodies, harmonies and rhythms, and back to the original stage as a musical resolution. Through variables controlling musical parameters, music may progressively change from the atonal region where there is no clear resolution or stability, to tonal, where there is only consonance and stability, and anywhere in between. These variables are assigned through a middleware or a game-engine setup that uses the current device as an audio source plugin, or manually coded into the individual video game.
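One way to picture a tension variable steering melody from consonant to dissonant and back, as the abstract describes, is to let the tension value bias interval selection. The interval pools, the tension curve, and the random walk over pitches are all illustrative assumptions.

```python
import random

# Melodic intervals in semitones, split by rough consonance (assumed pools).
CONSONANT = [0, 3, 4, 5, 7, 8, 9]
DISSONANT = [1, 2, 6, 10, 11]

def next_interval(tension, rng):
    """Pick a melodic interval; higher tension (0..1) favours dissonance."""
    pool = DISSONANT if rng.random() < tension else CONSONANT
    return rng.choice(pool)

def phrase(tensions, seed=0):
    """Generate a pitch sequence whose character follows the tension curve,
    e.g. rising toward dissonance and resolving back to stability."""
    rng = random.Random(seed)
    pitch, out = 60, [60]
    for t in tensions:
        pitch += next_interval(t, rng) * rng.choice([-1, 1])
        out.append(pitch)
    return out

# Tension rises then resolves, like the musical arc the abstract describes.
phrase([0.0, 0.5, 1.0, 0.5, 0.0])
```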

CONTROLLABLE MUSIC GENERATION
20230147185 · 2023-05-11

The present disclosure describes techniques for controllable music generation. The techniques comprise extracting latent vectors from unlabelled data, the unlabelled data comprising a plurality of music note sequences indicating a plurality of pieces of music; clustering the latent vectors into a plurality of classes corresponding to a plurality of music styles; generating a plurality of labelled latent vectors corresponding to the plurality of music styles, each of the plurality of labelled latent vectors comprising information indicating features of a corresponding music style; and generating a first music note sequence indicating a first piece of music in a particular music style among the plurality of music styles, based at least in part on the particular labelled latent vector corresponding to that music style.
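The clustering step, grouping unlabelled latent vectors into style classes whose representatives then condition generation, can be sketched with plain k-means. The 2-D latents, the two synthetic "styles", and using centroids as the labelled latent vectors are assumptions; the abstract does not name a clustering algorithm.

```python
import numpy as np

def kmeans(latents, k, iters=20, seed=0):
    """Cluster latent vectors into k style classes (plain k-means)."""
    rng = np.random.default_rng(seed)
    centroids = latents[rng.choice(len(latents), k, replace=False)]
    for _ in range(iters):
        # Assign each latent to its nearest centroid.
        labels = np.argmin(
            np.linalg.norm(latents[:, None] - centroids[None], axis=2), axis=1)
        # Recompute centroids, keeping the old one if a cluster goes empty.
        centroids = np.stack(
            [latents[labels == c].mean(axis=0) if np.any(labels == c)
             else centroids[c] for c in range(k)])
    return labels, centroids  # centroids act as "labelled latent vectors"

# Hypothetical 2-D latents drawn from two well-separated "styles".
rng = np.random.default_rng(1)
latents = np.concatenate([rng.normal(0, 0.1, (20, 2)),
                          rng.normal(5, 0.1, (20, 2))])
labels, style_vectors = kmeans(latents, k=2)
```

A generator conditioned on one of `style_vectors` would then produce note sequences in that style, per the abstract's final step.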

AUTOMATIC PERFORMANCE APPARATUS, AUTOMATIC PERFORMANCE METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
20230206889 · 2023-06-29

The disclosure provides an automatic performance apparatus, an automatic performance method, and a non-transitory computer readable medium. Notes to be sounded are stored in chronological order for each beat position, which is a sound generation timing in a performance pattern. A sound generation probability pattern stores, for each beat position, a probability of generating sound at that position. According to the probability stored in the sound generation probability pattern, whether or not to generate sound is determined for each beat position of the performance pattern.
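The per-beat probabilistic gating is simple to sketch: for each beat position, the stored note sounds with the stored probability. The drum-style note names and the seeded generator are illustrative assumptions.

```python
import random

def realize(performance_pattern, probability_pattern, seed=0):
    """Sound each beat's stored note with that beat's stored probability;
    None marks a beat position where no sound is generated."""
    rng = random.Random(seed)
    return [note if rng.random() < p else None
            for note, p in zip(performance_pattern, probability_pattern)]

# Beats with probability 1.0 always sound; probability 0.0 never does.
realize(["kick", "hat", "snare", "hat"], [1.0, 0.5, 1.0, 0.5])
```

Re-running with different seeds yields different realizations of the same performance pattern, which is the variation the mechanism provides.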

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
11430419 · 2022-08-30

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and time and space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user, based on user feedback and autonomous piece analysis, and maintaining a system user profile reflecting those tastes and preferences; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users, and modifying the musical experience descriptors and/or time and/or space parameters provided to the engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population of system users and meet future system user requests for automated music compositions.
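The population taste aggregation step can be sketched as averaging per-user preference weights over musical experience descriptors into a single population profile. The descriptor names and weight scale are hypothetical; the patent does not disclose the aggregation formula.

```python
def aggregate_tastes(user_profiles):
    """Average per-user preference weights into a population taste profile.

    Each profile maps a musical experience descriptor (hypothetical names)
    to a preference weight in [0, 1]."""
    totals = {}
    for profile in user_profiles:
        for descriptor, weight in profile.items():
            totals[descriptor] = totals.get(descriptor, 0.0) + weight
    n = len(user_profiles)
    return {d: w / n for d, w in totals.items()}

population = aggregate_tastes([
    {"happy": 0.9, "uptempo": 0.4},
    {"happy": 0.5, "uptempo": 0.8},
])
# population["happy"] ≈ 0.7, population["uptempo"] ≈ 0.6
```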

SYSTEMS AND METHODS FOR GENERATING A CONTINUOUS MUSIC SOUNDSCAPE USING AUTOMATIC COMPOSITION

Disclosed are systems and techniques for creating a personalized sound environment for a user. Output is received from a plurality of sensors, wherein the sensor output detects a state of a user and an environment in which the user is active. Two or more sound sections for presentation to the user are selected from a plurality of sound sections, the selecting based on the sensor output and automatically determined sound preferences of the user. A first sound phase is generated, wherein the first sound phase includes the two or more selected sound sections. A personalized sound environment for presentation to the user is generated, wherein the personalized sound environment includes at least the first sound phase and a second sound phase. The personalized sound environment is presented to the user on a user device.
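The selection step, choosing two or more sound sections from sensor-derived state plus learned preferences, can be sketched as a top-k scoring pass. The section attributes, the sensor field, and the linear scoring rule are all illustrative assumptions.

```python
def select_sections(sensor_state, sections, preferences, k=2):
    """Score each candidate section against the sensed user state and the
    user's automatically determined preferences; pick the top k for the
    first sound phase. Scoring rule is an assumed linear combination."""
    def score(section):
        return (section["energy"] * sensor_state["activity"]
                + preferences.get(section["mood"], 0.0))
    return sorted(sections, key=score, reverse=True)[:k]

sections = [
    {"name": "rain",  "energy": 0.2, "mood": "calm"},
    {"name": "synth", "energy": 0.9, "mood": "focus"},
    {"name": "drone", "energy": 0.4, "mood": "calm"},
]
phase_one = select_sections({"activity": 1.0}, sections, {"focus": 0.5})
```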

Audio Techniques for Music Content Generation
20230259327 · 2023-08-17

Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal (e.g., an audio signal graph) relative to time and a graph of the signal relative to beats (e.g., a signal graph). The signal graph is invariant to tempo, which allows for tempo invariant modification of audio parameters of the music content in addition to tempo variant modifications based on the audio signal graph.