G10H2220/441

Method for embedding and executing audio semantics

Aspects of the subject disclosure may include, for example, a device that includes a processing system having a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations, where the operations include determining parameters for adapting audio in the content to the device, wherein the device renders the content, and wherein the parameters are based on semantic metadata embedded in the content, adapting the audio in the content based on the parameters, and rendering the content, as adapted by the parameters, to represent a semantic in the semantic metadata. Other embodiments are disclosed.
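The abstract describes deriving adaptation parameters from semantic metadata embedded in content, then adapting and rendering the audio. A minimal sketch of that flow follows; the metadata schema, the function names, and the dialogue-boost rule are all hypothetical illustrations, since the abstract does not specify a format:

```python
# Hypothetical sketch: metadata-driven audio adaptation. The "semantic"
# and "small_speaker" fields are assumptions, not part of the patent.

def parameters_from_semantics(metadata, device_profile):
    """Map semantic metadata plus device capabilities to adaptation parameters."""
    params = {"gain": 1.0}
    if metadata.get("semantic") == "dialogue" and device_profile.get("small_speaker"):
        # Example rule: boost level so speech stays intelligible on a small speaker.
        params["gain"] = 1.5
    return params

def adapt_audio(samples, params):
    """Apply the derived parameters (here, just a gain) to the audio samples."""
    gain = params["gain"]
    return [s * gain for s in samples]

params = parameters_from_semantics({"semantic": "dialogue"}, {"small_speaker": True})
adapted = adapt_audio([0.1, -0.2], params)
```

A real renderer would derive richer parameters (equalization, spatialization) and apply them in the device's signal chain; the gain here only stands in for that step.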

MUSIC GENERATOR
20230020181 · 2023-01-19

Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining of loops is performed based on the set of composition rules and attributes of loops in the set of loops.
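The two steps in this abstract (learning rules from analyzed combinations, then selecting loops under those rules) can be sketched as follows. This is a simplified stand-in, assuming loops are tagged with categories and that a "rule" is just an observed co-occurring pair; the real system's rule representation is not specified:

```python
import itertools
import random

def learn_rules(example_mixes):
    """Derive composition rules as the set of loop-category pairs that
    were observed overlapping in the analyzed music content."""
    allowed = set()
    for mix in example_mixes:
        for a, b in itertools.combinations(sorted(mix), 2):
            allowed.add((a, b))
    return allowed

def generate(loops_by_category, rules, rng):
    """Select one loop per category, keeping only categories the learned
    rules allow to overlap with every category already chosen."""
    cats = sorted(loops_by_category)
    chosen = [cats[0]]
    for c in cats[1:]:
        if all(tuple(sorted((c, p))) in rules for p in chosen):
            chosen.append(c)
    return [rng.choice(loops_by_category[c]) for c in chosen]

# "bass"+"pads" never co-occurred in the examples, so it is never generated.
rules = learn_rules([{"drums", "bass"}, {"drums", "pads"}])
mix = generate({"bass": ["b1"], "drums": ["d1"], "pads": ["p1"]},
               rules, random.Random(0))
```

The abstract also mentions loop attributes feeding the selection; those would add further filters inside `generate`.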

Music generator
11450301 · 2022-09-20

Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining of loops is performed based on the set of composition rules and attributes of loops in the set of loops.

Method and apparatus for generating music

A terminal for generating music may identify, based on execution of scenario recognition, scenarios for images previously received by the terminal. The terminal may generate respective description texts for the scenarios. The terminal may execute keyword-based rhyme matching based on the respective description texts. The terminal may generate respective rhyming lyrics corresponding to the images. The terminal may convert the respective rhyming lyrics corresponding to the images into a speech. The terminal may synthesize the speech with preset background music to obtain image music.
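The pipeline above (scenario recognition, description text, keyword-based rhyme matching, lyrics, speech, background-music mix) can be sketched end to end. Everything here is a hypothetical stub: the scenario dictionary replaces real image recognition, the rhyme key is a crude suffix match rather than a phonetic one, and the speech/mixing stages are omitted:

```python
def describe(scenario):
    # Stub for scenario recognition plus text generation; a real terminal
    # would run image classification and a captioning model here.
    return {"beach": "waves roll onto golden sand",
            "city": "neon lights over a crowded strand"}[scenario]

def rhyme_key(line):
    """Crude rhyme key: the last word's letters from its final vowel on."""
    word = line.split()[-1]
    for i in range(len(word) - 1, -1, -1):
        if word[i] in "aeiou":
            return word[i:]
    return word

def lyrics_for(scenarios):
    """Keep description lines whose endings rhyme with the first line."""
    texts = [describe(s) for s in scenarios]
    key = rhyme_key(texts[0])
    return [t for t in texts if rhyme_key(t) == key]

lyrics = lyrics_for(["beach", "city"])  # both lines end in "-and"
```

Converting the rhyming lyrics to speech and synthesizing them with preset background music would follow as separate stages, typically via a TTS engine and an audio mixer.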

Method implemented by processor, electronic device, and performance data display system
11302296 · 2022-04-12

A method implemented by a processor includes receiving performance data including pitch data; determining, based on the pitch data that is included in the received performance data, a key among a plurality of keys; selecting, based on the determined key and the pitch data, a first-type image from among a plurality of first-type images; and displaying the selected first-type image.
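The key-determination and image-selection steps can be sketched with a simple pitch-class match. The scale-coverage scoring and the key-to-image mapping are illustrative assumptions; the patent does not disclose its scoring method:

```python
# Semitone offsets of a major scale, used as a key template.
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}

def determine_key(midi_pitches):
    """Pick the major key whose scale covers the most played pitch classes
    (a stand-in for the patent's key-determination step; ties resolve to
    the lowest tonic)."""
    classes = [p % 12 for p in midi_pitches]
    def score(tonic):
        return sum((c - tonic) % 12 in MAJOR_SCALE for c in classes)
    return max(range(12), key=score)

def select_image(key, first_type_images):
    # Hypothetical selection: map the determined key onto one of the
    # plurality of first-type images.
    return first_type_images[key % len(first_type_images)]

key = determine_key([60, 64, 67, 72])  # C, E, G, C -> tonic 0 (C)
```

A production system would likely weight pitch classes by duration and use a minor-key template as well.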

SYSTEMS AND METHODS FOR VISUAL IMAGE AUDIO COMPOSITION BASED ON USER INPUT
20210319774 · 2021-10-14 ·

The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for audio composition from a diversity of visual images and user determined sound database sources.

Audio Techniques for Music Content Generation
20210247954 · 2021-08-12 ·

Techniques are disclosed relating to implementing audio techniques for real-time audio generation. For example, a music generator system may generate new music content from playback music content based on different parameter representations of an audio signal. In some cases, an audio signal can be represented by both a graph of the signal relative to time (e.g., an audio signal graph) and a graph of the signal relative to beats (e.g., a signal graph). The signal graph is invariant to tempo, which allows for tempo-invariant modification of audio parameters of the music content in addition to tempo-variant modifications based on the audio signal graph.
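The time-graph/beat-graph distinction reduces to a change of axis: at a given tempo, `beats = seconds * bpm / 60`. A sketch of re-expressing an automation curve in the beat domain and rendering it back at a different tempo (the function names are illustrative):

```python
def to_beats(points_sec, bpm):
    """Re-express (time_sec, value) automation points as (beat, value)."""
    return [(t * bpm / 60.0, v) for t, v in points_sec]

def to_seconds(points_beats, bpm):
    """Render a beat-domain graph at any tempo; its shape in beats is
    tempo invariant, so the musical timing is preserved."""
    return [(b * 60.0 / bpm, v) for b, v in points_beats]

# A fade reaching full value at 2 s under 120 BPM lands on beat 4 ...
beat_graph = to_beats([(0.0, 0.0), (2.0, 1.0)], bpm=120)
# ... and still lands on beat 4 when rendered at 90 BPM (now ~2.67 s).
at_90 = to_seconds(beat_graph, bpm=90)
```

A parameter edited on the audio signal graph (seconds) would instead keep its wall-clock timing across tempo changes, which is the tempo-variant behavior the abstract contrasts.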

Listener-Defined Controls for Music Content Generation
20210247955 · 2021-08-12 ·

Techniques are disclosed relating to implementing user-created controls to modify music content. A music generator system may be configured to automatically generate output music content by selecting and combining audio tracks based on various parameters. Users may create their own control elements that the music generator system may train (e.g., using AI techniques) to generate output music content according to a user's intended functionality of a user-created control element.
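Training a user-created control amounts to learning a mapping from the control's value to generator parameters from user-provided examples. As a deliberately tiny stand-in for the "AI techniques" the abstract mentions, a least-squares line fit over (control value, intensity) pairs:

```python
def train_control(examples):
    """Fit control value -> parameter by ordinary least squares.
    (A hypothetical, minimal stand-in for training a user-created control.)"""
    n = len(examples)
    sx = sum(x for x, _ in examples)
    sy = sum(y for _, y in examples)
    sxx = sum(x * x for x, _ in examples)
    sxy = sum(x * y for x, y in examples)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda v: slope * v + intercept

# User demonstrates: control at 0.0 means low intensity, 1.0 means high.
control = train_control([(0.0, 0.1), (1.0, 0.9)])
```

A real system would map one control to many generation parameters and use a far more expressive model; the point is only that the control's semantics come from the user's examples, not from a fixed definition.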

Block-Chain Ledger Based Tracking of Generated Music Content
20210248213 · 2021-08-12 ·

Techniques are disclosed relating to tracking contributions to composed music content. In some embodiments, a computer system determines playback data for a music content mix, where the playback data indicates characteristics of playback of the music content mix and the music content mix includes a determined combination of multiple audio tracks. In some embodiments, the system records, in an electronic block-chain ledger data structure, information specifying individual playback data for one or more of the multiple audio tracks in the music content mix. The information specifying individual playback data for an individual audio track may include usage data for the individual audio track and signature information associated with the individual audio track.
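Recording per-track playback data in a block-chain ledger can be sketched as a hash-linked append-only list: each block commits to the previous block's hash, making recorded usage and signature information tamper evident. The record fields below are hypothetical:

```python
import hashlib
import json

def append_block(chain, playback_record):
    """Append a playback-data record, linking it to the previous block's
    hash so earlier entries cannot be altered undetected."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "record": playback_record},
                      sort_keys=True)
    chain.append({"prev": prev_hash,
                  "record": playback_record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

ledger = []
append_block(ledger, {"track": "loop-17", "plays": 3, "signature": "artist-a"})
append_block(ledger, {"track": "loop-02", "plays": 1, "signature": "artist-b"})
```

A deployed system would use a distributed ledger with consensus among parties rather than a single in-memory list, but the linkage structure is the same.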

Music Content Generation Using Image Representations of Audio Files
20210248983 · 2021-08-12 ·

Techniques are disclosed relating to automatically generating new music content based on image representations of audio files. A computer system generates image representations of audio files. The image representations may be generated, for example, based on data in the audio files and MIDI representations of the audio files. Audio files for combination may then be selected based on analysis of the image representations. For example, image-based machine learning algorithms may be implemented to assess the image representations and select music for combining.
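One common image representation of audio is a grid of per-frame spectral magnitudes (a spectrogram-like matrix). A minimal pure-Python sketch, assuming this representation; the abstract leaves the actual image format open and also mentions MIDI-derived images:

```python
import cmath

def dft_mags(frame):
    """Magnitudes of the first half of the frame's discrete Fourier transform."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def image_representation(samples, frame_size=8):
    """Rows of spectral magnitudes: a crude 'image' of the audio that
    image-based machine learning could then compare and select from."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    return [dft_mags(f) for f in frames]

tone = [0, 1, 0, -1] * 4        # period-4 wave -> energy in bin 2 of 8
img = image_representation(tone)
```

In practice an FFT library and log-mel scaling would replace the naive DFT, and the resulting matrices would be fed to a convolutional model for the selection step.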