ELECTRONIC WIND INSTRUMENT (ELECTRONIC MUSICAL INSTRUMENT) AND MANUFACTURING METHOD THEREOF
20210201872 · 2021-07-01

To provide an electronic wind instrument capable of accurately detecting the amount of rotation of a transmission member. During a performance, external light (such as stage lighting) can easily strike the upper surface side of the instrument main body. However, because the light-receiving section of an optical sensor faces the bottom surface side of the instrument main body, external light from the upper surface side can be prevented from reaching the light-receiving section. As a result, erroneous detection of external light by the optical sensor is suppressed, and the rotation amount of the transmission member can be accurately detected by the optical sensor.
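As an illustration of the rotation-amount detection the abstract describes, the sketch below converts optical-sensor pulse counts into degrees of rotation of the transmission member. The pulse-count interface and the sensor resolution are assumptions for illustration, not details from the patent.

```python
# Assumed resolution of the optical sensor on the transmission member
PULSES_PER_REVOLUTION = 24

def rotation_degrees(pulse_count: int) -> float:
    """Convert a count of sensor pulses into degrees of rotation."""
    return 360.0 * pulse_count / PULSES_PER_REVOLUTION

# Six pulses on a 24-pulse sensor correspond to a quarter turn
print(rotation_degrees(6))  # 90.0
```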

METHOD FOR EMBEDDING AND EXECUTING AUDIO SEMANTICS

Aspects of the subject disclosure may include, for example, a device that includes a processing system having a processor and a memory that stores executable instructions. When executed by the processing system, the instructions facilitate performance of operations that include: determining parameters for adapting audio in content to the device, where the device renders the content and the parameters are based on semantic metadata embedded in the content; adapting the audio in the content based on the parameters; and rendering the content, as adapted by the parameters, to represent a semantic in the semantic metadata. Other embodiments are disclosed.
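A minimal sketch of the flow described above: derive rendering parameters from a semantic tag embedded in the content, then adapt the audio with those parameters. Every name here (the semantic tags, the parameter table, the gain-based adaptation) is an assumption for illustration, not the patent's API.

```python
def parameters_from_metadata(metadata: dict, device_profile: dict) -> dict:
    """Map an embedded semantic tag to device-appropriate parameters."""
    semantic = metadata.get("semantic", "neutral")
    table = {  # hypothetical semantic -> parameter mapping
        "dialogue": {"gain_db": 3.0, "mono": True},
        "ambience": {"gain_db": -2.0, "mono": False},
        "neutral":  {"gain_db": 0.0, "mono": False},
    }
    params = dict(table[semantic])
    if device_profile.get("channels", 2) == 1:  # adapt to the device
        params["mono"] = True
    return params

def adapt_audio(samples: list, params: dict) -> list:
    """Apply the gain parameter to the audio samples."""
    scale = 10 ** (params["gain_db"] / 20)
    return [s * scale for s in samples]

params = parameters_from_metadata({"semantic": "dialogue"}, {"channels": 1})
boosted = adapt_audio([1.0, 0.5], params)  # dialogue boosted before rendering
```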

Systems and methods for visual image audio composition based on user input
11004434 · 2021-05-11

The present invention relates to systems and methods for visual image audio composition. In particular, the present invention provides systems and methods for composing audio from a variety of visual images and user-determined sound database sources.
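One way such an image-to-audio mapping might look is sketched below: pixel brightness values are mapped onto notes drawn from a scale. The brightness-to-note scheme and the MIDI-note scale are assumptions for illustration, not the patent's method.

```python
def image_to_notes(brightness_rows, scale=(60, 62, 64, 65, 67, 69, 71)):
    """Map each pixel's brightness (0-255) onto a MIDI note in a scale.

    brightness_rows: rows of 8-bit brightness values from a visual image.
    scale: a user-determined pool of notes (here, C major as MIDI numbers).
    """
    notes = []
    for row in brightness_rows:
        for b in row:
            notes.append(scale[b * len(scale) // 256])  # bucket by brightness
    return notes

# A dark pixel maps to the lowest note, a bright pixel to the highest
print(image_to_notes([[0, 255]]))  # [60, 71]
```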

METHOD OF AND SYSTEM FOR AUTOMATICALLY GENERATING DIGITAL PERFORMANCES OF MUSIC COMPOSITIONS USING NOTES SELECTED FROM VIRTUAL MUSICAL INSTRUMENTS BASED ON THE MUSIC-THEORETIC STATES OF THE MUSIC COMPOSITIONS

An automated music performance system that is driven by the music-theoretic state descriptors of any musical structure (e.g. a music composition or sound recording). The system can be used with next generation digital audio workstations (DAWs), virtual studio technology (VST) plugins, virtual music instrument libraries, and automated music composition and generation engines, systems and platforms. The automated music performance system generates unique digital performances of pieces of music, using virtual musical instruments created from sampled notes or sounds and/or synthesized notes or sounds. Each virtual music instrument has its own set of music-theoretic state responsive performance rules that are automatically triggered by the music-theoretic state descriptors of the music composition or performance to be digitally performed. An automated virtual music instrument (VMI) library selection and performance subsystem is provided for managing the virtual musical instruments during the automated digital music performance process.
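The rule-triggering idea can be sketched as follows: each virtual instrument carries performance rules keyed by a music-theoretic state descriptor, and the descriptor of the passage being performed selects the rule. The descriptor names and articulations below are illustrative assumptions, not the patent's actual rule set.

```python
class VirtualInstrument:
    """A virtual music instrument with state-responsive performance rules."""

    def __init__(self, name: str, rules: dict):
        self.name = name
        self.rules = rules  # music-theoretic state descriptor -> articulation

    def perform(self, state_descriptor: str, default: str = "sustain") -> str:
        """Return the articulation triggered by the given state descriptor."""
        return self.rules.get(state_descriptor, default)

# Hypothetical rule set for one instrument in a VMI library
violin = VirtualInstrument("violin", {"high_tension": "tremolo",
                                      "cadence": "legato"})
print(violin.perform("high_tension"))  # tremolo
```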

Music selections for personal media compositions

In some implementations, a computing device can generate personalized music selections to associate with any collections of visual media (e.g., photos and videos) stored on the computing device. A user may prefer particular genres of music and listen to some genres more frequently than others. The computing device can create measures of the user's genre preferences and use these measures to select music that is preferred by the user and music that may be significant or relevant to the particular collection of visual media. The computing device may also determine music that was being played when and where the visual media were being created. The computing device may store the visual media and music items in association with each other. The computing device may generate composite media items that combine the visual media and music items. When the visual media are viewed, the selected music item is also played.
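The genre-preference measure described above might be sketched as a play-count share, then used to rank candidate music for a media collection. The data shapes (a play history of genre labels, candidates as title/genre pairs) are assumptions for illustration.

```python
from collections import Counter

def genre_preferences(play_history: list) -> dict:
    """Measure genre preference as each genre's share of total plays."""
    counts = Counter(play_history)
    total = sum(counts.values())
    return {genre: count / total for genre, count in counts.items()}

def rank_candidates(candidates: list, prefs: dict) -> list:
    """Sort (title, genre) candidates by the user's genre preference."""
    return sorted(candidates, key=lambda c: prefs.get(c[1], 0.0), reverse=True)

prefs = genre_preferences(["jazz", "jazz", "rock", "jazz"])
ranked = rank_candidates([("Track A", "rock"), ("Track B", "jazz")], prefs)
```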

MUSIC GENERATOR
20210027754 · 2021-01-28

Techniques are disclosed relating to determining composition rules, based on existing music content, to automatically generate new music content. In some embodiments, a computer system accesses a set of music content and generates a set of composition rules based on analyzing combinations of multiple loops in the set of music content. In some embodiments, the system generates new music content by selecting loops from a set of loops and combining selected ones of the loops such that multiple ones of the loops overlap in time. In some embodiments, the selecting and combining loops is performed based on the set of composition rules and attributes of loops in the set of loops.
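A toy version of the rule-learning step could look like this: count which loops overlapped in time across existing content, then reuse those co-occurrence counts to pick a loop to layer into new content. This co-occurrence counting is one plausible reading of "composition rules", not the patent's actual rule representation.

```python
from collections import Counter
from itertools import combinations

def learn_rules(tracks: list) -> Counter:
    """Count how often each pair of loops overlaps across existing content.

    tracks: each element is the set of loops that play together in one piece.
    """
    pair_counts = Counter()
    for loops in tracks:
        for pair in combinations(sorted(loops), 2):
            pair_counts[pair] += 1
    return pair_counts

def pick_partner(loop: str, rules: Counter):
    """Choose the loop most often combined with `loop` in the corpus."""
    best, best_count = None, 0
    for (a, b), count in rules.items():
        other = b if a == loop else a if b == loop else None
        if other is not None and count > best_count:
            best, best_count = other, count
    return best

rules = learn_rules([{"drums", "bass"}, {"drums", "bass"}, {"drums", "pad"}])
print(pick_partner("drums", rules))  # bass
```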

Intelligent system for matching audio with video
20210020149 · 2021-01-21

An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length and category, the director's special requirements, actors' expressions, movement, weather, scenes, buildings, spatial and temporal properties, and objects, together with a music analysis module targeting recorded music form, sectional turns, style, melody, and emotional tension. An AI matching module then matches the video characteristics from the video analysis module with the musical characteristics from the music analysis module, so as to quickly complete a creative composition selection function for matching audio with a video.
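One simple way the matching step could be sketched is a similarity score between the two modules' feature vectors, with the best-scoring music item selected for the video. The feature names and cosine-similarity scoring below are illustrative assumptions, not the system's actual AI matching model.

```python
def match_score(video_features: dict, music_features: dict) -> float:
    """Cosine similarity over the feature dimensions the two vectors share."""
    keys = sorted(set(video_features) & set(music_features))
    dot = sum(video_features[k] * music_features[k] for k in keys)
    nv = sum(video_features[k] ** 2 for k in keys) ** 0.5
    nm = sum(music_features[k] ** 2 for k in keys) ** 0.5
    return dot / (nv * nm) if nv and nm else 0.0

def best_match(video_features: dict, music_library: list):
    """Pick the (name, features) music item that best matches the video."""
    return max(music_library, key=lambda m: match_score(video_features, m[1]))

video = {"pace": 0.9, "tension": 0.8}  # hypothetical analysis output
library = [("calm piece",    {"pace": 0.1, "tension": 0.2}),
           ("intense piece", {"pace": 0.9, "tension": 0.8})]
print(best_match(video, library)[0])  # intense piece
```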

Automatic song generation

In accordance with implementations of the subject matter described herein, there is provided a solution for supporting a machine to automatically generate a song. In this solution, an input from a user is used to determine the user's creation intention with respect to the song to be generated. Lyrics of the song are generated based on the creation intention, and a template for the song is then generated based at least in part on the lyrics. The template indicates a melody that matches the lyrics. In this way, it is feasible to automatically create a melody and lyrics that not only conform to the user's creation intention but also match each other.
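The three-stage pipeline described above (intention, then lyrics, then a template indicating a matching melody) can be sketched as below. Every function body is a toy stand-in: the intention fields, lyric patterns, and the word-per-note melody scheme are assumptions for illustration only.

```python
def determine_intention(user_input: dict) -> dict:
    """Derive a creation intention from the user's input."""
    return {"theme": user_input.get("theme", "love"),
            "mood": user_input.get("mood", "happy")}

def generate_lyrics(intention: dict) -> list:
    """Generate lyric lines based on the creation intention."""
    return [f"A {intention['mood']} song about {intention['theme']}",
            f"Still singing of {intention['theme']}"]

def generate_template(lyrics: list, intention: dict) -> dict:
    """Build a template whose melody matches the lyrics line by line."""
    # One melody note per word; a brighter scale for a "happy" mood (assumed)
    scale = [60, 62, 64, 65, 67] if intention["mood"] == "happy" else [57, 60, 62]
    melody = [[scale[i % len(scale)] for i in range(len(line.split()))]
              for line in lyrics]
    return {"melody": melody}

intention = determine_intention({"theme": "summer", "mood": "happy"})
lyrics = generate_lyrics(intention)
template = generate_template(lyrics, intention)
```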