G10H2210/105

INTERACTIVE MOVEMENT AUDIO ENGINE
20230197040 · 2023-06-22

A method for generating an audio output is described. Image inputs of interactive movements by a user, captured by an image sensor, are received. The interactive movements are mapped to a sequence of audio element identifiers. The sequence of audio element identifiers is processed to generate a musical sequence by performing music theory rule enforcement on the sequence. An audio output that represents the musical sequence is generated.
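
The abstract does not disclose the mapping or the rules themselves. As a rough illustration only, the sketch below maps a normalized gesture height to a MIDI note and snaps it to a scale as a stand-in for the "music theory rule enforcement" step; the gesture values, note range, and snapping rule are all assumptions, not the patent's method.

```python
# Hypothetical sketch: map gesture heights (0.0-1.0, derived from an image
# sensor) to MIDI note numbers, then enforce a scale constraint as a
# stand-in for music theory rule enforcement. All values are assumptions.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes permitted by the rule

def gesture_to_note(height: float, low: int = 48, high: int = 84) -> int:
    """Linearly map a normalized gesture height to a MIDI note number."""
    return round(low + height * (high - low))

def enforce_scale(note: int) -> int:
    """Snap a note down to the nearest pitch allowed by the scale."""
    while note % 12 not in C_MAJOR:
        note -= 1
    return note

gestures = [0.10, 0.42, 0.77, 0.55]          # per-frame movement features
sequence = [enforce_scale(gesture_to_note(g)) for g in gestures]
print(sequence)  # a rule-conformant musical sequence of MIDI notes
```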

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

A method of generating music content from input music content is described that includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in the sound repository. Models in the form of source code are then sent to a melody generator. First, the generator is set with specific parameters using a controller conforming to MIDI standards, supplemented with composition characteristics read from the user preference database. Next, the content is passed to automatic generation based on artificial intelligence algorithms, and the digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music record. Finally, the composition and its record are verified by the critic module using algorithms based on neural networks.
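
The abstract describes a generate-then-verify pipeline without giving its algorithms. A minimal sketch of that shape is shown below, assuming a toy generator parameterized by a MIDI-style controller value and stored preferences, and a smoothness check standing in for the neural-network critic; every function and parameter name is an illustrative assumption.

```python
# Hypothetical pipeline sketch: generator parameterized by a MIDI controller
# value plus user preferences, followed by a critic that accepts or rejects.
import random

def generate_score(tempo_cc: int, prefs: dict, length: int = 8) -> list[int]:
    """Produce a toy 'digital score' as a list of MIDI note numbers."""
    rng = random.Random(tempo_cc)
    low, high = prefs.get("range", (60, 72))
    return [rng.randint(low, high) for _ in range(length)]

def critic(score: list[int], prefs: dict) -> bool:
    """Stand-in for the critic module: check melodic smoothness."""
    max_leap = prefs.get("max_leap", 7)
    return all(abs(a - b) <= max_leap for a, b in zip(score, score[1:]))

prefs = {"range": (60, 72), "max_leap": 5}   # read from a preference database
score = generate_score(tempo_cc=96, prefs=prefs)
while not critic(score, prefs):              # regenerate until accepted
    score = generate_score(tempo_cc=random.randint(0, 127), prefs=prefs)
print(score)
```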

Music Generator: Generation of Continuous Personalized Music
20220059063 · 2022-02-24

Techniques are disclosed relating to automatically generating new music content. In some embodiments, a computing system receives user input specifying a user-defined music control element. The computing system may train a machine learning model to change both composition and performance parameters based on user adjustments to the user-defined music control element. In embodiments in which the composition and performance subsystems are on different devices, one device may transmit configuration information to another device, where the configuration information specifies how to adjust parameters based on user input to the user-defined music control element. Disclosed techniques may facilitate centralized learning for human-like music production while allowing individualized customization for individual users. Further, disclosed techniques may allow artists to define their own abstract music controls and make those controls available to end users.
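
One way to picture the transmitted configuration is a mapping from a single abstract control to several concrete parameters. The sketch below uses fixed linear weights where the patent describes a trained model; the class, field names, and weight values are assumptions for illustration only.

```python
# Hypothetical sketch: one user-defined control adjusts both composition
# and performance parameters. Linear weights stand in for the trained model.
from dataclasses import dataclass

@dataclass
class ControlMapping:
    """Configuration one device could transmit to another device."""
    name: str
    comp_weights: dict[str, float]   # composition-parameter sensitivities
    perf_weights: dict[str, float]   # performance-parameter sensitivities

    def apply(self, knob: float) -> dict[str, float]:
        """Turn a 0..1 knob position into concrete parameter values."""
        params = {k: w * knob for k, w in self.comp_weights.items()}
        params.update({k: w * knob for k, w in self.perf_weights.items()})
        return params

intensity = ControlMapping(
    name="intensity",
    comp_weights={"note_density": 12.0, "harmonic_tension": 0.8},
    perf_weights={"velocity_mean": 90.0, "swing": 0.3},
)
print(intensity.apply(0.5))  # parameter values at the knob's midpoint
```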

Composing music using foresight and planning

An approach is provided in which an information handling system configures a reinforcement learning model based on inspiration selections received from a user. The information handling system performs training iterations using the configured reinforcement learning model, which generates multiple actions and multiple rewards corresponding to those actions. The information handling system determines that the rewards reach an empirical threshold and, in turn, generates a musical composition based on the actions.
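
The abstract gives only the loop's shape: iterate, collect actions and rewards, stop at a threshold, compose from the actions. The toy loop below follows that shape under stated assumptions; the reward function, threshold value, and action space are invented for illustration and the patent does not disclose its model.

```python
# Hypothetical training loop: propose actions, score them against the
# user's inspiration, stop once the empirical reward threshold is reached.
import random

def reward(action: int, inspiration: int) -> float:
    """Toy reward: actions closer to the user's inspiration score higher."""
    return 1.0 - abs(action - inspiration) / 12.0

inspiration = 7          # configured from the user's inspiration selections
threshold = 0.9          # assumed empirical reward threshold
actions = []
rng = random.Random(0)
while True:
    action = rng.randint(0, 12)          # model proposes a note offset
    r = reward(action, inspiration)
    actions.append((action, r))
    if r >= threshold:
        break
composition = [a for a, r in actions if r >= threshold]
print(composition)  # composition built from the high-reward actions
```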

Automatically managing the musical tastes and preferences of a population of users requesting digital pieces of music automatically composed and generated by an automated music composition and generation system
11430419 · 2022-08-30

An automated music composition and generation system having an automated music composition and generation engine for processing musical experience descriptors and time and/or space parameters selected by the system user. The engine includes: a user taste generation subsystem for automatically determining the musical tastes and preferences of each system user based on user feedback and autonomous piece analysis, and for maintaining a system user profile reflecting those tastes and preferences; and a population taste aggregation subsystem for aggregating the musical tastes and preferences of the population of system users and modifying the musical experience descriptors and/or time and/or space parameters provided to the engine, so that the digital pieces of composed music better reflect the musical tastes and preferences of the population and meet future system user requests for automated music compositions.
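
The aggregation subsystem can be pictured as averaging per-user taste profiles into population weights that re-rank the descriptors sent to the engine. The sketch below assumes a simple tag-score profile and a mean-based aggregation rule; both are illustrative assumptions, not the patented subsystems.

```python
# Hypothetical sketch of population taste aggregation: average user taste
# profiles and use the result to reorder musical experience descriptors.
from collections import Counter

user_profiles = [                      # built from feedback + piece analysis
    {"uplifting": 0.9, "calm": 0.2},
    {"uplifting": 0.6, "calm": 0.7},
    {"uplifting": 0.8, "calm": 0.4},
]

population = Counter()
for profile in user_profiles:
    population.update(profile)         # sum each tag's score across users
n = len(user_profiles)
weights = {tag: score / n for tag, score in population.items()}

# Modify the descriptor ordering so compositions reflect population taste.
descriptors = sorted(weights, key=weights.get, reverse=True)
print(descriptors, weights)
```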

Input Support Apparatus and Method Therefor
20170278495 · 2017-09-28

An input support method is provided for use in an input support apparatus that supports input of a music note. The method includes: controlling a display unit to display a pitch-time plane that includes a pitch-axis and a time-axis, a chord sequence that is associated with the time-axis of the pitch-time plane, and a pointer that indicates a position on the time-axis along the chord sequence; identifying constituent music notes that form a chord corresponding to a display position of the pointer along the chord sequence; and controlling the display unit to display areas on the pitch-time plane, each displayed area indicating a corresponding one of the identified constituent music notes, differently from other areas on the pitch-time plane.
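
The identification step amounts to looking up the chord under the pointer and expanding it to pitch rows on the plane. A minimal sketch is shown below, assuming triad spellings, beat-indexed chord spans, and a MIDI row range; all of these are assumptions for illustration.

```python
# Hypothetical sketch: given a chord sequence along the time axis and a
# pointer position, compute the constituent-note rows to display differently.
CHORD_TONES = {"C": {0, 4, 7}, "F": {5, 9, 0}, "G": {7, 11, 2}}

chord_sequence = [(0.0, "C"), (2.0, "F"), (4.0, "G")]  # (start_beat, chord)

def chord_at(pointer: float) -> str:
    """Return the chord whose span contains the pointer position."""
    current = chord_sequence[0][1]
    for start, chord in chord_sequence:
        if start <= pointer:
            current = chord
    return current

def highlight_rows(pointer: float, low: int = 48, high: int = 72) -> list[int]:
    """MIDI rows of the identified constituent notes to highlight."""
    tones = CHORD_TONES[chord_at(pointer)]
    return [n for n in range(low, high + 1) if n % 12 in tones]

print(highlight_rows(2.5))  # rows for the F chord under the pointer
```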

MUSIC CONTEXT SYSTEM AUDIO TRACK STRUCTURE AND METHOD OF REAL-TIME SYNCHRONIZATION OF MUSICAL CONTENT
20220044663 · 2022-02-10

A system is described that permits identified musical phrases or themes to be synchronized and linked to changing real-world events. The achieved synchronization includes a seamless musical transition between potentially disparate pre-identified musical phrases having different emotive themes, defined by their respective time signatures, intensities, keys, musical rhythms and/or musical phrasing. The transition is achieved using a timing offset, such as a relative advancement of a significant musical “onset”, inserted to align with a pre-existing but identified music signature, beat or timebase. The system operates to augment the overall sensory experience of a user in the real world by dynamically changing, re-ordering or repeating, and then playing, audio themes within the context of what is occurring in the surrounding physical environment; e.g., during different phases of a cardio workout in a step class, the music rate and intensity increase during sprint periods and decrease during recovery periods.
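
The timing-offset idea can be made concrete with a small calculation: start the incoming phrase early enough that its significant onset lands on the outgoing track's next beat. The sketch below assumes a fixed tempo, a known onset position within the phrase, and the specific numbers in the example; none of these come from the patent.

```python
# Hypothetical sketch of the timing offset: advance the incoming phrase so
# its significant onset coincides with the outgoing track's beat grid.
def transition_offset(now_s: float, bpm: float, onset_in_phrase_s: float) -> float:
    """Seconds from 'now' at which to start the incoming phrase so that
    its onset aligns with the next beat of the outgoing track."""
    beat = 60.0 / bpm
    next_beat = (int(now_s / beat) + 1) * beat   # next beat boundary
    start = next_beat - onset_in_phrase_s        # advance by the onset lead-in
    while start < now_s:                         # too late: aim one beat later
        start += beat
    return start - now_s

# Start a phrase whose key onset sits 0.25 s in, 37.1 s into a 128 BPM track.
print(transition_offset(now_s=37.1, bpm=128.0, onset_in_phrase_s=0.25))
```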

METHOD FOR SONG MULTIMEDIA SYNTHESIS, ELECTRONIC DEVICE AND STORAGE MEDIUM

The disclosure provides a method for synthesizing song multimedia, an electronic device and a storage medium. Material obtaining modes are provided based on a song multimedia synthesis request. User audio recordings provided by the user are obtained based on a selected material obtaining mode. A user timbre is obtained by inputting the recordings into a timbre extraction model. Lyrics and a tune to be synthesized, provided by the user, are obtained based on the selected material obtaining mode, and synthesized song multimedia is obtained by inputting the user timbre, the lyrics and the tune into a song synthesis model.
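
The data flow is two models in series: recordings in, timbre embedding out; then timbre plus lyrics plus tune in, song out. The sketch below mirrors that flow with placeholder classes, since the patent does not name architectures; the "embedding" and "synthesis" computations here are toy stand-ins.

```python
# Hypothetical sketch of the two-model pipeline: timbre extraction followed
# by song synthesis. Both classes are placeholders, not disclosed models.
from typing import Sequence

class TimbreExtractor:
    def extract(self, audios: Sequence[list[float]]) -> list[float]:
        """Toy 'timbre embedding': mean amplitude per recording."""
        return [sum(a) / len(a) for a in audios]

class SongSynthesizer:
    def synthesize(self, timbre: list[float], lyrics: list[str],
                   tune: list[int]) -> list[tuple[str, int, float]]:
        """Pair each lyric syllable with a pitch and a timbre value."""
        t = sum(timbre) / len(timbre)
        return [(syll, pitch, t) for syll, pitch in zip(lyrics, tune)]

user_audios = [[0.1, 0.3, 0.2], [0.2, 0.4]]          # user-provided clips
timbre = TimbreExtractor().extract(user_audios)
song = SongSynthesizer().synthesize(timbre, ["la", "la", "da"], [60, 62, 64])
print(song)
```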

MUSICAL PERFORMANCE SYSTEM, TERMINAL DEVICE, METHOD AND ELECTRONIC MUSICAL INSTRUMENT
20210407475 · 2021-12-30

A musical performance system includes an instrument and a terminal. The terminal includes a processor that outputs first track data, or first pattern data obtained by arbitrarily combining pieces of track data, and that then automatically outputs second track data or second pattern data obtained by arbitrarily combining pieces of track data. The instrument includes a processor that acquires the first track/pattern data from the terminal and generates the sound of a music composition in accordance with that data, and that then acquires the second track/pattern data and generates the sound of a music composition in accordance with it.
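
The split between the two processors can be pictured as: the terminal combines tracks into pattern data, and the instrument turns received pattern data into timed note events. The sketch below assumes simple list-of-notes tracks and an interleaved event format; the data shapes are illustrative assumptions.

```python
# Hypothetical sketch of the terminal/instrument split. Data shapes and the
# event format are assumptions, not the patent's encoding.
Track = list[int]                    # a track as a list of MIDI notes

def combine(tracks: list[Track]) -> list[Track]:
    """Terminal side: pattern data as an arbitrary combination of tracks."""
    return [t for t in tracks if t]  # e.g. keep only the non-empty tracks

def play(pattern: list[Track]) -> list[tuple[int, int]]:
    """Instrument side: interleave tracks into (time_step, note) events."""
    events = []
    for step in range(max(len(t) for t in pattern)):
        for track in pattern:
            if step < len(track):
                events.append((step, track[step]))
    return events

first_pattern = combine([[60, 64, 67], [], [48, 48, 48]])
print(play(first_pattern))           # sound generation from pattern data
```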

COMPUTER-BASED SYSTEMS, DEVICES, AND METHODS FOR GENERATING MUSICAL COMPOSITIONS THAT ARE SYNCHRONIZED TO VIDEO
20210407483 · 2021-12-30

Computer-based systems, devices, and methods for generating musical compositions that are purposefully synchronized with video are described. A video timeline is defined with time-markers that demarcate specific events in the video. A music timeline is generated based on the video timeline and preserves those time-markers. A computer-based musical composition system generates a musical composition based on the music timeline, and the composition includes musical events that align with the time-markers, so that when the video and the musical composition are played together the musical events coincide with the demarcated events in the video.
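
The timeline transfer can be illustrated by preserving the video markers on a music timeline and filling the gaps with regular bar boundaries, so a musical event is scheduled on every marker. The marker times and the bar-fitting rule below are assumptions; the patent does not specify how the music timeline is filled.

```python
# Hypothetical sketch: preserve video time-markers on a music timeline and
# schedule a musical event to coincide with each one.
video_markers = [0.0, 4.2, 9.7, 15.0]    # seconds of demarcated video events

def music_timeline(markers: list[float], bar_s: float = 2.0) -> list[float]:
    """Fill between the preserved markers with regular bar boundaries."""
    events = []
    for start, end in zip(markers, markers[1:]):
        events.append(start)                  # musical event on the marker
        t = start + bar_s
        while t < end:
            events.append(round(t, 3))        # ordinary bars in between
            t += bar_s
    events.append(markers[-1])
    return events

print(music_timeline(video_markers))  # events align with the video markers
```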