Patent classifications
G10H2210/125
METHOD, DEVICE AND SOFTWARE FOR CONTROLLING TRANSPORT OF AUDIO DATA
A method for processing music audio data, including providing input audio data representing a first piece of music comprising a mixture of musical timbres. The method also includes decomposing the input audio data to generate at least first-timbre decomposed data representing a first timbre selected from the musical timbres of the first piece of music, and second-timbre decomposed data representing a second timbre selected from the musical timbres of the first piece of music. The method also includes applying a transport control to obtain transport controlled first-timbre decomposed data. The method also includes recombining audio data obtained from the transport controlled first-timbre decomposed data with audio data obtained from the second-timbre decomposed data to obtain recombined audio data.
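The decompose/transport/recombine pipeline described in this abstract can be sketched as follows. The two stems, the half-speed transport control, and the linear-interpolation resampling are illustrative stand-ins; a real system would obtain the stems from a source-separation model and would typically use pitch-preserving time-stretching:

```python
import numpy as np

def half_speed(stem: np.ndarray) -> np.ndarray:
    """Hypothetical transport control: half-speed playback via naive
    linear-interpolation resampling (does not preserve pitch)."""
    idx = np.arange(0, len(stem) - 1, 0.5)
    lo = idx.astype(int)
    frac = idx - lo
    return stem[lo] * (1 - frac) + stem[lo + 1] * frac

def recombine(stem_a: np.ndarray, stem_b: np.ndarray) -> np.ndarray:
    """Sum two stems, zero-padding the shorter one."""
    n = max(len(stem_a), len(stem_b))
    out = np.zeros(n)
    out[:len(stem_a)] += stem_a
    out[:len(stem_b)] += stem_b
    return out

# Pretend a source-separation model produced these two stems.
t = np.linspace(0, 1, 8000, endpoint=False)
vocals = np.sin(2 * np.pi * 440 * t)   # first-timbre decomposed data
drums  = np.sin(2 * np.pi * 110 * t)   # second-timbre decomposed data

slowed_vocals = half_speed(vocals)     # transport-controlled stem
mix = recombine(slowed_vocals, drums)  # recombined audio data
```

The transport control is applied to one stem only, so in the recombined output the slowed vocals continue after the unmodified drums have finished.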
ADAPTIVE AUDIO MIXING
Systems, apparatuses, and methods for performing adaptive audio mixing are disclosed. A trained neural network dynamically selects and mixes pre-recorded, human-composed music stems that are composed as mutually compatible sets. The network infers stem and track selection, volume mixing, filtering, dynamic compression, acoustical/reverberant characteristics, segues, tempo, beat-matching, and crossfading parameters from game scene characteristics and other dynamically changing factors. It selects an artist's pre-recorded stems and mixes them in real time in unique ways to dynamically adjust and modify background music based on factors such as the game scenario, the player's unique storyline, scene elements, the player's profile, interests, and performance, adjustments made to game controls (e.g., music volume), number of viewers, received comments, the player's popularity, native language, presence, and/or other factors. The trained neural network thereby creates unique music that dynamically varies according to real-time circumstances.
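A minimal sketch of the inference-to-mix flow follows. The scene features, gain formulas, and stem names are invented stand-ins for the trained network's output; the point is only the shape of the interface, where scene state goes in and per-stem gains plus a crossfade time come out:

```python
import numpy as np

def infer_mix_params(scene):
    """Stand-in for the trained network: map scene features to per-stem
    gains and a crossfade time. All names and weights are illustrative,
    not taken from the patent."""
    intensity = scene["combat_intensity"]        # assumed feature, 0.0 .. 1.0
    gains = {
        "pads":       1.0 - 0.7 * intensity,     # calm layer fades out
        "percussion": 0.2 + 0.8 * intensity,     # drive layer fades in
        "lead":       0.5,
    }
    crossfade_s = 4.0 - 3.0 * intensity          # faster transitions in combat
    return gains, crossfade_s

def mix_stems(stems, gains):
    """Weighted sum of mutually compatible, equal-length stems."""
    return sum(gains[name] * audio for name, audio in stems.items())
```

Because the stems are composed as mutually compatible sets, any gain combination produced by the network yields a musically coherent mix.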
Transition functions of decomposed signals
A device for processing audio signals, including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit configured to decompose the first input signal to obtain a plurality of decomposed signals, a playback unit configured to start playback of a first output signal obtained from recombining at least a first decomposed signal at a first volume level with a second decomposed signal at a second volume level, such that the first output signal substantially equals the first input signal, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit has a volume control section adapted for reducing the first and second volume levels according to first and second transition functions.
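The volume control section described above can be sketched as below. The specific curve shapes are assumptions; the device only requires that each decomposed stem's volume is reduced according to its own transition function, which is what allows, say, the bass of the outgoing track to fade earlier than its vocals:

```python
import numpy as np

def transition(stem1, stem2, out2, n):
    """Fade out two decomposed stems of the first output signal using
    separate transition functions while the second output fades in.
    Curve shapes (quadratic vs. linear) are illustrative assumptions."""
    f1 = np.linspace(1.0, 0.0, n) ** 2           # first transition function
    f2 = np.linspace(1.0, 0.0, n)                # second transition function
    tail = stem1[:n] * f1 + stem2[:n] * f2       # outgoing, per-stem fades
    head = out2[:n] * np.linspace(0.0, 1.0, n)   # incoming track fade-in
    return tail + head
```

At the start of the transition the output equals the recombined first signal plus silence from the second; at the end it equals the second signal alone.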
Method and system for template based variant generation of hybrid AI generated song
According to an embodiment, there is provided a system and method for automatic AI-based song construction based on a user's ideas. In some embodiments, the system provides a database containing harmony templates that the user can use to augment the playback of a given music work. Various embodiments of the instant invention also benefit from combining expert knowledge, resident in an expert engine that contains rules for musically correct song generation, with machine learning in an AI-based audio loop selection engine that selects compatible audio loops from a database of audio loops.
Apparatus and Methods for Cellular Compositions
Systems, methods and apparatus for cellular compositions/generating music in real-time using cells are provided. The cellular compositions may be dependent on user data.
Method and system for accelerated decomposing of audio data using intermediate data
A method for processing audio data, comprising: providing song identification data identifying a particular song from among a plurality of songs, or a particular position within a particular song; and loading intermediate data associated with the song identification data from a storage medium or from a remote device. The method also comprises obtaining input audio data representing audio signals of the song identified by the song identification data. The audio signals comprise a mixture of different musical timbres, including at least a first musical timbre and a second musical timbre different from the first. The method comprises combining the input audio data and the intermediate data with one another to obtain output audio data, which represent audio signals of the first musical timbre separated from the second musical timbre.
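One plausible reading of "combining the input audio data and the intermediate data" is sketched below: the cached intermediate data is a per-bin spectral mask for the first timbre, precomputed by an earlier (expensive) decomposition run, so that on replay the separation reduces to one multiply in the frequency domain. The mask representation is an assumption, not stated in the abstract:

```python
import numpy as np

def separate_with_cached_mask(mixture, mask):
    """Combine input audio with precomputed intermediate data.  Here
    the intermediate data is a hypothetical per-bin soft mask for the
    first timbre, loaded from storage, so model inference is skipped."""
    spec = np.fft.rfft(mixture)
    return np.fft.irfft(spec * mask, n=len(mixture))

# Mixture of a "low timbre" (110 Hz) and a "high timbre" (1760 Hz).
sr = 8000
t = np.arange(sr) / sr
mixture = np.sin(2 * np.pi * 110 * t) + np.sin(2 * np.pi * 1760 * t)

# Cached mask: keep bins below 1 kHz (stand-in for stored model output).
freqs = np.fft.rfftfreq(sr, 1 / sr)
mask = (freqs < 1000).astype(float)

low_only = separate_with_cached_mask(mixture, mask)
```

Because the mask is tied to the song identification data, the same cached intermediate data can be reused every time that song (or position within it) is played.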
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing device (100) according to the present disclosure controls a first application (20) and a second application (20) that functions as a plug-in extending the functions of the first application. The first application includes a control unit (161) that controls operation of the second application within the first application. The second application includes a selection unit (166) that selects setting information for controlling a machine-learning-based composition function, and a transmission/reception unit (167) that transmits the setting information over a network to an external server (200) that executes the composition function and receives the music data composed by that server.
Sound effect synthesis
Disclosed herein is a sound synthesis system for generating a user-defined synthesised sound effect. The system comprises: a receiver of user-defined inputs for defining a sound effect; a generator of control parameters in dependence on the received user-defined inputs; a plurality of sound effect objects, each arranged to generate a different class of sound and each comprising a sound synthesis model that generates a sound in dependence on one or more of the control parameters; a plurality of audio effect objects, each arranged to receive a sound from one or more sound effect objects and/or one or more other audio effect objects, process the received sound in dependence on one or more of the control parameters, and output the processed sound; a scene creation function arranged to receive sound output from one or more sound effect objects and/or audio effect objects and to generate a synthesised sound effect in dependence on the received sound; and an audio routing function arranged to determine the arrangement of the audio effect objects, sound effect objects and scene creation function, such that the sounds received by the scene creation function depend on that routing. The arrangement determined by the audio routing function is itself dependent on the user-defined inputs.
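The object graph this abstract describes can be sketched in miniature as follows. The sound classes, the effect, and the routing rule are all invented examples; the point is that the user input determines which sound objects feed which effect chains before the scene creation function mixes the results:

```python
class SoundObject:
    """One class of sound; a real system would hold a synthesis model."""
    def __init__(self, render):
        self.render = render            # () -> list of samples

class EffectObject:
    """Processes sound received from upstream objects."""
    def __init__(self, process):
        self.process = process          # samples -> samples

def build_scene(user_input, sounds, effects):
    """Toy audio routing function: the user input selects which sound
    objects are active and which effect chain each one passes through;
    the scene creation step sums the routed outputs sample-wise."""
    chain = ["rain", "thunder"] if user_input == "storm" else ["rain"]
    rendered = []
    for name in chain:
        sig = sounds[name].render()
        for fx in effects.get(name, []):
            sig = fx.process(sig)
        rendered.append(sig)
    n = max(len(s) for s in rendered)
    return [sum(s[i] if i < len(s) else 0.0 for s in rendered)
            for i in range(n)]

sounds = {
    "rain":    SoundObject(lambda: [1.0, 1.0, 1.0]),
    "thunder": SoundObject(lambda: [2.0]),
}
effects = {"rain": [EffectObject(lambda s: [x * 0.5 for x in s])]}
scene = build_scene("storm", sounds, effects)
```

Changing the user input to anything other than "storm" drops the thunder branch from the routing, illustrating how the arrangement of objects depends on the user-defined inputs.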
System and method for assembling a recorded composition
A system and method for assembling segments of recorded music or video from among various versions or variations of a recording, into a new version or composition, such that a first segment of a first version of a recorded work is attached to a second segment of a second version of the recorded work, to create a new version of the recorded work.
Method, apparatus, terminal and storage medium for mixing audio
The present disclosure provides a method for mixing audio, pertaining to the technical field of multimedia. The method includes: after acquiring an audio material to be mixed, determining a beat feature of a target audio; performing beat adjustment on the audio material based on the beat feature of the target audio; and performing audio mixing on the target audio based on the beat-adjusted audio material.
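The two steps, beat adjustment then mixing, can be sketched as below. Representing the beat feature as a single BPM value and stretching by naive resampling are simplifying assumptions; production systems detect full beat grids and use pitch-preserving time-stretching:

```python
import numpy as np

def stretch_to_tempo(material, src_bpm, target_bpm):
    """Naive beat adjustment: resample the material so its beat grid
    matches the target's tempo (this sketch does not preserve pitch)."""
    step = target_bpm / src_bpm
    idx = np.arange(0, len(material) - 1, step)
    lo = idx.astype(int)
    frac = idx - lo
    return material[lo] * (1 - frac) + material[lo + 1] * frac

def mix_onto(target, material, offset):
    """Audio mixing: add the beat-adjusted material onto the target
    at a sample offset (e.g., aligned to a downbeat)."""
    out = target.copy()
    end = min(len(out), offset + len(material))
    out[offset:end] += material[:end - offset]
    return out
```

Stretching a 120 BPM loop to a 60 BPM target roughly doubles its length, after which its beats land on the target's beats when mixed at a beat-aligned offset.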