Patent classifications
G10H2220/101
SYSTEMS AND METHODS FOR GENERATING RECOMMENDATIONS IN A DIGITAL AUDIO WORKSTATION
A method includes displaying a user interface of a digital audio workstation, which includes a first region for generating a composition. The first region includes a first compositional segment that has been added to the composition by a user. Based on the first compositional segment, one or more recommended predefined compositional segments are identified and displayed in a second region. The method includes receiving a selection of a second compositional segment from the recommended predefined compositional segments. The method includes adding the second compositional segment to the composition.
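The abstract does not say how the recommended segments are identified. One plausible sketch, assuming each segment is described by a numeric feature vector (tempo, key, instrumentation, etc.) and the library is ranked by cosine similarity to the segment the user already placed — all names and the feature representation are hypothetical:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend_segments(current_segment, library, top_n=3):
    """Rank predefined segments by similarity to the segment already in
    the composition; the top_n names would populate the second region."""
    ranked = sorted(
        library,
        key=lambda seg: cosine(seg["features"], current_segment["features"]),
        reverse=True,
    )
    return [seg["name"] for seg in ranked[:top_n]]

user_seg = {"name": "drums_a", "features": [1.0, 0.2, 0.0]}
library = [
    {"name": "bass_a", "features": [0.9, 0.3, 0.1]},
    {"name": "pad_b", "features": [0.0, 0.1, 1.0]},
]
```

Calling `recommend_segments(user_seg, library, top_n=1)` would surface `bass_a`, whose feature vector points in nearly the same direction as the user's segment.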
AUDIO ANALYSIS METHOD, AUDIO ANALYSIS SYSTEM AND PROGRAM
An audio analysis method that is realized by a computer system includes setting a maximum tempo curve representing a temporal change of a maximum tempo value and a minimum tempo curve representing a temporal change of a minimum tempo value in accordance with an instruction from a user, and analyzing an audio signal representing a performance sound of a musical piece, thereby estimating a tempo of the musical piece within a restricted range between a maximum value represented by the maximum tempo curve and a minimum value represented by the minimum tempo curve.
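The core idea — restricting tempo estimates to lie between two user-drawn curves — can be sketched minimally. Here the curves are modeled as callables mapping time to a BPM bound and raw per-frame estimates are clamped into the allowed range; the actual patent estimates tempo within the range rather than clamping afterward, so this is only an illustration of the restriction step:

```python
def restrict_tempo(raw_estimates, max_curve, min_curve):
    """Confine per-frame tempo estimates (BPM) to the user-set range.

    raw_estimates: list of (time_sec, tempo_bpm) pairs.
    max_curve / min_curve: callables mapping time -> BPM bound,
    i.e. the user-drawn maximum and minimum tempo curves.
    """
    restricted = []
    for t, bpm in raw_estimates:
        lo, hi = min_curve(t), max_curve(t)
        restricted.append((t, min(max(bpm, lo), hi)))
    return restricted

# Flat curves for illustration; the patent allows them to vary over time.
result = restrict_tempo(
    [(0, 200), (1, 90), (2, 120)],
    max_curve=lambda t: 140,
    min_curve=lambda t: 100,
)
```

An out-of-range estimate of 200 BPM is pulled down to the 140 BPM ceiling, 90 BPM is raised to the 100 BPM floor, and 120 BPM passes through unchanged.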
METHODS AND SYSTEMS FOR INTERACTIVE LYRIC GENERATION
Various embodiments of an apparatus, methods, systems and computer program products described herein are directed to a Lyric Engine. In various embodiments, the Lyric Engine receives, at a user interface, a selection of at least one song criteria. The Lyric Engine receives a first set of suggested song lyrics that correspond to the selected song criteria. The Lyric Engine presents, in the user interface, the first set of suggested song lyrics. The Lyric Engine receives, at the user interface, a selection of one or more of the suggested song lyrics in the first set. The Lyric Engine receives a second set of suggested song lyrics that correspond to the selected song criteria and the selected song lyrics. The Lyric Engine concurrently presents, in the user interface, the selected song lyrics and the second set of suggested song lyrics.
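The iterative loop — suggestions conditioned first on the criteria, then on the criteria plus the lyrics the user already selected — could look like the following sketch. The relevance rule (match the criteria keyword or share a word with a selected line) is purely hypothetical; the patent does not specify how the Lyric Engine scores candidates:

```python
def suggest_lyrics(criteria, selected, candidates):
    """Return candidate lines matching the song criteria, or sharing a
    word with an already-selected line (hypothetical relevance rule)."""
    def words(line):
        return set(line.lower().split())

    chosen = set().union(*(words(s) for s in selected)) if selected else set()
    out = []
    for line in candidates:
        if criteria.lower() in line.lower() or (chosen and words(line) & chosen):
            out.append(line)
    return out

candidates = ["love in the rain", "dancing all night", "rain on my window"]
first_set = suggest_lyrics("rain", [], candidates)
second_set = suggest_lyrics("rain", ["love in the rain"], candidates)
```

The first call plays the role of the first set of suggestions; the second call, seeded with the user's selection, plays the role of the refined second set.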
METHOD AND DEVICE FOR DETERMINING MIXING PARAMETERS BASED ON DECOMPOSED AUDIO DATA
The present invention provides a method for processing audio data, comprising the steps of providing a first audio track of mixed input data, said mixed input data representing an audio signal containing a plurality of different timbres, decomposing the mixed input data to obtain decomposed data representing an audio signal containing at least one, but not all, of the plurality of different timbres, providing a second audio track, analyzing audio data, including at least the decomposed data, to determine at least one mixing parameter, and generating an output track based on the at least one mixing parameter, said output track comprising first output data obtained from the first audio track and second output data obtained from the second audio track.
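As a toy illustration of deriving a mixing parameter from decomposed data, suppose the decomposition has already yielded a vocal stem from the first track, and the loudness of that stem decides a ducking gain applied to the second track. The RMS threshold, the gain values, and the mixing rule are all assumptions for the sketch:

```python
from math import sqrt

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def duck_gain(vocal_stem, threshold=0.1, reduced=0.5):
    """One mixing parameter: duck the second track when the decomposed
    vocal stem of the first track is loud (hypothetical rule)."""
    return reduced if rms(vocal_stem) > threshold else 1.0

def mix(track1, track2, gain2):
    """Output track: first track plus the second track scaled by the
    mixing parameter determined from the decomposed data."""
    return [a + gain2 * b for a, b in zip(track1, track2)]

gain = duck_gain([0.5, -0.5])          # loud vocal stem -> duck
output = mix([1.0, 1.0], [1.0, 1.0], gain)
```

Real systems would compute such parameters frame by frame and smooth them, but the data flow — decompose, analyze the stem, derive a parameter, generate the output track — is the one the abstract describes.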
METHOD AND SYSTEM FOR INTERACTIVE SONG GENERATION
A method and system may provide for interactive song generation. In one aspect, a computer system may present options for selecting a background track. The computer system may generate suggested lyrics based on parameters entered by the user. User interface elements allow the computer system to receive input of lyrics. As the user inputs lyrics, the computer system may update its suggestions of lyrics based on the previously input lyrics. In addition, the computer system may generate proposed melodies to go with the lyrics and the background track. The user may select from among the melodies created for each portion of lyrics. The computer system may optionally generate computer-synthesized vocals or capture a vocal track of a human voice singing the song. The background track, lyrics, melodies, and vocals may be combined to produce a complete song without requiring musical training or experience by the user.
METHOD AND DEVICE FOR PROCESSING, PLAYING AND/OR VISUALIZING AUDIO DATA, PREFERABLY BASED ON AI, IN PARTICULAR DECOMPOSING AND RECOMBINING OF AUDIO DATA IN REAL-TIME
The present invention relates to a method for processing and playing audio data comprising the steps of receiving mixed input data and playing recombined output data. Furthermore, the invention relates to a device for processing and playing audio data, preferably DJ equipment, comprising an audio input unit for receiving a mixed input signal, a recombination unit and a playing unit for playing recombined output data. In addition, the present invention relates to a method and a device for representing audio data, i.e. on a display.
SYSTEMS, DEVICES, AND METHODS FOR VARYING DIGITAL REPRESENTATIONS OF MUSIC
Systems, devices, and methods for encoding digital representations of musical compositions are described. Various components of a musical composition that are defined in modern music theory, such as notes and bars, are encoded as respective hierarchically-dependent data objects in a data file. The hierarchically-dependent data objects encode the musical composition in a tree-like data structure with modular nodes and adjustable relationships between nodes. Note start times and beat start times are encoded independently of one another and characterized by a timing relationship that captures the expressiveness imbued when notes and beats are not precisely synchronized. Musical variations that preserve the timing relationship between the notes and beats of the original composition are also generated and encoded.
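The key encoding idea — note start times and beat start times stored independently, with variations preserving their timing relationship — lends itself to a small data-structure sketch. The class names and the shift-based variation are hypothetical; the point is that the expressive offset survives the transformation:

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    pitch: int          # MIDI pitch number
    note_start: float   # actual onset time in seconds
    beat_start: float   # nominal beat-grid position in seconds

    @property
    def timing_offset(self):
        # Expressive deviation of the played note from the beat grid,
        # encoded because the two start times are stored independently.
        return self.note_start - self.beat_start

@dataclass
class Bar:
    # A bar is a node holding note nodes: a tree-like, modular structure.
    notes: list = field(default_factory=list)

def shift_variation(bar, beat_shift):
    """Generate a variation that moves notes to new beat positions while
    preserving each note's expressive timing offset."""
    return Bar([
        Note(n.pitch, n.beat_start + beat_shift + n.timing_offset,
             n.beat_start + beat_shift)
        for n in bar.notes
    ])

original = Bar([Note(60, 0.52, 0.5)])      # note played 20 ms late
variation = shift_variation(original, 1.0)  # moved one second later
```

After the shift, the note still lands the same 20 ms behind its beat, which is exactly the "expressiveness-preserving" property the abstract claims.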
Computer vision and mapping for audio applications
Systems, devices, media, and methods are presented for playing audio sounds, such as music, on a portable electronic device using a digital color image of a note matrix on a map. A computer vision engine, in an example implementation, includes a mapping module, a color detection module, and a music playback module. A camera captures a color image of the map, including a marker and a note matrix. Based on the color image, the computer vision engine detects a token color value associated with each field of the note matrix. Each token color value is associated with a sound sample from a specific musical instrument. A global state map is stored in memory, including the token color value and location of each field in the note matrix. The music playback module, for each column, in order, plays the notes associated with one or more of the rows, using the corresponding sound sample, according to the global state map.
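The playback step over the global state map reduces to a simple traversal: walk the columns left to right and, for each field whose detected token color maps to an instrument, queue that instrument's sample. A minimal sketch, with the state map modeled as a dict keyed by (row, column) — the data layout is an assumption:

```python
def build_playlist(state_map, color_to_sample, n_cols, n_rows):
    """Walk the note-matrix columns in order; for each field with a
    recognized token color, emit the sample mapped to that color."""
    playlist = []
    for col in range(n_cols):
        step = []
        for row in range(n_rows):
            color = state_map.get((row, col))   # detected token color, if any
            if color in color_to_sample:
                step.append(color_to_sample[color])
        playlist.append(step)
    return playlist

state = {(0, 0): "red", (1, 1): "blue"}         # detected token colors
samples = {"red": "piano_C4", "blue": "drum_kick"}
playlist = build_playlist(state, samples, n_cols=2, n_rows=2)
```

Each inner list is one column's simultaneous notes; a playback module would trigger them step by step at the sequencer tempo.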
Apparatuses and methodologies relating to the generation and selective synchronized display of musical and graphic information on one or more devices capable of displaying musical and graphic information
A method for the generation and selective display of musical information on one or more devices capable of displaying musical information can include generating a plurality of visual blocks, each block among the plurality having a first dimension and a second dimension corresponding to musical information visible within each block. The method can include selectively displaying, via a first GUI and/or a second GUI, particular blocks among the plurality of visual blocks. The musical information contained in the particular blocks displayed on the second GUI can include at least a portion of the respective subsets of the particular blocks displayed on the first GUI.
TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS
A device for processing audio signals, including: first and second input units providing first and second input signals of first and second audio tracks, a decomposition unit to decompose the first input audio signal to obtain a plurality of decomposed signals, a playback unit configured to start playback of a first output signal obtained from recombining at least a first decomposed signal at a first volume level with a second decomposed signal at a second volume level, such that the first output signal substantially equals the first input signal, and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit has a volume control section adapted for reducing the first and second volume levels according to first and second transition functions.
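The abstract's transition step — reducing the volume levels of the two decomposed signals according to two separate transition functions — can be sketched as follows. The concrete functions (a linear fade and a hold-then-fade) are illustrative assumptions; the patent only requires that each decomposed signal follow its own function:

```python
def linear_fade_out(x):
    """First transition function: fade linearly over the transition."""
    return 1.0 - x

def delayed_fade_out(x):
    """Second transition function (hypothetical): hold full volume for
    the first half of the transition, then fade linearly."""
    return 1.0 if x < 0.5 else 2.0 * (1.0 - x)

def transition_gains(position, f1=linear_fade_out, f2=delayed_fade_out):
    """Volume levels of the two decomposed signals of the outgoing track
    at a given transition position (0.0 = start, 1.0 = end)."""
    return f1(position), f2(position)
```

At position 0.0 both stems play at full level, so the recombined output still equals the original input signal; as the transition progresses, each stem fades on its own schedule — e.g. the first stem is already at 75% while the second still holds full volume at the quarter point.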