Patent classifications
G10H2210/056
Spoken words analyzer
A lyrics analyzer generates tags and explicitness indicators for a set of tracks. These tags may indicate the genre, mood, occasion, or other features of each track. The lyrics analyzer does so by generating an n-dimensional vector relating to a set of topics extracted from the lyrics and then using those vectors to train a classifier to determine whether each tag applies to each track. The lyrics analyzer may also generate playlists for a user based on a single seed song by comparing the lyrics vector or the lyrics and acoustics vectors of the seed song to other songs to select songs that closely match the seed song. Such a playlist generator may also take into account the tags generated for each track.
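The vector-comparison step described above can be sketched in a few lines. This is a minimal illustration, assuming a fixed topic vocabulary and simple word counts; the actual system would use a trained topic model and classifier, and all names here are hypothetical.

```python
# Illustrative sketch: topic-count lyrics vectors and seed-song playlisting.
# The TOPICS vocabulary is an assumption for this example.
import math

TOPICS = ["love", "party", "loss", "road", "night"]  # assumed topic set

def topic_vector(lyrics: str) -> list[float]:
    """Count topic-word occurrences to form an n-dimensional lyrics vector."""
    words = lyrics.lower().split()
    return [float(words.count(t)) for t in TOPICS]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two lyrics vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def playlist_from_seed(seed: str, catalog: dict[str, str], k: int = 2) -> list[str]:
    """Rank catalog tracks by lyrics-vector similarity to the seed song."""
    sv = topic_vector(seed)
    ranked = sorted(catalog,
                    key=lambda t: cosine(sv, topic_vector(catalog[t])),
                    reverse=True)
    return ranked[:k]
```

A real implementation would replace the word counts with topic-model posteriors and feed the vectors to a tag classifier, but the seed-to-catalog comparison has this shape.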
Method, system, and computer-readable medium for creating song mashups
A system, method and computer product for combining audio tracks. In one example embodiment herein, the method comprises determining at least one music track that is musically compatible with a base music track, aligning those tracks in time, and combining the tracks. In one example embodiment herein, the tracks may be music tracks of different songs, the base music track can be an instrumental accompaniment track, and the at least one music track can be a vocal track. Also in one example embodiment herein, the determining is based on musical characteristics associated with at least one of the tracks, such as an acoustic feature vector distance between tracks, a likelihood of at least one track including a vocal component, a tempo, or musical key. Also, determining of musical compatibility can include determining at least one of a vertical musical compatibility or a horizontal musical compatibility among tracks.
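The compatibility determination based on feature-vector distance, tempo, and key could take a form like the following. The track fields, weights, and scoring are illustrative assumptions, not the patented method.

```python
# Hedged sketch of track-compatibility scoring for mashups.
import math

def key_distance(key_a: int, key_b: int) -> int:
    """Semitone distance between two keys (0-11), wrapping around the octave."""
    d = abs(key_a - key_b) % 12
    return min(d, 12 - d)

def compatibility(base: dict, cand: dict) -> float:
    """Lower score = more compatible: combines tempo, key, and feature distance."""
    tempo_diff = abs(base["tempo"] - cand["tempo"]) / base["tempo"]
    key_diff = key_distance(base["key"], cand["key"]) / 6.0
    feat_diff = math.dist(base["features"], cand["features"])
    return tempo_diff + key_diff + feat_diff

def best_vocal_track(base: dict, candidates: list[dict]) -> dict:
    """Pick the vocal track most compatible with an instrumental base track."""
    vocals = [c for c in candidates if c["is_vocal"]]
    return min(vocals, key=lambda c: compatibility(base, c))
```

Filtering on `is_vocal` mirrors the "likelihood of at least one track including a vocal component" criterion, here reduced to a boolean for brevity.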
Searching for music
In implementations of searching for music, a music search system can receive a music search request that includes a music file containing music content, along with a musical attribute selected from a plurality of musical attributes. The music search system includes a music search application that can generate musical features of the music content, where a respective one or more of the musical features correspond to a respective one of the musical attributes. The music search application can then compare the musical features that correspond to the selected attribute to audio features of audio files, and determine, based on that comparison, which audio files are similar to the music file.
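One way to realize "compare only the features that correspond to the selected attribute" is to map each attribute to a slice of the full feature vector. The attribute names and feature layout below are assumptions made for this sketch.

```python
# Illustrative attribute-filtered similarity search over a small library.
import math

# Each attribute maps to a slice of the full feature vector (assumed layout).
ATTRIBUTE_SLICES = {"tempo": slice(0, 1), "timbre": slice(1, 4), "harmony": slice(4, 8)}

def attribute_distance(query: list[float], candidate: list[float], attribute: str) -> float:
    """Compare only the features corresponding to the selected attribute."""
    s = ATTRIBUTE_SLICES[attribute]
    return math.dist(query[s], candidate[s])

def search(query: list[float], library: dict[str, list[float]],
           attribute: str, k: int = 3) -> list[str]:
    """Return the k audio files most similar to the query on one attribute."""
    return sorted(library,
                  key=lambda name: attribute_distance(query, library[name], attribute))[:k]
```

Selecting a different attribute changes the ranking without recomputing any features, which is the practical benefit of per-attribute comparison.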
AI-based remixing of music: timbre transformation and matching of mixed audio data
The present invention provides a method for processing audio data, comprising the steps of: providing input audio data containing a mixture that includes first audio data of a first musical timbre and second audio data of a second, different musical timbre; decomposing the input audio data to provide decomposed data representative of the first audio data; and transforming the decomposed data to obtain third audio data.
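The decompose-then-transform flow can be illustrated with a deliberately crude decomposition. This sketch assumes the two timbres occupy different frequency ranges and separates them with a moving-average filter; real systems would use learned source separation, and the gain "transform" stands in for an actual timbre transformation.

```python
# Minimal sketch of decompose-then-transform on a mixed signal.
def moving_average(signal: list[float], window: int = 5) -> list[float]:
    """Crude low-pass filter: isolates the slowly varying component."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def decompose(mixture: list[float]) -> tuple[list[float], list[float]]:
    """Split the mixture into a smooth component and its residual."""
    low = moving_average(mixture)
    high = [m - l for m, l in zip(mixture, low)]
    return low, high

def transform(component: list[float], gain: float) -> list[float]:
    """Transform the decomposed data (here: a simple gain) into new audio data."""
    return [gain * x for x in component]
```

Note that the two components sum back to the original mixture, so the untransformed component can be recombined with the transformed one to produce the remixed output.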
Vibrotactile control systems and methods
Methods and systems are disclosed to facilitate creating the sensation of vibrotactile movement on the body of a user. Vibratory motors are used to generate a haptic language for music or other stimuli that is integrated into wearable technology. The disclosed system in certain embodiments enables the creation of a family of devices that allow people such as those with hearing impairments to experience sounds such as music or other input to the system. For example, a “sound vest” or other wearable array transforms musical input to haptic signals so that users can experience their favorite music in a unique way, and can also recognize auditory or other cues in the user's real or virtual reality environment and convey this information to the user using haptic signals.
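The musical-input-to-haptic-signal mapping could work along these lines: compute per-band signal energies and scale them to motor drive levels. The band split and the 0–255 drive range are assumptions for this sketch, not details from the disclosure.

```python
# Hypothetical mapping from audio energies to vibration-motor intensities,
# as one way a "sound vest" could drive its haptic array.
def band_energies(samples: list[float], n_bands: int = 4) -> list[float]:
    """Split samples into n segments and compute mean energy per segment."""
    size = max(1, len(samples) // n_bands)
    bands = [samples[i * size:(i + 1) * size] for i in range(n_bands)]
    return [sum(x * x for x in b) / len(b) if b else 0.0 for b in bands]

def motor_levels(energies: list[float], max_level: int = 255) -> list[int]:
    """Scale band energies to integer motor drive levels (0..max_level)."""
    peak = max(energies) or 1.0  # avoid division by zero on silence
    return [round(max_level * e / peak) for e in energies]
```

Each level would drive one vibratory motor in the wearable array, so louder passages of the music produce stronger vibration at the corresponding body location.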
Audio generation system and method
A system for generating audio content in dependence upon an input audio track comprising audio corresponding to one or more sound sources, the system comprising an audio input unit operable to input the input audio track to one or more models, each representing one or more of the sound sources, and an audio generation unit operable to generate, using the one or more models, one or more audio tracks each comprising a representation of the audio contribution of the corresponding sound sources of the input audio track, wherein the generated audio tracks comprise one or more variations relative to the corresponding portion of the input audio track.
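The one-model-per-source generation with variation can be sketched as follows. The "model" here is a toy that reproduces its source with a small random amplitude variation; a real system would use learned generative models, and every name is an assumption.

```python
# Toy sketch: per-source generation of varied audio tracks.
import random

def source_model(source_audio: list[float], variation: float, seed: int = 0) -> list[float]:
    """Generate a track representing this source, varied from the original."""
    rng = random.Random(seed)
    return [x * (1.0 + rng.uniform(-variation, variation)) for x in source_audio]

def generate_tracks(sources: dict[str, list[float]],
                    variation: float = 0.1) -> dict[str, list[float]]:
    """One generated track per sound source of the input audio."""
    return {name: source_model(audio, variation) for name, audio in sources.items()}
```

With `variation=0.0` each generated track reduces to the source's own contribution; raising it yields tracks that deviate controllably from the input.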
Beat decomposition to facilitate automatic video editing
The disclosed technology relates to a process for detecting musical artifacts within a musical composition. Detection is based on analyzing the energy and frequency content of the composition's digital signal. The identified musical artifacts can then be used in connection with audio-video editing.
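Energy-based detection of this kind is commonly done by flagging frames whose energy jumps well above the preceding frame. The frame size and threshold ratio below are assumptions for illustration.

```python
# Sketch of energy-based artifact/beat detection over fixed-size frames.
def frame_energies(samples: list[float], frame: int = 4) -> list[float]:
    """Sum of squared samples per non-overlapping frame."""
    return [sum(x * x for x in samples[i:i + frame])
            for i in range(0, len(samples) - frame + 1, frame)]

def detect_onsets(samples: list[float], frame: int = 4, ratio: float = 2.0) -> list[int]:
    """Return frame indices where energy rises by more than `ratio` times."""
    energies = frame_energies(samples, frame)
    return [i for i in range(1, len(energies))
            if energies[i] > ratio * max(energies[i - 1], 1e-12)]
```

The detected frame indices, converted to timestamps, are the cut points a video editor could snap to.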
Digital audio workstation with audio processing recommendations
Presentation of a recommendation to a user for individual processing of audio tracks in a digital audio workstation. Training audio tracks are provided to a human sound mixer, who returns individually processed versions of those tracks. The original and processed training tracks are then input to a machine to train it. Audio processing operations output by the trained machine are stored in a record of a database.
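The training data flow above can be illustrated with a toy "machine" that learns a single gain from raw/processed track pairs and stores the resulting operation in an in-memory database record. All names and the learned operation are assumptions for this sketch.

```python
# Toy sketch of training on (raw, human-processed) track pairs and storing
# the learned operation as a database record.
def train(pairs: list[tuple[list[float], list[float]]]) -> dict:
    """Learn an average gain from raw/processed sample pairs (toy 'machine')."""
    gains = []
    for raw, processed in pairs:
        for r, p in zip(raw, processed):
            if r != 0:
                gains.append(p / r)
    return {"operation": "gain", "amount": sum(gains) / len(gains)}

database: list[dict] = []

def store_recommendation(model: dict) -> None:
    """Store the trained operation in a record of the (in-memory) database."""
    database.append(model)
```

A real system would learn full processing chains (EQ, compression, reverb) rather than one gain, but the provide-process-train-store pipeline has this structure.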
Method and device for processing music file, terminal and storage medium
Provided are a method and device for processing a music file, a terminal and a storage medium. The method comprises: in response to a received sound effect adjustment instruction, acquiring a music file, the adjustment of which is indicated by the sound effect adjustment instruction; carrying out vocals and accompaniment separation on the music file to obtain vocal data and accompaniment data in the music file; carrying out first sound effect processing on the vocal data to obtain target vocal data, and carrying out second sound effect processing on the accompaniment data to obtain target accompaniment data; and synthesizing the target vocal data and the target accompaniment data to obtain a target music file.
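The separate-process-synthesize pipeline can be sketched as below. The separation step is omitted (real systems would use a trained vocal/accompaniment separator) and the "sound effects" are reduced to simple gains; every function name is an assumption.

```python
# Pipeline skeleton for the vocal/accompaniment processing method above.
def apply_effect(samples: list[float], gain: float) -> list[float]:
    """Placeholder 'sound effect': scale the signal."""
    return [gain * x for x in samples]

def mix(vocals: list[float], accompaniment: list[float]) -> list[float]:
    """Synthesize the target music file by summing the processed stems."""
    return [v + a for v, a in zip(vocals, accompaniment)]

def process_music(vocals: list[float], accompaniment: list[float],
                  vocal_gain: float, accomp_gain: float) -> list[float]:
    """First effect on vocals, second effect on accompaniment, then remix."""
    target_vocals = apply_effect(vocals, vocal_gain)
    target_accomp = apply_effect(accompaniment, accomp_gain)
    return mix(target_vocals, target_accomp)
```

Because the stems are processed independently before the final synthesis, different effects (e.g. reverb on vocals, EQ on accompaniment) can be applied without affecting the other stem.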