Patent classifications
G10H2210/036
Music compilation systems and related methods
Music compilation methods disclosed herein include providing a database. Data is stored therein associating a user with access credentials for a plurality of music streaming services. A first server is communicatively coupled with the database and with multiple third party servers each of which includes a music library associated with the user. A list is stored in the database listing audio tracks of the libraries. A play selector is displayed on a user interface of a computing device communicatively coupled with the first server. User selection of the play selector initiates playback of a sample set, the sample set including portions of audio tracks in the list. The sample set is determined based on contextual information gathered by the computing device, the contextual information not including any user selection. Music compilation systems disclosed herein include systems configured to carry out the music compilation methods.
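The contextual sample-set selection described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the mood tags, the time-of-day rule, and all field names are assumptions introduced for the example.

```python
import random

def build_sample_set(track_list, context, portion_secs=15, k=3):
    # Derive a selection criterion purely from contextual information
    # (time of day here); no user selection is involved, matching the
    # abstract's constraint. The rule itself is an illustrative assumption.
    mood = "energetic" if 6 <= context["hour"] < 18 else "calm"
    # Filter the aggregated cross-service track list; fall back to all tracks.
    candidates = [t for t in track_list if t["mood"] == mood] or track_list
    picks = random.sample(candidates, min(k, len(candidates)))
    # Each sample is a short portion of a track, not the full recording.
    return [{"track": t["title"], "portion_secs": portion_secs} for t in picks]

library = [
    {"title": "Sunrise Run", "mood": "energetic"},
    {"title": "Night Drive", "mood": "calm"},
    {"title": "Morning Coffee", "mood": "energetic"},
]
print(build_sample_set(library, {"hour": 9}, k=2))
```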
Method of producing light animation with rhythm of music
A method of producing a light animation synchronized with the rhythm of music is disclosed. An electronic device performs a Fourier series transform on a sound signal of music produced by at least one musical instrument to obtain a rhythm diagram of the sound signal. The device then extracts rhythm change points from the rhythm diagram: whenever the intensity of the rhythm diagram changes from increasing to decreasing, the time point of that change is taken as a rhythm change point, and the electronic device transmits a lighting control signal to a light-emitting device. On receiving the lighting control signal, the light-emitting device emits light based on the signal, and the successive light emissions form the light animation, thereby enhancing the audience's appreciation of the musical performance.
CROWD-SOURCED TECHNIQUE FOR PITCH TRACK GENERATION
Digital signal processing and machine learning techniques can be employed in a vocal capture and performance social network to computationally generate vocal pitch tracks from a collection of vocal performances captured against a common temporal baseline such as a backing track or an original performance by a popularizing artist. In this way, crowd-sourced pitch tracks may be generated and distributed for use in subsequent karaoke-style vocal audio captures or other applications. Large numbers of performances of a song can be used to generate a pitch track. Computationally determined pitch trackings from individual audio signal encodings of the crowd-sourced vocal performance set are aggregated and processed as an observation sequence of a trained Hidden Markov Model (HMM) or other statistical model to produce an output pitch track.
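The aggregation step can be illustrated with a much-simplified sketch: a per-frame majority vote over many noisy pitch estimates. Per the abstract, a production system would instead treat the estimates as an observation sequence for a trained HMM and decode a smooth output path; the vote below is only a stand-in for that aggregation idea.

```python
import numpy as np

def aggregate_pitch_tracks(tracks):
    # tracks: one row per crowd-sourced performance, one column per frame,
    # each entry a per-frame pitch estimate (MIDI note number).
    tracks = np.asarray(tracks)
    consensus = []
    for frame in tracks.T:  # iterate over frames across all performances
        values, counts = np.unique(frame, return_counts=True)
        consensus.append(int(values[np.argmax(counts)]))  # per-frame mode
    return consensus

# Three noisy renditions of the same 4-note melody; outliers are voted out.
performances = [
    [60, 62, 64, 65],
    [60, 62, 64, 65],
    [59, 62, 76, 65],   # flat first note, octave error on the third
]
print(aggregate_pitch_tracks(performances))   # [60, 62, 64, 65]
```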
Method of combining audio signals
A method for automatically generating an audio signal, the method comprising: receiving a source audio signal; analyzing the source audio signal to identify a musical parameter characteristic thereof; obtaining a supplemental audio signal based on the identified musical parameter characteristic; and combining the source audio signal and the supplemental audio signal to form an extended audio signal.
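The four steps can be sketched as follows. Using the dominant frequency as the "musical parameter" and synthesizing a tone a perfect fifth above it as the supplemental signal are illustrative assumptions, not choices stated in the abstract.

```python
import numpy as np

def extend_audio(source, sr=8000):
    # Analyze: identify a musical parameter (here, the dominant frequency).
    spectrum = np.abs(np.fft.rfft(source))
    freqs = np.fft.rfftfreq(len(source), d=1.0 / sr)
    dominant = freqs[np.argmax(spectrum)]
    # Obtain: synthesize a supplemental tone a perfect fifth (3:2) above.
    t = np.arange(len(source)) / sr
    supplemental = 0.5 * np.sin(2 * np.pi * dominant * 1.5 * t)
    # Combine: mix source and supplemental into the extended audio signal.
    return source + supplemental

t = np.arange(8000) / 8000
source = np.sin(2 * np.pi * 220 * t)   # 220 Hz test tone
extended = extend_audio(source)
```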
Methods and Apparatus for Audio Equalization Based on Variant Selection
Methods, apparatus, systems, and articles of manufacture are disclosed for audio equalization based on variant selection. An example apparatus includes a processor to obtain training data, the training data including a plurality of reference audio signals each associated with a variant of music, and to organize the training data into a plurality of entries based on the plurality of reference audio signals; a training model executor to execute a neural network model using the training data; and a model trainer to train the neural network model by updating at least one weight corresponding to one of the entries in the training data when the neural network model does not satisfy a training threshold.
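The train-until-threshold loop can be sketched as below. Logistic regression stands in for the neural network model, mean squared error stands in for the training threshold check, and the two-feature "variants" are invented for the example; none of these specifics come from the abstract.

```python
import numpy as np

def train_until_threshold(X, y, threshold=0.05, lr=1.0, max_epochs=1000):
    # Initialize small random weights for a logistic-regression "model".
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    loss = float("inf")
    for _ in range(max_epochs):
        preds = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        loss = np.mean((preds - y) ** 2)
        if loss < threshold:              # training threshold satisfied: stop
            break
        grad = preds - y                  # logistic-loss gradient
        w -= lr * X.T @ grad / len(y)     # update weights while unsatisfied
        b -= lr * grad.mean()
    return w, b, loss

# Two toy "variants" of music (e.g. live vs. studio), separable by features.
X = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.1], [0.9, 0.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w, b, loss = train_until_threshold(X, y)
```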
ARTIFICIAL INTELLIGENCE MODELS FOR COMPOSING AUDIO SCORES
A method for training one or more AI models for generating audio scores accompanying visual datasets includes obtaining training data comprising a plurality of audiovisual datasets and analyzing each of the plurality of audiovisual datasets to extract multiple visual features, textual features, and audio features. The method also includes correlating the multiple visual features and textual features with the multiple audio features via a machine learning network. Based on the correlations between the visual features, textual features, and audio features, one or more AI models are trained for composing one or more audio scores for accompanying a given dataset.
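The correlation step can be illustrated with a deliberately simple stand-in: a linear least-squares map from extracted visual/textual features to audio features. The abstract's machine learning network would be a far richer model; the function and variable names here are assumptions for the sketch.

```python
import numpy as np

def fit_feature_correlation(av_features, audio_features):
    # Learn a linear map W such that av_features @ W ≈ audio_features.
    W, *_ = np.linalg.lstsq(av_features, audio_features, rcond=None)
    return W

def compose_score_features(W, av_features):
    # Predict audio-score features for a new visual/textual dataset.
    return av_features @ W

rng = np.random.default_rng(1)
visual_textual = rng.normal(size=(10, 3))      # extracted visual+textual features
true_map = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, -1.0]])
audio = visual_textual @ true_map              # matching audio features
W = fit_feature_correlation(visual_textual, audio)
```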
Searching for Music
In implementations of searching for music, a music search system can receive a music search request that includes a music file including music content. The music search system can also receive a selected musical attribute from a plurality of musical attributes. The music search system includes a music search application that can generate musical features of the music content, where a respective one or more of the musical features correspond to a respective one of the musical attributes. The music search application can then compare the musical features that correspond to the selected musical attribute to audio features of audio files, and determine similar audio files to the music file based on the comparison of the musical features to the audio features of the audio files.
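The attribute-scoped comparison can be sketched as follows: only the features tied to the selected musical attribute are compared, here by cosine similarity. The attribute names, feature vectors, and the attribute-to-feature mapping are illustrative assumptions.

```python
import numpy as np

def search_similar(query_feats, library, attribute):
    # Compare only the feature group for the selected attribute and
    # rank library audio files by cosine similarity to the query.
    q = np.asarray(query_feats[attribute])
    scored = []
    for name, feats in library.items():
        v = np.asarray(feats[attribute])
        sim = float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
        scored.append((name, sim))
    return sorted(scored, key=lambda x: x[1], reverse=True)

query = {"tempo": [0.9, 0.1], "timbre": [0.2, 0.8]}
library = {
    "track_a": {"tempo": [0.8, 0.2], "timbre": [0.9, 0.1]},
    "track_b": {"tempo": [0.1, 0.9], "timbre": [0.3, 0.7]},
}
print(search_similar(query, library, "tempo"))
```

Note that the ranking changes with the selected attribute: the same query matches track_a on tempo but track_b on timbre, which is the point of letting the user select the attribute.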
TYPE ESTIMATION MODEL GENERATION SYSTEM AND TYPE ESTIMATION SYSTEM
A type estimation model generation system generates a type estimation model used to estimate which one of a plurality of types a user belongs to. The system includes: a learning data acquiring unit configured to acquire, as learning data for machine learning, learning time series information, which is time-series information about a plurality of used musical pieces, and learning type information, which represents the types to which the users who used those musical pieces belong; and a model generating unit configured to generate the type estimation model by performing machine learning, using information based on the learning time series information, piece by piece in time-series order, as the model's input and information based on the learning type information as the model's output.
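A toy version of the model-generation step is sketched below: each training example is a time series of per-piece feature vectors plus the user's type label. The "model" here is just one centroid per type over time-averaged sequences; a real system, per the abstract, would feed the pieces to a sequence model in time-series order. Type names and features are invented for the example.

```python
import numpy as np

def train_type_estimator(sequences, types):
    # Build one centroid per type from the time-averaged piece features
    # of all users belonging to that type.
    centroids = {}
    for t in set(types):
        seqs = [np.mean(s, axis=0)
                for s, lbl in zip(sequences, types) if lbl == t]
        centroids[t] = np.mean(seqs, axis=0)
    return centroids

def estimate_type(centroids, sequence):
    # Estimate the type whose centroid is nearest to the new user's
    # time-averaged listening features.
    v = np.mean(sequence, axis=0)
    return min(centroids, key=lambda t: np.linalg.norm(v - centroids[t]))

# Two listener types with different piece-feature profiles.
seqs = [[[1.0, 0.0], [0.9, 0.1]], [[0.0, 1.0], [0.1, 0.9]]]
labels = ["rock_fan", "classical_fan"]
model = train_type_estimator(seqs, labels)
print(estimate_type(model, [[0.8, 0.2]]))   # rock_fan
```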