Patent classifications
G10H2240/085
Music compilation systems and related methods
Music compilation methods disclosed herein include providing a database. Data is stored therein associating a user with access credentials for a plurality of music streaming services. A first server is communicatively coupled with the database and with multiple third party servers each of which includes a music library associated with the user. A list is stored in the database listing audio tracks of the libraries. A play selector is displayed on a user interface of a computing device communicatively coupled with the first server. User selection of the play selector initiates playback of a sample set, the sample set including portions of audio tracks in the list. The sample set is determined based on contextual information gathered by the computing device, the contextual information not including any user selection. Music compilation systems disclosed herein include systems configured to carry out the music compilation methods.
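The abstract's core step — choosing a sample set of track portions from the aggregated list using contextual signals, with no user selection involved — can be sketched as follows. This is a hypothetical illustration, not the patented method: the track/context fields, the tag-overlap scoring, and the 25%-offset clip heuristic are all assumptions.

```python
def build_sample_set(track_list, context, clip_seconds=15, max_clips=5):
    """Pick short portions of tracks to play back, scored against
    contextual signals gathered by the device (no user selection).

    track_list: dicts like {"title": ..., "tags": [...], "duration": seconds}
    context:    gathered signals, e.g. {"hour": 22, "activity": "relaxing"}
    (Both shapes are hypothetical stand-ins.)
    """
    # Hypothetical scoring: prefer tracks whose tags match the inferred activity.
    def score(track):
        return sum(1 for tag in track.get("tags", []) if tag == context.get("activity"))

    ranked = sorted(track_list, key=score, reverse=True)[:max_clips]
    sample_set = []
    for track in ranked:
        # Simple clip-placement heuristic: start each portion at 25% of the track.
        start = int(track["duration"] * 0.25)
        sample_set.append({"title": track["title"],
                           "start": start,
                           "end": min(start + clip_seconds, track["duration"])})
    return sample_set
```

Selecting the play selector would then trigger sequential playback of the returned clip ranges.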
Methods and apparatus for determining a mood profile associated with media data
Examples described herein may perform various operations based on mood congruency. An example method involves accessing, by a processor, from a database, a score that represents a degree of congruency between a first mood vector that describes first media data and a second mood vector that describes second media data. The score is generated based on (i) a first value that the first mood vector associates with a first mood, (ii) a second value that the second mood vector associates with a second mood, and (iii) a degree of congruency between the first and second moods. Based on the score, the processor compares a first characteristic of the first media data, other than the first mood, with a second characteristic of the second media data, other than the second mood, and, based at least in part on an output of the comparing, provides an indicator to a module.
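The score generation described in (i)–(iii) can be illustrated with a minimal sketch: weight each cross-mood pair of values by a tabulated degree of congruency between the two moods. The dict-based vector shape and the multiplicative combination are assumptions; the patent does not fix a formula.

```python
def mood_congruency_score(vec_a, vec_b, congruency):
    """Score the degree of congruency between two mood vectors.

    vec_a, vec_b: dicts mapping mood name -> strength in [0, 1]
    congruency:   dict mapping (mood_a, mood_b) -> degree of congruency
                  between those two moods, e.g. ("peaceful", "calm") -> 0.9
    """
    score = 0.0
    for mood_a, val_a in vec_a.items():
        for mood_b, val_b in vec_b.items():
            # (i) value from the first vector, (ii) value from the second,
            # (iii) congruency between the two moods themselves.
            score += val_a * val_b * congruency.get((mood_a, mood_b), 0.0)
    return score
```

A high score would then gate the follow-on comparison of the non-mood characteristics of the two media items.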
Mood determination of a collection of media content items
Systems, methods, and computer-readable media for determining at least one valid mood for a collection of media content items of a media library are provided.
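One simple way to determine a "valid" mood for a whole collection is to aggregate per-item moods and keep those that cover enough of the library. The threshold rule below is a hypothetical sketch of such an aggregation, not the claimed method.

```python
from collections import Counter

def valid_moods(item_moods, threshold=0.5):
    """Determine valid mood(s) for a collection of media content items.

    item_moods: one (dominant) mood label per item in the library
    threshold:  fraction of items a mood must cover to count as valid
                (assumed heuristic)
    """
    counts = Counter(item_moods)
    n = len(item_moods)
    return [mood for mood, c in counts.items() if c / n >= threshold]
```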
THERAPEUTIC MUSIC AND MEDIA PROCESSING SYSTEM
Systems, methods, architectures, mechanisms and apparatus for generating an audio segment playlist configured to provoke a physiological response in a listener in accordance with a desired outcome category, comprising: selecting, from a features database, a plurality of audio segments having features associated with both listener information and the desired outcome category; and ordering within the playlist at least a portion of the selected audio segments in accordance with at least one feature progression associated with the outcome category.
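The two claimed steps — selecting segments whose features match both listener information and the outcome category, then ordering them along a feature progression — might look like this. The tag-based matching and the descending-tempo progression are illustrative assumptions (ascending order would suit an energizing outcome instead).

```python
def build_therapeutic_playlist(features_db, listener_tags, outcome, progression_key):
    """Select audio segments matching listener info and a desired outcome
    category, then order them along a feature progression.

    features_db:     dicts like {"id": ..., "tags": [...], "tempo": ...}
    listener_tags:   set of tags describing the listener
    outcome:         desired outcome category tag, e.g. "sleep"
    progression_key: feature whose values should progress across the playlist
    (All field names are hypothetical.)
    """
    # Step 1: segments must match the outcome AND at least one listener tag.
    selected = [seg for seg in features_db
                if outcome in seg["tags"] and listener_tags & set(seg["tags"])]
    # Step 2: order along the feature progression (here: feature decreasing,
    # as one might want for a calming outcome).
    return sorted(selected, key=lambda seg: seg[progression_key], reverse=True)
```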
Cognitive music engine using unsupervised learning
A method for generating a musical composition based on user input is described. A first set of musical characteristics from a first input musical piece is received as an input vector. The first set of musical characteristics is perturbed to create a perturbed input vector, which is provided as input to a first set of nodes in a first visible layer of an unsupervised neural net. The unsupervised neural net is composed of a plurality of computing layers, each computing layer comprising a respective set of nodes. The unsupervised neural net is operated to calculate an output vector from a higher-level hidden layer, and the output vector is used to create an output musical piece.
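The perturb-then-propagate step can be sketched with a single visible-to-hidden pass of an RBM-style unsupervised net. The noise range, sigmoid activation, and weight layout are assumptions for illustration; the patent does not specify the net's internals.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def perturb(vector, noise=0.05, rng=random.Random(0)):
    """Add small random noise to the input musical-characteristics vector."""
    return [v + rng.uniform(-noise, noise) for v in vector]

def hidden_activations(visible, weights, biases):
    """One upward pass of an unsupervised net: visible-layer values ->
    hidden-layer activations, used here as the output vector.

    weights: one weight column per hidden unit, each of length len(visible)
    """
    return [sigmoid(sum(w * v for w, v in zip(col, visible)) + b)
            for col, b in zip(weights, biases)]
```

In the described method, the resulting output vector would then be decoded back into musical characteristics for the output piece.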
SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).
Autonomous generation of melody
Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is further used to generate a first melody that conforms to the music style and is different from the reference melody. Thus, a melody that matches user emotions and/or environmental information may be automatically created.
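The selection step — using emotion and/or environment information to pick one melody feature parameter from a bank, where each parameter corresponds to a reference melody's music style — could be sketched as a tag-overlap lookup. The bank structure and overlap heuristic are hypothetical.

```python
def select_melody_parameter(emotion, environment, parameter_bank):
    """Pick the melody feature parameter whose reference style best matches
    the user's emotion and environment.

    parameter_bank: dicts like
        {"style": "lullaby", "tags": {"calm", "night"}, "params": {...}}
    where each entry corresponds to one reference melody's style.
    """
    context = {emotion, environment}
    # Simple heuristic: most tags in common with the current context wins.
    best = max(parameter_bank, key=lambda p: len(context & p["tags"]))
    return best["params"]
```

The chosen parameters would then condition a generator that produces a new melody in that style, distinct from the reference melody itself.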
TRAINED MODEL ESTABLISHMENT METHOD, ESTIMATION METHOD, PERFORMANCE AGENT RECOMMENDATION METHOD, PERFORMANCE AGENT ADJUSTMENT METHOD, TRAINED MODEL ESTABLISHMENT SYSTEM, ESTIMATION SYSTEM, TRAINED MODEL ESTABLISHMENT PROGRAM, AND ESTIMATION PROGRAM
A trained model establishment method realized by a computer includes acquiring a plurality of datasets each of which is formed by a combination of first performance data of a first performance by a performer, second performance data of a second performance performed together with the first performance, and a satisfaction label indicating a degree of satisfaction of the performer, and executing machine learning of a satisfaction estimation model by using the plurality of datasets. In the machine learning, the satisfaction estimation model is trained such that, for each of the datasets, a result of estimating the degree of satisfaction of the performer from the first performance data and the second performance data matches the degree of satisfaction indicated by the satisfaction label.
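The training objective — estimate satisfaction from the two performances' data so that the estimate matches the label — can be illustrated with a tiny regression stand-in. A linear model and SGD are assumptions here; the patent's satisfaction estimation model could be any learnable regressor.

```python
def train_satisfaction_model(datasets, epochs=200, lr=0.01):
    """Fit a minimal linear satisfaction-estimation model on datasets of
    (first_performance_features, second_performance_features, satisfaction).

    Each dataset: (list[float], list[float], float).
    """
    dim = len(datasets[0][0]) + len(datasets[0][1])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for first, second, label in datasets:
            x = first + second                      # concatenate both performances
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - label                      # squared-error gradient step
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def estimate_satisfaction(model, first, second):
    w, b = model
    x = first + second
    return sum(wi * xi for wi, xi in zip(w, x)) + b
```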
Intelligent system for matching audio with video
An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length, category, the director's special requirements, actors' expressions, movement, weather, scenes, buildings, spatial and temporal context, and objects, and a music analysis module targeting recorded music form, sectional turns, style, melody, and emotional tension. An AI matching module then adequately matches video characteristics from the video analysis module with musical characteristics from the music analysis module, so as to quickly complete a creative composition selection function for matching audio with a video.
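Assuming the two analysis modules emit feature vectors in a shared space (the patent leaves the representation unspecified), the AI matching module's job reduces to ranking candidate pieces by similarity. A cosine-similarity sketch:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_music_to_video(video_features, music_catalog):
    """Rank candidate music by similarity between the video-analysis feature
    vector and each piece's music-analysis feature vector.

    music_catalog: (title, feature_vector) pairs; the shared feature space
    is an assumption of this sketch.
    """
    return sorted(music_catalog,
                  key=lambda item: cosine(video_features, item[1]),
                  reverse=True)
```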
DEEP LEARNING SYSTEM FOR DETERMINING AUDIO RECOMMENDATIONS BASED ON VIDEO CONTENT
Embodiments are disclosed for performing audio signal processing effects using a deep encoder. In particular, in one or more embodiments, the disclosed systems and methods comprise receiving an input including an unprocessed audio sequence and a request to perform an audio signal processing effect on the unprocessed audio sequence. The one or more embodiments further include analyzing, by a deep encoder, the unprocessed audio sequence to determine parameters for processing the unprocessed audio sequence. The one or more embodiments further include sending the unprocessed audio sequence and the parameters to one or more audio signal processing effects plugins, which perform the requested audio signal processing effect using the parameters, and outputting a processed audio sequence after processing of the unprocessed audio sequence with those parameters.
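The encoder-to-plugin data flow can be sketched as a small pipeline: the encoder predicts per-plugin parameters from the unprocessed audio, and each plugin applies its effect in turn. The `encoder` and plugin callables below are hypothetical stand-ins, not the disclosed deep encoder.

```python
def process_audio(audio, encoder, plugins):
    """Sketch of the described pipeline: a deep encoder maps unprocessed
    audio to effect parameters, which are handed to effects plugins that
    produce the processed audio.

    encoder: callable returning one parameter dict per plugin (assumed shape)
    plugins: callables taking (audio, params) and returning processed audio
    """
    params = encoder(audio)                     # encoder predicts parameters
    for plugin, plugin_params in zip(plugins, params):
        audio = plugin(audio, plugin_params)    # each plugin applies its effect
    return audio
```

For example, with a trivial "gain" plugin the encoder's predicted parameter directly scales the samples.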