G10H2210/031

Systems, devices, and methods for musical catalog amplification services
11615772 · 2023-03-28 ·

Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinct from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).

MULTIMEDIA MUSIC CREATION USING VISUAL INPUT
20220335974 · 2022-10-20 ·

A system for creating music using visual input. The system detects events and metrics (e.g., objects, gestures, etc.) in user input (e.g., video, audio, music data, touch, motion, etc.) and generates music and visual effects that are synchronized with the detected events and correspond to the detected metrics. To generate the music, the system selects parts from a library of stored music data and assigns each part to the detected events and metrics (e.g., using heuristics to match musical attributes to visual attributes in the user input). To generate the visual effects, the system applies rules (e.g., that map musical attributes to visual attributes) to translate the generated music data to visual effects. Because the visual effects are generated using music data that is generated using the detected events/metrics, both the generated music and the visual effects are synchronized with—and correspond to—the user input.
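The event-to-music-to-effect flow described above can be sketched in a few lines. Everything here is illustrative: the `Event` fields, the part library, and the effect rules are assumed stand-ins for the detected metrics, stored music data, and mapping rules named in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Event:
    time: float       # seconds into the user input
    kind: str         # detected event type, e.g. "gesture" or "object"
    intensity: float  # 0.0..1.0 metric extracted from the input

# Library of stored music parts, keyed by the visual attributes they suit.
PART_LIBRARY = {
    ("gesture", "high"): {"instrument": "drums",   "velocity": 110},
    ("gesture", "low"):  {"instrument": "pads",    "velocity": 60},
    ("object",  "high"): {"instrument": "brass",   "velocity": 100},
    ("object",  "low"):  {"instrument": "strings", "velocity": 70},
}

# Rules that map musical attributes back to visual attributes.
EFFECT_RULES = {"drums": "flash", "pads": "glow", "brass": "burst", "strings": "wave"}

def generate(events):
    """Assign a music part to each event, then derive a synchronized effect
    from the chosen part, so music and effects share the same timeline."""
    timeline = []
    for ev in events:
        band = "high" if ev.intensity >= 0.5 else "low"
        part = PART_LIBRARY[(ev.kind, band)]
        effect = EFFECT_RULES[part["instrument"]]
        timeline.append({"time": ev.time, "part": part, "effect": effect})
    return timeline

events = [Event(0.5, "gesture", 0.9), Event(1.2, "object", 0.3)]
print(generate(events)[0]["effect"])  # → flash
```

Because the effect is looked up from the generated part, not from the raw input, effects stay synchronized with the music by construction.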

Spoken words analyzer
11636835 · 2023-04-25 ·

A lyrics analyzer generates tags and explicitness indicators for a set of tracks. These tags may indicate the genre, mood, occasion, or other features of each track. The lyrics analyzer does so by generating an n-dimensional vector relating to a set of topics extracted from the lyrics and then using those vectors to train a classifier to determine whether each tag applies to each track. The lyrics analyzer may also generate playlists for a user based on a single seed song by comparing the lyrics vector or the lyrics and acoustics vectors of the seed song to other songs to select songs that closely match the seed song. Such a playlist generator may also take into account the tags generated for each track.
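The seed-song playlist step reduces to vector similarity over the per-track topic vectors. A minimal sketch, assuming cosine similarity and a toy 3-topic vocabulary (the vector values and track names are invented for illustration):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two lyrics (or lyrics+acoustics) vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_playlist(seed_vec, catalog, k=2):
    """Rank catalog tracks by similarity to the seed song and keep the top k."""
    scored = sorted(catalog.items(), key=lambda kv: cosine(seed_vec, kv[1]), reverse=True)
    return [name for name, _ in scored[:k]]

catalog = {
    "track_a": np.array([0.9, 0.1, 0.0]),
    "track_b": np.array([0.1, 0.9, 0.0]),
    "track_c": np.array([0.8, 0.2, 0.1]),
}
seed = np.array([1.0, 0.0, 0.0])
print(build_playlist(seed, catalog))  # → ['track_a', 'track_c']
```

Tag filtering as mentioned in the abstract could then be a pre-filter on `catalog` before ranking.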

AUTOENCODER-BASED LYRIC GENERATION
20230104417 · 2023-04-06 ·

Some embodiments of the present disclosure relate to generating novel lyrics lines conditioned on music audio. A bimodal neural network model may learn to generate lyric lines conditioned on a given short audio clip. The bimodal neural network model includes a spectrogram variational autoencoder and a text variational autoencoder. Output from the spectrogram variational autoencoder is used to influence output from text variational autoencoder.
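The conditioning flow can be sketched structurally with toy linear "encoders": the spectrogram latent is concatenated with the text latent before decoding, so the audio influences the generated lyric features. The weights and dimensions are random stand-ins, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy VAE encoder: linear map to (mu, logvar), then a reparameterized sample."""
    h = W @ x
    mu, logvar = h[: len(h) // 2], h[len(h) // 2:]
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

# Illustrative weights; a real bimodal model would learn these.
W_spec = rng.standard_normal((8, 16))  # spectrogram VAE encoder
W_text = rng.standard_normal((8, 12))  # text VAE encoder
W_dec = rng.standard_normal((12, 8))   # text decoder

def generate_lyric_features(spectrogram, lyric_feats):
    z_audio = encode(spectrogram, W_spec)  # 4-dim audio latent
    z_text = encode(lyric_feats, W_text)   # 4-dim text latent
    z = np.concatenate([z_text, z_audio])  # audio latent conditions the decoder
    return W_dec @ z                       # decoded lyric-line feature vector

out = generate_lyric_features(rng.standard_normal(16), rng.standard_normal(12))
print(out.shape)  # → (12,)
```

In a real system the decoder would emit token probabilities rather than a feature vector, but the conditioning mechanism (concatenating the audio latent into the text decoder's input) is the same idea.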

MUSIC RECOMMENDATION SYSTEM BY FACIAL EMOTION USING DEEP LEARNING

The system comprises an input device for collecting audio and audio information, or for extracting audio information from a music sample; a pre-processor for pre-processing the collected data to generate an input sample set for a classification model, wherein the pre-processor uses fine-grained segmentation and other techniques to preprocess the sample data set; a central processor for fusing audio emotion data and improving classification speed, which performs fine-grained segmentation on the real music data set and outputs emotion results through a voting mechanism configured to improve the precision of music emotion classification; a vocal separation device for separating vocals from the complex structure of real music audio, in which voice and background sound are mixed together; and an evaluation device for evaluating the vocal separation and evaluating the classification performance on vocals and background sound separately, which greatly improves the concentration of audio features.
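The segment-then-vote step is the most concrete part of this pipeline. A minimal sketch, assuming each fine-grained segment has already been classified and the track-level emotion is decided by majority vote (the label names are hypothetical):

```python
from collections import Counter

def classify_track(segment_labels):
    """Track-level emotion by majority vote over per-segment predictions.
    Fine-grained segmentation lets short emotional shifts vote without
    dominating the overall label."""
    votes = Counter(segment_labels)
    return votes.most_common(1)[0][0]

# Hypothetical per-segment predictions from the classification model.
segments = ["happy", "happy", "sad", "happy", "calm"]
print(classify_track(segments))  # → happy
```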

METHOD AND APPARATUS FOR IDENTIFYING MUSIC IN CONTENT

The present invention relates to an apparatus and method for identifying music in content. The present invention includes extracting a fingerprint of an original audio and storing it in an audio fingerprint DB; extracting a first fingerprint of a first audio in the content; and searching the audio fingerprint DB for a fingerprint corresponding to the first fingerprint, wherein the first audio is audio data in a music section detected from the content.
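The lookup step can be sketched as an inverted index from sub-fingerprints to track IDs, with a match threshold on how much of the query's fingerprint the stored track covers. The fingerprint values, track names, and threshold are illustrative assumptions, not the patented scheme.

```python
def build_db(originals):
    """originals: {track_id: [sub-fingerprint ints]} -> inverted index."""
    index = {}
    for track_id, fps in originals.items():
        for fp in fps:
            index.setdefault(fp, set()).add(track_id)
    return index

def identify(index, query_fps, threshold=0.5):
    """Return the track whose stored fingerprints best cover the query,
    or None if no track matches enough of it."""
    hits = {}
    for fp in query_fps:
        for track_id in index.get(fp, ()):
            hits[track_id] = hits.get(track_id, 0) + 1
    if not hits:
        return None
    best = max(hits, key=hits.get)
    return best if hits[best] / len(query_fps) >= threshold else None

db = build_db({"song_1": [11, 22, 33, 44], "song_2": [55, 66, 77]})
print(identify(db, [22, 33, 44, 99]))  # → song_1 (3/4 of the query matches)
```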

APPARATUS AND METHOD FOR PROVIDING SENSORY EXPERIENCE
20230185517 · 2023-06-15 ·

Embodiments of the present disclosure relate to a sensory experience providing apparatus for providing a sensory experience based on sound in a vehicle, and a method thereof. The apparatus includes a controller configured to receive a sound played in the vehicle, extract a sound feature from the received sound, generate sensory information based on the extracted sound feature, and provide a sensory experience based on the sensory information.
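The feature-to-sensory chain can be sketched end to end with one toy feature (RMS energy) mapped to a haptic intensity. The feature choice and the linear mapping are assumptions for illustration; the patent does not specify them.

```python
import math

def extract_features(samples):
    """Toy sound-feature extraction: RMS energy of the received samples."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return {"rms": rms}

def to_sensory(features, max_rms=1.0):
    """Generate sensory information from the sound feature, here a
    haptic intensity in 0..100 (an illustrative mapping)."""
    level = min(features["rms"] / max_rms, 1.0)
    return {"haptic_intensity": round(level * 100)}

samples = [0.5, -0.5, 0.5, -0.5]              # a simple test signal
print(to_sensory(extract_features(samples)))  # → {'haptic_intensity': 50}
```

A real in-vehicle system would run this per audio frame and route the sensory information to seat actuators, lighting, or similar outputs.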

GENERATING MUSIC OUT OF A DATABASE OF SETS OF NOTES

A method of generating music content from input music content that includes developing models of music composition generation on the basis of business rules and composition rules. In parallel, sounds are prepared, which may be saved in the sound repository. The models, in the form of source code, are then sent to a melody generator. First, the generator is set with specific parameters using a controller conforming to MIDI standards and supplemented with composition characteristics read from the user preference database. Next, the content is passed to automatic generation based on artificial intelligence algorithms, and the digital score of the composition with the desired characteristics is generated. Sound tracks of individual instruments are rendered, and the rendered tracks are mixed into the final music recording. Finally, the composition and its recording are verified by the critic module using algorithms based on neural networks.
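The generate-then-verify loop above can be sketched with a note-set database, a parameterized generator, and a critic gate. The note pool, parameter names, and the trivially simple critic are stand-ins for the AI generation and neural-network verification steps.

```python
import random

def generate_score(params, note_db, length=8):
    """Melody generator: draws notes from the note-set database, seeded by
    user-preference parameters (a stand-in for the AI generation step)."""
    rng = random.Random(params["seed"])
    pool = note_db[params["scale"]]
    return [rng.choice(pool) for _ in range(length)]

def critic(score):
    """Toy critic: reject compositions that repeat a single note throughout
    (a stand-in for the neural-network verification module)."""
    return len(set(score)) > 1

note_db = {"c_major": [60, 62, 64, 65, 67, 69, 71]}  # MIDI note numbers
params = {"scale": "c_major", "seed": 42}
score = generate_score(params, note_db)
print(score)  # an 8-note digital score drawn from the C-major pool
```

In the described system the accepted score would then be rendered per instrument and mixed; a rejected score would be regenerated with adjusted parameters.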

Method and system for learning and using latent-space representations of audio signals for audio content-based retrieval

A method and system are provided for extracting features from digital audio signals which exhibit variations in pitch, timbre, decay, reverberation, and other psychoacoustic attributes, and learning, from the extracted features, an artificial neural network model for generating contextual latent-space representations of digital audio signals. A method and system are also provided for learning an artificial neural network model for generating consistent latent-space representations of digital audio signals, in which the generated latent-space representations are comparable for the purposes of determining psychoacoustic similarity between digital audio signals. A method and system are also provided for extracting features from digital audio signals and learning, from the extracted features, an artificial neural network model for generating latent-space representations of digital audio signals which select salient attributes of the signals that represent psychoacoustic differences between them.
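The retrieval use of such latents can be sketched as: embed extracted features into a normalized latent space, then return the catalog entry nearest the query. The random projection stands in for the learned neural encoder; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 32)) / 32  # stand-in for a trained encoder network

def embed(features):
    """Project extracted audio features into the latent space; unit-norm
    latents are directly comparable across signals."""
    z = np.tanh(W @ features)
    return z / np.linalg.norm(z)

def retrieve(query_features, catalog):
    """Content-based retrieval: nearest stored latent by Euclidean distance."""
    q = embed(query_features)
    dists = {name: float(np.linalg.norm(q - z)) for name, z in catalog.items()}
    return min(dists, key=dists.get)

clip_a = rng.standard_normal(32)  # pre-extracted features of a stored clip
catalog = {"clip_a": embed(clip_a), "clip_b": embed(rng.standard_normal(32))}
print(retrieve(clip_a, catalog))  # the query retrieves its own stored clip
```

With a trained encoder, distance in this space would approximate psychoacoustic similarity rather than raw feature similarity.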

Music Generator Generation of Continuous Personalized Music
20220059063 · 2022-02-24 ·

Techniques are disclosed relating to automatically generating new music content. In some embodiments, a computing system receives user input specifying a user-defined music control element. The computing system may train a machine learning model to change both composition and performance parameters based on user adjustments to the user-defined music control element. In embodiments in which composition and performance subsystems are on different devices, one device may transmit configuration information to another device, where the configuration information specifies how to adjust parameters based on user input to the user-defined music control element. Disclosed techniques may facilitate centralized learning for human-like music production while allowing individualized customization for individual users. Further, disclosed techniques may allow artists to define their own abstract music controls and make those controls available to end-users.
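A single user-defined control driving both composition and performance parameters can be sketched as interpolation between two parameter endpoints. The control name ("energy"), parameter names, and linear mapping are hypothetical; the patent's learned mapping would replace the interpolation.

```python
def apply_control(value, mapping):
    """Derive concrete parameters from one abstract control in [0, 1].
    `mapping` is a (low-endpoint, high-endpoint) pair of parameter dicts."""
    lo, hi = mapping
    return {k: lo[k] + value * (hi[k] - lo[k]) for k in lo}

# Hypothetical artist-defined control: "energy" moves tempo (a composition
# parameter) and velocity (a performance parameter) together.
energy_mapping = (
    {"tempo_bpm": 80, "velocity": 50},    # control at 0.0
    {"tempo_bpm": 140, "velocity": 110},  # control at 1.0
)
print(apply_control(0.5, energy_mapping))  # → {'tempo_bpm': 110.0, 'velocity': 80.0}
```

The `energy_mapping` tuple plays the role of the configuration information the abstract describes transmitting between devices: it fully specifies how user input to the control adjusts each parameter.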