Patent classifications
G10H2250/311
METHOD AND DEVICE FOR MANAGING AUDIO BASED ON SPECTROGRAM
Various embodiments herein provide a method for managing audio based on a spectrogram. The method includes generating, by a transmitter device, the spectrogram of the audio. The method includes identifying, from the spectrogram of the audio, a first spectrogram corresponding to vocals in the audio and a second spectrogram corresponding to music in the audio, and extracting a music feature from the second spectrogram. The method includes transmitting a signal comprising the first spectrogram, the second spectrogram, the music feature, and the audio to a receiver device. The method includes determining, by the receiver device, whether an audio drop is occurring in the received signal based on a parameter associated with the received signal. The method includes generating the audio using the first spectrogram, the second spectrogram, and the music feature in response to determining that the audio drop is occurring in the received signal.
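A rough sketch of the claimed receiver-side flow, in Python with NumPy. The STFT/ISTFT helpers, the zero-run drop detector, and all thresholds are illustrative assumptions, not details from the patent.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Minimal complex spectrogram: one rfft per Hann-windowed frame."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win + 1, hop)]
    return np.array([np.fft.rfft(f) for f in frames])

def istft(S, win=256, hop=128):
    """Windowed overlap-add inverse of the sketch STFT above."""
    w = np.hanning(win)
    out = np.zeros(hop * (len(S) - 1) + win)
    norm = np.zeros_like(out)
    for i, frame in enumerate(S):
        out[i * hop:i * hop + win] += np.fft.irfft(frame, n=win) * w
        norm[i * hop:i * hop + win] += w ** 2
    return out / np.maximum(norm, 1e-8)

def audio_drop_detected(signal, zero_run_threshold=64):
    """Hypothetical drop parameter: a long run of consecutive zero samples."""
    run = best = 0
    for z in np.isclose(signal, 0.0):
        run = run + 1 if z else 0
        best = max(best, run)
    return best >= zero_run_threshold

def regenerate(vocal_spec, music_spec, win=256, hop=128):
    """On a detected drop, rebuild audio from the two received spectrograms."""
    return istft(vocal_spec + music_spec, win, hop)
```

The detector stands in for the patent's unspecified "parameter associated with the received signal"; any gap metric (energy dip, packet-loss flag) could replace it.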
DIGITAL AUDIO SYSTEM
A portable digital audio system for a musician. The digital audio system includes an amplifier for processing an audio signal from a musical instrument or microphone electronically connected to the digital audio system and a speaker for playing a sound associated with the audio signal processed by the amplifier. The portable digital audio system also includes an audio control system providing operational control of the digital audio system and a primary housing for supporting the amplifier, the audio control system, and the speaker. Further, the digital audio system has a touch screen display in electronic communication with the audio control system and supported by the primary housing.
SYSTEMS, DEVICES, AND METHODS FOR MUSICAL CATALOG AMPLIFICATION SERVICES
Musical catalog amplification services that leverage or deploy a computer-based musical composition system are described. The computer-based musical composition system employs algorithms and, optionally, artificial intelligence to generate new music based on analyses of existing music. The new music may be wholly distinctive from, or may include musical variations of, the existing music. Rights in the new music generated by the computer-based musical composition system are granted to the rights holder(s) of the existing music. In this way, the musical catalog(s) of the rights holder(s) is/are amplified to include additional music assets. The computer-based musical composition system may be tuned so that the new music sounds more like, or less like, the existing music of the rights holder(s). Revenues generated from the new music are shared between the musical catalog amplification service provider and the rights holder(s).
Autonomous generation of melody
Implementations of the subject matter described herein provide a solution that enables a machine to automatically generate a melody. In this solution, user emotion and/or environment information is used to select a first melody feature parameter from a plurality of melody feature parameters, wherein each of the plurality of melody feature parameters corresponds to a music style of one of a plurality of reference melodies. The first melody feature parameter is then used to generate a first melody that conforms to the music style but is different from the reference melodies. Thus, a melody that matches the user emotion and/or environment information may be created automatically.
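The selection-then-generation idea could look roughly like the following sketch. The style table, emotion vectors, and toy note generator are all invented for illustration; the patent does not specify any of these values.

```python
import numpy as np

# Hypothetical reference styles: an emotion vector (valence, arousal) and a
# melody feature parameter (tempo, brightness, rhythmic density) per style.
REFERENCE_STYLES = {
    "lullaby":   {"emotion": np.array([0.6, 0.1]), "features": (70.0, 0.3, 0.2)},
    "pop":       {"emotion": np.array([0.8, 0.7]), "features": (120.0, 0.8, 0.6)},
    "cinematic": {"emotion": np.array([0.2, 0.5]), "features": (90.0, 0.4, 0.4)},
}

def select_melody_feature(user_emotion):
    """Pick the feature parameter whose style emotion is nearest the user's."""
    style = min(REFERENCE_STYLES,
                key=lambda s: np.linalg.norm(REFERENCE_STYLES[s]["emotion"]
                                             - user_emotion))
    return style, REFERENCE_STYLES[style]["features"]

def generate_melody(features, length=8, seed=0):
    """Toy generator: a fresh note sequence in the selected style, not a
    copy of any reference melody."""
    tempo, brightness, density = features
    rng = np.random.default_rng(seed)
    scale = np.array([0, 2, 4, 5, 7, 9, 11])        # major-scale degrees
    pitches = 60 + rng.choice(scale, size=length) + int(brightness * 12)
    beat = 60.0 / tempo
    durations = np.where(rng.random(length) < density, beat / 2, beat)
    return list(zip(pitches.tolist(), durations.tolist()))
```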
Audio Generation Methods and Systems
A method of generating audio assets, comprising the steps of: receiving an input audio asset having a first duration; generating an input image representative of the input audio asset; training a generative model on the input image; implementing the trained generative model to generate an output image representative of an output audio asset having a second duration different from the first duration; and generating the output audio asset based on the output image.
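As a rough illustration of the audio-to-image round trip, with a trivial nearest-neighbour time stretch standing in for the trained generative model (the real claim trains a model on the image; everything below is an assumption):

```python
import numpy as np

def audio_to_image(audio, win=256, hop=128):
    """Magnitude spectrogram as a 2-D 'image': n_frames x n_bins."""
    w = np.hanning(win)
    frames = [audio[i:i + win] * w for i in range(0, len(audio) - win + 1, hop)]
    return np.abs(np.array([np.fft.rfft(f) for f in frames]))

def stretch_image(image, factor):
    """Stand-in for the trained model: stretch the image in time so the
    output represents an asset of a second, different duration."""
    n_out = int(len(image) * factor)
    idx = np.minimum((np.arange(n_out) / factor).astype(int), len(image) - 1)
    return image[idx]

def image_to_audio(image, win=256, hop=128):
    """Zero-phase inversion sketch; a real system would estimate phase
    (e.g. iteratively) rather than discard it."""
    out = np.zeros(hop * (len(image) - 1) + win)
    for i, mag in enumerate(image):
        out[i * hop:i * hop + win] += np.fft.irfft(mag, n=win)
    return out
```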
Sound Signal Generation Method, Estimation Model Training Method, and Sound Signal Generation System
A method generates a sound signal in accordance with score data representative of respective durations of a plurality of notes and a shortening indication to shorten a duration of a specific note. The method includes generating a shortening rate, generating a series of control data, and generating a sound signal. The shortening rate is representative of an amount of shortening of the duration of the specific note, and is generated by inputting, to a first estimation model, condition data representative of a sounding condition specified by the score data for the specific note. Each of the series of control data is representative of a control condition of the sound signal corresponding to the score data, and the series of control data reflects a shortened duration of the specific note, shortened in accordance with the generated shortening rate. The sound signal is generated in accordance with the series of control data.
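The rate-then-control-data flow might be sketched as follows. A logistic regression stands in for the first estimation model, and the rate is treated as the fraction of the duration removed; both are assumptions, as is every weight and name below.

```python
import numpy as np

def estimate_shortening_rate(condition, weights=np.array([0.5, -0.2]), bias=0.6):
    """Hypothetical stand-in for the first estimation model: a logistic
    unit over condition data (e.g. articulation mark, nominal length)."""
    return 1.0 / (1.0 + np.exp(-(condition @ weights + bias)))

def apply_shortening(durations, specific_note, rate):
    """Shorten only the indicated note; interpret rate as fraction removed."""
    out = list(durations)
    out[specific_note] = durations[specific_note] * (1.0 - rate)
    return out

def control_data_series(durations, pitches, frame_rate=100):
    """One control vector per frame: (pitch, frames remaining in the note).
    A synthesizer would render the sound signal from this series."""
    series = []
    for pitch, dur in zip(pitches, durations):
        n = max(1, int(round(dur * frame_rate)))
        series += [(pitch, n - i) for i in range(n)]
    return series
```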
TRAINED MODEL ESTABLISHMENT METHOD, ESTIMATION METHOD, PERFORMANCE AGENT RECOMMENDATION METHOD, PERFORMANCE AGENT ADJUSTMENT METHOD, TRAINED MODEL ESTABLISHMENT SYSTEM, ESTIMATION SYSTEM, TRAINED MODEL ESTABLISHMENT PROGRAM, AND ESTIMATION PROGRAM
A trained model establishment method realized by a computer includes acquiring a plurality of datasets, each of which is formed by a combination of first performance data of a first performance by a performer, second performance data of a second performance performed together with the first performance, and a satisfaction label indicating a degree of satisfaction of the performer, and executing machine learning of a satisfaction estimation model by using the plurality of datasets. In the machine learning, the satisfaction estimation model is trained such that, for each of the datasets, the result of estimating the degree of satisfaction of the performer from the first performance data and the second performance data matches the degree of satisfaction indicated by the satisfaction label.
PERFORMANCE AGENT TRAINING METHOD, AUTOMATIC PERFORMANCE SYSTEM, AND PROGRAM
A performance agent training method realized by at least one computer includes observing a first performance of a musical piece by a performer, generating, by a performance agent, performance data of a second performance to be performed in parallel with the first performance, outputting the performance data such that the second performance is performed in parallel with the first performance of the performer, acquiring a degree of satisfaction of the performer with respect to the second performance performed based on the output performance data, and training the performance agent by reinforcement learning, using the degree of satisfaction as a reward.
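Treating the performer's satisfaction as the reward, the simplest reinforcement-learning reading is a bandit over candidate accompaniment behaviours. The epsilon-greedy scheme and the round-robin warm-up below are illustrative, not from the patent.

```python
import numpy as np

def train_performance_agent(satisfaction_fn, n_arms=4, rounds=200,
                            eps=0.1, seed=0):
    """Epsilon-greedy sketch: each 'arm' is a candidate second-performance
    behaviour; satisfaction_fn returns the performer's satisfaction (reward)."""
    rng = np.random.default_rng(seed)
    value = np.zeros(n_arms)                 # running mean reward per arm
    count = np.zeros(n_arms)
    for t in range(rounds):
        if t < n_arms:
            arm = t                          # try every behaviour once
        elif rng.random() < eps:
            arm = int(rng.integers(n_arms))  # explore
        else:
            arm = int(np.argmax(value))      # exploit the best so far
        reward = satisfaction_fn(arm)
        count[arm] += 1
        value[arm] += (reward - value[arm]) / count[arm]
    return int(np.argmax(value))
```

A full agent would condition on the observed first performance as well; this collapses that state away to keep the reward loop visible.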
Intelligent system for matching audio with video
An intelligent system for matching audio with video of the present invention provides a video analysis module targeting color tone, storyboard pace, video dialogue, length, category, the director's special requirements, actors' expressions, movement, weather, scenes, buildings, spatial and temporal context, and objects, together with a music analysis module targeting recorded music form, sectional turns, style, melody, and emotional tension. An AI matching module then matches the video characteristics from the video analysis module with the musical characteristics from the music analysis module, so as to quickly complete a creative composition selection function for matching audio with a video.
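Once both modules emit feature vectors, the matching step reduces to a similarity search. A cosine-similarity selector over a small hypothetical catalog (all names and numbers invented) illustrates the idea:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match_music_to_video(video_feature, music_catalog):
    """Return the catalog track whose analysed features best align with
    the video's analysed features."""
    return max(music_catalog,
               key=lambda name: cosine(video_feature, music_catalog[name]))
```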
AUDIO STEM IDENTIFICATION SYSTEMS AND METHODS
Methods, systems and computer program products are provided for determining acoustic feature vectors of query and target items in a first vector space, and mapping the acoustic feature vectors to a second vector space having a lower dimension. The distribution of vectors in the second vector space can then be used to identify items from the same songs, and/or items that are complementary. A mapping function is trained using a machine learning algorithm, such that complementary audio items are closer in the second vector space than the first, according to a given distance metric.
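The mapping function described above can be sketched as a contrastive linear projection: complementary stem pairs are pulled together in the second (lower-dimensional) space while unrelated pairs are pushed apart up to a margin. The linear model, loss, and hyperparameters are assumptions; the patent only requires that some learned mapping have this property.

```python
import numpy as np

def train_mapping(pairs, non_pairs, dim_in, dim_out,
                  lr=0.05, epochs=300, margin=1.0, seed=0):
    """Learn W (dim_out x dim_in) so that complementary stems (pairs) map
    close together and unrelated stems (non_pairs) map at least `margin`
    apart, per a Euclidean distance metric in the second space."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.3, size=(dim_out, dim_in))
    for _ in range(epochs):
        for a, b in pairs:                       # pull together
            d = W @ (a - b)
            W -= lr * np.outer(d, a - b)
        for a, b in non_pairs:                   # push apart up to margin
            d = W @ (a - b)
            dist = np.linalg.norm(d)
            if dist < margin:
                W += lr * np.outer(d / (dist + 1e-9), a - b)
    return W
```

Querying then amounts to projecting the query stem with W and taking nearest neighbours in the second space.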