Patent classifications
G10H1/368
METHOD AND APPARATUS FOR DISPLAYING LYRIC EFFECTS, ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM
The present disclosure provides a method and an apparatus for displaying lyric effects, an electronic device, and a computer-readable medium. The method includes: obtaining, based on a lyric effect display operation of a user, an image sequence and music data to be displayed, the music data including audio data and lyrics; determining a target time point; playing at least one target image corresponding to the target time point in the image sequence; determining target lyrics corresponding to the target time point in the lyrics; adding animation effects to the at least one target image; displaying the target lyrics on the at least one target image; and playing a part of the audio data corresponding to the target lyrics.
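The core pairing step — mapping a target time point to both a target image and a target lyric line — can be sketched as a timestamp lookup over two sorted start-time lists. The function and data layout below are illustrative assumptions, not the patent's implementation.

```python
import bisect

def find_target(time_point, lyric_times, lyric_lines, image_times):
    """Return the lyric line and image index active at time_point (seconds).

    lyric_times and image_times are sorted start times; the entry whose
    start is the latest one not after time_point is treated as "current".
    (Hypothetical helper; the patent does not specify this lookup.)
    """
    li = bisect.bisect_right(lyric_times, time_point) - 1
    ii = bisect.bisect_right(image_times, time_point) - 1
    return lyric_lines[max(li, 0)], max(ii, 0)
```

With the target image index and lyric line in hand, the remaining steps (animating the image, overlaying the lyrics, playing the matching audio segment) are rendering concerns.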
VIRTUAL-MUSICAL-INSTRUMENT-BASED AUDIO PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
A virtual-musical-instrument-based audio processing method is provided. In the method, a video is played. A virtual musical instrument is displayed in the video when the virtual musical instrument is matched with at least one musical instrument graphic element in the video. Played audio of the virtual musical instrument is outputted according to interactions with the at least one musical instrument graphic element matched with the virtual musical instrument in the video. Apparatus and non-transitory computer-readable storage medium counterpart embodiments are also contemplated.
AUDIO REACTIVE AUGMENTED REALITY
Methods, systems, and storage media for augmenting a video are disclosed. Exemplary implementations may: receive a selection of an effect; receive user-generated content comprising video data and audio data; detect a characteristic of the audio data comprising at least a volume and/or a pitch of the audio data during a period of time; determine a series of numeric values based on the characteristic of the audio data during the period of time, individual numeric values of the series of numeric values being correlated with an amplitude of the volume and/or pitch at a discrete point within the period of time; and augment at least one of the video data and/or the audio data to include the effect based on the series of numeric values at discrete points in time within the period of time.
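The "series of numeric values correlated with amplitude" could be produced by windowed RMS over the audio samples, then normalized so each value can scale an effect at its point in time. This is a minimal sketch under those assumptions; the function names are not from the patent.

```python
import math

def volume_series(samples, window):
    """Split mono samples into fixed-size windows and return one RMS value
    per window -- a numeric series correlated with volume over time."""
    series = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        series.append(math.sqrt(sum(s * s for s in chunk) / window))
    return series

def effect_scale(series):
    """Normalize the series to [0, 1] so each value can drive an effect
    parameter (e.g. overlay size) at its discrete point in time."""
    peak = max(series) or 1.0  # avoid divide-by-zero on silence
    return [v / peak for v in series]
```

Pitch could be tracked the same way, substituting a per-window fundamental-frequency estimate for RMS.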
Methods, systems, devices and computer program products for adapting external content to a video stream
This disclosure falls into the field of adapting external content to a video stream, and more specifically it is related to analyzing the video stream to define a suitable narrative model, and adapting the external content based on this narrative model.
SYSTEM AND METHOD FOR SYNCHRONIZING PERFORMANCE EFFECTS WITH MUSICAL PERFORMANCE
A system and a method for synchronizing performance effects with a musical performance. The method includes: receiving a MIDI signal representing an audio track currently being played; determining, in near real-time, a tempo value of the audio track based on clock speed information obtained from the MIDI signal; receiving information about a video file selected to be played along with the audio track; determining, in near real-time, a tempo value of the video file based on the received information; processing the video file to determine the playback speed adjustment required to bring its tempo into sync with the determined tempo value of the audio track; and playing the video file, with the required playback speed adjustment applied, along with the audio track in the musical performance.
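The tempo arithmetic here follows from the MIDI specification: the MIDI beat clock emits 24 pulses per quarter note, so the interval between pulses yields BPM directly, and the ratio of the two tempi gives the video speed adjustment. A minimal sketch (helper names are assumptions):

```python
def tempo_from_midi_clock(pulse_interval_s):
    """MIDI timing clock runs at 24 pulses per quarter note, so tempo in
    BPM is 60 seconds divided by 24 pulse intervals."""
    return 60.0 / (24.0 * pulse_interval_s)

def playback_speed(audio_bpm, video_bpm):
    """Speed factor to apply to the video so its tempo matches the audio's."""
    return audio_bpm / video_bpm
```

For example, a video cut at 100 BPM played against a 120 BPM track would be sped up by a factor of 1.2.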
Systems and methods for aligning lyrics using a neural network
An electronic device receives audio data for a media item. The electronic device generates, from the audio data, a plurality of samples, each sample having a predefined maximum length. The electronic device, using a neural network trained to predict textual unit probabilities, generates a probability matrix of textual units for a first portion of a first sample of the plurality of samples. The probability matrix includes information about textual units, timing information, and respective probabilities of respective textual units at respective times. The electronic device identifies, for the first portion of the first sample, a first sequence of textual units based on the generated probability matrix.
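One common way to turn such a probability matrix (time steps × textual units) into a sequence of textual units is a CTC-style greedy decode: take the most probable unit per frame, then collapse repeats and drop blanks. The abstract does not name the decoder, so treat this as an illustrative assumption.

```python
def greedy_decode(prob_matrix, units, blank=0):
    """prob_matrix: rows are time steps, columns are textual units, with
    column `blank` acting as a CTC blank. Pick the most probable unit per
    frame, collapse consecutive repeats, and drop blanks."""
    best = [max(range(len(row)), key=row.__getitem__) for row in prob_matrix]
    out, prev = [], None
    for idx in best:
        if idx != blank and idx != prev:
            out.append(units[idx])
        prev = idx
    return out
```

The frame indices at which each surviving unit first appears double as the timing information used to align the lyrics to the audio.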
Musical notation, system, and methods
In one aspect, provided herein is a device for notating a musical composition. The device, in various implementations, is structured so as to be less laborious to notate, easier to read, and simpler to employ in notating, reading, and/or playing the music of a given composition to be composed and/or played. Accordingly, in its most basic form, the device herein disclosed includes a template, upon which one or more symbols may be notated, where such notation is configured in a manner that more closely relates the note to be played with the mechanical action needed to play that note, such as on an instrument to be or being played.
INTERACTIVE FASHION WITH MUSIC AR
Methods and systems are disclosed for performing operations comprising: receiving a monocular image that includes a depiction of a person wearing an article of clothing; generating a segmentation of the article of clothing worn by the person in the monocular image; obtaining one or more audio-track related augmented reality elements; and applying the one or more audio-track related augmented reality elements to the article of clothing worn by the person based on the segmentation of the article of clothing worn by the person.
System and method for association of a song, music, or other media content with a user's video content
In accordance with an embodiment, described herein is a system and method for association of a song, music, or other media content with a user's video content. The system enables a user to associate a song, music, or other media content that is associated with an audio clip and song metadata, with a video they are about to create, or have already created, to form a video moment. A recipient of the video moment can hear the audio clip in combination with the video content, and also view the song metadata overlay, to determine the name of the song and artist used in the video, or optionally access the song at a media server for further listening.
ENABLING OF DISPLAY OF A MUSIC VIDEO FOR ANY SONG
A system is provided for enabling a music video streaming service to display a music video for any song. The system preferably includes a music video archive, including songs having associated regular music videos in the form of single audiovisual assets including visual content; a song archive, comprising songs in the form of single audiovisual assets which do not include any visual content, i.e. that do not have associated regular music videos; an image archive, including images having metadata that associate the images with artists of songs featured in the song archive; and at least one processing device, arranged to, when a song from the song archive is selected to be played on the music video streaming service, automatically and in real-time create a music video for the song, in the form of a video slide-show, using images from the image archive.
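The slide-show assembly described above reduces to selecting archive images whose metadata matches the song's artist and allotting each a share of the song's duration. The even-split timing and the dictionary layout below are assumptions for illustration, not the patent's scheme.

```python
def build_slideshow(song, image_archive):
    """Pick archive images tagged with the song's artist and give each an
    equal share of the song's duration -- a minimal slide-show plan as a
    list of (image file, display seconds) pairs."""
    matches = [img["file"] for img in image_archive
               if img["artist"] == song["artist"]]
    if not matches:
        return []
    per_slide = song["duration_s"] / len(matches)
    return [(f, per_slide) for f in matches]
```

A production system would presumably vary slide durations (e.g. to beat boundaries) and cap or shuffle the image set, but the core archive-join is the same.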