G10H1/368

METHOD FOR SYNTHESIZING VIDEO, TERMINAL AND STORAGE MEDIUM
20220157285 · 2022-05-19 ·

A method for synthesizing a video includes: acquiring audio data and dotting data corresponding to the audio data, the dotting data including a beat time point and a beat value corresponding to the beat time point of the audio data; acquiring a plurality of material images from a local source; and synthesizing, based on the dotting data, the plurality of material images and the audio data to acquire a synthesized video, a switching time point of each of the material images in the synthesized video being the beat time point of the audio data.
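The beat-aligned assembly described above can be sketched as a scheduling step. This is a hypothetical illustration, assuming the dotting data is a list of `(beat_time_sec, beat_value)` pairs and each material image occupies one segment whose boundary falls on a beat time point; the actual synthesis pipeline is not specified by the abstract.

```python
# Hypothetical sketch of beat-aligned clip scheduling: each material image
# becomes one segment, and every switching point lands on a beat time point.

def schedule_segments(dotting, materials, total_duration):
    """Return (material, start_sec, end_sec) triples whose switching
    boundaries fall on the audio's beat time points."""
    beat_times = [t for t, _beat_value in dotting]
    # Segment boundaries: start of audio, each beat, end of audio.
    bounds = [0.0] + beat_times + [total_duration]
    segments = []
    for i, material in enumerate(materials):
        if i + 1 >= len(bounds):
            break  # more images than beat slots; extras go unused
        segments.append((material, bounds[i], bounds[i + 1]))
    return segments

timeline = schedule_segments(
    dotting=[(0.5, 1), (1.0, 2), (1.5, 1)],
    materials=["img_a.jpg", "img_b.jpg", "img_c.jpg"],
    total_duration=2.0,
)
```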

METHOD AND APPARATUS FOR SHOWING SPECIAL EFFECT, ELECTRONIC DEVICE, AND COMPUTER-READABLE MEDIUM
20230254436 · 2023-08-10 ·

The present disclosure provides a special effect showing method and an apparatus thereof, an electronic device, and a computer-readable medium. The method includes: opening a special effect showing interface and turning on a video capturing apparatus; obtaining a music feature of background music in the special effect showing interface and second special effect elements generated according to the music feature, showing the second special effect elements in a preset order, and controlling the second special effect elements to move in the special effect showing interface according to the music feature; identifying a target object in a video captured by the video capturing apparatus and controlling a first special effect element in the special effect showing interface to move according to a movement of the target object; and triggering a special effect when the first special effect element and the second special effect elements satisfy a preset condition.
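The final trigger step can be illustrated with a simple geometric test. This is a minimal sketch assuming the preset condition is an overlap check between the first special effect element (driven by the target object) and any of the music-driven second elements; the positions, radius, and function names are invented for the example.

```python
# Simplified trigger check: the first element (tracking the target object)
# fires the special effect when it comes close enough to a second element.

def overlaps(a, b, radius=10.0):
    """Assumed preset condition: two elements collide when their
    centers are within `radius` pixels of each other."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 <= radius ** 2

def should_trigger(first_pos, second_positions):
    """True once the first element overlaps any second element."""
    return any(overlaps(first_pos, p) for p in second_positions)

fired = should_trigger((100.0, 100.0), [(300.0, 50.0), (105.0, 104.0)])
```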

Song Recording Method, Audio Correction Method, and Electronic Device
20220130360 · 2022-04-28 ·

A method includes displaying, by an electronic device, a first interface, where the first interface includes a recording button used to record a first song; obtaining, by the electronic device, an accompaniment of the first song and feature information of an a cappella of an original singer; starting to record an a cappella sung by the user; and displaying, by the electronic device, guidance information on a second interface based on the feature information of the a cappella of the original singer, where the guidance information guides one or more of breathing and vibrato during the user's singing.

System and method for assembling a recorded composition
11314936 · 2022-04-26 ·

A system and method for assembling segments of recorded music or video from among various versions or variations of a recording into a new version or composition, such that a first segment of a first version of a recorded work is attached to a second segment of a second version of the recorded work to create a new version of the recorded work.

Method for making music recommendations and related computing device, and medium thereof

This application discloses a method for making music recommendations. The method for making music recommendations is performed by a server device. The method includes obtaining a material for which background music is to be added; determining at least one visual semantic tag of the material, the at least one visual semantic tag describing at least one characteristic of the material; identifying matched music that matches the at least one visual semantic tag from a candidate music library; sorting the matched music according to user assessment information of a user corresponding to the material; screening the matched music based on a sorting result and according to a preset music screening condition; and recommending the matched music obtained through the screening as candidate music of the material.
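The match-sort-screen flow above can be sketched in a few lines. This is an illustrative sketch only: the tag names, the user scores, and the screening rule (keep the top `top_k`) are invented stand-ins for the patent's visual semantic tags, user assessment information, and preset screening condition.

```python
# Tag-match, rank by user preference, then screen down to a short list.

def recommend(material_tags, music_library, user_scores, top_k=2):
    """Match music whose tags overlap the material's visual semantic
    tags, rank by the user's assessed preference, keep the top_k."""
    matched = [m for m in music_library
               if set(m["tags"]) & set(material_tags)]
    ranked = sorted(matched,
                    key=lambda m: user_scores.get(m["id"], 0.0),
                    reverse=True)
    return [m["id"] for m in ranked[:top_k]]

library = [
    {"id": "song1", "tags": ["beach", "sunny"]},
    {"id": "song2", "tags": ["night", "city"]},
    {"id": "song3", "tags": ["sunny", "travel"]},
]
picks = recommend(["sunny", "sea"], library,
                  user_scores={"song1": 0.4, "song3": 0.9})
```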

Augmented Reality Filters for Captured Audiovisual Performances

Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
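The segmentation-to-effect correspondence can be illustrated as a lookup over computed musical-structure segments. The segment labels and effect parameterizations below are invented for the example; the patent's actual effects and audio features are not specified at this level of detail.

```python
# Map each computed musical-structure segment to a visual-effect
# parameterization, then look up the effect active at a given time.

EFFECT_FOR_SECTION = {
    "verse": {"filter": "soft_glow", "intensity": 0.3},
    "chorus": {"filter": "particle_burst", "intensity": 0.9},
    "bridge": {"filter": "color_shift", "intensity": 0.6},
}

def effect_at(time_sec, segments):
    """segments: (start_sec, end_sec, label) triples from audio
    segmentation; return the effect active at time_sec, if any."""
    for start, end, label in segments:
        if start <= time_sec < end:
            return EFFECT_FOR_SECTION.get(label)
    return None

segments = [(0.0, 15.0, "verse"), (15.0, 30.0, "chorus")]
fx = effect_at(20.0, segments)
```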

SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
20220122572 · 2022-04-21 ·

Systems and methods are disclosed for generating color sets based on musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process which analyzes musical pitch data associated with musical input to determine pitch intervals included in the music. Pitch interval angles associated with the pitch intervals are applied to a tuned hue index to identify hue notes, ordered within the index, which are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets which are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from the pitch interval angles associated with the received musical input.
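A minimal sketch of the music-to-hue idea follows, assuming the 12 semitones of an octave are mapped uniformly onto the 360-degree hue circle (30 degrees per semitone); the patent's tuned hue index may use a different, non-uniform mapping, so treat this as an invented illustration.

```python
# Derive a "chord" of hues: one hue note per pitch interval above a root.

DEGREES_PER_SEMITONE = 360 / 12  # assumed uniform semitone-to-hue mapping

def chord_to_hues(root_hue_deg, semitone_intervals):
    """Return the root hue plus one hue per pitch interval, with hue
    interval angles proportional to the pitch interval angles."""
    hues = [root_hue_deg]
    for semitones in semitone_intervals:
        hues.append((root_hue_deg + semitones * DEGREES_PER_SEMITONE) % 360)
    return hues

# A major triad (intervals of 4 and 7 semitones) rooted at hue 0 (red).
palette = chord_to_hues(0, [4, 7])
```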

Method for generating action according to audio signal and electronic device

The disclosure provides a method for generating an action according to an audio signal and an electronic device. The method includes: receiving an audio signal and extracting a high-level audio feature therefrom; extracting a latent audio feature from the high-level audio feature; in response to determining that the audio signal corresponds to a beat, obtaining a joint angle distribution matrix based on the latent audio feature; in response to determining that the audio signal corresponds to music, obtaining a plurality of designated joint angles corresponding to a plurality of joint points based on the joint angle distribution matrix; and adjusting a joint angle of each of the joint points on an avatar according to the designated joint angles.
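The last two steps can be loosely sketched as below, assuming the joint angle distribution matrix holds one `(mean_deg, std_deg)` row per joint and that designated angles are taken at the distribution means; in the disclosure these quantities come from a learned model, so every name and value here is an invented placeholder.

```python
# Pick one designated angle per joint from its distribution row, then
# write those angles onto the avatar's joint points.

def designated_angles(distribution_matrix):
    """Take each joint's designated angle at its distribution mean."""
    return [mean for mean, _std in distribution_matrix]

def apply_to_avatar(avatar, angles):
    """Adjust the joint angle of each joint point on the avatar."""
    for joint, angle in zip(avatar, angles):
        joint["angle_deg"] = angle
    return avatar

avatar = [{"name": "elbow", "angle_deg": 0.0},
          {"name": "knee", "angle_deg": 0.0}]
matrix = [(45.0, 5.0), (30.0, 8.0)]
posed = apply_to_avatar(avatar, designated_angles(matrix))
```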

NON-LINEAR MEDIA SEGMENT CAPTURE AND EDIT PLATFORM

User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.

VIDEO CONTROL DEVICE AND VIDEO CONTROL METHOD
20220020348 · 2022-01-20 ·

This video control device includes: a detection unit that detects a beat timing of audio; and a control unit that updates a display mode of a video displayed on a display device, on the basis of the beat timing and change information indicating a change in the display mode.
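The two units can be sketched as a toy pipeline, assuming beats are detected wherever frame energy jumps past a fixed threshold and the change information is a cyclic sequence of display modes; real beat detectors and the device's actual change information are far more involved, so this is an invented illustration.

```python
# Detection unit: flag frames whose energy jumps sharply as beat timings.
# Control unit: advance the display mode once per detected beat.

def detect_beat_frames(energies, threshold=1.5):
    """Frames whose energy is at least `threshold` times the previous
    frame's energy count as beat timings (a crude energy-onset test)."""
    beats = []
    for i in range(1, len(energies)):
        if energies[i - 1] > 0 and energies[i] / energies[i - 1] >= threshold:
            beats.append(i)
    return beats

def update_display_mode(mode, beat_frames, change_sequence):
    """Cycle through the change information once per beat timing."""
    for _ in beat_frames:
        mode = change_sequence[(change_sequence.index(mode) + 1)
                               % len(change_sequence)]
    return mode

beats = detect_beat_frames([1.0, 1.1, 2.5, 2.4, 5.0])
mode = update_display_mode("normal", beats, ["normal", "flash", "zoom"])
```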