
REPRODUCTION CONTROL DEVICE, PROGRAM, AND REPRODUCTION CONTROL METHOD

A playing controller includes: a data acquiring unit configured to acquire audio data associated with information regarding a play position in a music piece, and text-related data associated with the information regarding the play position; an operation signal acquiring unit configured to acquire an operation signal indicating an operation for controlling the music piece; an audio data processing unit configured to process, in accordance with the operation signal, the audio data associated with a section within the music piece identified by the information regarding the play position; an image data generating unit configured to generate image data containing a character image based on the text-related data, and to process the character image showing a lyric in the section based on the information regarding the play position and the operation signal; and a data output unit configured to output the processed audio data and the image data.
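The core lookup the abstract describes, tying a play position to the lyric section it falls in, can be sketched as follows. This is an illustrative sketch only, not code from the patent; the names (`LyricLine`, `lyric_for_position`) and the seconds-based timing are assumptions.

```python
from dataclasses import dataclass

@dataclass
class LyricLine:
    start: float  # section start, seconds into the music piece
    end: float    # section end, seconds into the music piece
    text: str     # lyric shown for this section

def lyric_for_position(lines, position):
    """Return the lyric of the section containing the play position, or None."""
    for line in lines:
        if line.start <= position < line.end:
            return line.text
    return None

lines = [LyricLine(0.0, 4.5, "first verse"), LyricLine(4.5, 9.0, "second verse")]
print(lyric_for_position(lines, 5.2))  # second verse
```

An operation signal that changes the play position (seek, loop, speed change) would simply re-run this lookup so that audio processing and lyric rendering stay tied to the same section.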

Song Recording Method, Audio Correction Method, and Electronic Device
20220130360 · 2022-04-28 ·

A method includes: displaying, by an electronic device, a first interface that includes a recording button used to record a first song; obtaining, by the electronic device, the accompaniment of the first song and feature information of the original singer's a cappella; starting to record the user's a cappella singing; and displaying, by the electronic device, guidance information on a second interface based on the feature information of the original singer's a cappella, where the guidance information guides one or more of breathing and vibrato during the user's singing.

Live stream processing method, apparatus, system, electronic apparatus and storage medium

The present disclosure provides a live stream processing method, apparatus, system, electronic device, and storage medium. A first electronic device acquires target song information provided by a second electronic device, where the target song information includes at least a target song identifier. When notification information is received, that is, when the second electronic device plays the accompaniment audio of the target song, the first electronic device plays the accompaniment audio synchronously with the second electronic device according to the target song identifier, and acquires, through a server, the singing audio sent by the second electronic device. Finally, the first electronic device combines the played accompaniment audio and the singing audio into a live stream and sends the live stream to the server.
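The final step, combining locally played accompaniment with remotely received singing audio into one stream, amounts to mixing two synchronized sample buffers. A minimal sketch, assuming 16-bit PCM samples and a simple clamp for overflow (neither is specified in the abstract):

```python
def mix_frames(accompaniment, vocals, vocal_gain=1.0):
    """Sum two equal-length lists of 16-bit PCM samples, clamping to range."""
    mixed = []
    for a, v in zip(accompaniment, vocals):
        s = a + int(v * vocal_gain)
        mixed.append(max(-32768, min(32767, s)))  # clamp to int16 range
    return mixed

print(mix_frames([100, -200], [50, 50]))  # [150, -150]
```

A real implementation would also have to align the two buffers against network latency before mixing, which is the synchronization problem the abstract's notification mechanism addresses.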

Augmented Reality Filters for Captured Audiovisual Performances

Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
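Driving an effect parameterization from a computationally extracted audio feature can be illustrated with a simple mapping, for example from per-frame vocal loudness (RMS) to a normalized effect intensity. This is a hedged sketch in the spirit of the abstract; the feature choice, range, and function name are assumptions, not the patent's method.

```python
def effect_intensity(rms, floor=0.01, ceiling=0.5):
    """Map an RMS loudness value to a 0..1 visual-effect parameter, clamped."""
    t = (rms - floor) / (ceiling - floor)
    return max(0.0, min(1.0, t))

print(effect_intensity(0.255))  # 0.5
```

Segment boundaries from the musical-structure tracks (verse, chorus, etc.) would then select *which* effect applies, while a feature like this modulates *how strongly* it applies within the segment.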

USER INTERFACES FOR CONTENT APPLICATIONS
20230305799 · 2023-09-28 ·

In some embodiments, an electronic device displays time-synced lyrics of content items playing on an electronic device. In some embodiments, an electronic device displays representations of content items in a playback sequence on an electronic device. In some embodiments, an electronic device shares an item of content with another user account of another electronic device.

AUDIOVISUAL COLLABORATION METHOD WITH LATENCY MANAGEMENT FOR WIDE-AREA BROADCAST

Techniques have been developed to facilitate the livestreaming of group audiovisual performances. Audiovisual performances including vocal music are captured and coordinated with performances of other users in ways that can create compelling user and listener experiences. For example, in some cases or embodiments, duets with a host performer may be supported in a sing-with-the-artist style audiovisual livestream in which aspiring vocalists request or queue particular songs for a live radio show entertainment format. The developed techniques provide a communications latency-tolerant mechanism for synchronizing vocal performances captured at geographically-separated devices (e.g., at globally-distributed, but network-connected mobile phones or tablets or at audiovisual capture devices geographically separated from a live studio).

NON-LINEAR MEDIA SEGMENT CAPTURE AND EDIT PLATFORM

User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows the user to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.

SOUND SOURCE FILE STRUCTURE, RECORDING MEDIUM RECORDING THE SAME, AND METHOD OF PRODUCING SOUND SOURCE FILE
20210358522 · 2021-11-18 ·

The present disclosure relates to a sound source file structure that outputs lyrics as audible sounds just before the corresponding melodies start, helping a user recall the lyrics once the accompaniment for a song begins and sing the correct lyrics for the melodies. The sound source file structure may include one or more backing sound source layers in which backing sounds based on beats and rhythms are placed, a melody sound source layer in which melody notes corresponding to lyrics based on beats and rhythms and a rest section corresponding to a rest are placed, and a lyric voice source layer in which a lyric voice is placed at a position corresponding to a rest section.
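The layered layout the abstract describes can be sketched as data: a melody layer alternating rests and sung sections, with lyric-voice cues placed into the rest immediately preceding each melody section. All field names and the timing values here are hypothetical illustrations, not the patent's file format.

```python
melody_layer = [
    {"type": "rest",  "start": 0.0,  "dur": 2.0},
    {"type": "notes", "start": 2.0,  "dur": 6.0, "lyric": "line one"},
    {"type": "rest",  "start": 8.0,  "dur": 2.0},
    {"type": "notes", "start": 10.0, "dur": 6.0, "lyric": "line two"},
]

def lyric_voice_cues(melody):
    """Place each spoken lyric cue in the rest preceding its melody section."""
    cues = []
    for rest, notes in zip(melody, melody[1:]):
        if rest["type"] == "rest" and notes["type"] == "notes":
            cues.append({"start": rest["start"], "speak": notes["lyric"]})
    return cues

print(lyric_voice_cues(melody_layer))
```

During playback, the backing layers play continuously while each cue in the lyric voice source layer is voiced at its `start` time, so the listener hears the upcoming lyric during the rest before singing it.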

SYSTEMS AND METHODS FOR GENERATING AUDIBLE VERSIONS OF TEXT SENTENCES FROM AUDIO SNIPPETS
20210358474 · 2021-11-18 ·

A method is performed at a server system of a media-providing service. The server system has one or more processors and memory storing instructions for execution by the one or more processors. The method includes receiving a text sentence including a plurality of words from a device of a first user and extracting a plurality of audio snippets from one or more audio tracks. A respective audio snippet in the plurality of audio snippets corresponds to one or more words in the plurality of words of the text sentence. The method also includes assembling the plurality of audio snippets in a first order to produce an audible version of the text sentence. The method further includes providing, for playback at the device of the first user, the audible version of the text sentence including the plurality of audio snippets in the first order.
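The assembly step, matching each word of the sentence to an audio snippet and ordering the snippets to voice the sentence, can be sketched as follows. The dictionary-based snippet index and the byte payloads are assumptions for illustration; the patent's extraction from audio tracks is not shown.

```python
def assemble_sentence(sentence, snippet_index):
    """Return snippets ordered to voice the sentence, or None if a word is missing."""
    ordered = []
    for word in sentence.lower().split():
        snippet = snippet_index.get(word)
        if snippet is None:
            return None  # no track contains this word
        ordered.append(snippet)
    return ordered

index = {"hello": b"\x01", "world": b"\x02"}
print(assemble_sentence("Hello world", index))  # [b'\x01', b'\x02']
```

In the described method a snippet may also cover several consecutive words, so a production matcher would prefer the longest matching phrase rather than going strictly word by word.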

SYSTEM AND METHOD FOR PROVIDING A VIDEO WITH LYRICS OVERLAY FOR USE IN A SOCIAL MESSAGING ENVIRONMENT
20210343264 · 2021-11-04 ·

Some embodiments of the present disclosure provide a server system associated with a media-providing service. The server system receives, from a first client device, video content created by the first client device. The server system receives, from the first client device, an indication that the video content is to be associated with a song provided by the media-providing service. The server system provides, to a second client device, the video content in combination with the song. The server system also provides, to the second client device, concurrently with the video content and the song, a visual display of metadata about the song, including a name of the song.