Patent classifications: G10H2220/011
MUSICAL PERFORMANCE APPARATUS, MUSICAL PERFORMANCE PHRASE DETERMINING METHOD, AND STORAGE MEDIUM
A musical performance apparatus including at least one processor configured to: determine a first phrase, from among a plurality of phrases included in a musical piece, as a performance target to be performed by a user; and, in response to a user operation selecting a second phrase from the plurality of phrases, determine the set of phrases from the first phrase to the second phrase as the performance target to be performed by the user.
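The phrase-range determination described in this abstract can be sketched in simplified form. The list representation and function name below are illustrative assumptions; the patent specifies no data model.

```python
# Toy sketch (assumed representation): the phrases of a piece as an
# ordered list; the performance target is the contiguous run from the
# first selected phrase to the second, regardless of selection order.
phrases = ["intro", "verse 1", "chorus", "verse 2", "outro"]

def performance_target(first: str, second: str) -> list:
    """Return the contiguous span of phrases from `first` to `second`."""
    i, j = phrases.index(first), phrases.index(second)
    lo, hi = min(i, j), max(i, j)
    return phrases[lo:hi + 1]

target = performance_target("verse 1", "verse 2")
```

Taking the min/max of the two indices makes the span well-defined even if the user selects the second phrase earlier in the piece than the first.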
Method and system for interactive song generation
A method and system may provide for interactive song generation. In one aspect, a computer system may present options for selecting a background track. The computer system may generate suggested lyrics based on parameters entered by the user. User interface elements allow the computer system to receive lyric input from the user. As the user inputs lyrics, the computer system may update its lyric suggestions based on the previously input lyrics. In addition, the computer system may generate proposed melodies to go with the lyrics and the background track. The user may select from among the melodies created for each portion of lyrics. The computer system may optionally generate computer-synthesized vocals or capture a vocal track of a human voice singing the song. The background track, lyrics, melodies, and vocals may be combined to produce a complete song without requiring musical training or experience on the part of the user.
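The interactive flow above (track selection, context-aware lyric suggestion, per-line melody proposal) might be sketched as follows. All class, method, and string formats here are illustrative stand-ins; a real system would use generative models where this sketch fakes its outputs.

```python
from dataclasses import dataclass, field

@dataclass
class SongProject:
    """Toy model of the interactive song-generation flow; names assumed."""
    background_track: str = ""
    lyrics: list = field(default_factory=list)
    melodies: list = field(default_factory=list)

    def suggest_lyrics(self, theme: str) -> str:
        # A real system would condition a language model on `theme` and on
        # previously entered lyrics; here we fabricate a placeholder line.
        prior = self.lyrics[-1] if self.lyrics else theme
        return f"Line inspired by '{prior}'"

    def add_lyric(self, line: str) -> None:
        self.lyrics.append(line)

    def propose_melody(self, lyric_line: str) -> str:
        # Placeholder: label a "melody" by the line's word count.
        return f"melody-{len(lyric_line.split())}-beats"

project = SongProject(background_track="lo-fi beat")
project.add_lyric("Walking through the rain")
project.add_lyric(project.suggest_lyrics("rain"))  # suggestion reflects prior line
project.melodies = [project.propose_melody(line) for line in project.lyrics]
```

The design point the abstract emphasizes is the feedback loop: each suggestion call reads the lyrics entered so far, so the suggestions adapt as the song grows.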
METHODS AND SYSTEMS FOR SYNCHRONIZING AN AUDIO CLIP EXTRACTED FROM AN ORIGINAL RECORDING WITH CORRESPONDING LYRICS
Methods, systems, and devices for determining an audio portion based on a request received from a consumer or user of an operating device, where the request comprises a set of lyrics, and then effecting the streaming of the determined audio portion.
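A minimal sketch of the lookup this abstract describes, assuming the recording has timestamped lyrics available; the data layout and function name are assumptions, not from the patent.

```python
# Timed lyrics for a recording: (start_sec, end_sec, line). A lyrics
# request is matched against the lines to find which audio span to stream.
TIMED_LYRICS = [
    (0.0, 4.2, "hello darkness my old friend"),
    (4.2, 8.0, "i've come to talk with you again"),
]

def find_audio_portion(request_lyrics: str):
    """Return the (start, end) seconds of the first line containing the
    requested lyrics, or None if no line matches."""
    needle = request_lyrics.lower()
    for start, end, line in TIMED_LYRICS:
        if needle in line:
            return (start, end)
    return None

portion = find_audio_portion("talk with you")
```

Once the span is known, only that slice of the original recording needs to be extracted and streamed back to the requesting device.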
ACOUSTIC SYSTEM, COMMUNICATION DEVICE, AND PROGRAM
An acoustic system comprises an output unit, a communication unit, a first output control unit, and a second output control unit. The output unit is configured to be capable of outputting a sound and an image. The communication unit is configured to be capable of communicating with an external device. The first output control unit is configured to receive, from a communication device that transmits a reproduction signal for acoustic data, the reproduction signal via the communication unit and to cause the output unit to output a sound based on the reproduction signal, to thereby perform reproduction of a series of sounds represented by the acoustic data. The second output control unit is configured to cause the output unit to output, as the image, a word corresponding to the sound outputted by the output unit among the series of sounds represented by the acoustic data.
System and method for providing a video with lyrics overlay for use in a social messaging environment
Some embodiments of the present disclosure provide a server system associated with a media-providing service. The server system receives, from a first client device, video content created by the first client device. The server system receives, from the first client device, an indication that the video content is to be associated with a song provided by the media-providing service. The server system provides, to a second client device, the video content in combination with the song. The server system provides, to the second client device, concurrently with the video content and the song, visual display of metadata about the song, including a name of the song.
Entertainment System
The present invention relates to a stringed musical instrument with an integrated touch video screen that allows a user to play the instrument while singing along with a variety of pre-recorded songs. The video screen displays the words and other information of a user-selected song. More specifically, the stringed musical instrument has built-in karaoke components, and the strings of the instrument can be strummed by the user to play a melody, or the user may mimic playing the strings, in which case the sounds emitted from the strings can be muted.
Audiovisual collaboration system and method with seed/join mechanic
User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be full-length, spanning much or all of a pre-existing audio (or audiovisual) work and mixing in a user's captured media content for at least some portions of the work to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook, or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc. The resulting group performance, whether full-length or just a chunk, may be posted, livestreamed, or otherwise disseminated in a social network.
Automatic translation using deep learning
Audio data of an original work is received. Text in the audio data is translated to a target language. The audio data is passed to a first deep learning model to learn voice features in the audio data. The audio data is passed to a second deep learning model to learn audio properties in the audio data. The translated text is synchronized to play, in a synthesized voice, at the position of the original text in the original work. Translated audio data of the original work is created by combining the synchronized translated text in the synthesized voice with the music of the audio data.
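The pipeline described above (translate text, extract voice features, extract audio properties, then synthesize) can be outlined as below. Every function here is a stand-in assumption; the patent names no specific models, and the "models" are stubbed with fixed outputs.

```python
# Illustrative pipeline sketch; the two "deep learning models" and the
# translator are stubs standing in for real models.
def translate_text(text: str, target_lang: str) -> str:
    """Stand-in for a machine-translation step (toy dictionary lookup)."""
    toy_dict = {"hello": {"es": "hola"}, "world": {"es": "mundo"}}
    return " ".join(toy_dict.get(w, {}).get(target_lang, w) for w in text.split())

def learn_voice_features(audio) -> dict:
    # First model: speaker/voice characteristics (stubbed).
    return {"pitch": "medium"}

def learn_audio_properties(audio) -> dict:
    # Second model: properties such as tempo (stubbed).
    return {"tempo": 120}

def synthesize(text: str, voice: dict, props: dict) -> str:
    """Stand-in for speech synthesis conditioned on voice and properties."""
    return f"[{voice['pitch']}@{props['tempo']}bpm] {text}"

audio = object()  # placeholder for the original audio data
translated = translate_text("hello world", "es")
dub = synthesize(translated, learn_voice_features(audio), learn_audio_properties(audio))
```

The point of the two-model split in the abstract is that voice features and audio properties are learned separately, then both condition the synthesis so the dub retains the original delivery.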
METHOD, DEVICE, AND STORAGE MEDIUM FOR GENERATING VOCAL FILE
The disclosure provides a method, an electronic device, and a storage medium for generating a vocal file. The method can include the following. A recording control is displayed on a playing interface in response to a video played on the playing interface being of a first type. A recording interface is displayed in response to the recording control being triggered. A user audio is recorded on the recording interface based on a target song. The vocal file is generated based on the user audio and the target song.
METHOD FOR GENERATING SONG MELODY AND ELECTRONIC DEVICE
Provided is a method for generating a song melody. The method includes: displaying a melody configuration page; acquiring melody attribute information selected based on the melody configuration page; displaying a melody generation button in a triggerable state on the melody configuration page in response to selection of the melody attribute information being completed; displaying a candidate melody page in response to a triggering operation on the melody generation button; and determining one or more selected candidate melodies from at least one candidate melody as a target melody.
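The gating behavior in this abstract (the melody-generation button becomes triggerable only once attribute selection is complete) can be sketched as a small state object. The class, attribute names, and required-attribute set below are all assumptions for illustration.

```python
# Simplified state sketch of the melody-configuration page flow;
# names and the required attribute set ("mood", "tempo") are assumed.
class MelodyConfigPage:
    REQUIRED = {"mood", "tempo"}

    def __init__(self):
        self.attributes = {}
        self.candidates = []

    @property
    def generate_enabled(self) -> bool:
        # The generate button is triggerable once all required
        # melody attributes have been selected.
        return self.REQUIRED <= set(self.attributes)

    def select(self, key: str, value) -> None:
        self.attributes[key] = value

    def generate(self) -> list:
        """Produce candidate melodies (stubbed as labels) for the page."""
        if not self.generate_enabled:
            raise RuntimeError("melody attributes not yet selected")
        self.candidates = [f"candidate-{k}-{v}" for k, v in self.attributes.items()]
        return self.candidates

page = MelodyConfigPage()
page.select("mood", "calm")
page.select("tempo", 90)
melodies = page.generate()
```

From the returned candidates, the user would then pick one or more as the target melody, mirroring the final step of the claimed method.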