Patent classifications
G10H1/366
Reverberation technique for 3D audio objects
Reverberation techniques for 3D audio are disclosed. In an example method, a three-dimensional (3D) reverberation is applied to a sound object placed at a user-selected position in a sound room. A sound object signal is received. A 3D spatial room response (SRR) signal is computed corresponding to the user-selected position. A time convolution operation is performed between an audio signal of the sound object signal and the computed SRR signal to generate a reverberated signal.
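The core operation the abstract describes — convolving a dry sound-object signal with a room response — can be sketched as follows. This is a minimal illustration, not the patented method: the decaying-exponential `srr` is a stand-in for a real computed spatial room response, and the function name is hypothetical.

```python
import numpy as np

def apply_reverb(dry, srr):
    """Convolve a dry sound-object signal with a spatial room
    response (SRR) to produce the reverberated signal."""
    wet = np.convolve(dry, srr)
    # Normalize to avoid clipping after convolution.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Toy example: 100 ms of noise at 48 kHz through a decaying
# impulse response standing in for a computed SRR.
rng = np.random.default_rng(0)
dry = rng.standard_normal(4800)
srr = np.exp(-np.linspace(0, 6, 2400))
wet = apply_reverb(dry, srr)
print(len(wet))  # len(dry) + len(srr) - 1 = 7199
```

In practice the SRR would depend on the object's position in the room, and the convolution would run per output channel for true 3D rendering.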
METHOD AND APPARATUS FOR DETERMINING VOLUME ADJUSTMENT RATIO INFORMATION, DEVICE, AND STORAGE MEDIUM
A method for determining volume adjustment ratio information comprises acquiring a first singing audio and an original accompaniment audio corresponding to the first singing audio, wherein the first singing audio is a user singing audio; acquiring a first audio of a non-singing part in the first singing audio, and acquiring a loudness characteristic of the first audio; acquiring, in the original accompaniment audio, a second audio whose playback duration corresponds to a playback duration of the first audio, and acquiring a loudness characteristic of the second audio; and determining a ratio of the loudness characteristic of the first audio to the loudness characteristic of the second audio as adjustment ratio information for adjusting an accompaniment volume of the first singing audio.
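The ratio computation in this abstract reduces to comparing a loudness characteristic of two equal-duration segments. A minimal sketch, using RMS as an assumed stand-in for the patent's unspecified "loudness characteristic" (function names are illustrative):

```python
import numpy as np

def rms_loudness(x):
    """Simple RMS stand-in for the patent's 'loudness characteristic'."""
    return float(np.sqrt(np.mean(np.square(x))))

def accompaniment_adjust_ratio(first_audio, second_audio):
    """Ratio of the non-singing segment's loudness to the loudness of
    the accompaniment segment covering the same playback duration."""
    return rms_loudness(first_audio) / rms_loudness(second_audio)

# Toy segments: the recording's non-singing part is half as loud
# as the matching accompaniment excerpt.
first = 0.1 * np.ones(1000)
second = 0.2 * np.ones(1000)
print(accompaniment_adjust_ratio(first, second))  # 0.5
```

The resulting ratio could then scale the accompaniment gain so its level matches the recording conditions of the user's take.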
Song Recording Method, Audio Correction Method, and Electronic Device
A method includes displaying, by an electronic device, a first interface, where the first interface includes a recording button used to record a first song; obtaining, by the electronic device, accompaniment of the first song and feature information of an a cappella of an original singer; starting to record an a cappella sung by the user; and displaying, by the electronic device, guidance information on a second interface based on the feature information of the a cappella of the original singer, where the guidance information guides the user in one or more of breathing and vibrato during singing.
Template-based excerpting and rendering of multimedia performance
Disclosed herein are computer-implemented method, system, and computer-readable storage-medium embodiments for implementing template-based excerpting and rendering of multimedia performance technologies. An embodiment includes at least one computer processor configured to retrieve a first content instance and corresponding first metadata. The first content instance may include a first plurality of structural elements, with at least one structural element corresponding to at least part of the first metadata. The first content instance may be transformed by a rendering engine running on the at least one computer processor and/or transmitted to a content-playback device.
Augmented Reality Filters for Captured Audiovisual Performances
Visual effects, including augmented reality-type visual effects, are applied to audiovisual performances with differing visual effects and/or parameterizations thereof applied in correspondence with computationally determined audio features or elements of musical structure coded in temporally-synchronized tracks or computationally determined therefrom. Segmentation techniques applied to one or more audio tracks (e.g., vocal or backing tracks) are used to compute some of the components of the musical structure. In some cases, applied visual effects are based on an audio feature computationally extracted from a captured audiovisual performance or from an audio track temporally-synchronized therewith.
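One simple way to segment an audio track into coarse structural regions, of the kind visual effects could be keyed to, is short-time energy thresholding. This is an illustrative baseline only, not the segmentation technique claimed in the patent; all names are hypothetical.

```python
import numpy as np

def segment_by_energy(samples, frame_len=1024, threshold=0.02):
    """Label each frame of an audio track as active/quiet by short-time
    RMS energy; runs of identical labels approximate coarse structure
    (e.g. sung sections vs. rests)."""
    n = len(samples) // frame_len
    frames = samples[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    labels = rms > threshold
    # Collapse consecutive identical labels into (label, start_frame) runs.
    segments = []
    for i, lab in enumerate(labels):
        if not segments or segments[-1][0] != lab:
            segments.append((bool(lab), i))
    return segments

# Quiet, loud, quiet: three segments.
x = np.concatenate([np.zeros(2048), 0.5 * np.ones(2048), np.zeros(2048)])
print(segment_by_energy(x))  # [(False, 0), (True, 2), (False, 4)]
```

Segment boundaries found this way could trigger changes of visual effect or parameterization in time with the musical structure.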
SYSTEM AND METHOD FOR GENERATING HARMONIOUS COLOR SETS FROM MUSICAL INTERVAL DATA
Systems and methods are disclosed for generating color sets based on musical concepts of pitch intervals and harmony. Color sets are derived via a music-to-hue process which analyzes musical pitch data associated with musical input to determine pitch intervals included in the music. Pitch interval angles associated with the pitch intervals are applied to a tuned hue index to identify hue notes ordered within the index that are separated by a hue interval angle similar to the pitch interval angle associated with the analyzed pitch data. The systems and methods provide for the creation of color sets which are analogous to musical chords in that they include multiple hue notes selected based on hue interval angles derived from the pitch interval angles associated with the received musical input.
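The pitch-interval-to-hue mapping can be sketched with the natural correspondence of the 12-semitone octave to the 360-degree hue circle (30 degrees per semitone). This is an assumed mapping for illustration; the patent's tuned hue index may differ, and the function names are hypothetical.

```python
def interval_to_hue_angle(semitones):
    """Map a pitch interval to an angle on a 12-step hue circle:
    one semitone = 30 degrees (12 semitones = a full 360-degree turn)."""
    return (semitones * 30) % 360

def chord_to_hues(root_hue_deg, intervals):
    """Build a color set analogous to a chord: one hue note per pitch,
    each offset from the root hue by the pitch's interval angle."""
    return [(root_hue_deg + interval_to_hue_angle(i)) % 360 for i in intervals]

# A major triad (0, 4, 7 semitones) anchored at a root hue of 0 degrees.
print(chord_to_hues(0, [0, 4, 7]))  # [0, 120, 210]
```

Under this mapping, consonant chords yield hue sets with wide, regular angular separations, which is the sense in which the color sets are "harmonious."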
VOCAL TRACK REMOVAL BY CONVOLUTIONAL NEURAL NETWORK EMBEDDED VOICE FINGERPRINTING ON STANDARD ARM EMBEDDED PLATFORM
A vocal removal method and a system thereof are provided. In the vocal removal method, a voice separation model is generated and trained to process real-time input music, separating the voice from the accompaniment. The method further comprises feature extraction and reconstruction steps to obtain the voice-minimized music.
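The separate-then-reconstruct pipeline can be illustrated in miniature with a fixed frequency-domain mask in place of the trained separation model. This is a crude stand-in, not the patented CNN approach: the band limits and function name are illustrative assumptions.

```python
import numpy as np

def remove_band(mix, sample_rate, lo_hz, hi_hz):
    """Stand-in for a trained separation model: zero out the frequency
    band where the voice energy is assumed to lie, then reconstruct
    the time-domain signal (the 'reconstruction' step)."""
    spectrum = np.fft.rfft(mix)
    freqs = np.fft.rfftfreq(len(mix), d=1.0 / sample_rate)
    mask = (freqs < lo_hz) | (freqs > hi_hz)  # keep everything outside the band
    return np.fft.irfft(spectrum * mask, n=len(mix))

# 200 Hz "accompaniment" plus 1 kHz "voice"; suppress 300-3000 Hz.
sr = 8000
t = np.arange(sr) / sr
mix = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 1000 * t)
out = remove_band(mix, sr, 300, 3000)
```

A real system would replace the fixed mask with a model-predicted, time-varying mask, since voice and accompaniment overlap heavily in frequency.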
AUDIOVISUAL COLLABORATION METHOD WITH LATENCY MANAGEMENT FOR WIDE-AREA BROADCAST
Techniques have been developed to facilitate the livestreaming of group audiovisual performances. Audiovisual performances including vocal music are captured and coordinated with performances of other users in ways that can create compelling user and listener experiences. For example, in some cases or embodiments, duets with a host performer may be supported in a sing-with-the-artist style audiovisual livestream in which aspiring vocalists request or queue particular songs for a live radio show entertainment format. The developed techniques provide a communications latency-tolerant mechanism for synchronizing vocal performances captured at geographically-separated devices (e.g., at globally-distributed, but network-connected mobile phones or tablets or at audiovisual capture devices geographically separated from a live studio).
NON-LINEAR MEDIA SEGMENT CAPTURE AND EDIT PLATFORM
User interface techniques provide user vocalists with mechanisms for forward and backward traversal of audiovisual content, including pitch cues, waveform- or envelope-type performance timelines, lyrics and/or other temporally-synchronized content at record-time, during edits, and/or in playback. Recapture of selected performance portions, coordination of group parts, and overdubbing may all be facilitated. Direct scrolling to arbitrary points in the performance timeline, lyrics, pitch cues and other temporally-synchronized content allows users to conveniently move through a capture or audiovisual edit session. In some cases, a user vocalist may be guided through the performance timeline, lyrics, pitch cues and other temporally-synchronized content in correspondence with group part information, such as in a guided short-form capture for a duet. A scrubber allows user vocalists to conveniently move forward and backward through the temporally-synchronized content.
MUSICAL INSTRUMENT TUNER, MUSICAL PERFORMANCE SUPPORT DEVICE AND MUSICAL INSTRUMENT MANAGEMENT DEVICE
The musical instrument tuner includes a sensor device attached to a musical instrument and an operation device capable of mutual wireless communication with the sensor device. The sensor device includes an acceleration sensor having at least two detection axes; frequency detection means for detecting, as a detected frequency, the frequency of vibration of musical sound generated through operation of the musical instrument, based on an output from the acceleration sensor; and sensor-side communication means for transmitting transmission information, including information regarding the detected frequency, to the operation device. The operation device includes operation-side communication means for receiving the transmission information transmitted from the sensor device; display means; and control means for generating tuning information of the musical instrument based on the received transmission information and causing the display means to display the tuning information.
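The step from a detected frequency to displayable tuning information is a standard equal-temperament calculation: find the nearest note and the offset in cents. A minimal sketch (the function name and display format are illustrative, not from the patent):

```python
import math

A4 = 440.0
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def tuning_info(detected_hz):
    """Turn a detected vibration frequency into tuner display info:
    the nearest equal-tempered note name and the offset in cents."""
    semitones_from_a4 = 12 * math.log2(detected_hz / A4)
    nearest = round(semitones_from_a4)
    cents = 100 * (semitones_from_a4 - nearest)
    midi = 69 + nearest                      # MIDI number of the nearest note
    name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
    return name, cents

print(tuning_info(440.0))  # ('A4', 0.0)
print(tuning_info(445.0))  # 'A4', about 20 cents sharp
```

In the described device this computation would run in the operation device's control means, with the detected frequency arriving over the wireless link from the sensor.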