Patent classifications
G10H2240/325
Managing playback of synchronized content
A computing device may provide a control interface that enables the user to manage the synchronized output of companion content (e.g., textual content and corresponding audio content). For example, the computing device may display a visual cue to identify a current location in textual content corresponding to a current output position of companion audio content. As the audio content is presented, the visual cue may be advanced to maintain synchronization between the output position within the audio content and a corresponding position in the textual content. The user may control the synchronized output by dragging her finger across the textual content displayed on the touch screen. Accordingly, the control interface may provide a highlight or other visual indication of the distance between the advancing position in the textual content and the location of a pointer to the textual content indicated by the current position of the user's finger.
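The cue-advancing behavior described above can be sketched as a mapping from audio playback time to a character offset in the text, with the highlight spanning from the advancing cue to the user's finger. The sync map, function names, and linear interpolation are illustrative assumptions, not details from the patent itself.

```python
# Hypothetical sketch: keep a visual cue in text synchronized with audio
# playback. SYNC_MAP pairs (audio_time_seconds, character_offset) and is
# an invented example of a synchronization table.
import bisect

SYNC_MAP = [(0.0, 0), (2.5, 40), (5.0, 85), (7.5, 130), (10.0, 180)]

def text_offset_for_audio_time(t):
    """Interpolate the character offset corresponding to audio time t."""
    times = [p[0] for p in SYNC_MAP]
    i = bisect.bisect_right(times, t) - 1
    if i < 0:
        return SYNC_MAP[0][1]
    if i >= len(SYNC_MAP) - 1:
        return SYNC_MAP[-1][1]
    t0, c0 = SYNC_MAP[i]
    t1, c1 = SYNC_MAP[i + 1]
    frac = (t - t0) / (t1 - t0)
    return round(c0 + frac * (c1 - c0))

def highlight_span(audio_time, finger_offset):
    """Span between the advancing cue and the user's finger position."""
    cue = text_offset_for_audio_time(audio_time)
    return (min(cue, finger_offset), max(cue, finger_offset))
```

In a real reader application the sync map would come from the companion-content metadata, and the highlight span would be re-rendered on every playback tick and touch-move event.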
Server side crossfading for progressive download media
In exemplary embodiments of the present invention, systems and methods are provided to implement and facilitate cross-fading, interstitials, and other effects/processing of two or more media elements in a personalized media delivery service, so that each client or user has a consistent, high-quality experience. The effects or crossfade processing can occur on the broadcast, publisher, or server side, yet can still be personalized to a specific user, allowing a personalized experience for each individual user while minimizing the processing burden on the downstream side or client device. This approach enables a consistent user experience, independent of client device capabilities, both static and dynamic. The cross-fade can be implemented after decoding the relevant chunks of each component clip, processing, re-coding, and re-chunking, or, in a preferred embodiment, the cross-fade or other effect can be implemented on the chunks relevant to the effect in the compressed domain, thus obviating any loss of quality from re-encoding. A large-scale personalized content delivery service can be implemented by limiting the processing to essentially the first and last chunks of any file, since there is no need to process the full clip. In exemplary embodiments of the present invention, this type of processing can easily be accommodated in cloud computing technology, where the first and last files may be conveniently extracted and processed within the cloud to meet the required load. Processing may also be done locally, for example by the broadcaster, with sufficient processing power to manage peak load.
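The key efficiency claim — that only the boundary chunks of each clip need processing — can be illustrated with a minimal sketch. It assumes the last chunk of clip A and the first chunk of clip B have already been decoded to mono PCM sample lists of equal length; the linear fade law and all names are illustrative, not the patent's.

```python
# Minimal sketch of a server-side crossfade restricted to boundary chunks:
# clip A's final chunk fades out while clip B's first chunk fades in.
# Only these two chunks are ever touched; the rest of each clip is passed
# through (or left in the compressed domain) unchanged.

def crossfade_chunks(last_chunk_a, first_chunk_b):
    """Linearly crossfade two equal-length PCM chunks into one chunk."""
    n = len(last_chunk_a)
    assert n == len(first_chunk_b), "boundary chunks must align"
    out = []
    for i in range(n):
        w = i / (n - 1) if n > 1 else 1.0   # weight ramps 0 -> 1
        out.append((1.0 - w) * last_chunk_a[i] + w * first_chunk_b[i])
    return out
```

A production service would then re-encode and re-chunk only this blended chunk, which is what keeps per-user personalization cheap at scale.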
SYSTEM AND METHOD FOR GENERATING AN AUDIO FILE
A system and method for synchronizing an audio or MIDI file with a video file are provided. The method includes receiving a first audio or MIDI file, receiving a video file, and operating an audio synchronization module to perform the steps of: synchronizing the first audio or MIDI file with the video file; marking an event in the video file at a point on a timeline; detecting a first musical key for the event; retrieving a musical stinger or swell from a library, in which the musical stinger or swell is a second audio or MIDI file tagged with a second musical key, and the second musical key is relevant to the first musical key; and placing the musical stinger or swell at the point on the timeline marked for the event.
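The retrieval-and-placement steps can be sketched as a lookup over a tagged library. The abstract does not define "relevant" keys, so this sketch assumes relevance means the same key or its relative major/minor; the relative-key table, library format, and function names are all invented for illustration.

```python
# Hypothetical sketch: pick a stinger/swell whose tagged key is "relevant"
# to the key detected for the event, then place it at the event's time.
# Relevance rule (same key or relative major/minor) is an assumption.

RELATIVE = {"C": "Am", "G": "Em", "D": "Bm", "F": "Dm"}

def relevant_keys(key):
    rel = dict(RELATIVE, **{v: k for k, v in RELATIVE.items()})
    return {key, rel.get(key, key)}

def pick_stinger(event_key, library):
    """Return the first library entry tagged with a relevant key, or None."""
    for stinger in library:
        if stinger["key"] in relevant_keys(event_key):
            return stinger
    return None

def place_on_timeline(timeline, t, stinger):
    """Place the chosen clip at the marked event time on the timeline."""
    timeline[t] = stinger
```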
APPARATUS, METHOD, AND COMPUTER-READABLE STORAGE MEDIUM FOR COMPENSATING FOR LATENCY IN MUSICAL COLLABORATION
An apparatus, method, and computer-readable storage medium that compensate for latency in a musical collaboration. The method includes setting a tempo for a first client device to follow, receiving a musical piece from the first client device, transmitting the musical piece to a second client device, and instructing the second client device, via an instruction transmitted along with the musical piece, to delay playback of the musical piece by a predetermined amount of time to compensate for latency in the musical collaboration, the predetermined amount of time being associated with a measure or a fraction of a measure.
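Because the delay is tied to a measure (or a fraction of one) at the tempo the first client follows, it reduces to simple tempo arithmetic. The 4/4 default and the function name below are assumptions for illustration, not taken from the patent.

```python
# Sketch of the "predetermined amount of time": a delay equal to some
# fraction of a measure at the set tempo. At 120 BPM in assumed 4/4 time,
# one measure lasts 4 * (60/120) = 2 seconds.

def playback_delay_seconds(bpm, measure_fraction=1.0, beats_per_measure=4):
    """Delay equal to measure_fraction of one measure at the given tempo."""
    seconds_per_beat = 60.0 / bpm
    return measure_fraction * beats_per_measure * seconds_per_beat
```

Quantizing the delay to a musical unit rather than raw network latency keeps the delayed playback rhythmically aligned with the collaboration.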
SYNCHRONOUS RECORDING OF AUDIO USING WIRELESS DATA TRANSMISSION
A method of synchronous recording of audio is described. The described method comprises: (a) establishing a first wireless data transmission connection between a master device and a first recording device, (b) establishing a second wireless data transmission connection between the master device and a second recording device, (c) determining a first data transmission delay for the first data transmission connection, (d) determining a second data transmission delay for the second data transmission connection, (e) transmitting predetermined reference data from the master device to the first recording device and to the second recording device, (f) at the first recording device: playing back the predetermined reference data and recording first recording data, and (g) at the second recording device: playing back the predetermined reference data and recording second recording data, (h) wherein during the transmission of the predetermined reference data a difference between the first data transmission delay and the second data transmission delay is taken into account in such a manner that the playing back of the predetermined reference data takes place synchronously at the first recording device and at the second recording device. Furthermore, a system for synchronously recording audio is described.
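Step (h) — taking the delay difference into account so playback starts simultaneously — can be sketched as the device on the faster link waiting out the difference. Whether the master staggers transmission or the devices stagger playback is an implementation choice; this sketch and its names are illustrative only.

```python
# Sketch of compensating the delay difference between two wireless links
# (delays in milliseconds). The device on the faster link applies an
# extra wait so that delay + offset is equal on both sides, making the
# reference data start synchronously at both recorders.

def playback_offsets(delay_first, delay_second):
    """Extra wait each device applies before playing the reference data."""
    diff = delay_second - delay_first
    if diff >= 0:
        return diff, 0.0   # first link is faster -> first device waits
    return 0.0, -diff      # second link is faster -> second device waits
```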
REPRODUCTION CONTROL METHOD, REPRODUCTION CONTROL SYSTEM, AND REPRODUCTION CONTROL APPARATUS
A computer-implemented reproduction control method includes reproducing sound from sound data representing a series of sounds, including a first sound and a second sound that follows the first sound. The method includes starting reproduction of the first sound; continuing the reproduction of the first sound until its end in response to receiving a first instruction during a reproduction period of the first sound; stopping the reproduction of the first sound; and, after the stopping of the reproduction of the first sound, starting reproduction of the second sound in response to receiving a second instruction provided by a user.
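The control flow amounts to a small state machine: a first instruction received while the first sound plays lets it finish and then stops, and a later second instruction starts the second sound. The class and method names below are invented for illustration.

```python
# Tiny sketch of the reproduction-control flow as a state machine.
# States: "playing_first" -> "stopped" -> "playing_second".

class ReproductionController:
    def __init__(self):
        self.state = "playing_first"
        self.finish_then_stop = False

    def on_first_instruction(self):
        # Received during the first sound: let it play to its end, then stop.
        if self.state == "playing_first":
            self.finish_then_stop = True

    def on_first_sound_ended(self):
        if self.finish_then_stop:
            self.state = "stopped"

    def on_second_instruction(self):
        # The user's second instruction starts the second sound.
        if self.state == "stopped":
            self.state = "playing_second"
```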
AUTOMATED CREATION OF VIRTUAL ENSEMBLES
A method creates a virtual ensemble file by receiving, at a central assembler node, recorded performance files from one or more recording nodes. Each recording node generates its respective performance file concurrently with playing a backing track and/or a nodal metronome signal. Each performance file includes audio and/or visual data. The assembler node generates the ensemble file as a digital output file. Another method creates the ensemble file by receiving input signals, inclusive of the backing track and/or metronome signal, at the recording node(s), and generating the performance files at the recording node(s) concurrently with playing the backing track and/or metronome signal. The performance files are then transmitted to the assembler node. A computer-readable medium or media has instructions for creating the ensemble file, with execution causing a first node to generate the performance files and a second node to receive them and generate the ensemble file.
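The assembler node's core job — combining time-aligned performances into one output — can be sketched as a sample-wise mix-down. Since every performance was recorded against the same backing track or metronome, the tracks are assumed already aligned; equal-length mono sample lists and the averaging mix law are illustrative simplifications.

```python
# Sketch of the assembler node's mix-down: sum the aligned performance
# tracks sample-by-sample (averaged to avoid clipping) into a single
# ensemble output.

def assemble_ensemble(performance_tracks):
    """Average aligned mono tracks into one digital ensemble output."""
    n = len(performance_tracks)
    length = min(len(t) for t in performance_tracks)  # trim to shortest
    return [sum(track[i] for track in performance_tracks) / n
            for i in range(length)]
```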
Audiovisual collaboration system and method with seed/join mechanic
User interface techniques provide user vocalists with mechanisms for seeding subsequent performances by other users (e.g., joiners). A seed may be a full-length seed spanning much or all of a pre-existing audio (or audiovisual) work, mixing a user's captured media content for at least some portions of the work to seed further contributions of one or more joiners. A short seed may span less than all (and in some cases, much less than all) of the audio (or audiovisual) work. For example, a verse, chorus, refrain, hook, or other limited “chunk” of an audio (or audiovisual) work may constitute a seed. A seeding user's call invites other users to join the full-length or short-form seed by singing along, singing a particular vocal part or musical section, singing harmony or another duet part, rapping, talking, clapping, recording video, adding a video clip from the camera roll, etc. The resulting group performance, whether full-length or just a chunk, may be posted, livestreamed, or otherwise disseminated in a social network.
TRANSITION FUNCTIONS OF DECOMPOSED SIGNALS
A device including: first and second input units providing first and second input signals of first and second audio tracks; a decomposition unit to decompose the first input signal to obtain decomposed signals; a playback unit to start playback of a first output signal obtained from recombining at least first and second decomposed signals at first and second volume levels, respectively; and a transition unit for performing a transition between playback of the first output signal and playback of a second output signal obtained from the second input signal. The transition unit is adapted to reduce the first and second volume levels according to first and second transition functions, respectively. The device also includes an analyzing unit to analyze an audio signal to determine a song part junction between two song parts. The transition time interval of at least one of the transition functions is set so as to include the song part junction.
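The final constraint — placing the transition interval so it contains the detected song-part junction — can be sketched as interval placement plus a gain curve. The centering policy, clamping, and linear fade law below are assumptions for illustration; the patent only requires that the junction lie inside the interval.

```python
# Sketch: choose a transition interval containing a detected song-part
# junction (e.g. a verse/chorus boundary), then apply a linear fade-out
# as one of the transition functions. Times are in seconds.

def transition_interval(junction, length, track_end):
    """Center a transition of the given length on the junction, clamped
    to the track, so the junction lies inside the interval."""
    start = max(0.0, junction - length / 2.0)
    end = min(track_end, start + length)
    start = max(0.0, end - length)           # re-clamp near the track end
    return start, end

def fade_out_gain(t, start, end):
    """First transition function: reduce volume linearly across the interval."""
    if t <= start:
        return 1.0
    if t >= end:
        return 0.0
    return 1.0 - (t - start) / (end - start)
```

Aligning the interval with the junction is what makes the crossfade land on a musically meaningful boundary rather than mid-phrase.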
Server side crossfading for progressive download media
Systems and methods are provided to implement and facilitate cross-fading, interstitials, and other effects/processing of two or more media elements in a personalized media delivery service. Effects or crossfade processing can occur on the broadcast, publisher, or server side, but can still be personalized to a specific user, in a manner that minimizes processing on the downstream side or client device. The cross-fade can be implemented after decoding the relevant chunks of each component clip, processing, re-coding, and re-chunking, or the cross-fade or other effect can be implemented on the chunks relevant to the effect in the compressed domain, thus obviating any loss of quality from re-encoding. A large-scale personalized content delivery service can limit the processing to essentially the first and last chunks of any file, there being no need to process the full clip.