Patent classifications
H04N21/233
PRESENTING MOBILE CONTENT BASED ON PROGRAMMING CONTEXT
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating search queries in response to obtaining audio samples on a client device. In one aspect, a method includes the actions of i) receiving audio data from a client device, ii) identifying specific content from captured media based on the received audio data, wherein the identified specific content is associated with the received audio data and the captured media includes at least one of audio media or audio-video media, iii) obtaining additional metadata associated with the identified content, iv) generating a search query based at least in part on the obtained additional metadata, and v) returning one or more search results to the client device, the one or more search results responsive to the search query and associated with the received audio data.
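The pipeline of steps (i)-(v) can be sketched as a small server-side handler. Everything here is illustrative: the fingerprint function, the in-memory fingerprint index, the metadata store, and the search backend are stand-ins for whatever matching service and index an implementation actually uses, not anything specified by the abstract.

```python
# Hedged sketch of steps (i)-(v): receive audio, identify content,
# fetch metadata, build a query, return results. All names hypothetical.
from dataclasses import dataclass

@dataclass
class Match:
    content_id: str
    title: str

# hypothetical fingerprint index: fingerprint -> identified content (step ii)
FINGERPRINT_INDEX = {"fp:abc123": Match("ep-42", "Cooking Show S01E42")}
# hypothetical metadata store: content_id -> additional metadata (step iii)
METADATA = {"ep-42": {"guest": "Jane Doe", "topic": "sourdough"}}

def fingerprint(audio_bytes: bytes) -> str:
    # stand-in for a real acoustic fingerprint (e.g. spectral peak hashing)
    return "fp:" + audio_bytes.hex()[:6]

def search(query: str) -> list[str]:
    # stand-in search backend: echoes the query as a single "result"
    return [f"result for: {query}"]

def handle_audio(audio_bytes: bytes) -> list[str]:
    match = FINGERPRINT_INDEX.get(fingerprint(audio_bytes))   # steps i-ii
    if match is None:
        return []
    meta = METADATA.get(match.content_id, {})                 # step iii
    query = " ".join([match.title, *meta.values()])           # step iv
    return search(query)                                      # step v
```

A matched sample yields a query that combines the identified title with its additional metadata, while an unmatched sample returns no results.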
Addition of Virtual Bass
Provided are, among other things, systems, methods and techniques for processing an audio signal to add virtual bass. In one representative embodiment, an apparatus includes: (a) an input line that inputs an original audio signal; (b) a bass extraction filter that extracts a bass portion of such original audio signal; (c) an estimator that estimates a fundamental frequency of a bass sound within such bass portion; (d) a frequency translator that shifts the bass portion by a positive frequency increment that is an integer multiple of the fundamental frequency estimated by the estimator, thereby providing a virtual bass signal; (e) an adder having (i) inputs coupled to the original audio signal and to the virtual bass signal and (ii) an output; and (f) an audio output device coupled to the output of the adder.
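The chain described above (extract the bass portion, estimate its fundamental, shift it up by an integer multiple of that fundamental, and sum with the original) can be sketched in a few lines of DSP. The Butterworth low-pass, the autocorrelation pitch estimator, and the single-sideband (Hilbert-transform) frequency shifter are illustrative choices, not the embodiment's specified components.

```python
# Hedged sketch of a virtual-bass processor: filter, pitch estimate,
# frequency shift, sum. Parameter choices are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def add_virtual_bass(x, fs, cutoff=150.0, k=1):
    # (b) extract the bass portion with a low-pass filter
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    bass = sosfilt(sos, x)
    # (c) estimate the fundamental via autocorrelation (20 Hz .. cutoff)
    ac = np.correlate(bass, bass, mode="full")[len(bass) - 1:]
    lo, hi = int(fs / cutoff), int(fs / 20)
    lag = lo + np.argmax(ac[lo:hi])
    f0 = fs / lag
    # (d) shift the bass up by k * f0 via single-sideband modulation,
    # moving the fundamental to its (k+1)-th harmonic
    t = np.arange(len(x)) / fs
    virtual = np.real(hilbert(bass) * np.exp(2j * np.pi * k * f0 * t))
    # (e) sum the original signal and the virtual bass signal
    return x + virtual
```

Shifting by an integer multiple of the fundamental keeps the added energy harmonically related to the original bass, which is what lets a small speaker evoke the missing low frequencies.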
AUDIOVISUAL COLLABORATION SYSTEM AND METHOD WITH LATENCY MANAGEMENT FOR WIDE-AREA BROADCAST AND SOCIAL MEDIA-TYPE USER INTERFACE MECHANICS
Techniques have been developed to facilitate the livestreaming of group audiovisual performances. Audiovisual performances including vocal music are captured and coordinated with performances of other users in ways that can create compelling user and listener experiences. For example, in some cases or embodiments, duets with a host performer may be supported in a sing-with-the-artist style audiovisual livestream in which aspiring vocalists request or queue particular songs for a live radio show entertainment format. The developed techniques provide a communications latency-tolerant mechanism for synchronizing vocal performances captured at geographically-separated devices (e.g., at globally-distributed, but network-connected mobile phones or tablets or at audiovisual capture devices geographically separated from a live studio).
SYSTEM AND METHOD FOR TRANSMITTING DATA OVER A DIGITAL INTERFACE
Systems and techniques are provided to transmit data over a digital interface between a sender and a receiver. The digital interface is configured for transmitting a primary type of data as opposed to a secondary type of data. Nevertheless, systems and techniques are provided where the secondary type of data can be transmitted over the digital interface. As such, the primary and/or secondary types of data are transmitted from the sender to the receiver via the digital interface. The primary and secondary types of data may be different and/or unrelated and could be any type of data including, but not limited to, audio data, general data, and bulk data. The received primary and secondary types of data remain useful after transmission.
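One illustrative way a secondary data type can ride an interface built for a primary type is to pack the secondary bytes into the least-significant bits of the primary samples, perturbing each sample by at most one step so both data types remain useful afterward. This is a sketch of that general idea under the assumption of 16-bit audio as the primary type; it is not the mechanism the abstract specifies.

```python
# Hedged sketch: carry secondary-type bytes in the LSBs of primary-type
# (16-bit audio) samples, one payload bit per sample.
def embed(samples: list[int], payload: bytes) -> list[int]:
    """Pack secondary bytes into the LSBs of the primary samples."""
    bits = [(b >> i) & 1 for b in payload for i in range(8)]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the LSB
    return out

def extract(samples: list[int], n: int) -> bytes:
    """Recover n secondary bytes from the sample LSBs."""
    data = bytearray()
    for j in range(n):
        byte = 0
        for i in range(8):
            byte |= (samples[8 * j + i] & 1) << i
        data.append(byte)
    return bytes(data)
```

The receiver recovers the secondary bytes exactly, while the primary samples differ from the originals by at most one quantization step.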
COMMUNICATION METHOD AND SYSTEM
A first terminal transmits first event data instructing generation of a first sound to a server. A second terminal transmits second event data instructing generation of a second sound to the server. The server transmits data including the first event data and the second event data to the first terminal. The first terminal controls generation of the first sound and the second sound, based on the data including the first event data and the second event data.
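The relay described above can be sketched with two terminals posting sound events to a server, which returns data including both events so the first terminal can control generation of both sounds. Class and method names here are illustrative, not from the abstract.

```python
# Hedged sketch of the event relay: terminals submit sound events;
# the server hands back the merged event data for local rendering.
class Server:
    def __init__(self):
        self.events = []

    def submit(self, event: dict) -> None:
        self.events.append(event)

    def snapshot(self) -> list:
        # data including the first event data and the second event data
        return list(self.events)

class Terminal:
    def __init__(self, name: str, server: Server):
        self.name, self.server = name, server
        self.rendered: list[str] = []

    def play(self, note: str) -> None:
        # transmit event data instructing generation of a sound
        self.server.submit({"from": self.name, "note": note})

    def sync(self) -> None:
        # control generation of both sounds based on the merged data
        self.rendered = [e["note"] for e in self.server.snapshot()]
```

Because the server returns event data rather than rendered audio, each terminal synthesizes both sounds locally from the same merged stream.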
ANALYSIS OF COPY PROTECTED CONTENT AND USER STREAMS
In one example, a method performed by a processing system including at least one processor includes obtaining a first stream of audio and video data, wherein the first stream of audio and video data comprises a lower-resolution version of a second stream of audio and video data that is transmitted to a first user device over a content distribution network and encrypted using a high-bandwidth digital content protection protocol, performing an analysis technique on the first stream of audio and video data in order to extract audio and video artifacts from which content of the first stream of audio and video data is inferred, deriving a signature marker from the audio and video artifacts, and sending the signature marker to the first user device.
Broadcast signal transmitting apparatus, broadcast signal receiving apparatus, method for transmitting broadcast signal, and method for receiving broadcast signal
An apparatus for receiving a broadcast signal includes a receiver configured to receive the broadcast signal including physical layer signaling data, signaling data, content data and service guide information, wherein the signaling data is included in a signal frame indicated by the physical layer signaling data, wherein the signaling data includes mapping information between a service and a PLP, and information supporting channel scanning and service acquisition, wherein the service guide information includes a service fragment having information about the broadcast service and a content fragment having information about content data of the broadcast service, wherein the content fragment further includes a content-level PrivateExt element having component information of the content data, wherein the component information includes information for a component in the broadcast service, and wherein the component is one of a video component, an audio component, and a closed caption (CC) component.
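The service-guide hierarchy described above (a service fragment, content fragments, and a content-level PrivateExt element carrying per-component information) can be modeled roughly as nested records. The names mirror the abstract, but the field choices are illustrative rather than taken from any broadcast specification.

```python
# Hedged data model of the signaling hierarchy: service fragment ->
# content fragment -> content-level PrivateExt -> components.
from dataclasses import dataclass, field
from enum import Enum

class ComponentType(Enum):
    VIDEO = "video"
    AUDIO = "audio"
    CLOSED_CAPTION = "cc"

@dataclass
class Component:
    type: ComponentType
    role: str                      # illustrative, e.g. "dialog audio"

@dataclass
class PrivateExt:                  # content-level extension element
    components: list = field(default_factory=list)

@dataclass
class ContentFragment:             # information about the content data
    content_id: str
    private_ext: PrivateExt = field(default_factory=PrivateExt)

@dataclass
class ServiceFragment:             # information about the broadcast service
    service_id: str
    contents: list = field(default_factory=list)
```

A receiver walking this structure can enumerate a service's contents and, via each content's PrivateExt, discover which video, audio, and CC components make up the broadcast.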