H04R3/12

Vehicle-based media system with audio ad and visual content synchronization feature
11581969 · 2023-02-14

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying visual content based at least on the identified reference audio content; and (f) outputting, via a user interface of the vehicle-based media system, the identified visual content.
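
As a rough illustration of step (d) only, the similarity check could compare a feature vector derived from the captured audio against stored reference fingerprints and keep the best match above a threshold. The cosine-similarity metric, the dictionary layout, and the threshold value here are illustrative assumptions, not the claimed method.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_reference(captured, references, threshold=0.9):
    """Return the id of the best-matching reference whose similarity with the
    captured feature vector meets the threshold, or None if none qualifies."""
    best_id, best_score = None, threshold
    for ref_id, ref_vec in references.items():
        score = cosine_similarity(captured, ref_vec)
        if score >= best_score:
            best_id, best_score = ref_id, score
    return best_id
```

A matched reference id could then be used to look up the visual content of steps (e) and (f), e.g. `identify_reference(mic_features, {"ad_001": [1.0, 0.0, 1.0]})`.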

DISPLAY DEVICE, SOUND-EMITTING CONTROLLING METHOD, AND SOUND-EMITTING CONTROLLING DEVICE

The present disclosure provides a display device, a sound-emitting control method, and a sound-emitting control device. The device includes a display screen having a first display region, a middle display region, and a second display region, and a plurality of sound-emitting units, including a plurality of first sound-emitting units, a plurality of second sound-emitting units, and a plurality of third sound-emitting units. The first sound-emitting units and the second sound-emitting units each include a sound-emitting unit that emits sound in a first frequency band, a sound-emitting unit that emits sound in a second frequency band, and a sound-emitting unit that emits sound in a third frequency band, where the first, second, and third frequency bands increase successively in frequency. All of the third sound-emitting units emit sound in the second frequency band.

AUDIO DEVICE AUTO-LOCATION

A method for estimating an audio device location in an environment may involve obtaining direction of arrival (DOA) data for each audio device of a plurality of audio devices in the environment and determining interior angles for each of a plurality of triangles based on the DOA data. Each triangle may have vertices that correspond with audio device locations. The method may involve determining a side length for each side of each of the triangles, performing a forward alignment process of aligning each of the plurality of triangles to produce a forward alignment matrix, and performing a reverse alignment process of aligning each of the plurality of triangles in a reverse sequence to produce a reverse alignment matrix. A final estimate of each audio device location may be based, at least in part, on values of the forward alignment matrix and values of the reverse alignment matrix.
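
Two of the steps above can be sketched in isolation: recovering a triangle's interior angle at each device from that device's bearings to the other two devices, then scaling the sides via the law of sines. This toy version assumes exact DOA bearings in radians and a known length for one side; the alignment matrices and noise handling of the full method are not shown.

```python
import math

def interior_angles(doa):
    """doa[i] = (bearing to one neighbor, bearing to the other), in radians,
    as observed at vertex i. The interior angle at that vertex is the angular
    separation of the two bearings, folded into [0, pi]."""
    angles = []
    for to_a, to_b in doa:
        diff = abs(to_a - to_b) % (2 * math.pi)
        angles.append(min(diff, 2 * math.pi - diff))
    return angles

def side_lengths(angles, known_side):
    """Law of sines: the side opposite vertex i is proportional to
    sin(angles[i]). `known_side` fixes the scale as the length of the side
    opposite vertex 0."""
    scale = known_side / math.sin(angles[0])
    return [scale * math.sin(a) for a in angles]
```

For a 3-4-5 right triangle, passing the three interior angles and `known_side=3.0` recovers side lengths 3, 4, and 5.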

IMMERSIVE SOUND REPRODUCTION USING MULTIPLE TRANSDUCERS
20230042762 · 2023-02-09

One or more embodiments include techniques for generating immersive audio for an acoustic system. The techniques include determining an apparent location associated with a portion of audio; calculating, for each speaker included in a plurality of speakers of the acoustic system, a perceptual distance between the speaker and the apparent location; selecting a subset of speakers included in the plurality of speakers based on the perceptual distances between the plurality of speakers and the apparent location; generating a set of filters based on the subset of speakers and one or more target characteristics of the acoustic system; and generating, for each speaker included in the subset of speakers, a speaker signal using one or more filters included in the set of filters.
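
The speaker-selection step can be sketched as ranking speakers by their distance to the apparent source location and keeping the closest subset. Plain Euclidean distance stands in here for the perceptual distance metric, which the abstract leaves unspecified, and the speaker layout is invented for illustration.

```python
import math

def select_speakers(speakers, apparent_location, subset_size=2):
    """speakers: mapping of speaker id -> (x, y, z) position.
    Returns the `subset_size` speaker ids closest to `apparent_location`."""
    ranked = sorted(speakers, key=lambda sid: math.dist(speakers[sid], apparent_location))
    return ranked[:subset_size]
```

The selected subset would then feed the filter-generation step, which derives one or more filters per selected speaker from the target characteristics of the acoustic system.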

Media content based on playback zone awareness
11556998 · 2023-01-17

Systems and methods provide media content based on playback zone awareness. In one aspect, a computing system receives, via a network interface, zone data from a media playback system, wherein the zone data includes an indication of a particular zone of the media playback system, and wherein the particular zone comprises at least one playback device. The computing system identifies audio content based on (i) the indication of the particular zone and (ii) contextual data associated with the particular zone, and provides, via the network interface, an indication of the identified audio content to the media playback system.
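
At its simplest, the identification step keys a content choice on the zone indication plus contextual data. The rule table, zone names, and context fields below are invented for illustration; a real system would draw on listening history and richer context.

```python
def identify_audio_content(zone, context):
    """Pick a content id from the zone indication and contextual data.
    Falls back to a default when no rule matches."""
    rules = {
        ("kitchen", "morning"): "upbeat-breakfast-mix",
        ("kitchen", "evening"): "dinner-jazz",
        ("bedroom", "evening"): "wind-down",
    }
    return rules.get((zone, context.get("time_of_day")), "default-station")
```

The returned id is what the computing system would send back over the network interface as the indication of the identified audio content.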

Synchronizing playback by media playback devices

Example systems, apparatus, and methods receive audio information including a plurality of frames from a source device, wherein each frame of the plurality of frames includes one or more audio samples and a time stamp indicating when to play the one or more audio samples of the respective frame. In an example, the time stamp is updated for each of the plurality of frames using a time differential value determined between clock information received from the source device and clock information associated with the playback device. The updated time stamp is stored for each of the plurality of frames, and the audio information is output based on the plurality of frames and the associated updated time stamps. The number of samples per frame to be output is adjusted based on a comparison between the updated time stamp for the frame and a predicted time value for playback of the frame.
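
The time-stamp update can be sketched as rebasing each frame's play-out time by the offset between the source clock and the local clock. The frame layout (a dict of samples plus a millisecond time stamp) and the clock units are assumptions for illustration; the per-frame sample-count adjustment is not shown.

```python
def retime_frames(frames, source_clock, local_clock):
    """frames: list of dicts with 'samples' and 'timestamp' (source clock, ms).
    Returns new frames whose time stamps are rebased onto the local clock
    by the time differential value between the two clocks."""
    offset = local_clock - source_clock  # time differential value
    return [
        {"samples": f["samples"], "timestamp": f["timestamp"] + offset}
        for f in frames
    ]
```

Each rebased time stamp can then be compared against a locally predicted play-out time to decide whether to drop or repeat samples within a frame.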
