Patent classifications
H04R2227/003
Dynamic Latency Estimation for Audio Streams
Systems and methods for dynamic latency estimation for audio streams may include, for example, capturing a first audio signal using a microphone of a computing device; receiving a second audio signal at the computing device via wireless communications from an access point; determining a set of estimates of a delay of the first audio signal relative to the second audio signal based on a cross-correlation at respective analysis steps within the first audio signal and the second audio signal; determining an average delay and a confidence interval for the set of estimates of the delay; comparing the confidence interval to a threshold duration; and, responsive to the confidence interval being less than the threshold duration, playing, using a speaker controlled by the computing device, an audio signal received from the access point with an added delay determined based on the average delay.
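The delay-estimation steps described in the abstract might be sketched in Python as follows. The window size, analysis step, and the normal-approximation confidence interval are assumptions for illustration, not details taken from the patent:

```python
import numpy as np

def estimate_delay(mic, ref, step=512, window=1024, rate=8000):
    """Estimate per-window delays (in samples) of `mic` relative to `ref`
    via cross-correlation, then return the average delay and a 95%
    confidence-interval width, both in seconds."""
    delays = []
    for start in range(0, min(len(mic), len(ref)) - window, step):
        a = mic[start:start + window]
        b = ref[start:start + window]
        corr = np.correlate(a, b, mode="full")
        lag = np.argmax(corr) - (window - 1)  # lag of best alignment
        delays.append(lag)
    delays = np.asarray(delays, dtype=float)
    mean = delays.mean()
    # 95% CI half-width under a normal approximation of the estimates
    half = 1.96 * delays.std(ddof=1) / np.sqrt(len(delays))
    return mean / rate, (2 * half) / rate

# Synthetic check: the microphone signal lags the reference by 80 samples
rng = np.random.default_rng(0)
ref = rng.standard_normal(8000)
mic = np.concatenate([np.zeros(80), ref])[:8000]
avg_delay, ci_width = estimate_delay(mic, ref)
```

If `ci_width` is below the threshold duration, `avg_delay` could then be applied as the added playback delay.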
Sound Localization for an Electronic Call
During an electronic call between two individuals, a sound localization point simulates a location in empty space from which the voice of one individual appears to originate for the other individual.
System and Method for Providing a Quiet Zone
A system and method for quieting unwanted sound. As a non-limiting example, various aspects of this disclosure provide a system and method, implemented for example in a premises-based or home audio system, for quieting unwanted sound at a particular location.
Calibration of a playback device based on an estimated frequency response
An example playback device is configured to receive a first stream of audio comprising source audio content to be played back by the playback device and record, via one or more microphones of the playback device, an audio signal output by the playback device based on the playback device playing the source audio content. The playback device is also configured to determine a transfer function between a frequency-domain representation of the first stream of audio and a frequency-domain representation of the recorded audio signal, and then determine an estimated frequency response of the playback device based on a difference between (i) the transfer function and (ii) a self-response of the playback device, where the self-response of the playback device is stored in a memory of the playback device. Based on the estimated frequency response, the playback device is configured to determine an acoustic calibration adjustment and implement the acoustic calibration adjustment.
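The calibration math described above (a transfer function between the source stream and the recorded signal, minus a stored self-response) could be sketched with numpy as follows; the dB representation, function name, and flat-response example are illustrative assumptions:

```python
import numpy as np

def estimated_response(source, recorded, self_response_db, eps=1e-12):
    """Return an estimated frequency response in dB: the source-to-recording
    transfer function minus the device's stored self-response."""
    S = np.fft.rfft(source)
    R = np.fft.rfft(recorded)
    transfer_db = 20 * np.log10(np.abs(R) / (np.abs(S) + eps) + eps)
    return transfer_db - self_response_db

# Synthetic check: the device applies a flat +6.02 dB (gain of 2), and the
# stored self-response is the same flat curve, so the estimate is ~0 dB,
# meaning no room correction is needed.
rng = np.random.default_rng(1)
source = rng.standard_normal(4096)
recorded = 2.0 * source
self_db = 20 * np.log10(2.0) * np.ones(2049)  # rfft of 4096 samples -> 2049 bins
estimate = estimated_response(source, recorded, self_db)
```

A nonzero estimate would then drive the acoustic calibration adjustment (e.g., an inverse EQ curve).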
Integration of remote audio into a performance venue
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for integrating remote audio into a performance venue. In some implementations, a link that includes at least one of audio data and video data is established between a wireless device of a remote participant and a computational mixer. A profile for the remote participant is referenced. A venue signal related to the at least one of audio data and video data is generated based on the profile for the remote participant and using the computational mixer. The venue signal is transmitted.
Dynamic Player Selection for Audio Signal Processing
In one aspect, a first playback device is configured to (i) receive a set of voice signals, (ii) process the set of voice signals using a first set of audio processing algorithms, (iii) identify, from the set of voice signals, at least two voice signals that are to be further processed, (iv) determine that the first playback device does not have a threshold amount of computational power available, (v) receive an indication of an available amount of computational power of a second playback device, (vi) send the at least two voice signals to the second playback device, (vii) cause the second playback device to process the at least two voice signals using a second set of audio processing algorithms, (viii) receive, from the second playback device, the processed at least two voice signals, and (ix) combine the processed at least two voice signals into a combined voice signal.
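The offload decision in steps (iv) through (vi) above can be reduced to a small routine: process locally when headroom allows, otherwise pick the peer reporting the most available compute. The threshold value, function name, and peer identifiers are all invented for this sketch:

```python
THRESHOLD = 0.5  # assumed fraction of compute that must be free to process locally

def choose_processor(local_free, peer_free):
    """Return 'local' when this playback device has enough headroom,
    otherwise the id of the peer reporting the most available compute.

    local_free: fraction of this device's compute currently available.
    peer_free:  mapping of peer id -> fraction of compute available.
    """
    if local_free >= THRESHOLD or not peer_free:
        return "local"
    return max(peer_free, key=peer_free.get)

local_case = choose_processor(0.6, {"player2": 0.9})
offload_case = choose_processor(0.2, {"player2": 0.7, "player3": 0.4})
```

Steps (vii) through (ix) would then send the voice signals to the selected peer and merge the processed results.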
Adjusting volume levels
In general, user interfaces for controlling a plurality of multimedia players in groups are disclosed. According to one aspect of the present invention, a user interface is provided to allow a user to group some of the players according to a theme or scene, where each of the players is located in a zone. When the scene is activated, the players in the scene react in a synchronized manner. For example, the players in the scene are all caused to play a multimedia source or music in a playlist, wherein the multimedia source may be located anywhere on a network. The user interface is further configured to graphically illustrate the size of a group: the larger the group appears, the more players there are in the group.
Crosstalk data detection method and electronic device
A method and an electronic device for detecting crosstalk data are provided. The method for detecting crosstalk data can detect whether an audio data stream includes crosstalk data. The method includes: receiving a first audio data block, a second audio data block, and a reference time difference, wherein the first audio data block and the second audio data block separately include a plurality of audio data segments; using a time difference between an acquisition time of an audio data segment in the first audio data block and a corresponding audio data segment in the second audio data block as an audio segment time difference; and determining that the audio data segment of the first audio data block includes crosstalk data when the audio segment time difference does not match the reference time difference.
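The per-segment comparison described above might look like the following sketch: compare each segment's acquisition-time difference between the two audio data blocks against the reference time difference, and flag mismatches as crosstalk. The tolerance and the timestamp representation are assumptions, not details from the patent:

```python
def crosstalk_segments(times_a, times_b, reference_diff, tol=0.005):
    """Return indices of segments whose acquisition-time difference
    (times_a[i] - times_b[i]) does not match `reference_diff` within
    `tol` seconds, i.e. segments flagged as containing crosstalk."""
    flagged = []
    for i, (ta, tb) in enumerate(zip(times_a, times_b)):
        if abs((ta - tb) - reference_diff) > tol:
            flagged.append(i)
    return flagged

# Segments 0 and 1 match the reference difference of -0.01 s; segment 2
# drifts by 0.04 s and is flagged as crosstalk.
flagged = crosstalk_segments([0.00, 1.00, 2.00], [0.01, 1.01, 2.05], -0.01)
```

A real implementation would derive the per-segment times from the audio stream itself (e.g., via cross-correlation of corresponding segments) rather than take them as inputs.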
Systems and methods for associating playback devices with voice assistant services
Systems and methods for media playback via a media playback system include detecting a first wake word via a first network microphone device of a first playback device, detecting a second wake word via a second network microphone device of a second playback device, and forming a bonded zone that includes the first playback device and the second playback device. In response to detecting the first wake word, a first voice utterance following the first wake word is transmitted to a first voice assistant service. In response to detecting the second wake word, a second voice utterance following the second wake word is transmitted to a second voice assistant service. Requested media content received from the first and/or second voice assistant service is played back via the first playback device and the second playback device in synchrony with one another.
Refractive eye examination system
A system and method for conducting a refractive examination of an eye of a patient has a communication device with a communication module that connects to the internet, a processor programmed to connect to a remote computer via the communication module, a display screen, a microphone, and a speaker. The remote computer has a data storage device that stores images of eye charts. The communication device is mounted in a virtual reality headset configured to be worn by the patient and has at least one screen through which the display screen of the communication device is viewable. The communication device displays images of the eye charts to the patient, who communicates through the communication device with a remote examiner, who conducts the refractive examination using multiple different eye charts to determine the prescription of the patient.