H04R5/04

Customized automated audio tuning

An example method of operation may include identifying, in a particular room environment, a number of speakers and one or more microphones on a network controlled by a controller and amplifier, providing test signals to play sequentially from each amplifier channel of the amplifier and the speakers, monitoring the test signals from the one or more microphones simultaneously to detect operational speakers and amplifier channels, providing additional test signals to the speakers to determine tuning parameters, detecting the additional test signals at the one or more microphones controlled by the controller, and automatically establishing a background noise level and noise spectrum of the room environment based on the detected additional test signals.
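The final step of the method above, establishing a background noise level and noise spectrum from microphone captures, can be illustrated with a minimal sketch. The function name and parameters here are hypothetical, not from the patent; it assumes a single block of mono samples and uses a Welch-style averaged magnitude spectrum.

```python
import numpy as np

def noise_profile(samples, fs, nfft=1024):
    """Estimate a background noise level (dBFS RMS) and an averaged
    magnitude spectrum from a block of microphone samples."""
    samples = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(samples ** 2))
    level_db = 20.0 * np.log10(max(rms, 1e-12))
    # Average magnitude spectra over overlapping Hann windows (Welch-style).
    hop = nfft // 2
    win = np.hanning(nfft)
    frames = [samples[i:i + nfft] * win
              for i in range(0, len(samples) - nfft + 1, hop)]
    spectrum = np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    return level_db, freqs, spectrum
```

In a tuning pass like the one described, such a profile would be measured with the test signals silent, so that later measurements can be compared against the room's noise floor.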

AUDIO DEVICE INCLUDING JACK DETECTOR
20180014116 · 2018-01-11

An audio device includes a channel detection electrode and a jack detector configured to determine whether the channel detection electrode is in contact with a jack according to voltage variation of a detection node. The jack detector includes a first resistor coupled between the detection node and a first node to which a first voltage is supplied, a second resistor coupled between the detection node and the channel detection electrode, a third resistor coupled between the detection node and a second node, and a comparator configured to compare a voltage at the detection node with a reference voltage.
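The three-resistor network described above can be solved with a simple node equation. This sketch is hypothetical (component values and the assumption that the inserted plug grounds the electrode through the second resistor are illustrative, not taken from the patent):

```python
def detection_node_voltage(v1, r1, r2, r3, jack_present, v2=0.0):
    """Solve the resistive network at the detection node.

    r1 ties the node to the first voltage v1, r3 ties it to a second
    node at v2 (here assumed ground), and r2 connects the node to the
    channel detection electrode.  When a jack is inserted, the
    electrode is assumed pulled to ground through the plug; otherwise
    r2 carries no current.
    """
    # Node equation: sum of conductance-weighted voltages over total conductance.
    g1, g3 = 1.0 / r1, 1.0 / r3
    g2 = 1.0 / r2 if jack_present else 0.0
    return (g1 * v1 + g3 * v2) / (g1 + g2 + g3)

def jack_detected(v_node, v_ref):
    """Comparator: insertion drags the detection node below the reference."""
    return v_node < v_ref
```

With, say, v1 = 3.3 V, r1 = 10 kΩ, r2 = 1 kΩ, r3 = 100 kΩ, the node sits near 3.0 V with no jack and drops below 0.3 V on insertion, giving the comparator a wide margin around a mid-rail reference.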

Moving an emoji to move a location of binaural sound
11711664 · 2023-07-25

During an electronic communication between a first user and a second user, an electronic device of the second user displays a graphical representation at a location selected by the first user. The graphical representation provides an indication to the second user where binaural sound associated with the graphical representation will externally localize to the second user. Subsequent movement of the graphical representation changes a location where the binaural sound externally localizes to the second user.
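One common way to tie a screen position to an externalized sound location is to map the graphical representation's coordinate to an azimuth and derive binaural cues from it. The mapping and the use of the Woodworth interaural-time-difference approximation below are illustrative assumptions, not the patent's method:

```python
import math

def screen_to_azimuth(x, width):
    """Map a horizontal screen coordinate to an azimuth in degrees,
    -90 (far left) to +90 (far right)."""
    return (x / width - 0.5) * 180.0

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth approximation of the interaural time difference for a
    spherical head of the given radius (meters) and speed of sound c (m/s)."""
    a = math.radians(azimuth_deg)
    return (head_radius / c) * (a + math.sin(a))
```

Moving the emoji would re-run this mapping each frame, so the delay (and matching level difference) applied to the two ears tracks the new on-screen position.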

Privacy device for smart speakers
11711662 · 2023-07-25

Systems, apparatuses, and methods are described for a privacy blocking device configured to prevent receipt, by a listening device, of video and/or audio data until a trigger occurs. A blocker may be configured to prevent receipt of video and/or audio data by one or more microphones and/or one or more cameras of a listening device. The blocker may use the one or more microphones, the one or more cameras, and/or one or more second microphones and/or one or more second cameras to monitor for a trigger. The blocker may process the data. Upon detecting the trigger, the blocker may transmit data to the listening device. For example, the blocker may transmit all or a part of a spoken phrase to the listening device.
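The gating behavior described, holding data back from the listening device until a trigger is heard, can be sketched as a small buffer-and-release loop. The class, its trigger matching on text chunks, and the buffer size are all hypothetical simplifications:

```python
from collections import deque

class PrivacyBlocker:
    """Buffers captured chunks locally and releases data to the
    listening device only after a trigger phrase is detected."""

    def __init__(self, trigger, history=5):
        self.trigger = trigger
        self.buffer = deque(maxlen=history)  # recent chunks, never forwarded
        self.forwarded = []                  # what the listening device receives

    def on_chunk(self, chunk):
        self.buffer.append(chunk)
        if self.trigger in chunk:
            # Forward the triggering chunk (a real device might also
            # forward part of the buffered context, per the abstract).
            self.forwarded.append(chunk)
            return True   # data released to the listening device
        return False      # data stays inside the blocker
```

The key property is that non-triggering chunks only ever live in the blocker's bounded local buffer.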

ENVIRONMENT ACOUSTICS PERSISTENCE

Disclosed herein are systems and methods for storing, organizing, and maintaining acoustic data for mixed reality systems. A system may include one or more sensors of a head-wearable device, a speaker of the head-wearable device, and one or more processors. A method performed by the one or more processors may include receiving a request to present an audio signal. An environment may be identified via the one or more sensors of the head-wearable device. One or more audio model components associated with the environment may be retrieved. A first audio model may be generated based on the audio model components. A second audio model may be generated based on the first audio model. A modified audio signal may be determined based on the second audio model and based on the request to present an audio signal. The modified audio signal may be presented via the speaker of the head-wearable device.
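The retrieve-then-generate pipeline above can be sketched end to end. Everything here is an illustrative stand-in: the store, the single RT60-style component, and the synthetic exponential-decay impulse response are assumptions, not the disclosed models.

```python
import numpy as np

# Hypothetical persisted store mapping an identified environment to
# stored audio model components (here just a reverberation time).
ACOUSTICS_STORE = {"living_room": {"rt60": 0.4}, "hall": {"rt60": 1.8}}

def build_model(components, fs=16000):
    """First audio model: a synthetic exponentially decaying impulse
    response reaching roughly -60 dB at t = rt60."""
    rt60 = components["rt60"]
    t = np.arange(int(rt60 * fs)) / fs
    return np.exp(-6.9 * t / rt60)

def render(signal, environment, fs=16000):
    """Retrieve components for the identified environment, build a
    model, and return the modified (reverberated) signal."""
    components = ACOUSTICS_STORE[environment]
    ir = build_model(components, fs)
    return np.convolve(signal, ir)[: len(signal)]
```

Persisting the components rather than the full model is what lets the same environment sound consistent across sessions while the device regenerates models on demand.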

METHOD FOR GENERATING CUSTOMIZED SPATIAL AUDIO WITH HEAD TRACKING

A headphone for spatial audio rendering includes a first database having an impulse response pair corresponding to a reference speaker location. A head sensor provides head orientation information to a second database having rotation filters, the filters corresponding to different azimuth and elevation positions relative to the reference speaker location. A digital signal processor combines the rotation filters with the impulse response pair to generate an output binaural audio signal to transducers of the headphone. Efficiencies in creating impulse response or HRTF databases are achieved by sampling the impulse response less frequently than in conventional methods. This sampling at coarser intervals reduces the number of data measurements required to generate a spherical grid and reduces the time involved in capturing the impulse responses. Impulse responses for data points falling between the sampled data points are generated by interpolating in the frequency domain.
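The frequency-domain interpolation step mentioned last can be sketched as follows. The choice to interpolate magnitudes linearly and phases via their unwrapped angles is one common approach, assumed here rather than taken from the patent:

```python
import numpy as np

def interpolate_ir(ir_a, ir_b, weight):
    """Interpolate between two measured impulse responses in the
    frequency domain: magnitudes blended linearly, phases blended on
    their unwrapped angles (weight=0 gives ir_a, weight=1 gives ir_b)."""
    fa, fb = np.fft.rfft(ir_a), np.fft.rfft(ir_b)
    mag = (1 - weight) * np.abs(fa) + weight * np.abs(fb)
    pha = ((1 - weight) * np.unwrap(np.angle(fa))
           + weight * np.unwrap(np.angle(fb)))
    return np.fft.irfft(mag * np.exp(1j * pha), n=len(ir_a))
```

Applied between neighboring points of the coarsely sampled spherical grid, this fills in intermediate directions without additional measurements, which is the efficiency gain the abstract describes.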
