H04R5/027

Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems

Systems, devices, and methods for capturing audio which can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems can include a plurality of distributed monitoring devices. Each monitoring device can include a microphone and a location tracking unit. The monitoring devices can capture audio signals in an environment, as well as location tracking signals which respectively indicate the locations of the monitoring devices over time during capture of the audio signals. The system can also include a processor to receive the audio signals and the location tracking signals. The processor can determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals.
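The abstract does not say which acoustic properties are derived or how. One common room-acoustic property is the reverberation time (RT60), which can be estimated from a captured impulse response by Schroeder backward integration. The sketch below is illustrative only; `estimate_rt60` and the T20 fitting range are my own choices, not the patent's method.

```python
import numpy as np

def estimate_rt60(impulse_response, sample_rate):
    """Estimate RT60 (seconds) from a room impulse response."""
    # Schroeder backward integration: energy remaining after time t
    energy = impulse_response ** 2
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])
    # Fit a line over the -5 dB to -25 dB decay (a "T20" fit)
    # and extrapolate to the time needed for a 60 dB decay.
    t = np.arange(len(edc_db)) / sample_rate
    mask = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)
    return -60.0 / slope
```

With several distributed devices, one such estimate per device position would give a coarse spatial map of the room's reverberation.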

Wearer identification based on personalized acoustic transfer functions

A wearable device includes an audio system. In one embodiment, the audio system includes a sensor array that includes a plurality of acoustic sensors. When a user wears the wearable device, the audio system determines an acoustic transfer function for the user based upon detected sounds within a local area surrounding the sensor array. Because the acoustic transfer function is based upon the size, shape, and density of the user's body (e.g., the user's head), different acoustic transfer functions will be determined for different users. The determined acoustic transfer functions are compared with stored acoustic transfer functions of known users in order to authenticate the user of the wearable device.
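The abstract describes comparing a measured acoustic transfer function against stored ones but not the comparison metric. A minimal sketch, assuming cosine similarity over mean-centered log-magnitude responses and a fixed acceptance threshold (all names and parameters here are hypothetical, not from the patent):

```python
import numpy as np

def atf_similarity(measured, stored):
    """Cosine similarity between mean-centered log-magnitude responses."""
    a = np.log(np.abs(measured) + 1e-12)
    b = np.log(np.abs(stored) + 1e-12)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(measured, enrolled, threshold=0.9):
    """Return the best-matching enrolled user id, or None if no match."""
    scores = {uid: atf_similarity(measured, atf) for uid, atf in enrolled.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

A production system would likely use a learned embedding rather than a raw spectral distance, but the enroll/compare/threshold flow is the same.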

Audio system for dynamic determination of personalized acoustic transfer functions

An eyewear device includes an audio system. In one embodiment, the audio system includes a microphone array that includes a plurality of acoustic sensors. Each acoustic sensor is configured to detect sounds within a local area surrounding the microphone array. For a plurality of the detected sounds, the audio system performs a direction of arrival (DoA) estimation. Based on parameters of the detected sound and/or the DoA estimation, the audio system may then generate or update one or more acoustic transfer functions unique to a user. The audio system may use the one or more acoustic transfer functions to generate audio content for the user.
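The abstract does not name a DoA method. A standard approach for a microphone pair is GCC-PHAT: estimate the time difference of arrival (TDOA) from the phase-transform-weighted cross-correlation, then convert it to a far-field angle. The sketch below assumes that approach; it is one possible realization, not the patent's.

```python
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """TDOA (seconds) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(sig) + len(ref)  # zero-pad for linear correlation
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12   # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_from_tdoa(tau, mic_distance, c=343.0):
    """Far-field angle from broadside (degrees) for a two-mic pair."""
    return float(np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0))))
```

With more than two sensors, the same pairwise TDOAs can be combined (e.g., by least squares) into a full 3D direction estimate.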

Own voice reinforcement using extra-aural speakers

A system including an audio source device having a first microphone and a first speaker for directing sound into an environment in which the audio source device is located, and a wireless audio receiver device having a second microphone and a second speaker for directing sound into a user's ear. The audio source device is configured to 1) capture, using the first microphone, speech of the user as a first audio signal, 2) reduce noise in the first audio signal to produce a speech signal, and 3) drive the first speaker with the speech signal. The wireless audio receiver device is configured to 1) capture, using the second microphone, a reproduction of the speech produced by the first speaker as a second audio signal and 2) drive the second speaker with the second audio signal to output the reproduction of the speech.
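The abstract's noise-reduction step is unspecified. One classic single-channel technique is spectral subtraction: estimate the noise magnitude spectrum from a speech-free interval, then subtract it frame by frame with overlap-add resynthesis. This is a sketch of that generic technique, not the patent's actual processing; the frame sizes and spectral floor are my own choices.

```python
import numpy as np

def spectral_subtraction(noisy, fs, noise_seconds=0.25, frame=512, hop=256):
    """Attenuate stationary noise, assuming the first `noise_seconds`
    of the recording contain noise only."""
    window = np.hanning(frame)
    # Average magnitude spectrum over the leading noise-only frames
    noise_frames = [np.abs(np.fft.rfft(noisy[s:s + frame] * window))
                    for s in range(0, int(noise_seconds * fs) - frame, hop)]
    noise_mag = np.mean(noise_frames, axis=0)
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for s in range(0, len(noisy) - frame, hop):
        spec = np.fft.rfft(noisy[s:s + frame] * window)
        # Subtract the noise estimate, with a spectral floor to limit
        # musical-noise artifacts; keep the noisy phase.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))
        clean = mag * np.exp(1j * np.angle(spec))
        out[s:s + frame] += np.fft.irfft(clean, n=frame) * window
        norm[s:s + frame] += window ** 2
    return out / np.maximum(norm, 1e-8)
```

A wearable implementation would more likely run adaptively (tracking the noise estimate during speech pauses) and exploit the multi-microphone geometry, but the subtract-in-the-spectral-domain idea carries over.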

Two way communication assembly
11523196 · 2022-12-06

A two way communication assembly includes a first puck that is positionable on a first surface of a partition. A first microphone is integrated into the first puck to capture spoken words from a first user. A first speaker is disposed in the first puck to emit audible sounds outwardly from the first puck. A second puck is positionable on a second surface of the partition. A second microphone is integrated into the second puck to capture spoken words from a second user. Moreover, the first speaker audibly emits words spoken by the second user. A second speaker is disposed in the second puck to emit audible sounds outwardly from the second puck. Additionally, the second speaker audibly emits words spoken by the first user.
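The core signal path described is a symmetric cross-routing: each puck's microphone feed drives the opposite puck's speaker. As a bare sketch of that routing (function name, gain parameter, and clipping are illustrative, not from the patent):

```python
import numpy as np

def duplex_route(mic_first, mic_second, gain=1.0):
    """Cross-route the two microphone frames: the first puck's speaker
    emits the second user's words, and vice versa."""
    speaker_first = np.clip(gain * mic_second, -1.0, 1.0)
    speaker_second = np.clip(gain * mic_first, -1.0, 1.0)
    return speaker_first, speaker_second
```

A real assembly would add echo suppression so each speaker's output is not re-captured by the adjacent microphone.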
