
LOCATION DETERMINATION SYSTEM, METHOD FOR DETERMINING A LOCATION AND DEVICE FOR DETERMINING ITS LOCATION

A system for determining the location of a device includes at least two speakers and the device, each speaker being configured to produce a unique sound. The device includes microphones for receiving sound and providing corresponding signals, a memory configured to store, for each speaker, a fingerprint of its unique sound together with speaker location information, and a processor connected to the microphone outputs. The processor is configured to determine, by comparing the microphone signals to the fingerprints, a difference in arrival time of each sound at two of the microphones, to determine, based on the differences in arrival time, an orientation of the device with respect to the speakers, and to determine the location of the device in the space.
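The arrival-time-difference step described above can be sketched with a plain cross-correlation between two microphone signals. This is a minimal illustration, not the patented method: the "fingerprint" is stood in for by a hypothetical chirp, and the 0.1 m microphone spacing and 343 m/s speed of sound are assumed values.

```python
import numpy as np

def tdoa_samples(sig_a, sig_b):
    """Estimate the delay (in samples) of sig_b relative to sig_a via
    full cross-correlation; a positive lag means sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    return int(np.argmax(corr)) - (len(sig_a) - 1)

# Hypothetical fingerprint: a short chirp standing in for a speaker's unique sound.
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
chirp = np.sin(2 * np.pi * (500 + 4000 * t) * t)

# Simulate the same chirp arriving 12 samples later at the second microphone.
delay = 12
mic1 = np.concatenate([chirp, np.zeros(64)])
mic2 = np.concatenate([np.zeros(delay), chirp, np.zeros(64 - delay)])

lag = tdoa_samples(mic1, mic2)
# Far-field bearing from the lag, assuming a 0.1 m microphone spacing.
angle = np.degrees(np.arcsin(np.clip(lag / fs * 343.0 / 0.1, -1.0, 1.0)))
```

With two or more such angle estimates against speakers at known positions, the device's orientation and location can then be triangulated.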

Audio-based detection and tracking of emergency vehicles

Techniques are provided for audio-based detection and tracking of an acoustic source. A methodology implementing the techniques according to an embodiment includes generating acoustic signal spectra from signals provided by a microphone array and performing beamforming on the acoustic signal spectra to generate beam signal spectra, using time-frequency masks to reduce noise. The method also includes detecting, by a deep neural network (DNN) classifier, an acoustic event associated with the acoustic source in the beam signal spectra; the DNN is trained on acoustic features associated with the acoustic event. The method further includes performing pattern extraction, in response to the detection, to identify the time-frequency bins of the acoustic signal spectra that are associated with the acoustic event, and estimating a motion direction of the source relative to the microphone array based on the Doppler frequency shift of the acoustic event, calculated from the time-frequency bins of the extracted pattern.
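The Doppler cue at the end of that pipeline can be illustrated very simply: if the dominant frequency of a tonal event (e.g. a siren) falls over time, the source has passed or is moving away. The frame length, sample rate, and two-frame comparison below are illustrative assumptions, not the embodiment's actual estimator.

```python
import numpy as np

def dominant_freq(frame, fs):
    """Peak frequency of one frame via the windowed magnitude spectrum."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.fft.rfftfreq(len(frame), 1 / fs)[np.argmax(spec)]

def motion_direction(frames, fs):
    """Crude Doppler cue: compare the peak frequency of the first and
    last frames; a falling pitch suggests the source is moving away."""
    f0, f1 = dominant_freq(frames[0], fs), dominant_freq(frames[-1], fs)
    return "approaching" if f1 > f0 else "receding"

# Simulate a siren tone whose observed pitch drops as the vehicle passes.
fs, n = 16_000, 1024
frames = [np.sin(2 * np.pi * f * np.arange(n) / fs)
          for f in (730.0, 715.0, 700.0, 685.0, 670.0)]
```

In the patent's formulation the frequencies would come from the extracted pattern's time-frequency bins rather than from raw frames.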

ACOUSTIC OUTPUT APPARATUS

The present disclosure describes an acoustic output apparatus including at least one acoustic driver, a controller, and a supporting structure. The at least one acoustic driver may be configured to output sounds through at least two sound guiding holes, including a first sound guiding hole and a second sound guiding hole. The controller may be configured to use a control signal to control a phase and an amplitude of the sounds generated by the at least one acoustic driver such that the sounds output through the first and second sound guiding holes have opposite phases. The supporting structure may be provided with a baffle and configured to support the at least one acoustic driver such that the first and second sound guiding holes are located on opposite sides of the baffle.
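The opposite-phase drive can be sketched in a few lines. This is only the polarity-inversion idea: the baffle acoustics, hole geometry, and the actual control law are not modelled, and the equal-distance cancellation shown is an idealized assumption.

```python
import numpy as np

def drive_signals(source, amplitude=1.0):
    """Hypothetical controller: feed the second sound-guiding hole the
    same signal with inverted polarity (a 180-degree phase shift), so
    the two holes form an acoustic dipole."""
    front = amplitude * source
    back = -front  # opposite phase
    return front, back

fs = 48_000
tone = np.sin(2 * np.pi * 1000 * np.arange(480) / fs)
front, back = drive_signals(tone)

# At a point equidistant from both holes (e.g. far-field leakage toward
# a bystander), the idealized superposition cancels:
leak = front + back
```

Near the user's ear the baffle makes the two path lengths unequal, so the cancellation is incomplete there and the wanted sound survives.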

Movable robot and method for tracking position of speaker by movable robot
11565426 · 2023-01-31 · ·

Proposed is a method for determining, by a movable robot, a position of a speaker, wherein the movable robot includes first to fourth microphones installed respectively at the four vertices of a quadrangular horizontal cross section of the robot. The method includes: receiving a wake-up voice through the first and third microphones, disposed respectively at diagonally opposite first and third vertices; obtaining a first reference value of the first microphone and a second reference value of the third microphone based on the received wake-up voice; comparing the obtained first and second reference values to select the first microphone; selecting the second microphone, disposed at a second vertex, wherein the first and second microphones lie on a front side of the quadrangle; calculating a sound source localization (SSL) value based on the selected first and second microphones; and tracking the position of the speaker based on the SSL value.
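The selection-then-localization flow can be sketched as below. The reference value is assumed here to be an RMS level, the vertex adjacency map is a hypothetical fixed table, and the SSL value is stood in for by a simple broadside-angle formula; the patent does not commit to these specifics.

```python
import numpy as np

def select_front_pair(levels):
    """levels: reference values (assumed RMS of the wake-up voice) at
    microphones 1..4, indexed 0..3 at the quadrangle's vertices.
    Compare the diagonal pair (mic 1, mic 3), keep the louder one, and
    pair it with its neighbour on the same edge (hypothetical map)."""
    neighbour = {0: 1, 2: 3}  # mic1 -> mic2, mic3 -> mic4 (assumed layout)
    first = 0 if levels[0] >= levels[2] else 2
    return first, neighbour[first]

def ssl_angle(delay_s, spacing_m, c=343.0):
    """Stand-in SSL value: broadside arrival angle from the time
    difference between the selected front pair."""
    return np.degrees(np.arcsin(np.clip(delay_s * c / spacing_m, -1.0, 1.0)))

pair = select_front_pair([0.8, 0.5, 0.3, 0.4])  # mic 1 louder than mic 3
angle = ssl_angle(1.2e-4, spacing_m=0.06)       # 120 us delay, 6 cm spacing
```

The robot would then rotate or move toward the estimated angle to keep tracking the speaker.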

Acoustic output apparatus

The present disclosure provides an acoustic output apparatus including one or more status sensors, at least one low-frequency acoustic driver, at least one high-frequency acoustic driver, at least two first sound guiding holes, and at least two second sound guiding holes. The status sensors may detect status information of a user. The low-frequency acoustic driver may generate at least one first sound, a frequency of which is within a first frequency range. The high-frequency acoustic driver may generate at least one second sound, a frequency of which is within a second frequency range including at least one frequency exceeding the first frequency range. The first and second sound guiding holes may output the first and second sounds, respectively. The first and second sounds may be generated based on the status information and may simulate a target sound coming from at least one virtual direction with respect to the user.
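Simulating a sound from a virtual direction rests on giving the two output channels direction-dependent time and level differences. The sketch below uses a simple interaural-time-difference (ITD) plus level-difference model as a stand-in; the 0.18 m ear distance, 0.7 attenuation factor, and the ITD/ILD rendering itself are illustrative assumptions, not the apparatus's actual per-driver processing.

```python
import numpy as np

def virtual_direction(source, azimuth_deg, fs=48_000, ear_dist=0.18, c=343.0):
    """Hypothetical rendering: approximate a target sound arriving from
    `azimuth_deg` (positive = right) with an interaural time delay and a
    fixed level difference between the two output channels."""
    itd = ear_dist / c * np.sin(np.radians(azimuth_deg))  # seconds
    shift = int(round(abs(itd) * fs))                     # samples
    near = source
    far = 0.7 * np.concatenate([np.zeros(shift), source])[: len(source)]
    return (near, far) if azimuth_deg >= 0 else (far, near)

fs = 48_000
tone = np.sin(2 * np.pi * 400 * np.arange(960) / fs)
left_out, right_out = virtual_direction(tone, azimuth_deg=-45, fs=fs)
```

In the disclosed apparatus the status sensors (e.g. head orientation) would update `azimuth_deg` continuously so the virtual source stays fixed in space.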

SYSTEM AND METHOD FOR MODIFYING SIGNALS TO DETERMINE AN INCIDENCE ANGLE OF AN ACOUSTIC WAVE
20230015976 · 2023-01-19

Systems and methods that use virtually coupled resonators to determine an incidence angle of an acoustic wave are described herein. In one example, a system includes a processor and first and second transducers in communication with the processor. The first transducer produces a first signal in response to detecting an acoustic wave, while the second transducer produces a second signal in response to detecting the acoustic wave. The system may also include a memory in communication with the processor and having machine-readable instructions that cause the processor to modify the first signal and the second signal using a virtual resonator mapping function to generate a modified first signal and a modified second signal. The virtual resonator mapping function changes the first signal and the second signal to be representative of signals produced by transducers located within a hypothetical chamber of a hypothetical resonator.
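The abstract does not specify the mapping function itself, so it is not reproduced here. As background, the underlying quantity such a system refines, the incidence angle recoverable from two closely spaced transducer signals, can be sketched via their phase difference at a known frequency; the sensor spacing, tone frequency, and speed of sound below are assumed values.

```python
import numpy as np

def incidence_angle(sig_a, sig_b, freq, fs, spacing, c=343.0):
    """Classical two-sensor baseline: the inter-transducer phase
    difference at `freq` yields the arrival-time difference and hence
    the incidence angle. (The patent's virtual-resonator mapping is a
    preprocessing step applied to such signals; it is not modelled.)"""
    n = len(sig_a)
    k = int(round(freq * n / fs))  # FFT bin of `freq`
    phase = np.angle(np.fft.rfft(sig_b)[k]) - np.angle(np.fft.rfft(sig_a)[k])
    delay = -phase / (2 * np.pi * freq)  # seconds; positive if b lags a
    return np.degrees(np.arcsin(np.clip(delay * c / spacing, -1.0, 1.0)))

fs, f, spacing = 48_000, 1000, 0.08
t = np.arange(0, 0.1, 1 / fs)
tau = 1e-4  # the wave reaches transducer A 100 us before transducer B
a = np.sin(2 * np.pi * f * t)
b = np.sin(2 * np.pi * f * (t - tau))
theta = incidence_angle(a, b, f, fs, spacing)
```

The motivation for the resonator mapping is that raw phase differences become tiny as spacing shrinks; mapping the signals into a hypothetical resonator chamber exaggerates those differences before an estimate like this one is formed.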

Audio communication device

An audio communication device includes: a sound position determiner that determines sound localization positions for N audio signals in a virtual space having first and second walls; N sound localizers that each perform sound localization processing to localize sound at the sound localization position determined by the sound position determiner and output localized sound signals; and an adder that sums the N localized sound signals and outputs a summed localized sound signal. Each sound localizer performs the processing using: a first head-related transfer function (HRTF) assuming that a sound wave emitted from the sound localizer's determined sound localization position directly reaches each ear of a hearer virtually present at the hearer position; and a second HRTF assuming that the sound wave emitted from the sound localization position reaches each ear of the hearer after being reflected by the closer of the first and second walls.
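The two-path structure (direct wave plus one wall reflection) is the image-source idea, and the adder is a per-sample sum. The sketch below reduces this to one dimension with plain delays standing in for the HRTFs; the 0.5 reflection loss and all positions are illustrative assumptions.

```python
import numpy as np

def localize(sig, src_x, hearer_x, wall_left, wall_right, fs=48_000, c=343.0):
    """1-D sketch of the two-path rendering: the direct wave plus one
    wave reflected by the closer wall, modelled by mirroring the source
    across that wall (image-source method). Plain integer delays stand
    in for the patent's HRTF pair."""
    wall = wall_left if abs(src_x - wall_left) < abs(src_x - wall_right) else wall_right
    image_x = 2 * wall - src_x                 # mirror the source across the wall
    d_direct = abs(src_x - hearer_x)
    d_refl = abs(image_x - hearer_x)
    n1, n2 = int(d_direct / c * fs), int(d_refl / c * fs)
    out = np.zeros(len(sig) + n2 + 1)
    out[n1:n1 + len(sig)] += sig               # direct path
    out[n2:n2 + len(sig)] += 0.5 * sig         # reflected path (assumed loss)
    return out

def mix(localized_signals):
    """The adder: sum the N localized signals sample by sample."""
    total = np.zeros(max(len(s) for s in localized_signals))
    for s in localized_signals:
        total[: len(s)] += s
    return total

impulse = np.zeros(8)
impulse[0] = 1.0
out1 = localize(impulse, src_x=1.0, hearer_x=4.0, wall_left=0.0, wall_right=10.0)
out2 = localize(impulse, src_x=7.0, hearer_x=4.0, wall_left=0.0, wall_right=10.0)
total = mix([out1, out2])
```

Each talker in the virtual space gets its own localizer; the summed signal is what the hearer's device finally plays back.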

ELECTRONIC DEVICE FOR RESPONDING TO USER REACTION AND OUTSIDE SOUND AND OPERATING METHOD THEREOF
20220405045 · 2022-12-22 ·

Disclosed is a wireless audio device that includes at least one microphone, at least one speaker, at least one sensor, a processor, and a memory storing instructions. When executed by the processor, the instructions cause the wireless audio device, while it outputs through the at least one speaker a sound for reducing an outside sound acquired through the at least one microphone, to: identify a specified outside sound within the acquired outside sound; output, through the at least one speaker, a notification sound indicating that the specified outside sound has been identified; identify, through the at least one sensor, a motion of the user of the wireless audio device in response to the output of the notification sound; and stop the output of the sound for reducing the outside sound based on the identified motion.
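The control flow (notify on a recognized outside sound, then drop noise cancellation only if the user reacts) is a small state machine. The sketch below is a hypothetical reduction of it; the state fields, the "specified sound" classifier result, and the motion flag are all assumed inputs.

```python
from dataclasses import dataclass

@dataclass
class EarbudState:
    anc_on: bool = True      # noise-reducing output currently playing
    notified: bool = False   # notification chime has been played

def on_outside_sound(state, is_specified_sound):
    """While the noise-reducing sound is active, a recognized 'specified'
    outside sound (e.g. a hypothetical siren class) triggers the
    notification sound."""
    if state.anc_on and is_specified_sound:
        state.notified = True  # stand-in for playing the chime
    return state

def on_user_motion(state, motion_detected):
    """If the sensor sees a user reaction (e.g. a head turn) after the
    chime, stop the noise-reducing output so the outside sound is heard."""
    if state.notified and motion_detected:
        state.anc_on = False
    return state

s = EarbudState()
on_outside_sound(s, is_specified_sound=True)
on_user_motion(s, motion_detected=True)
```

Note the ordering: absent a detected motion, the device keeps cancelling noise even after notifying, which is the behaviour the abstract describes.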

Distributed audio capturing techniques for virtual reality (VR), augmented reality (AR), and mixed reality (MR) systems

Systems, devices, and methods for capturing audio that can be used in applications such as virtual reality, augmented reality, and mixed reality systems. Some systems can include a plurality of distributed monitoring devices. Each monitoring device can include a microphone and a location tracking unit. The monitoring devices can capture audio signals in an environment, as well as location tracking signals which respectively indicate the locations of the monitoring devices over time during capture of the audio signals. The system can also include a processor to receive the audio signals and the location tracking signals. The processor can determine one or more acoustic properties of the environment based on the audio signals and the location tracking signals.
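The abstract leaves the "acoustic properties" open. One standard example of such a property, derivable from captured audio, is reverberation time (RT60) via backward Schroeder integration of an impulse response; the synthetic exponentially decaying response below is an illustrative assumption, not data from the patent.

```python
import numpy as np

def rt60_schroeder(impulse_response, fs):
    """Reverberation time via backward (Schroeder) integration:
    fit the -5 to -25 dB decay slope and extrapolate to -60 dB."""
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]  # remaining energy
    edc = 10 * np.log10(energy / energy[0])                # decay curve, dB
    t = np.arange(len(edc)) / fs
    mask = (edc <= -5) & (edc >= -25)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)           # dB per second
    return -60.0 / slope

# Synthetic room response: white noise under an exponential envelope
# whose decay corresponds to an RT60 of roughly 0.69 s.
fs = 8000
t = np.arange(0, 1.0, 1 / fs)
ir = np.exp(-t / 0.1) * np.random.default_rng(0).standard_normal(len(t))
rt60 = rt60_schroeder(ir, fs)
```

With the location tracking signals added in, such per-device measurements could be attributed to positions in the environment rather than to the room as a whole.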

Pattern-forming microphone array

Embodiments include a microphone array with a plurality of microphone elements comprising: a first set of elements arranged along a first axis and comprising at least two microphone elements spaced apart by a first distance; a second set of elements arranged along the first axis and comprising at least two microphone elements spaced apart by a second, greater distance, such that the first set is nested within the second set; a third set of elements arranged along a second axis orthogonal to the first axis and comprising at least two microphone elements spaced apart by the second distance; and a fourth set of elements nested within the third set along the second axis and comprising at least two microphone elements spaced apart by the first distance, wherein each set includes a first cluster of microphone elements and a second cluster of microphone elements spaced apart by the corresponding first or second distance.
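The nested geometry can be written out as coordinates, which makes the structure easy to check: an inner and an outer pair on each of two orthogonal axes. The 2 cm and 8 cm spacings below are illustrative values, not figures from the patent.

```python
def nested_array(d1=0.02, d2=0.08):
    """Coordinates (x, y) in metres for the four element sets described
    above: inner and outer pairs on the first axis (x) and on the
    orthogonal second axis (y). Each pair stands in for one 'cluster'
    per side; spacings are assumed, not taken from the patent."""
    return {
        "set1": [(-d1 / 2, 0.0), (d1 / 2, 0.0)],  # inner pair, first axis
        "set2": [(-d2 / 2, 0.0), (d2 / 2, 0.0)],  # outer pair, first axis
        "set3": [(0.0, -d2 / 2), (0.0, d2 / 2)],  # outer pair, second axis
        "set4": [(0.0, -d1 / 2), (0.0, d1 / 2)],  # inner pair, second axis
    }

arr = nested_array()
```

Nesting a short-spacing pair inside a long-spacing pair is the usual way to cover both high frequencies (short spacing avoids spatial aliasing) and low frequencies (long spacing keeps directivity) with one array.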