H04R2430/23

APPARATUS, METHOD AND COMPUTER-READABLE STORAGE MEDIUM FOR MIXING COLLECTED SOUND SIGNALS OF MICROPHONES
20220394382 · 2022-12-08 ·

An apparatus comprising: one or more processors; and one or more memory devices configured to store one or more computer programs executable by the one or more processors. The one or more programs, when executed by the one or more processors, cause the apparatus to function as: a setting unit configured to set a user-selected angle section at a single sound collection position; an analysis unit configured to convert each of M collected sound signals into a frequency component; a beamforming unit configured to multiply the M frequency components obtained through conversion by the analysis unit by respective beamforming matrices to generate a plurality of two-channel acoustic signals; and a signal generation unit configured to synthesize the acoustic signals per channel and output an acoustic signal for every channel.
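The mixing pipeline this abstract describes can be sketched in a few lines of NumPy. This is a hedged illustration only: the array shapes, the number of angle sections, and the use of `einsum` are my assumptions, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 4            # number of microphones (M collected sound signals)
n_bins = 257     # frequency bins of one analysis frame (e.g. rfft of 512 samples)
n_sections = 3   # user-selected angle sections (illustrative count)

# M collected sound signals after conversion to frequency components
X = rng.standard_normal((M, n_bins)) + 1j * rng.standard_normal((M, n_bins))

# One two-channel beamforming matrix per angle section and frequency bin:
# shape (sections, channels=2, mics, bins); random here for illustration
W = rng.standard_normal((n_sections, 2, M, n_bins)) \
    + 1j * rng.standard_normal((n_sections, 2, M, n_bins))

# Beamforming unit: multiply the frequency components by each matrix,
# yielding one two-channel acoustic signal per angle section
per_section = np.einsum('scmf,mf->scf', W, X)   # (sections, 2, bins)

# Signal generation unit: synthesize the acoustic signals per channel
stereo_out = per_section.sum(axis=0)            # (2, bins)
```

A real implementation would derive `W` from the selected angle sections and run this per STFT frame; here the sum over sections is the "synthesize per channel" step.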

Audio system for dynamic determination of personalized acoustic transfer functions

An eyewear device includes an audio system. In one embodiment, the audio system includes a microphone array that includes a plurality of acoustic sensors. Each acoustic sensor is configured to detect sounds within a local area surrounding the microphone array. For a plurality of the detected sounds, the audio system performs a direction of arrival (DoA) estimation. Based on parameters of the detected sound and/or the DoA estimation, the audio system may then generate or update one or more acoustic transfer functions unique to a user. The audio system may use the one or more acoustic transfer functions to generate audio content for the user.
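The patent does not specify its DoA algorithm. As one illustrative building block, a time-delay estimate between two acoustic sensors via GCC-PHAT (a standard ingredient of DoA estimation, chosen by me as an example) might look like this:

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the time delay (seconds) of `sig` relative to `ref`
    using the generalized cross-correlation with phase transform."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    # reorder so index max_shift corresponds to zero lag
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

Pairwise delays from such an estimator, combined with the known sensor geometry, yield a direction-of-arrival estimate.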

Extrapolation of acoustic parameters from mapping server

Determination of a set of acoustic parameters for a headset is presented herein. The set of acoustic parameters can be determined based on a virtual model of physical locations stored at a mapping server. The virtual model describes a plurality of spaces and the acoustic properties of those spaces. A location in the virtual model for the headset is determined based on information, received from the headset, describing at least a portion of the local area; this location corresponds to the physical location of the headset. The set of acoustic parameters associated with the physical location of the headset is determined based in part on the determined location in the virtual model and any acoustic parameters associated with that location. The headset presents audio content using the set of acoustic parameters received from the mapping server.

TRAINING DATA EXTENSION APPARATUS, TRAINING DATA EXTENSION METHOD, AND PROGRAM

An input of a first observation signal corresponding to an incoming signal from a first direction is received; an angular rotation operation is performed on the first observation signal to obtain a second observation signal corresponding to an incoming signal from a second direction that is different from the first direction; and the second observation signal is added to a set of training data.
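The rotation step above can be sketched under a far-field plane-wave model: re-steering a multichannel spectrum from one direction to another amounts to swapping the direction-dependent inter-microphone phase. The linear-array geometry, speed of sound, and function names below are my assumptions, not the patent's:

```python
import numpy as np

C = 343.0  # speed of sound (m/s)

def rotate_observation(X, freqs, mic_pos, theta1, theta2):
    """Re-steer a far-field multichannel spectrum X (mics x bins) observed
    from direction theta1 so it appears to arrive from theta2 (radians),
    for microphones at positions mic_pos (meters) on a linear array."""
    # per-microphone arrival delays for each direction (plane-wave model)
    tau1 = mic_pos * np.cos(theta1) / C
    tau2 = mic_pos * np.cos(theta2) / C
    # remove theta1's phase, apply theta2's
    phase = np.exp(-2j * np.pi * np.outer(tau2 - tau1, freqs))
    return X * phase
```

Applying this to recorded observations yields synthetic training examples from directions that were never physically recorded, which is the data-extension idea of the abstract.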

CONFERENCE ROOM SYSTEM AND AUDIO PROCESSING METHOD
20220375486 · 2022-11-24 ·

An audio processing method includes the following steps: capturing audio data by a microphone array and computing frequency array data of the audio data; computing a power sequence over degrees by using the frequency array data; and computing a difference value between the maximum value and the minimum value of the power sequence to determine whether the degree corresponding to the maximum value is a source degree relative to the microphone array.
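The final decision step reduces to a simple spread test over the per-degree power values. A minimal sketch (the threshold and function name are my own, illustrative choices):

```python
import numpy as np

def find_source_degree(power_by_degree, threshold):
    """Return the degree index of maximum power if the max-min spread of
    the power sequence exceeds `threshold`; otherwise None (no clear source)."""
    p = np.asarray(power_by_degree, dtype=float)
    spread = p.max() - p.min()
    if spread >= threshold:
        return int(p.argmax())
    return None
```

A flat power sequence (small spread) indicates diffuse noise rather than a localized source, so no source degree is reported.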

Acoustic output device and buttons thereof

The present disclosure relates to an acoustic output device including an earphone core, a controller, a Bluetooth module, and a button module. The earphone core may include at least one low-frequency acoustic driver configured to output sounds from at least two first guiding holes and at least one high-frequency acoustic driver configured to output sounds from at least two second guiding holes. The controller may be configured to direct the at least one low-frequency acoustic driver to output the sounds in a first frequency range and direct the at least one high-frequency acoustic driver to output the sounds in a second frequency range. The Bluetooth module may be configured to connect the acoustic output device with at least one terminal device. The button module may be configured to implement an interaction between a user of the acoustic output device and the acoustic output device.

Semiconductor device

A semiconductor device with a novel structure which can identify the sound source is provided. The semiconductor device includes a microphone array, delay circuits, and a signal processing circuit. Each delay circuit includes a first selection circuit, which selects a microphone; signal retention circuits, which retain voltages depending on the sound signal; and a second selection circuit, which selects a signal retention circuit. Each signal retention circuit includes a transistor which includes a semiconductor layer including an oxide semiconductor in its channel formation region. The first selection circuit writes the voltages of discrete sound signals to the signal retention circuits. The second selection circuit selects, at different timings, the voltages retained in the signal retention circuits and generates the output signal corresponding to the delayed sound signal.

Microphone array system
11589158 · 2023-02-21 ·

A microphone array system includes: first microphones disposed along a first axis; second microphones disposed along a second axis orthogonal to the first axis, each at a first distance from the first axis; and a beamforming processor that performs beamforming by filtering and combining audio signals from the microphones. When the second microphones are projected onto the first axis, the first microphones and the projected second microphones are disposed at equal intervals of a second distance. The distance between the two end microphones, among the first microphones and the projected second microphones arranged along the first axis, is larger than the distance between the two end microphones, among the second microphones and the first microphones projected onto the second axis, arranged along the second axis.
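The geometric condition can be checked numerically for a hypothetical cross-shaped layout. All coordinates below are illustrative values I chose to satisfy the claim, not dimensions from the patent:

```python
import numpy as np

# Hypothetical layout: first microphones on the first axis (x),
# second microphones on the second axis (y).
d2 = 0.02                                   # second distance: projected interval (m)
first_x = np.array([-3, -1, 1, 3]) * d2     # first microphones along x
second_x = np.array([-2, 0, 2]) * d2        # second microphones projected onto x
second_y = np.array([1, 2, 3]) * d2         # second microphones along y

# Projected onto the first axis, all microphones interleave at equal intervals d2
merged_x = np.sort(np.concatenate([first_x, second_x]))

# End-to-end span along the first axis vs. along the second axis
span_x = merged_x.max() - merged_x.min()
proj_y = np.concatenate([second_y, [0.0]])  # first mics project onto y at 0
span_y = proj_y.max() - proj_y.min()
```

With these values the projected intervals are all `d2` and `span_x > span_y`, i.e. the array's aperture along the first axis exceeds that along the second axis, as the abstract requires.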

NOISE REDUCTION METHOD AND APPARATUS FOR MICROPHONE ARRAY OF EARPHONE, EARPHONE AND TWS EARPHONE

Disclosed are a noise reduction method for a microphone array of an earphone, a corresponding apparatus, and an earphone. The method comprises: acquiring, when an earphone wearer speaks, a first sound signal collected by a bone conduction microphone arranged on the earphone and second sound signals collected respectively by a preset number of microphones arranged on the earphone; determining, according to the first sound signal and the second sound signals, a delay time from the moment the voice signal arrives at each microphone to the moment it arrives at the bone conduction microphone; computing, according to the delay times, a pointing angle of the microphone array formed by the microphones relative to the wearer's mouth; and adjusting a beam pointing angle of the microphone array according to the pointing angle, such that the microphone array forms a beam with the adjusted beam pointing angle.
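The delay-to-angle step admits a simple far-field sketch for a two-microphone pair: the angle of arrival follows from the inter-microphone delay and spacing via the arccosine relation. The function, spacing, and speed of sound below are illustrative assumptions, not the patent's values:

```python
import math

C = 343.0  # speed of sound (m/s)

def pointing_angle(delay_s, mic_spacing_m):
    """Angle of arrival (radians) for a two-microphone far-field model:
    delay = spacing * cos(theta) / c, so theta = acos(c * delay / spacing)."""
    x = C * delay_s / mic_spacing_m
    x = max(-1.0, min(1.0, x))  # clamp numerical overshoot past [-1, 1]
    return math.acos(x)
```

With more than two microphones, per-pair angles can be combined (e.g. averaged) before steering the beam toward the wearer's mouth.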

TRANSPARENT AUDIO MODE FOR VEHICLES
20230096496 · 2023-03-30 ·

In general, techniques are described for enabling a transparency mode in vehicles. A device comprising one or more microphones and one or more processors may be configured to perform the techniques. The microphones may capture audio data representative of a sound scene external to a vehicle. The processors may perform beamforming with respect to the audio data to obtain object audio data representative of an audio object in the sound scene external to the vehicle. The processors may then reproduce, by interfacing with one or more speakers included within the vehicle and based on the object audio data, the audio object in the sound scene external to the vehicle.