H04S2420/00

User Experience Localizing Binaural Sound During a Telephone Call
20220217491 · 2022-07-07 ·

Methods and apparatus improve the user experience during telephone calls and other forms of communication in which a listener localizes electronically generated binaural sound. The sound is convolved or otherwise processed so that it localizes to a point behind or near its source, and the listener therefore perceives the sound as originating from that source.

Providing binaural sound behind an image being displayed with an electronic device
11290836 · 2022-03-29 ·

An electronic device displays an image that has sound. One or more processors process the sound for the image into binaural sound for a user. The binaural sound has a sound localization point (SLP) whose coordinate location occurs behind the image while the image is located at a near-field distance from the user's head.
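A full implementation would convolve the sound with a head-related transfer function (HRTF) measured or selected for the chosen sound localization point. As a rough illustration of how a coordinate location maps to a two-channel signal, here is a toy sketch (not the patented method) that approximates an SLP's azimuth using only an interaural time difference (Woodworth's formula) and a crude level difference; all names, parameters, and the pan law are illustrative assumptions:

```python
import numpy as np

def render_binaural(mono, azimuth_deg, fs=48000, head_radius=0.0875):
    """Toy sketch (not an HRTF): approximate a sound localization point
    at a given azimuth using only interaural time and level differences.
    Woodworth's formula gives the ITD; the ILD is a simple cosine pan law."""
    theta = np.radians(azimuth_deg)
    c = 343.0                                        # speed of sound, m/s
    itd = head_radius / c * (theta + np.sin(theta))  # Woodworth ITD, seconds
    delay = int(round(abs(itd) * fs))                # ITD in whole samples

    near = mono                                      # ear facing the source
    far = np.concatenate([np.zeros(delay), mono])[:len(mono)]  # delayed ear
    far *= 0.5 + 0.5 * np.cos(theta)                 # crude level difference

    # Positive azimuth places the source toward the right ear.
    left, right = (far, near) if azimuth_deg > 0 else (near, far)
    return np.stack([left, right])

# Azimuth 0 (straight ahead): both ears receive the same signal.
stereo = render_binaural(np.ones(100), 0.0)
```

A real system would also model elevation and distance (the near-field placement behind the image) with measured HRTF filters rather than these two cues alone.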

Audio processing method and audio processing system
10939221 · 2021-03-02 ·

An audio processing method and an audio processing system are provided. In the method, an audio signal and a plurality of predetermined categories are first provided, and a classification step is performed on the audio signal according to those categories. A transform step then converts the audio signal into the frequency domain. A panning step and a summing step are performed on the amplitude signals of the audio signal to obtain a total amplitude signal, and a separation step and a summing step are performed on the phase signals to obtain a total phase signal. Finally, an inverse transform step converts the total amplitude signal and the total phase signal into an optimized audio signal in the time domain.
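The transform/panning/summing pipeline can be sketched as follows. This is a hypothetical, simplified mono rendering of the abstract's steps, not the claimed method: the classification step is omitted, and the "total phase signal" is taken to be the original phase unchanged:

```python
import numpy as np

def process_audio(signal, pan_gains):
    """Hypothetical sketch of the described pipeline: transform the signal
    to the frequency domain, pan and sum the amplitude (magnitude) spectra,
    keep a combined phase spectrum, then inverse transform back to time."""
    spectrum = np.fft.rfft(signal)                   # transform step
    amplitude = np.abs(spectrum)
    phase = np.angle(spectrum)

    # Panning step + summing step: scale the amplitude spectrum by each
    # pan gain, then sum the panned copies into a total amplitude signal.
    total_amplitude = sum(g * amplitude for g in pan_gains)

    # Separation/summing of phase: here we simply reuse the original
    # phase as the "total phase signal" (a simplifying assumption).
    total_phase = phase

    # Inverse transform step: recombine magnitude and phase, return time-domain.
    return np.fft.irfft(total_amplitude * np.exp(1j * total_phase),
                        n=len(signal))

sig = np.sin(np.linspace(0, 2 * np.pi * 440, 4800))  # illustrative input
optimized = process_audio(sig, [0.5, 0.5])
```

With gains summing to one and the phase passed through, the round trip is lossless, which makes the structure easy to verify before adding real per-category panning.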

Personalized three-dimensional audio

A headphone system includes a calibration microphone for performing a calibration routine with a user. The calibration microphone receives a stimulus signal emitted by the headphone system and generates a response signal indicating variations in the stimulus signal that arise from the user's physiological attributes. From the stimulus and response signals, a calibration engine generates response data. The calibration engine processes the response data based on a headphone transfer function (HPTF) associated with the headphone system to create an inverse filter that can reduce or remove acoustic variations caused by the headphone system. The calibration engine then generates a personalized HRTF for the user based on the response data and the inverse filter. The personalized HRTF can be used to implement highly accurate 3D audio and is therefore well suited to immersive audio and audio-visual entertainment.
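The inverse-filter step can be illustrated with frequency-domain deconvolution. The sketch below is a simplification under stated assumptions, not the patented calibration engine: it estimates a transfer function from a single stimulus/response pair and inverts it with basic regularization to avoid amplifying spectral nulls:

```python
import numpy as np

def inverse_filter(stimulus, response, eps=1e-3):
    """Illustrative sketch (not the patented method): estimate the
    headphone transfer function from the emitted stimulus and the
    measured response, then build a regularized inverse filter that
    flattens the headphone's coloration."""
    S = np.fft.rfft(stimulus)
    R = np.fft.rfft(response)
    H = R / (S + eps)                  # estimated transfer function
    # Regularized inversion avoids blowing up where |H| is small.
    H_inv = np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.fft.irfft(H_inv, n=len(stimulus))

stim = np.zeros(256)
stim[0] = 10.0                         # impulse-like test stimulus
flat = inverse_filter(stim, stim.copy())  # identical response -> near-identity filter
```

When the response equals the stimulus, the estimated transfer function is flat and the inverse filter collapses to (approximately) a unit impulse, a convenient sanity check before feeding in real measurements.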

User Experience Localizing Binaural Sound During a Telephone Call
20210021952 · 2021-01-21 ·

An electronic device displays an image that has sound. One or more processors process the sound for the image into binaural sound for a user. The binaural sound has a sound localization point (SLP) whose coordinate location occurs behind the image while the image is located at a near-field distance from the user's head.

Audio spatialization and reinforcement between multiple headsets

A shared communication channel allows audio content to be transmitted and received between multiple users. Each user is associated with a headset configured to transmit and receive audio data to and from the headsets of other users. After the headset of a first user receives audio data corresponding to a second user, the headset spatializes the audio data based upon the relative positions of the two users, so that when the audio data is presented to the first user, its sounds appear to originate at a location corresponding to the second user. The headset also reinforces the audio data based upon the deviation between the second user's location and the first user's gaze direction, allowing the first user to hear more clearly the users they are paying attention to.
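The two rendering parameters the abstract describes, a spatialization direction from relative positions and a reinforcement gain from gaze deviation, might be derived as in this hypothetical sketch (the flat 2-D geometry, the linear gain law, and all constants are illustrative assumptions, not claimed values):

```python
import math

def spatialize_params(listener_pos, listener_gaze_deg, source_pos,
                      base_gain=1.0, boost=0.5):
    """Hypothetical sketch: derive a rendering azimuth from the relative
    positions of two users, and a reinforcement gain that grows as the
    listener's gaze lines up with the talker."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    azimuth = math.degrees(math.atan2(dy, dx))   # direction to render from

    # Deviation between gaze direction and the talker's direction,
    # wrapped into [0, 180] degrees.
    deviation = abs((listener_gaze_deg - azimuth + 180) % 360 - 180)

    # Reinforce (boost) the talker the listener is looking toward.
    gain = base_gain + boost * (1 - deviation / 180.0)
    return azimuth, gain

# Talker directly ahead of the gaze gets the full boost.
az, gain = spatialize_params((0.0, 0.0), 0.0, (1.0, 0.0))
```

The azimuth would feed a binaural renderer while the gain scales that talker's level relative to others on the shared channel.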

Audio processing method and audio processing system
20200304934 · 2020-09-24 ·

An audio processing method and an audio processing system are provided. In the method, an audio signal and a plurality of predetermined categories are first provided, and a classification step is performed on the audio signal according to those categories. A transform step then converts the audio signal into the frequency domain. A panning step and a summing step are performed on the amplitude signals of the audio signal to obtain a total amplitude signal, and a separation step and a summing step are performed on the phase signals to obtain a total phase signal. Finally, an inverse transform step converts the total amplitude signal and the total phase signal into an optimized audio signal in the time domain.

Methods and systems for automatically equalizing audio output based on room position

The various implementations described herein include methods, devices, and systems for automatic audio equalization. In one aspect, a method is performed at an electronic device that includes speakers, microphones, processors, and memory. The electronic device outputs audio user content from the speakers and automatically equalizes its subsequent audio output without user input. The automatic equalization includes: (1) obtaining audio content signals, including receiving the outputted audio content at each microphone; (2) determining, from the audio content signals, phase differences between microphones; (3) obtaining a feature vector based on the phase differences; (4) obtaining a frequency correction from a correction database based on the feature vector; and (5) applying the obtained frequency correction to the subsequent audio output.
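Steps (2) through (4) might look like the following hypothetical sketch, with a nearest-neighbour lookup standing in for the correction database (the feature construction and the database format here are assumptions for illustration, not the patented design):

```python
import numpy as np

def phase_feature(mic_signals):
    """Sketch of steps (2)-(3): per-frequency phase differences between
    the first microphone and each other microphone, stacked into a
    feature vector."""
    spectra = [np.fft.rfft(s) for s in mic_signals]
    ref_phase = np.angle(spectra[0])
    diffs = [np.angle(sp) - ref_phase for sp in spectra[1:]]
    return np.concatenate(diffs)

def lookup_correction(feature, correction_db):
    """Sketch of step (4): nearest-neighbour lookup in a correction
    database keyed by stored feature vectors."""
    keys, corrections = correction_db
    dists = [np.linalg.norm(feature - k) for k in keys]
    return corrections[int(np.argmin(dists))]

# Two microphones hearing the same signal produce an all-zero feature.
t = np.arange(8, dtype=float)
feature = phase_feature([np.sin(t), np.sin(t)])
db = ([np.zeros(5), np.ones(5)], ["flat", "bass-cut"])
correction = lookup_correction(feature, db)
```

In a real device, the stored keys would be measured at known room positions so the phase signature selects an equalization curve appropriate to where the speaker is placed.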