Patent classifications
H04S2420/00
Providing binaural sound behind a virtual image being displayed with a wearable electronic device (WED)
A wearable electronic device (WED) worn on a head of a user displays a virtual image that has sound. One or more processors process the sound for the virtual image into binaural sound for the user. The binaural sound has a sound localization point (SLP) with a coordinate location that occurs behind the virtual image while the image is located at a near-field distance from the head of the user.
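The placement step can be sketched as simple geometry: the SLP sits on the ray from the head through the image, a fixed offset behind it. The offset value and coordinates below are illustrative, not taken from the patent.

```python
import math

def slp_behind_image(head, image, offset=0.3):
    """Place a sound localization point (SLP) a fixed offset behind
    the virtual image, along the ray from the head through the image.

    head, image: (x, y, z) coordinates in meters.
    offset: how far behind the image the SLP sits (assumed value).
    """
    direction = [i - h for h, i in zip(head, image)]
    dist = math.sqrt(sum(d * d for d in direction))
    if dist == 0:
        raise ValueError("image must not coincide with the head")
    unit = [d / dist for d in direction]
    return tuple(i + offset * u for i, u in zip(image, unit))

# Image 0.5 m in front of the head (near-field); the SLP lands
# 0.3 m behind it, 0.8 m from the head along the same ray.
slp = slp_behind_image((0.0, 0.0, 0.0), (0.0, 0.0, 0.5))
```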
PERSONALIZED THREE-DIMENSIONAL AUDIO
A headphone system includes a calibration microphone for performing a calibration routine with a user. The calibration microphone receives a stimulus signal emitted by the headphone system and generates a response signal indicating variations in the stimulus signal that arise due to physiological attributes of the user. Based on the stimulus signal and the response signal, a calibration engine generates response data. The calibration engine processes the response data based on a headphone transfer function (HPTF) associated with the headphone system in order to create an inverse filter that can reduce or remove acoustic variations caused by the headphone system. The calibration engine generates a personalized HRTF for the user based on the response data and the inverse filter. The personalized HRTF can be used to implement highly accurate 3D audio and is thereby well suited to immersive audio and audio-visual entertainment applications.
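One common way to build such an inverse filter is regularized spectral inversion of the measured HPTF; the sketch below assumes that approach. The toy impulse response and the regularization constant are invented for illustration.

```python
import cmath

def dft(x):
    """Naive DFT, adequate for a short illustrative filter."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def inverse_filter_spectrum(hptf, eps=1e-3):
    """Regularized inversion of a headphone transfer function spectrum.

    Dividing by |H|^2 + eps instead of H avoids huge gains at
    frequencies the headphone reproduces weakly (eps is an assumed
    regularization constant, not a value from the patent).
    """
    return [h.conjugate() / (abs(h) ** 2 + eps) for h in hptf]

# Toy headphone impulse response; its regularized spectral inverse
# approximately flattens the product H(f) * Hinv(f) in every bin.
h = dft([1.0, 0.5, 0.25, 0.0])
hinv = inverse_filter_spectrum(h)
flattened = [a * b for a, b in zip(h, hinv)]
```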
User Experience Localizing Binaural Sound During a Telephone Call
Methods and apparatus improve a user experience during telephone calls or other forms of communication in which a listener localizes electronically generated binaural sounds. The sound is convolved or processed to a location that is behind or near a source of the sound so that the listener perceives the location of the sound as originating from the source of the sound.
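The convolution step can be sketched with a mono signal and a pair of head-related impulse responses (HRIRs); the HRIRs below are toy placeholders that merely encode an interaural time and level difference, not measured responses.

```python
def convolve(signal, impulse):
    """Direct-form convolution (fine for short illustrative HRIRs)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def binauralize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right HRIR pair so the
    listener localizes it at the position the HRIRs correspond to."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)

# A source to the listener's right: the left ear hears it later
# (interaural time difference) and quieter (level difference).
left, right = binauralize([1.0, 0.0, 0.0],
                          hrir_left=[0.0, 0.0, 0.6],   # delayed, attenuated
                          hrir_right=[1.0, 0.0, 0.0])  # direct path
```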
User Experience Localizing Binaural Sound During a Telephone Call
Headphones provide binaural sound to a location behind a portable electronic device. The sound is convolved or processed to a location that is behind the portable electronic device so that the listener perceives the location of the sound as originating from the location of the portable electronic device.
User Experience Localizing Binaural Sound During a Telephone Call
Methods and apparatus improve a user experience localizing binaural sound to an augmented reality (AR) or virtual reality (VR) image. The sound is convolved or processed to a location that is behind the AR or VR image so that the listener perceives the location of the sound as originating from the AR or VR image.
Methods and Systems for Automatically Equalizing Audio Output based on Room Characteristics
The various implementations described herein include methods, devices, and systems for automatic audio equalization. In one aspect, a method is performed at a computing system that includes speakers, a plurality of microphones, processors, and memory. The computing system outputs audio user content and automatically equalizes the audio output of the computing system. The equalizing includes: (1) receiving the outputted audio content at each microphone of the plurality of microphones; (2) based on the received audio content, determining an acoustic transfer function for the room; (3) based on the determined acoustic transfer function, obtaining a frequency response for the room; and (4) adjusting one or more properties of the speakers based on the obtained frequency response.
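Step (4) amounts to computing per-band corrections that flatten the measured room response. A minimal sketch, assuming a flat 0 dB target and an invented +/-6 dB safety clamp:

```python
def eq_gains(room_response_db, target_db=0.0, limit_db=6.0):
    """Per-band speaker gain adjustments that flatten a measured room
    frequency response toward a flat target. Clamping to +/- limit_db
    (an assumed safety bound) avoids extreme boosts in deep room nulls.
    """
    gains = {}
    for band, level in room_response_db.items():
        correction = target_db - level
        gains[band] = max(-limit_db, min(limit_db, correction))
    return gains

# Measured response: bass boosted by a room mode, treble absorbed.
measured = {"100Hz": 4.0, "1kHz": 0.5, "10kHz": -2.0}
gains = eq_gains(measured)
# The 100 Hz band is cut by 4 dB and the 10 kHz band boosted by 2 dB.
```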
Methods and Systems for Automatically Equalizing Audio Output based on Room Position
The various implementations described herein include methods, devices, and systems for automatic audio equalization. In one aspect, a method is performed at an electronic device that includes speakers, microphones, processors and memory. The electronic device outputs audio user content from the speakers and automatically equalizes subsequent audio output of the device without user input. The automatic equalization includes: (1) obtaining audio content signals, including receiving outputted audio content at each microphone; (2) determining from the audio content signals phase differences between microphones; (3) obtaining a feature vector based on the phase differences; (4) obtaining a frequency correction from a correction database based on the obtained feature vector; and (5) applying the obtained frequency correction to the subsequent audio output.
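Steps (3) and (4) can be sketched as a nearest-neighbor lookup: the phase-difference feature vector indexes a correction database. The feature vectors and corrections below are invented placeholders.

```python
import math

def nearest_correction(feature, database):
    """Look up the stored frequency correction whose phase-difference
    feature vector is closest (Euclidean distance) to the measured one.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best = min(database, key=lambda entry: dist(feature, entry[0]))
    return best[1]

# Feature vectors: inter-microphone phase differences (radians) at a
# few probe frequencies, each mapped to a per-band correction in dB.
db = [
    ((0.1, 0.2, 0.3), {"low": -2.0, "high": 1.0}),   # near a wall
    ((1.0, 1.1, 1.2), {"low": 0.0, "high": 0.0}),    # free placement
]
correction = nearest_correction((0.15, 0.25, 0.28), db)
```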
Sound processing for a bilateral cochlear implant system
According to an embodiment, a method for producing stimulation pulses in a bilateral cochlear implant (CI) is disclosed. The method includes receiving a sound at a first microphone or first microphone array positioned at or in the vicinity of a first ear of a user of the bilateral CI, and receiving the sound at a second microphone or second microphone array positioned at or in the vicinity of a second ear of the user. A first microphone signal is generated, using the first microphone or first microphone array, in response to the sound received there, and a second microphone signal is generated, using the second microphone or second microphone array, in response to the sound received there. The first microphone signal is then filtered into a plurality of band-limited first microphone signals, and the second microphone signal is filtered into a plurality of band-limited second microphone signals. A major sound is determined based on analysis of the first microphone signal, the second microphone signal, at least one of the plurality of band-limited first microphone signals, and/or at least one of the plurality of band-limited second microphone signals, and the direction of arrival of the major sound is extracted. Based on the determined major sound, a primary pulse pattern is generated, and a secondary pulse pattern, comprising for example a delayed and/or attenuated copy of the primary pulse pattern, is then generated; the amount of delay and/or attenuation is based on the extracted direction of arrival. Finally, one cochlea is stimulated using a primary stimulation pulse based on the primary pulse pattern, and the other cochlea is stimulated using a secondary stimulation pulse based on the secondary pulse pattern.
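The derivation of the secondary pulse pattern from the primary pattern can be sketched as follows; the delay and attenuation scaling constants are assumptions for illustration, not values from the patent.

```python
import math

def secondary_pulse_pattern(primary, doa_deg,
                            max_delay=8, max_atten=0.5):
    """Derive the secondary (contralateral) pulse pattern as a delayed,
    attenuated copy of the primary pattern. Delay and attenuation scale
    with how far the direction of arrival is off the midline.

    doa_deg: direction of arrival; 0 = straight ahead, +/-90 = fully
    lateral. max_delay (samples) and max_atten are assumed constants.
    """
    lateral = abs(math.sin(math.radians(doa_deg)))  # 0 midline, 1 lateral
    delay = round(max_delay * lateral)              # interaural time cue
    gain = 1.0 - max_atten * lateral                # interaural level cue
    return [0.0] * delay + [gain * p for p in primary]

# A fully lateral source: the contralateral cochlea receives the
# pattern 8 samples later and at half amplitude.
sec = secondary_pulse_pattern([1.0, 0.8, 0.0], doa_deg=90)
```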
METHOD AND APPARATUS FOR ASSIGNING MULTI-CHANNEL AUDIO TO MULTIPLE MOBILE DEVICES AND ITS CONTROL BY RECOGNIZING USER'S GESTURE
A method and apparatus for multi-channel audio data control using plural mobile devices comprises the steps of automatically calculating the positions of the plural devices, transmitting audio data to the plural devices based on the calculated positions, and executing control based on the transmitted audio channel data. Control of the data can be executed automatically or by recognizing user gestures on the mobile devices.
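The channel-assignment step can be sketched as a greedy matching of each channel to the device closest to that channel's nominal direction; the device names and angles below are illustrative.

```python
def assign_channels(device_angles, channel_angles):
    """Assign each audio channel to the remaining device closest to the
    channel's nominal direction (angles in degrees around the listener).
    A simple greedy one-to-one matching for illustration.
    """
    remaining = dict(device_angles)
    assignment = {}
    for channel, angle in channel_angles.items():
        def angular_dist(item):
            d = abs(item[1] - angle) % 360
            return min(d, 360 - d)  # wrap-around angular distance
        device, _ = min(remaining.items(), key=angular_dist)
        assignment[channel] = device
        del remaining[device]
    return assignment

# Two phones flanking the listener take the left and right channels.
phones = {"phoneA": -28.0, "phoneB": 31.0}   # measured angles (degrees)
mapping = assign_channels(phones, {"left": -30.0, "right": 30.0})
```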