Patent classifications
H04R2460/07
REVERBERATION FINGERPRINT ESTIMATION
Examples of the disclosure describe systems and methods for estimating acoustic properties of an environment. In an example method, a first audio signal is received via a microphone of a wearable head device. An envelope of the first audio signal is determined, and a first reverberation time is estimated based on the envelope of the first audio signal. A difference between the first reverberation time and a second reverberation time is determined. A change in the environment is determined based on the difference between the first reverberation time and the second reverberation time. A second audio signal is presented via a speaker of a wearable head device, wherein the second audio signal is based on the second reverberation time.
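The decay-fit step described above can be sketched as follows. This is a minimal illustration only, assuming a Schroeder-style linear fit to the log-envelope; the function and variable names are illustrative and not taken from the patent.

```python
import math

def estimate_rt60(envelope, sample_rate, decay_db=60.0):
    """Estimate reverberation time from a decaying signal envelope.

    Fits a least-squares line to the envelope in dB and extrapolates
    the time needed for a 60 dB drop (the conventional RT60).
    """
    # Convert envelope samples to dB, skipping non-positive values.
    times, levels = [], []
    for i, e in enumerate(envelope):
        if e > 0:
            times.append(i / sample_rate)
            levels.append(20.0 * math.log10(e))
    # Least-squares slope of level (dB) versus time (s).
    n = len(times)
    mt = sum(times) / n
    ml = sum(levels) / n
    slope = sum((t - mt) * (l - ml) for t, l in zip(times, levels)) / \
            sum((t - mt) ** 2 for t in times)
    # Time for the level to fall by decay_db at this slope.
    return decay_db / -slope

# Synthetic check: an envelope decaying at exactly -60 dB per second
# should yield an RT60 of 1.0 s.
sr = 1000
env = [10 ** (-3.0 * (i / sr)) for i in range(sr)]
rt60 = estimate_rt60(env, sr)
```

A change in the environment could then be flagged when the difference between two such estimates exceeds a chosen threshold.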
HEARING DEVICE COMPRISING AN OWN VOICE PROCESSOR
A hearing device, e.g. a hearing aid or a headset, configured to be worn at or in an ear of a user, the hearing device comprising at least one input transducer for converting a sound in an environment of the hearing device to at least one electric input signal representing said sound; an own voice detector configured to estimate whether or not, or with what probability, said sound originates from the voice of the user, and to provide an own voice control signal indicative thereof; and a face mask detector configured to estimate whether or not, or with what probability, said user wears a face mask while speaking, and to provide a face mask control signal indicative thereof. A method of operating a hearing device is further disclosed. Thereby an improved hearing aid or headset may be provided.
Earpiece with GPS receiver
An earpiece includes an earpiece housing, a processor disposed within the earpiece, a speaker operatively connected to the processor, a microphone operatively connected to the processor, and a global navigation satellite system (GNSS) receiver disposed within the earpiece. A system may include a first earpiece having a connector with earpiece charging contacts, a charging case for the first earpiece, the charging case having contacts for connecting with the earpiece charging contacts, and a global navigation satellite system (GNSS) receiver disposed within the charging case.
EFFICIENT RENDERING OF VIRTUAL SOUNDFIELDS
An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
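The low-energy culling and the active-speaker hold described above can be sketched together. This is an illustrative assumption about the mechanism, not the patented implementation; the one-frame hold stands in for "active at a first time remains active at a second time so processing completes".

```python
def select_active_speakers(energies, threshold, loud_last_frame):
    """Select the subset P of F virtual speakers to render this frame.

    A speaker is rendered if its frame energy meets `threshold`, or if
    it was above threshold in the previous frame (a one-frame hold so
    in-flight processing, e.g. a reverb tail, can complete). Returns
    (active set, set that was loud this frame).
    """
    loud_now = {i for i, e in enumerate(energies) if e >= threshold}
    return loud_now | loud_last_frame, loud_now

# Frame 1: speakers 0 and 2 carry energy above the threshold.
active_t1, loud_t1 = select_active_speakers([0.9, 0.01, 0.5, 0.02], 0.1, set())
# Frame 2: speaker 2 drops below threshold but is held active one frame.
active_t2, loud_t2 = select_active_speakers([0.9, 0.01, 0.05, 0.02], 0.1, loud_t1)
```

Processing blocks for speakers outside the active set would then be bypassed for that frame.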
HEARING AID CONFIGURED TO BE OPERATING IN A COMMUNICATION SYSTEM
The invention relates to a communication system comprising a hearing aid, a communication unit, a relay server, a rule processing server, and at least one external device, wherein the rule processing server comprises a data communication interface to communicate with said relay server and with a plurality of external devices over a plurality of data communication channels, a rule processor, and a rule base comprising a set of rules, each rule defining an action to be triggered in response to a trigger event. Said rule processor is configured to generate an action request signal in response to an event signal representing a trigger event. Said action request signal is configured to cause an action of at least one of the hearing aid, the communication unit, the relay server or the external device, and wherein said action request signal carries information that designates at least one of said devices and at least one action to be performed by said at least one device. Said communication system further comprises an event detector that is configured to detect a trigger event and to generate the event signal in response to a detection of the trigger event.
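The rule-base lookup described above can be sketched as a simple dispatch. The rule shape, trigger names, and device names are illustrative assumptions, not taken from the patent.

```python
def dispatch(event, rule_base):
    """Find the rule triggered by `event` and build an action request
    naming the target device and the action that device should perform.
    Returns None when no rule matches the event.
    """
    for rule in rule_base:
        if rule["trigger"] == event:
            return {"device": rule["device"], "action": rule["action"]}
    return None

rules = [
    {"trigger": "doorbell", "device": "hearing_aid", "action": "play_chime"},
    {"trigger": "phone_call", "device": "relay_server", "action": "route_audio"},
]
request = dispatch("doorbell", rules)
```

In the described system, the event detector would supply `event`, and the action request would be forwarded to the designated device over the relay server.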
Validation of Audio Calibration Using Multi-Dimensional Motion Check
Examples described herein involve validating motion of a microphone during calibration of a playback device. An example implementation involves a mobile device detecting, via one or more microphones, audio signals emitted from one or more playback devices as part of a calibration process. After the one or more playback devices emit the audio signals, the mobile device determines whether the detected audio signals indicate that sufficient horizontal translation of the mobile device occurred during the calibration process. When the detected audio signals indicate that insufficient horizontal translation occurred, the mobile device displays a prompt to move the mobile device more while the one or more playback devices emit one or more additional audio signals as part of the calibration process. When the detected audio signals indicate that sufficient horizontal translation occurred, the mobile device calibrates the one or more playback devices with a calibration based on the detected audio signals.
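The horizontal-translation check described above can be sketched as a span test over estimated positions. The position source, threshold, and names are illustrative assumptions, not taken from the patent.

```python
import math

def sufficient_horizontal_translation(positions, min_span_m=1.0):
    """Check whether the mobile device moved enough horizontally during
    calibration. `positions` is a list of (x, y) estimates in metres;
    the motion check passes if the largest pairwise horizontal distance
    meets `min_span_m`.
    """
    span = max(math.dist(a, b) for a in positions for b in positions)
    return span >= min_span_m

# A path spanning over a metre passes; near-stationary motion fails
# and would trigger the prompt to move the device more.
path_ok = [(0.0, 0.0), (0.4, 0.1), (0.9, 0.6), (1.3, 0.2)]
path_bad = [(0.0, 0.0), (0.1, 0.05), (0.05, 0.1)]
```

When the check fails, the calibration flow would display the prompt and continue emitting calibration audio rather than finalizing the calibration.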
SYSTEM AND METHOD FOR DIFFERENTIALLY LOCATING AND MODIFYING AUDIO SOURCES
A system and method for differentially locating and modifying audio sources that includes receiving multiple audio inputs from a set of distinct locations; determining a multi-dimensional audio map from the audio inputs; acquiring a set of positional audio control inputs applied to the audio map, each audio control input comprising a location and an audio processing property; and generating an audio output according to the audio control inputs and the audio inputs. The audio control inputs are configurable through manual, automatic, computer-vision-based, and other configuration modes.
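A positional control input, i.e. a location paired with a processing property, can be sketched against a simple two-dimensional audio map. The map structure, gain-as-property choice, and radius are illustrative assumptions, not taken from the patent.

```python
import math

def apply_controls(audio_map, controls, radius=1.0):
    """Apply positional gain controls to a 2-D audio map.

    `audio_map` maps (x, y) source locations to signal levels; each
    control is a ((x, y), gain) pair that scales every source located
    within `radius` of the control's location.
    """
    out = dict(audio_map)
    for (cx, cy), gain in controls:
        for (x, y), level in out.items():
            if math.dist((x, y), (cx, cy)) <= radius:
                out[(x, y)] = level * gain
    return out

# Mute the source at the origin; the distant source is unaffected.
sources = {(0.0, 0.0): 1.0, (5.0, 5.0): 1.0}
muted = apply_controls(sources, [((0.0, 0.0), 0.0)])
```

The same shape accommodates other processing properties (e.g. filtering instead of gain) by replacing the scalar multiply with the desired operation.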
VEHICLE AND METHOD FOR CONTROLLING THEREOF
A vehicle selects various sound sources and outputs driving sound based on information received from the navigation system. A database stores sound sources classified as first and second emotions. A controller outputs a landmark name based on a route guidance text and outputs a ratio of the first and second emotions corresponding to the landmark name. The controller selects a first sound source from the sound sources classified as the first emotion based on the first emotion ratio, selects a second sound source from the sound sources classified as the second emotion based on the second emotion ratio and determines first and second playback periods based on the ratios of the first and second emotions. The controller outputs a first sound generated based on the first sound source during the first playback period and a second sound generated based on the second sound source during the second playback period.
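The ratio-to-playback-period step described above can be sketched as a proportional split. The ratio values, emotion labels, and total duration are illustrative assumptions, not taken from the patent.

```python
def playback_periods(first_ratio, second_ratio, total_seconds=60.0):
    """Split a total driving-sound duration into first and second
    playback periods in proportion to the two emotion ratios
    associated with a landmark.
    """
    total_ratio = first_ratio + second_ratio
    first = total_seconds * first_ratio / total_ratio
    return first, total_seconds - first

# A landmark tagged 70% first emotion / 30% second emotion over a
# 60-second segment.
p1, p2 = playback_periods(0.7, 0.3)
```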
SOUND FIELD ADJUSTMENT
A device includes one or more processors configured to receive, via wireless transmission from a playback device, data associated with a pose of the playback device. The one or more processors are also configured to select, based on the data, a particular representation of a sound field from a plurality of representations of the sound field. Each respective representation of the sound field corresponds to a different sector of a set of sectors. A sector represents a range of values associated with movement of the playback device. The one or more processors are further configured to generate audio data corresponding to the selected representation of the sound field. The one or more processors are also configured to send, via wireless transmission, the audio data as streaming data to the playback device.
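The pose-to-sector mapping described above can be sketched for the simple case where the movement value is a yaw angle and sectors partition it evenly. The sector count and angle convention are illustrative assumptions, not taken from the patent.

```python
def select_sector(yaw_degrees, num_sectors=8):
    """Map a playback-device yaw (degrees) to the index of the sector
    whose sound-field representation should be streamed. Each sector
    covers an equal range of yaw values; negative angles wrap around.
    """
    width = 360.0 / num_sectors
    return int((yaw_degrees % 360.0) // width)

# A device facing 95 degrees falls in sector 2 of 8 (45 degrees each).
sector = select_sector(95.0)
```

The streaming device would then generate and send the audio data for the representation associated with `sector`.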
Headset sound leakage mitigation
An audio system for a headset includes a plurality of speakers and an audio controller. The plurality of speakers may be in a dipole configuration that cancels sound leakage into a local area of the headset. The controller filters audio content presented by the plurality of speakers to further mitigate leakage of audio content into the local area. The audio controller determines sound filters based on environmental conditions, such as ambient noise levels, as well as based on the audio content being presented.