Patent classifications
H04R2205/041
Sound playback system and output sound adjusting method thereof
A sound playback system and an output sound adjusting method thereof are disclosed. The method includes the following steps: receiving an input sound signal from a user, wherein the input sound signal includes a voice signal indicating the age of the user; transmitting the input sound signal to a remote voice system; performing a voice recognition process according to the voice signal of the input sound signal to obtain a voice recognition result; adjusting a gain value of each frequency band of an output sound signal according to the voice recognition result; and transmitting the output sound signal to a near-end electronic device, which outputs it to be heard by the user.
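The per-band gain adjustment step in the abstract above could be sketched as follows. This is a hypothetical illustration: the age brackets, the three-band layout, and the gain values are assumptions, not taken from the patent.

```python
# Map a recognized age to per-band output gains. All numbers here are
# illustrative; the patent does not specify brackets or gain values.
AGE_BAND_GAINS_DB = {
    # (min_age, max_age): gains in dB for [low, mid, high] bands
    (0, 39):   [0.0, 0.0, 0.0],   # no compensation for younger listeners
    (40, 59):  [0.0, 2.0, 4.0],   # mild high-frequency boost
    (60, 120): [0.0, 4.0, 8.0],   # stronger boost for age-related loss
}

def band_gains_for_age(age: int) -> list[float]:
    """Return per-band gains (dB) for the age in the recognition result."""
    for (lo, hi), gains in AGE_BAND_GAINS_DB.items():
        if lo <= age <= hi:
            return gains
    raise ValueError(f"age out of range: {age}")

def apply_band_gains(band_levels_db: list[float],
                     gains_db: list[float]) -> list[float]:
    """Adjust each frequency band's output level by its gain value."""
    return [lvl + g for lvl, g in zip(band_levels_db, gains_db)]
```

In this sketch the remote voice system would supply only the age estimate; the near-end device applies the resulting gains per band before playback.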
Audio signal processing for sound compensation
Signal energy in auditory sub-bands of an input audio signal is determined. Sound pressure level, SPL, in those sub-bands is then determined, based on the signal energy and based on sound output sensitivity of the against-the-ear audio device. In one instance, at least first and second gain lookup tables are determined based on a hearing profile of a user of the against-the-ear audio device. Sub-band gains that are to be applied to the input audio signal are determined based on the determined SPL. When the input audio signal is, for example, telephony, the sub-band gains are computed using the first gain lookup table, and when the input audio signal is, for example, media, the sub-band gains are computed using the second gain lookup table. Other aspects are also described and claimed.
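The energy-to-SPL conversion and the content-dependent gain lookup described above might look roughly like this. The sensitivity figure, table thresholds, and gain values are all illustrative assumptions, not values from the patent.

```python
import math

# Assumed output sensitivity: SPL produced by a full-scale (0 dBFS) signal.
SENSITIVITY_DB = 100.0

def subband_spl(energy: float) -> float:
    """Sub-band energy (mean-square, full scale = 1.0) -> estimated SPL in dB."""
    dbfs = 10.0 * math.log10(max(energy, 1e-12))
    return SENSITIVITY_DB + dbfs

# Per-content gain tables derived (hypothetically) from a hearing profile:
# rows are (spl_upper_threshold, gain_db); the first matching row wins,
# so quieter sub-bands receive more gain.
GAIN_TABLES = {
    "telephony": [(40.0, 12.0), (70.0, 6.0), (float("inf"), 0.0)],
    "media":     [(40.0, 8.0),  (70.0, 3.0), (float("inf"), 0.0)],
}

def subband_gain(energy: float, content: str) -> float:
    """Look up the gain for one sub-band given its energy and content type."""
    spl = subband_spl(energy)
    for threshold, gain in GAIN_TABLES[content]:
        if spl < threshold:
            return gain
    return 0.0
```

The split into separate telephony and media tables mirrors the abstract's first/second lookup tables; a real device would hold one table per content class and per hearing profile.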
AUDIO SYSTEM FOR ARTIFICIAL REALITY APPLICATIONS
Embodiments relate to an audio system for various artificial reality applications. The audio system performs large scale filter optimization for audio rendering, preserving spatial and intra-population characteristics using neural networks. Further, the audio system performs adaptive hearing enhancement-aware binaural rendering. The audio system includes an in-ear device with an inertial measurement unit (IMU) and a camera. The camera captures image data of a local area, and the image data is used to correct for IMU drift. In some embodiments, the audio system calculates a transducer to ear response for an individual ear using an equalization prediction or acoustic simulation framework. Individual ear pressure fields as a function of frequency are generated. Frequency-dependent directivity patterns of the transducers are characterized in the free field. In some embodiments, the audio system includes a headset and one or more removable audio apparatuses for enhancing acoustic features of the headset.
SYSTEMS AND METHODS FOR ADJUSTING CLARITY OF AN AUDIO OUTPUT
A method for adjusting the clarity of an audio output in a changing environment, including: receiving a content signal; applying a customized gain to the content signal; and outputting the content signal with the customized gain to at least one speaker for transduction to an acoustic signal, wherein the customized gain is applied on a per frequency bin basis such that frequencies of a lesser magnitude are enhanced with respect to frequencies of a greater magnitude and an intelligibility of the acoustic signal is set approximately at a desired level, wherein the customized gain is determined according to at least one of a gain applied to the content signal, a bandwidth of the content signal, and a content type encoded by the content signal.
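The per-frequency-bin gain in the claim above, which enhances lesser-magnitude frequencies relative to greater-magnitude ones, can be approximated by compressing the spectral dynamic range. The FFT-based approach and the `strength` exponent below are assumptions for illustration, not the patent's method.

```python
import numpy as np

def clarity_gain(signal: np.ndarray, strength: float = 0.5) -> np.ndarray:
    """Apply a per-bin gain so weaker bins are raised toward stronger ones.

    strength=0 leaves the spectrum unchanged; strength=1 would flatten all
    bin magnitudes to the reference. Values in between compress the range.
    """
    spectrum = np.fft.rfft(signal)
    mag = np.abs(spectrum)
    ref = mag.max() + 1e-12          # strongest bin as the reference level
    # gain > 1 for bins below the reference, == 1 at the reference
    gain = (ref / (mag + 1e-12)) ** strength
    return np.fft.irfft(spectrum * gain, n=len(signal))
```

In a real system the gain would also be steered by the claim's other inputs (applied gain, bandwidth, content type) and bounded so intelligibility sits near the desired level rather than maximized.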
MEDIA SYSTEM AND METHOD OF ACCOMMODATING HEARING LOSS
A media system and a method of using the media system to accommodate hearing loss of a user, are described. The method includes selecting a personal level-and-frequency-dependent audio filter that corresponds to a hearing loss profile of the user. The personal level-and-frequency-dependent audio filter can be one of several level-and-frequency-dependent audio filters having respective average gain levels and respective gain contours. An accommodative audio output signal can be generated by applying the personal level-and-frequency-dependent audio filter to an audio input signal to enhance the audio input signal based on an input level and an input frequency of the audio input signal. The audio output signal can be played by an audio output device to deliver speech or music that the user perceives clearly, despite the hearing loss of the user. Other aspects are also described and claimed.
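The filter-selection step above, choosing one of several preset filters by its average gain level, could be sketched like this. The preset names, their average gain levels, and the nearest-match rule are illustrative assumptions only.

```python
# Hypothetical preset filters, each summarized by its average gain level (dB).
PRESET_FILTERS = {
    "mild":     10.0,
    "moderate": 25.0,
    "severe":   45.0,
}

def select_filter(hearing_loss_db: list[float]) -> str:
    """Pick the preset whose average gain level best matches the user's
    average hearing loss across measured frequencies."""
    avg_loss = sum(hearing_loss_db) / len(hearing_loss_db)
    return min(PRESET_FILTERS,
               key=lambda name: abs(PRESET_FILTERS[name] - avg_loss))
```

A full implementation would also match the gain contour shape (not just the average) against the per-frequency hearing loss profile.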
SMART GLASS INTERFACE FOR IMPAIRED USERS OR USERS WITH DISABILITIES
A headset designed for inclusion of users with impairments is provided. The headset includes a frame, two eyepieces mounted on the frame, and at least one microphone and a speaker, mounted on the frame. The headset also includes a camera, a memory configured to store multiple instructions, and a processor configured to execute the instructions, wherein the instructions, when executed, cause the processor to provide to a user an environmental context from a signal provided by the microphone and the camera. A method for using the above headset and a system for performing the method are also provided.
METHOD AND ELECTRONIC DEVICE FOR PERSONALIZED AUDIO ENHANCEMENT
Embodiments herein disclose a method and electronic device for personalized audio enhancement. The method includes: receiving, by the electronic device, a plurality of inputs in response to an audiogram test. The method includes generating, by the electronic device, a first audiogram representative of a first personalized audio setting to suit a first ambient context, based on the received inputs. The method also includes determining a change from the first ambient context to a second ambient context for an audio playback, analyzing a plurality of contextual parameters during the audio playback in the second ambient context, and generating a second audiogram representative of a second personalized audio setting to suit the second ambient context based on the analysis of the plurality of contextual parameters, by the electronic device.
DEVICES, SYSTEMS AND PROCESSES FOR AN ADAPTIVE AUDIO ENVIRONMENT BASED ON USER LOCATION
A process for adapting an audio environment based on a current user location includes initializing a wearable device with a hub, determining a device location, generating a sound property for content based on the location, adjusting a sound based on the sound property, obtaining device motion data, obtaining an updated device location, generating a second sound property, and adjusting a second sound based on the second sound property. The first location and the updated first location may be determined by establishing a connection between the device and the hub, establishing a second connection between the device and a first access point, establishing a third connection between the device and a second access point, and calculating the locations by triangulating timing signals received by the device from the hub, the first access point, and the second access point. The sound properties may include first and second volume settings.
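The location step above, ranging against the hub and two access points from signal timing and then solving for position, amounts to 2-D trilateration. A minimal sketch, with assumed anchor geometry and the standard linearization of the three circle equations:

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def range_from_tof(seconds: float) -> float:
    """Convert a one-way time-of-flight measurement to a distance in meters."""
    return SPEED_OF_LIGHT * seconds

def trilaterate(anchors, distances):
    """Solve 2-D position from three (x, y) anchors and their ranges.

    Subtracting the circle equations pairwise removes the quadratic terms,
    leaving a 2x2 linear system solved by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    a = 2 * (x2 - x1); b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2); e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a * e - b * d  # zero if the three anchors are collinear
    x = (c * e - b * f) / det
    y = (a * f - c * d) / det
    return x, y
```

In practice the timing signals would need clock synchronization or round-trip measurement, and the anchors (hub and access points) must not be collinear for the system to be solvable.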
Systems and methods for assisting the hearing-impaired using machine learning for ambient sound analysis and alerts
Systems and methods for assisting the hearing-impaired are described. The methods rely on obtaining audio signals from the ambient environment of a hearing-impaired person. The audio signals are analyzed by a machine learning model that can classify audio signals into audio categories (e.g. Emergency, Animal Sounds) and audio types (e.g. Ambulance Siren, Dog Barking) and notify the user leveraging a mobile or wearable device. The user can configure notification preferences and view historical logs. The machine learning classifier is periodically trained externally based on labelled audio samples. Additional system features include an audio amplification option and a speech-to-text option for transcribing human speech to text output.
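The notification flow around the classifier, checking a prediction against user preferences and keeping a historical log, could be sketched as below. The classifier itself is stubbed out; the class name and fields are hypothetical, while the category/type examples follow the abstract.

```python
from dataclasses import dataclass, field

@dataclass
class AlertSystem:
    """Hypothetical wrapper around the trained classifier's output."""
    enabled_categories: set[str]                       # user preferences
    history: list[tuple[str, str]] = field(default_factory=list)

    def handle(self, category: str, sound_type: str) -> bool:
        """Log the classified (category, type) pair; return True if the
        user's preferences call for a notification."""
        self.history.append((category, sound_type))
        return category in self.enabled_categories
```

Keeping every classification in `history` regardless of notification preference matches the abstract's separate "view historical logs" feature.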
HEARING ASSIST DEVICE EMPLOYING DYNAMIC PROCESSING OF VOICE SIGNALS
Various implementations include systems for processing audio signals. In particular implementations, a system includes at least one microphone configured to capture acoustic signals; a wearable hearing assist device configured to amplify captured acoustic signals from the at least one microphone and output amplified audio signals to a transducer; a voice activity detector (VAD) configured to detect voice signals of a user from the captured acoustic signals; and a voice suppression system configured to suppress the voice signals of the user from the amplified audio signals being output to the transducer.
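The own-voice suppression path above, attenuating the amplified signal wherever the VAD flags the user's voice, could be sketched as follows. The sample-level flag representation and the attenuation factor are illustrative assumptions.

```python
def suppress_own_voice(samples: list[float],
                       vad_flags: list[bool],
                       attenuation: float = 0.2) -> list[float]:
    """Scale down amplified samples wherever the VAD detected the
    user's own voice, leaving other samples untouched."""
    return [s * attenuation if voiced else s
            for s, voiced in zip(samples, vad_flags)]
```

A real hearing assist device would operate on frames rather than individual samples and would smooth the gain transitions to avoid audible artifacts.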