Patent classifications
H04R25/50
GENERATING A HEARING ASSISTANCE DEVICE SHELL
Systems and methods may be used to determine a fit for a hearing assistance device shell model. For example, a method may include receiving an image of anatomy of a patient including at least a portion of a canal aperture of an ear of the patient, generating a patient model of a portion of the anatomy of the patient, the patient model indicating at least one of a height or width of the canal aperture, and determining, using the patient model, a best fit model from a set of hearing assistance device shell models generated using a machine learning technique. The method may include outputting an identification of the best fit model.
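The best-fit determination described above can be illustrated as a nearest-neighbor search over the measured aperture dimensions. This is a minimal sketch under stated assumptions: the patent's shell models are generated with a machine learning technique not specified here, and the `ShellModel` class and `best_fit_model` function are illustrative names, not the patent's implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class ShellModel:
    model_id: str
    aperture_height_mm: float
    aperture_width_mm: float

def best_fit_model(patient_height_mm, patient_width_mm, shell_models):
    # Nearest neighbor in (height, width) space, Euclidean distance in mm:
    # the shell whose aperture dimensions lie closest to the patient model.
    return min(
        shell_models,
        key=lambda m: math.hypot(
            m.aperture_height_mm - patient_height_mm,
            m.aperture_width_mm - patient_width_mm,
        ),
    )
```

In practice the patient model would carry more anatomy than two scalars; the sketch only shows how a discrete set of candidate shells can be ranked against it.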
Hearing device and a method of selecting an optimal transceiver channel in a wireless network
A method in a wireless network comprising a plurality of frequency channels and a receiving participant, the method includes: receiving data on a first subset of the plurality of frequency channels, wherein the frequency channels in the first subset are utilized at least once; receiving data on a second subset of the plurality of frequency channels; determining packet error rates for the respective frequency channels in the first and the second subsets; and selecting one of the plurality of frequency channels as an optimal frequency channel based on a result from the act of determining.
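The selection step above can be sketched as choosing the channel with the lowest measured packet error rate. A simplified illustration, assuming per-channel counts of errored and total received packets are already available; the function names are hypothetical.

```python
def packet_error_rate(errored, total):
    # A channel with no received packets gets the worst possible rate,
    # so it is never selected over a channel with measurements.
    return errored / total if total else 1.0

def select_optimal_channel(channel_stats):
    # channel_stats: {channel_id: (errored_packets, total_packets)}.
    # The optimal channel is the one with the lowest packet error rate.
    return min(channel_stats,
               key=lambda ch: packet_error_rate(*channel_stats[ch]))
```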
HEARING AID WITH VOICE OR IMAGE RECOGNITION
A system for selectively amplifying audio signals may include a wearable camera configured to capture a plurality of images from an environment of a user and a microphone configured to capture sounds from an environment of the user. The system may also include a processor programmed to: receive the plurality of images captured by the camera; identify a representation of at least one recognized individual in at least one of the plurality of images; receive audio signals representative of the sounds captured by the microphone; cause selective conditioning of at least one audio signal received by the microphone from a region associated with the at least one recognized individual; and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sound to an ear of the user.
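The "recognized individual" step above could rest on any face-recognition method; one common approach, shown here purely as an assumption, is comparing a detected face embedding against enrolled reference embeddings by cosine similarity. The embedding vectors, the `recognize_individual` name, and the threshold are all illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize_individual(face_embedding, enrolled, threshold=0.8):
    # enrolled: {name: reference_embedding}. Returns the closest enrolled
    # individual, or None when no one is similar enough to count as
    # "recognized" (in which case no selective conditioning would occur).
    best = max(enrolled,
               key=lambda name: cosine_similarity(face_embedding, enrolled[name]))
    if cosine_similarity(face_embedding, enrolled[best]) < threshold:
        return None
    return best
```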
LIP-TRACKING HEARING AID
A system may include a wearable camera configured to capture a plurality of images from an environment of a user and a microphone configured to capture sounds from an environment of the user. The system may also include a processor programmed to receive the images; identify a representation of one individual in one of the images; identify a lip movement associated with a mouth of the individual, based on analysis of the images; receive audio signals representative of the sounds; identify, based on analysis of the sounds, a first audio signal associated with a first voice and a second audio signal associated with a second voice; cause selective conditioning of the first audio signal based on a determination that the first audio signal is associated with the identified lip movement; and cause transmission of the selectively conditioned first audio signal to a hearing interface device.
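The association between lip movement and a voice can be illustrated by correlating a lip-activity signal (extracted from the image sequence) with the amplitude envelope of each separated voice, then conditioning the best-matching one. A sketch under stated assumptions: the patent does not prescribe Pearson correlation, and the signal names are invented for illustration.

```python
import numpy as np

def match_voice_to_lips(lip_activity, voice_envelopes):
    # Pick the index of the voice whose amplitude envelope best
    # correlates (Pearson) with the lip-movement activity signal.
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b) / denom if denom else 0.0
    return max(range(len(voice_envelopes)),
               key=lambda i: corr(lip_activity, voice_envelopes[i]))
```

The selected index identifies the first audio signal to condition; the other voices would be left unamplified or attenuated.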
HEARING SYSTEM COMPRISING A PERSONALIZED BEAMFORMER
A hearing system configured to be located at or in the head of a user, comprises a) at least two microphones providing at least two electric input signals, b) an own voice detector, c) access to a database (O.sub.l, H.sub.l) comprising c1) relative or absolute own voice transfer function(s), and corresponding c2) absolute or relative acoustic transfer functions for a multitude of test-persons, d) a processor connectable to the at least two microphones, to the own voice detector, and to the database. The processor is configured A) to estimate an own voice relative transfer function for sound from the user's mouth to at least one of the at least two microphones, and B) to estimate personalized relative or absolute head related acoustic transfer functions from at least one spatial location other than the user's mouth to at least one of the microphones of the hearing system in dependence of the estimated own voice relative transfer function(s) and the database (O.sub.l, H.sub.l). The hearing system further comprises e) a beamformer configured to receive the at least two electric input signals, or processed versions thereof, and to determine personalized beamformer weights based on the personalized relative or absolute head related acoustic transfer functions or impulse responses. A method of determining personalized beamformer coefficients (w.sub.k) is further disclosed.
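The personalized beamformer weights can be illustrated with the standard MVDR formula, taking the personalized relative transfer function as the steering vector d: w = R⁻¹d / (dᴴR⁻¹d). Note the abstract does not name MVDR; this is one common weight rule consistent with "weights based on the personalized acoustic transfer functions", shown as an assumption.

```python
import numpy as np

def mvdr_weights(steering, noise_cov):
    # MVDR: w = R^{-1} d / (d^H R^{-1} d). Minimizes output noise power
    # while passing the target (the personalized transfer function d)
    # without distortion, i.e. w^H d == 1.
    r_inv_d = np.linalg.solve(noise_cov, steering)
    return r_inv_d / (steering.conj() @ r_inv_d)
```

With two microphones, `steering` has two complex entries (one per microphone) and `noise_cov` is the 2x2 noise covariance estimated during speech pauses.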
SYSTEMS AND METHODS FOR CAMERA AND MICROPHONE-BASED DEVICE
A hearing aid and related systems and methods. In one implementation, a hearing aid system may selectively amplify sounds emanating from a detected look direction of a user of the hearing aid system. The system may include a wearable camera configured to capture a plurality of images from an environment of the user; at least one microphone configured to capture sounds from an environment of the user; and at least one processor programmed to receive the plurality of images captured by the camera, receive audio signals representative of sounds received by the at least one microphone from the environment of the user, determine a look direction for the user based on analysis of at least one of the plurality of images, cause selective conditioning of at least one audio signal received by the at least one microphone from a region associated with the look direction of the user, and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sound to an ear of the user.
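Selecting the region associated with the user's look direction can be sketched as picking, from a fixed set of beamformer steering directions, the one closest to the detected look direction. A minimal illustration; directions in degrees and the function name are assumptions.

```python
def select_look_direction_beam(look_direction_deg, beam_directions_deg):
    # Choose the steering direction closest to the user's look direction,
    # measuring angular distance on the circle (so 350 deg is near 0 deg).
    def angular_distance(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return min(beam_directions_deg,
               key=lambda b: angular_distance(look_direction_deg, b))
```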
SELECTIVE AMPLIFICATION OF SPEAKER OF INTEREST
A system may include a camera configured to capture images from an environment of a user and a microphone configured to capture sounds from the environment of the user. The system may also include a processor programmed to: receive the images; identify a representation of a first individual and a representation of a second individual in the images; receive, from the microphone, a first audio signal associated with a voice of the first individual and a second audio signal associated with a voice of the second individual; detect an amplification criterion indicative of a voice amplification priority between the first individual and the second individual; selectively amplify the first audio signal relative to the second audio signal when the amplification criterion indicates that the first individual has voice amplification priority over the second individual; and cause transmission of the selectively amplified first audio signal to a hearing interface device.
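The "selectively amplify ... relative to" step reduces to applying different gains to the two voices once priority is decided. A deliberately simple sketch; the gain values and function name are assumptions, and a real device would apply gains per frequency band rather than per sample.

```python
def apply_priority_gain(first_signal, second_signal, first_has_priority,
                        boost=2.0, attenuate=0.5):
    # Selective conditioning: boost the prioritized speaker's samples
    # and attenuate the other speaker's samples.
    if first_has_priority:
        gains = (boost, attenuate)
    else:
        gains = (attenuate, boost)
    return ([s * gains[0] for s in first_signal],
            [s * gains[1] for s in second_signal])
```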
Sensory-based environmental adaption
Presented herein are techniques for monitoring the sensory outcome of a recipient of a sensory prosthesis in an ambient environment that includes one or more controllable network connected devices. The sensory outcome of the recipient in the environment is used to make operational changes to the one or more controllable network connected devices in order to create an improved environment for the recipient.
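The control loop described above can be sketched as a rule that maps a poor sensory outcome (here reduced to a single estimated SNR figure, an assumption) to commands for controllable noise sources. The threshold, command vocabulary, and function name are all illustrative.

```python
def adapt_environment(estimated_snr_db, noise_source_devices,
                      snr_threshold_db=10.0):
    # When the recipient's estimated listening outcome is poor (low SNR),
    # issue commands to turn down controllable noise sources; otherwise
    # leave the environment unchanged.
    if estimated_snr_db >= snr_threshold_db:
        return []
    return [(device, "reduce_volume") for device in noise_source_devices]
```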
METHOD AND APPARATUS FOR HEARING IMPROVEMENT BASED ON COCHLEAR MODEL
There is provided an apparatus for hearing measurement based on a cochlear model comprising: a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor to output an interface in which n buttons corresponding to n frequency bands into which an audible frequency band is divided at 1/k octave resolution are arranged in a cochlear model, output acoustic signals corresponding to a predetermined hearing threshold in each of the n frequency bands, receive a user's input indicating whether each of the n frequency bands is inaudible at the predetermined hearing threshold, and output acoustic stimulation signals at predetermined magnitudes corresponding to the inaudible frequency bands input by the user.
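The 1/k-octave division above follows directly from the definition of an octave (a doubling of frequency): the i-th band center is f_low · 2^(i/k). A small sketch of that arithmetic; the function name and the inclusive upper bound are assumptions.

```python
def octave_band_centers(f_low_hz, f_high_hz, k):
    # Divide [f_low_hz, f_high_hz] at 1/k-octave resolution:
    # the i-th center frequency is f_low * 2**(i / k), so consecutive
    # centers differ by a factor of 2**(1/k).
    centers = []
    i = 0
    f = float(f_low_hz)
    while f <= f_high_hz:
        centers.append(f)
        i += 1
        f = f_low_hz * 2.0 ** (i / k)
    return centers
```

With k = 1 this yields full octaves (250, 500, 1000, 2000 Hz across that range); larger k gives the finer resolution the interface's n buttons would need.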
Ear-worn device configured for over-the-counter and prescription use
According to some embodiments, an ear-worn device, e.g., a hearing aid, is provided that operates both as an over-the-counter device, as well as a prescription device. Features stored on the ear-worn device may be used to amplify, enhance, de-noise, or otherwise process audio signals in a manner desired by the user. Some features, or settings of those features, process audio signals in a manner that is unsafe for users with mild-to-moderate hearing loss and thus are disabled when the ear-worn device is operating as an over-the-counter device. Such features and settings may be enabled after the ear-worn device is fit to the user by a licensed professional.
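The gating described above amounts to a feature table with a prescription-only flag, consulted against the device's fitting state. A minimal sketch; the flag representation and function name are assumptions, and a real device would also gate individual settings, not just whole features.

```python
def enabled_features(features, fitted_by_professional):
    # features: {feature_name: prescription_only_flag}. Prescription-only
    # features stay disabled until a licensed professional has fit the
    # device; over-the-counter features are always available.
    return [name for name, rx_only in features.items()
            if fitted_by_professional or not rx_only]
```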