Patent classifications
H04R25/45
MACHINE LEARNING BASED SELF-SPEECH REMOVAL
Various implementations include systems for processing audio signals. In particular implementations, a process includes receiving an audio signal, wherein the audio signal includes a speech component of the user and a noise component; filtering the audio signal with a self-speech filter that utilizes an intrinsic user vector to filter out the speech component, wherein the intrinsic user vector is determined based on a voice input of the user; and outputting a filtered audio signal in which the speech component of the user has been substantially removed from the audio signal.
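The abstract does not disclose the filter's internals, but the idea can be illustrated as a sketch that compares per-frame voice embeddings against an enrolled "intrinsic user vector" and attenuates frames that match. The function names, the cosine-similarity gate, and the hard threshold are all assumptions for illustration, not the patented method:

```python
import numpy as np

def self_speech_filter(frames, frame_embeddings, user_vector, threshold=0.7):
    """Attenuate frames whose voice embedding closely matches the user's.

    frames:           (n_frames, frame_len) audio frames
    frame_embeddings: (n_frames, d) per-frame voice embeddings
    user_vector:      (d,) enrolled vector derived from the user's voice input

    Illustrative sketch: a real system would likely use soft gains and a
    learned embedding model rather than a hard cosine threshold.
    """
    user = user_vector / np.linalg.norm(user_vector)
    emb = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    similarity = emb @ user  # cosine similarity of each frame to the user vector
    gain = np.where(similarity > threshold, 0.0, 1.0)  # suppress matching frames
    return frames * gain[:, None]
```

A usage sketch: enroll `user_vector` once from a clean voice sample, then run each incoming frame's embedding through the gate before playback.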
HEARING DEVICE FOR OCCLUSION REDUCTION AND COMPONENTS THEREOF
An earpiece includes: a first end; a second end opposite from the first end; a first channel extending from a first location that is closer to the first end than to the second end, to a second location that is closer to the second end than to the first end; and a first diaphragm, wherein the first diaphragm has a first surface and a second surface opposite the first surface, the first surface of the first diaphragm configured to be in fluid communication with a lumen in the first channel, wherein the first diaphragm extends in a direction that is parallel to, or that forms an acute angle with, a longitudinal axis of the first channel.
INNER EAR APPARATUS
A hearing-aid apparatus includes an audio collector, an audio processor, a speaker and an inner-ear implant device. The audio collector collects audio information. The speaker generates a sound wave by reference to the audio information. The sound wave is transmitted to an inner ear of a user. The inner-ear implant device is implanted beside the round window of the user for generating an electrical signal encoded by the audio processor based on the audio information to stimulate high frequency auditory nerve cells of the user with an electrode of the inner-ear implant device. The sound wave sent to the inner ear and the electrical signal stimulating the auditory nerve cells together form an integrated auditory recognition corresponding to the audio information.
EARRING HEARING AID USING RIT
The present disclosure relates to an earring hearing aid using a receiver in the tube (RIT). The hearing aid includes a case which receives therein a microphone, an interface socket, an acoustic processing means, a memory button, a volume controller, a battery and a battery electrode; a receiver which outputs an acoustic signal through a lead wire having one end connected to the acoustic processing means; a hearing aid shell which receives the end of the receiver tube opposite the side where the receiver is inserted along the lengthwise direction of the tube; a face plate covering an inlet of the hearing aid shell; and an earring hook having one side inserted into an outer peripheral surface of an exit hole of the case to cover an outer peripheral surface of the lead wire exiting through the exit hole.
ULTRASONIC HEARING SYSTEM AND RELATED METHODS
A hearing system to activate an auditory system using cerebrospinal fluids includes at least one processor configured to receive an audio signal captured using a sound sensor (e.g., a microphone), extract temporal and spectral features from the audio signal, and create ultrasound signals modulated by audio content in the range of 20 Hz to 20 kHz, with ultrasound carrier frequencies in the range of 50 kHz to 4 MHz, which are ultrasound frequencies that are well-suited to reach the cerebrospinal fluids (e.g., can pass across the skull/bones to reach the cerebrospinal fluids). The system further includes at least one ultrasonic transducer which receives the modulated signal, delivers it to the body, and activates the auditory system via vibration of cerebrospinal fluids that vibrate cochlear fluids, bypassing the normal conductive pathway that uses middle ear bones and minimizing bone conduction and distortion through the skull.
IDENTIFYING INFORMATION AND ASSOCIATED INDIVIDUALS
A hearing aid system for identifying individuals may include a wearable camera, a microphone, and at least one processor. The processor may be programmed to receive a plurality of images captured by the wearable camera; receive audio signals representative of sounds captured by the microphone; and identify a first audio signal, from among the received audio signals, representative of a voice of a first individual. The processor may transcribe and store, in a memory, text corresponding to speech associated with the voice of the first individual and determine whether the first individual is a recognized individual. If the first individual is a recognized individual, the processor may associate an identifier of the first recognized individual with the stored text corresponding to the speech associated with the voice of the first individual.
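The transcribe-then-associate flow described above can be sketched as a simple keyed store; the structure and names are illustrative assumptions, since the abstract does not describe a data layout:

```python
# Map from an individual's identifier (or None for unrecognized speakers)
# to the list of transcribed text segments attributed to that voice.
transcripts = {}

def store_transcript(text, individual_id=None):
    """Store transcribed speech; associate it with a recognized individual
    when an identifier is available, otherwise file it under None."""
    transcripts.setdefault(individual_id, []).append(text)
```

In the abstract's terms, `individual_id` would be supplied only after the recognition step succeeds; unrecognized speech still gets stored, just without an associated identifier.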
USING VOICE AND VISUAL SIGNATURES TO IDENTIFY OBJECTS
Disclosed is a system for identifying sound-emanating objects in an environment of a user. The system may comprise at least one memory device and at least one processor programmed to: receive a plurality of images captured by a wearable camera; analyze at least one of the received plurality of images to determine one or more visual characteristics associated with at least one sound-emanating object; identify, within a database and in view of the one or more visual characteristics, the at least one sound-emanating object and determine a degree of certainty of identification; receive audio signals acquired by a wearable microphone; analyze the received audio signals to determine a voiceprint of the at least one sound-emanating object; identify the at least one sound-emanating object based on the determined voiceprint; and initiate at least one action based on an identity of the at least one sound-emanating object.
SELECTIVE INPUT FOR A HEARING AID BASED ON IMAGE DATA
Disclosed is a hearing aid system for selectively conditioning audio signals. The system may comprise a processor programmed to: receive a plurality of images captured by a wearable camera, wherein the plurality of images depict objects in an environment of a user; receive audio signals acquired by a wearable microphone, wherein the audio signals are representative of sounds emanating from the objects; analyze the plurality of images to identify at least one sound-emanating object in the environment of the user; retrieve, from a database, information about the at least one identified sound-emanating object; cause, based on the retrieved information, selective conditioning of at least one audio signal received by the wearable microphone from a region associated with the at least one sound-emanating object; and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sounds to an ear of the user.
DIFFERENTIAL AMPLIFICATION RELATIVE TO VOICE OF SPEAKERPHONE USER
A system may include a wearable camera configured to capture images and a microphone configured to capture sounds. The system may also include a processor programmed to receive the images; identify a representation of one or more individuals in the images; receive from the microphone a first audio signal associated with a voice; determine, based on analysis of the images, that the first audio signal is not associated with a voice of any of the one or more individuals; receive from the microphone a second audio signal associated with a voice; determine, based on analysis of the images, that the second audio signal is associated with a voice of one of the one or more individuals; and cause a first amplification of the first audio signal and a second amplification of the second audio signal. The first amplification may differ from the second amplification in at least one aspect.
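The differential-amplification step reduces to selecting a gain based on whether image analysis matched the voice to a visible individual. The specific gain values below are illustrative assumptions; the abstract requires only that the two amplifications differ:

```python
import numpy as np

def amplify_by_match(audio, is_matched, gain_matched=2.0, gain_unmatched=0.5):
    """Amplify a voice signal with a gain chosen by the image-analysis result.

    is_matched: True if the voice was attributed to an individual visible
    in the images. Gain values are hypothetical, not from the patent.
    """
    gain = gain_matched if is_matched else gain_unmatched
    return gain * np.asarray(audio, dtype=float)
```

In practice the two gains would likely be tuned (or learned) so that speech from visible conversation partners is boosted relative to off-camera voices.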
SELECTIVELY CONDITIONING AUDIO SIGNALS
A system may include a wearable camera configured to capture images and a microphone configured to capture sounds. The system may also include a processor programmed to receive the images; identify a representation of an individual in at least one of the images; determine whether the individual is a recognized individual; if the individual is determined to be a recognized individual, cause an image of the individual to be shown on a display and selectively condition at least one audio signal determined to be associated with the recognized individual; and cause transmission of the at least one conditioned audio signal to a hearing interface device configured to provide sound to an ear of a user.