Systems and Methods for Assisting the Hearing-Impaired Using Machine Learning for Ambient Sound Analysis and Alerts
20210225365 · 2021-07-22
Inventors
CPC classification
G06F18/214 (PHYSICS)
G10L21/06 (PHYSICS)
H04R5/04 (ELECTRICITY)
H04R2205/041 (ELECTRICITY)
G06F18/241 (PHYSICS)
G06F9/542 (PHYSICS)
H04R2430/01 (ELECTRICITY)
International classification
Abstract
Systems and methods for assisting the hearing-impaired are described. The methods rely on obtaining audio signals from the ambient environment of a hearing-impaired person. The audio signals are analyzed by a machine learning model that classifies them into audio categories (e.g., Emergency, Animal Sounds) and audio types (e.g., Ambulance Siren, Dog Barking) and notifies the user through a mobile or wearable device. The user can configure notification preferences and view historical logs. The machine learning classifier is periodically trained externally on labelled audio samples. Additional system features include an audio amplification option and a speech-to-text option for transcribing human speech to text output.
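For illustration only, the following Python sketch outlines the capture, noise-reduction, classification, and notification flow summarized above. The function names (denoise, classify, speech_to_text, notify), the Detection structure, and the preference table are hypothetical placeholders and are not part of the disclosure.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    category: str    # e.g., "Emergency"
    audio_type: str  # e.g., "Ambulance Siren"

# Hypothetical user preferences: which audio categories trigger a notification.
USER_PREFERENCES = {"Emergency": True, "Animal Sounds": True, "Music": False}

def denoise(samples: List[float]) -> List[float]:
    """Placeholder for the noise- and interference-reduction step."""
    return samples

def classify(samples: List[float]) -> Detection:
    """Placeholder for the machine learning classifier."""
    return Detection(category="Emergency", audio_type="Ambulance Siren")

def speech_to_text(samples: List[float]) -> str:
    """Placeholder for the optional speech-to-text feature."""
    return "transcribed speech"

def notify(detection: Detection, text: str) -> None:
    """Placeholder for a mobile push or wearable vibration notification."""
    print(f"[{detection.category}] {text}")

def process(samples: List[float]) -> None:
    cleaned = denoise(samples)
    detection = classify(cleaned)
    if not USER_PREFERENCES.get(detection.category, True):
        return  # the user has muted this category
    text = f"Detected {detection.audio_type}"
    if detection.category == "Speech":
        text = speech_to_text(cleaned)  # optional transcription branch
    notify(detection, text)

process([0.0] * 16000)  # one second of silent 16 kHz audio as a stand-in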
Claims
1. A system comprising: an audio receiver; a processing system connected to the audio receiver; a notification system connected to the processing system, wherein the processing system is configured to obtain an audio signal from the audio receiver; process the audio signal to reduce noise and interference; run a machine learning based classifier to analyze the audio signal; classify the audio signal into an audio category and audio type based on the machine learning based classifier; and notify a user via the notification system of the detected audio category and type; wherein, for the notification, the user is presented with text associated with the classified audio, and, for the specific type of audio, the user is presented with a meaningful description of how the machine learning process characterized the isolated signals.
2. The system of claim 1, wherein the processing system has a filter and an amplifier to output an improved copy of the received audio signal to a user's hearing device or store it digitally.
3. The system of claim 1, wherein the processing system is further configured to, responsive to the audio category and audio type being speech, convert the speech received via the audio receiver to text output.
4. The system of claim 1, wherein the notification system is a mobile device push notification configured by the user.
5. The system of claim 1, wherein the notification system is a wearable device that can generate vibration alerts and display information on a digital screen.
6. The system of claim 1, wherein the notification preferences can be configured by the user based on audio category and audio type.
7. The system of claim 1, wherein the machine learning classifier is periodically trained externally based on labelled audio sample data and updated in the system.
8. The system of claim 7, where the machine learning training system is further configured to receive feedback from the user that the detected audio category and type were incorrect or unknown, and process the feedback for the labelled audio sample data.
9. The system of claim 1, where the entire system is running as an application on a mobile phone, wherein the audio receiver is the microphone on the mobile device, the processing system is the CPU on the mobile device and the notification system is the screen and vibration alerts.
10. The system of claim 1, wherein the audio receiver is a separate device communicatively coupled to the processing system running on a mobile device.
11. A method comprising: obtaining an audio signal from an audio receiver; processing the audio signal to reduce noise and interference; running a machine learning based classifier to analyze the audio signal; classifying the audio signal into an audio category and audio type; and notifying a user via a notification system of the detected audio category and type; wherein, for the notification, the user is presented with text associated with the classified audio, and, for the specific type of audio, the user is presented with a meaningful description of how the machine learning process characterized the isolated signals.
12. The method of claim 11, further comprising an amplifier and a filter to output an improved copy of the received audio signal to a user's hearing device or store it digitally.
13. The method of claim 11, wherein the processing step includes conversion of speech to text, responsive to the audio category and audio type being speech.
14. The method of claim 11, wherein the notification method is a mobile device push notification.
15. The method of claim 11, wherein the notification method uses a wearable device that can generate vibration alerts and display information on a digital screen.
16. The method of claim 11, wherein the notification preferences can be configured by the user based on audio category and audio type.
17. The method of claim 11, wherein the machine learning classifier is periodically trained externally based on labelled audio sample data and updated.
18. The method of claim 11, where the machine learning training includes steps to receive feedback from the user that the detected audio category and type were incorrect or unknown, and process the feedback for the labelled audio sample data.
19. A non-transitory computer-readable medium comprising instructions that, when executed, cause a processing system to perform the steps of: obtaining an audio signal from an audio receiver; processing the audio signal to reduce noise and interference; running a machine learning based classifier to analyze the audio signal; classifying the audio signal into an audio category and audio type; and notifying a user via a notification system of the detected audio category and type; wherein, for the notification, the user is presented with text associated with the classified audio, and, for the specific type of audio, the user is presented with a meaningful description of how the machine learning process characterized the isolated signals.
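Claims 7, 8, 17, and 18 recite periodic external training of the classifier on labelled audio samples, including user feedback when a detected category or type was incorrect or unknown. The following Python sketch shows one way such feedback might be appended to the labelled sample data; the function name, record fields, and JSON-lines storage format are assumptions made for illustration and are not specified in the claims.

import json
from pathlib import Path
from typing import Optional

def record_feedback(clip_path: str, detected_type: str,
                    corrected_type: Optional[str],
                    store: Path = Path("labelled_samples.jsonl")) -> None:
    """Append one user-feedback entry to the labelled-sample store that the
    external training process later consumes (storage format assumed)."""
    entry = {
        "clip": clip_path,
        "detected_type": detected_type,
        # None indicates the user marked the detection as incorrect or unknown
        # without supplying a correction; such clips can be queued for manual
        # labelling before the next training cycle.
        "label": corrected_type,
    }
    with store.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: the system reported "Dog Barking" but the user corrects it.
record_feedback("clips/sample-001.wav", "Dog Barking", "Doorbell")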
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The present disclosure is illustrated and described herein with reference to the various drawings, in which like reference numbers are used to denote like system components/method steps, as appropriate, and in which:
DETAILED DESCRIPTION OF THE DISCLOSURE
[0017] In various embodiments, the present disclosure relates to systems and methods for assisting the deaf and hearing-impaired. The systems and methods may use mobile devices or other smart technology (e.g., iPhone and Android devices, tablets, smart watches, etc.) that can detect and process ambient sounds, output information, respond to user input (e.g., via audio or touch), and store data sets. Together, these capabilities form a system in which the hearing-impaired can utilize technology that informs them of nearby sounds by classifying the sounds into audio categories and types. Examples of audio categories include Animal Sounds, Emergency, Devices, Vehicles, Speech, Music, etc. Each audio category can have multiple specific audio types; for the audio categories listed above, specific audio types could be Dog Barking, Ambulance Siren, Telephone Ring, Garbage Truck, English Conversation, Piano, etc.
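As a minimal sketch of the taxonomy described in this paragraph, the following Python fragment maps the example audio types to their audio categories; the dictionary layout and the resolve_label helper are illustrative assumptions only.

# Illustrative taxonomy built from the example categories and types above.
AUDIO_TAXONOMY = {
    "Animal Sounds": ["Dog Barking"],
    "Emergency": ["Ambulance Siren"],
    "Devices": ["Telephone Ring"],
    "Vehicles": ["Garbage Truck"],
    "Speech": ["English Conversation"],
    "Music": ["Piano"],
}

# Reverse lookup so a type-level prediction resolves to its parent category.
TYPE_TO_CATEGORY = {
    audio_type: category
    for category, types in AUDIO_TAXONOMY.items()
    for audio_type in types
}

def resolve_label(predicted_type: str) -> tuple:
    """Map a classifier's type-level prediction to an (audio category, audio type) pair."""
    return TYPE_TO_CATEGORY.get(predicted_type, "Unknown"), predicted_type

# Example: a prediction of "Ambulance Siren" is reported as the
# "Emergency" category with the "Ambulance Siren" type.
print(resolve_label("Ambulance Siren"))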
[0024] It will be appreciated that some embodiments described herein may include or utilize one or more generic or specialized processors ("one or more processors") such as microprocessors; Central Processing Units (CPUs); Digital Signal Processors (DSPs); customized processors such as Network Processors (NPs) or Network Processing Units (NPUs), Graphics Processing Units (GPUs), or the like; Field-Programmable Gate Arrays (FPGAs); and the like along with unique stored program instructions (including both software and firmware) for control thereof to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or systems described herein. Alternatively, some or all functions may be implemented by a state machine that has no stored program instructions, or in one or more Application-Specific Integrated Circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic or circuitry. Of course, a combination of the approaches may be used. For some of the embodiments described herein, a corresponding device in hardware and optionally with software, firmware, and a combination thereof can be referred to as "circuitry configured to," "logic configured to," etc. perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. on digital and/or analog signals as described herein for the various embodiments.
[0025] Moreover, some embodiments may include a non-transitory computer-readable medium having instructions stored thereon for programming a computer, server, appliance, device, processor, circuit, etc. to perform functions as described and claimed herein. Examples of such non-transitory computer-readable medium include, but are not limited to, a hard disk, an optical storage device, a magnetic storage device, a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically EPROM (EEPROM), Flash memory, and the like. When stored in the non-transitory computer-readable medium, software can include instructions executable by a processor or device (e.g., any type of programmable circuitry or logic) that, in response to such execution, cause a processor or the device to perform a set of operations, steps, methods, processes, algorithms, functions, techniques, etc. as described herein for the various embodiments.
[0026] Although the present disclosure has been illustrated and described herein with reference to preferred embodiments and specific examples thereof, it will be readily apparent to those of ordinary skill in the art that other embodiments and examples may perform similar functions and/or achieve like results. All such equivalent embodiments and examples are within the spirit and scope of the present disclosure, are contemplated thereby, and are intended to be covered by the following claims.