Method and system for smart navigation for the visually impaired
20210369545 · 2021-12-02
Inventors
CPC classification
G06V10/255 (PHYSICS)
G06F3/167 (PHYSICS)
G10L13/027 (PHYSICS)
G01S15/86 (PHYSICS)
International classification
G01S15/52 (PHYSICS)
Abstract
In 2019, the World Health Organization stated that approximately 2.2 billion people worldwide live with some form of vision impairment. Visual impairment limits the ability to perform everyday tasks and adversely affects the ability to interact with the surrounding world, discouraging individuals from navigating unpredictable and unknown environments. The present invention is a method and a system to define and develop a smart navigation intelligent cane (i-Cane) that enables a visually impaired person to navigate his or her environment. The method and the system detect objects along the path of the visually impaired person, measure the distance of the objects from the person, identify the objects, and use speech to alert the person to the approaching objects, the type of objects obstructing the path, and the distance between the objects and the person.
Claims
1. A method to define and develop a smart navigation intelligent cane (i-Cane) that aids a visually impaired person in moving around the surroundings, the method comprising:
first, detecting the approaching objects along the path of the visually impaired person carrying the i-Cane using an ultrasonic sensor, and calculating the distance of the objects from the person carrying the i-Cane (Detect Object);
next, identifying the objects, if the distance between the approaching objects and the visually impaired person carrying the i-Cane meets a certain distance threshold (Identify Object): by capturing an image of the approaching objects (Capture Image), and by labeling and classifying the image of the approaching objects using computer vision technology (Classify Image);
finally, generating a voice alert using speech synthesis technology to indicate the type of the object and the distance between the object and the visually impaired person carrying the i-Cane, forewarning the person of the approaching object in a natural language (Generate Voice Alert); and
continuing the flow, repeating the steps of object detection, object identification (image capture and classification), and voice alert generation, as the visually impaired person continues along his or her path and as objects appear in the path.
2. A system for implementing and demonstrating the method, as described above, to define and develop a smart navigation intelligent cane (i-Cane) that enables a visually impaired person to navigate the environment, the system comprising:
a computing runtime that consists of: a single-board mini portable computing platform, such as the Raspberry Pi 3, mounted on an intelligent cane (i-Cane), providing an execution environment for the software components implementing the method described above; an ultrasonic sensor, such as the HC-SR04, connected to the single-board mini portable computing platform using a circuitry; and a camera, such as the Pi Camera, connected to the camera port on the single-board mini portable computing platform using a camera cable; and
a software program implementing multiple software components that:
detects the approaching object in the path of the visually impaired person carrying the i-Cane using the ultrasonic sensor and calculates the distance of the object from the person carrying the i-Cane;
triggers signals to the ultrasonic sensor to measure the distance of the obstacle and then waits to receive the echo back from the sensor;
calculates the distance between the ultrasonic sensor on the i-Cane and the approaching object using the formula
S=2D/t, therefore, D=(S×t)/2
where S is the speed of sound (S=34030 cm/s), D is the distance between the approaching object and the sensor, and t is the time taken for the sensor to receive the echo back;
continues to detect the subsequent approaching objects, and does not attempt to identify the approaching object or generate a voice alert, if the distance between the approaching object and the visually impaired person carrying the i-Cane is greater than a distance threshold value that is configurable for a given person;
captures an image of the approaching objects by interfacing with the camera, such as the Pi Camera, if the distance between the approaching object and the visually impaired person carrying the i-Cane is less than the distance threshold value;
identifies the objects by calling the computer vision software, passing the captured image, and classifying the image based on the label annotations and the corresponding relevancy scores returned from the computer vision software; and
generates an audio alert using speech synthesis software indicating the type of the approaching object and the distance between the approaching object and the visually impaired person carrying the i-Cane, thereby alerting the person to the approaching object.
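The echo-timing relation stated in the claim, S=2D/t and therefore D=(S×t)/2, can be sketched as a small Python helper (a minimal illustration; the function name and the 10 ms example are not from the claim):

```python
# Speed of sound used by the claim, in cm/s.
SPEED_OF_SOUND_CM_S = 34030

def echo_to_distance_cm(echo_time_s: float) -> float:
    """Convert a round-trip echo time t into the one-way distance D.

    The ultrasonic pulse travels to the object and back, so the total
    path is 2D: S = 2D / t, hence D = (S * t) / 2.
    """
    return (SPEED_OF_SOUND_CM_S * echo_time_s) / 2

# Example: an echo received after 10 ms corresponds to roughly 170 cm.
print(echo_to_distance_cm(0.01))
```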
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0020] Visual impairment has a severe impact on the course of daily living, discouraging individuals from moving freely in an unknown environment. The world is full of dangers and wonders that are avoided or appreciated through vision. The physical world poses the greatest challenge for the visually impaired person. How does one know what and where things are, and how to obtain them? How does one get where he or she wants to go without the danger of colliding with the things around?
[0021] Blind individuals may be discouraged from moving freely and comfortably. What can help them identify the approaching objects in their path of navigation and determine the distance of the objects from them, whether they are moving about a house, walking in a mall, or strolling through aisles in a grocery store?
[0022] The purpose of this invention is to define a method and a system for developing a simple and affordable way to assist visually impaired persons in navigating their environment. The method defines an approach to develop a smart navigation intelligent cane (i-Cane) that aids a visually impaired person in moving around the surroundings: [0023] first, by detecting the approaching objects in the path of the visually impaired person carrying the i-Cane, finding the distance between the approaching objects and the person, and then identifying the objects, leveraging an ultrasonic sensor, a camera, and computer vision technology; and [0024] finally, by generating a voice/speech alert for the visually impaired person in natural language using speech synthesis technology.
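The detect → identify → alert flow described above can be sketched as a short Python loop body. The sensor, camera, classifier, and speech calls are stand-ins (hypothetical function names injected as arguments), wired to simple stubs here so the control flow is visible:

```python
DISTANCE_THRESHOLD_CM = 150  # configurable per user, per the description

def navigation_step(measure_distance_cm, capture_image, classify_image, speak):
    """One iteration of the i-Cane loop: detect, identify, alert.

    All four parameters are injected callables standing in for the
    ultrasonic sensor, the Pi Camera, the computer-vision service,
    and the speech synthesizer described in the method.
    """
    distance = measure_distance_cm()          # Detect Object
    if distance >= DISTANCE_THRESHOLD_CM:
        return None                           # too far away: keep scanning
    image = capture_image()                   # Capture Image
    label = classify_image(image)             # Classify Image
    alert = f"{label} ahead, about {distance:.0f} centimetres away"
    speak(alert)                              # Generate Voice Alert
    return alert

# Stub wiring, for illustration only.
alert = navigation_step(
    measure_distance_cm=lambda: 120.0,
    capture_image=lambda: b"<jpeg bytes>",
    classify_image=lambda img: "chair",
    speak=print,
)
```

In a real deployment the loop would run continuously, calling `navigation_step` again as the person walks and new objects appear.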
[0025] In
[0032] As part of this invention, a system is also defined to demonstrate the method described above.
[0033] The Software Components (as shown by 205 in
[0037] In
[0038] 301 in
[0039] Connecting Ultrasonic Sensor to Raspberry Pi 3
[0040] The 5V Power pin of the ultrasonic sensor is connected to the GPIO 5V pin (Pin number 2) of the Raspberry Pi 3 as shown by 305 in
[0045] A software program using Python programming language is run on the Raspberry Pi 3 mini portable computing platform, as shown by 202 in
S=2D/t, therefore, D=(S×t)/2
[0047] where,
[0048] S is the speed of sound, so S=34030 cm/s
[0049] D is the distance between the approaching object and the sensor
[0050] t is the time taken for the sensor to receive the echo back
[0051] If the distance between the approaching object and the visually impaired person carrying the i-Cane is greater than a distance threshold value (e.g. 150 cm), the system does not attempt to identify the approaching object or generate a voice alert, and continues to detect the subsequent approaching object. The distance threshold value is configurable by the visually impaired person.
[0052] Object Identification is composed of Image Capture and Image Classification sub-components. The Image Capture sub-component takes a picture of the approaching object using the Pi Camera. The Image Classification sub-component calls the Computer Vision Software component on a cloud platform to determine the label annotations of the image and classifies the image based on the labels with the top relevancy scores.
[0053] The Voice Alert Generation component generates an audio alert using the Speech Synthesis software, indicating the type of the approaching object and the distance between the approaching object and the visually impaired person carrying the i-Cane, thereby alerting the person to the approaching object so that the person can take corrective action to avoid a potential collision with the object.
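The Image Classification step above — choosing a label from the annotations and relevancy scores returned by the cloud computer vision software — can be sketched as follows. The response shape (a list of label/score pairs) is an assumption modelled on typical label-annotation APIs, not a specific vendor's schema, and the 0.5 relevancy threshold is illustrative:

```python
def top_label(annotations, min_score=0.5):
    """Pick the highest-scoring label annotation, or None if nothing
    clears the relevancy threshold.

    `annotations` is assumed to be a list of (label, score) pairs,
    parsed from the computer vision software's response.
    """
    relevant = [(label, score) for label, score in annotations if score >= min_score]
    if not relevant:
        return None
    return max(relevant, key=lambda pair: pair[1])[0]

# Example parsed response from the classifier:
labels = [("furniture", 0.91), ("chair", 0.88), ("wood", 0.42)]
print(top_label(labels))  # -> furniture
```

The winning label and the measured distance would then be handed to the Voice Alert Generation component to be spoken to the user.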
NON-PATENT CITATIONS
[0054] WHO. World Report on Vision. World Health Organization, 2019.
[0055] Blackwell, Debra L., Lucas, Jacqueline W., and Clarke, Tainya C. "Summary Health Statistics for U.S. Adults: National Health Interview Survey, 2012". National Center for Health Statistics. Vital and Health Statistics 10(260), 2014.
[0056] Upton, Eben, and Gareth Halfacree. Raspberry Pi: User Guide. John Wiley & Sons, 2013.
[0057] Monk, Simon. Programming the Raspberry Pi, Second Edition: Getting Started with Python. McGraw-Hill Education, 2015.
[0058] McManus, Sean, and Mike Cook. Raspberry Pi for Dummies. John Wiley & Sons, 2013.