EDGE AND GENERATIVE AI-BASED SUSTAINABLE GPS NAVIGATED WEARABLE DEVICE FOR BLIND AND VISUALLY IMPAIRED PEOPLE
20230350073 · 2023-11-02
Assignee
- Sivaramapillai; Sujith (Doha, QA)
- Saji; Janita (Bangalore, IN)
- Sahu; Sangeeta (Raipur, IN)
- Dubey; Vikas (Raipur, IN)
- Dubey; Neha (Chhattisgarh, IN)
- Miri; Rohit (Chhattisgarh, IN)
Inventors
- Sujith Sivaramapillai (Doha, QA)
- Janita Saji (Bangalore, IN)
- Sangeeta Sahu (Raipur, IN)
- Vikas Dubey (Raipur, IN)
- Neha Dubey (Chhattisgarh, IN)
- Rohit Miri (Chhattisgarh, IN)
CPC classification
G08B7/00
PHYSICS
International classification
Abstract
The present invention relates to an EdgeGenAI based sustainable GPS navigated wearable device (100) for blind and visually impaired people. The device (100) comprises a GPS navigation unit, a plurality of sensors, an obstacle detection unit, a haptic feedback unit, an audio prompts unit, a central processing unit, a power source and a user interface unit. The GPS navigation unit is configured to provide real-time positioning and route guidance to the user. The audio prompts unit is configured to provide auditory instructions and information to the user during navigation. The power source is configured to supply electrical power to the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit and the haptic feedback unit. The user interface unit is configured to provide an intuitive and accessible interface for blind and visually impaired individuals.
Claims
1. An EdgeGenAI based sustainable GPS navigated wearable device (100) for blind and visually impaired people, comprising: a GPS navigation unit configured to provide real-time positioning and route guidance to the user; a plurality of sensors configured to detect the types of obstacles in the user's path; an obstacle detection unit configured to detect the presence of obstacles in the user's path; a haptic feedback unit configured to provide tactile feedback to the user based on the detected obstacles; an audio prompts unit configured to provide auditory instructions and information to the user during navigation; a central processing unit operationally connected with the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit and the haptic feedback unit, configured to control the functions performed by the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit and the haptic feedback unit; a power source operationally connected with the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit and the haptic feedback unit, configured to supply electrical power to the GPS navigation unit, the plurality of sensors, the obstacle detection unit, the audio prompts unit and the haptic feedback unit; and a user interface unit operationally connected with the central processing unit, configured to provide an intuitive and accessible interface for blind and visually impaired individuals.
2. The wearable device (100) as claimed in claim 1, wherein the GPS navigation unit is configured for efficient and fast processing of location data, ensuring reliable and up-to-date navigation assistance.
3. The wearable device (100) as claimed in claim 1, wherein the haptic feedback unit is configured to convey information to the user as vibrations or gentle pulses.
4. The wearable device (100) as claimed in claim 1, wherein the power source includes a battery and a solar panel.
5. The wearable device (100) as claimed in claim 1, wherein the power source is configured to optimize power usage, ensuring prolonged battery life and reducing the need for frequent charging.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] So that the manner in which the above recited features of the present invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.
[0021] These and other features, benefits, and advantages of the present invention will become apparent by reference to the following text and figures, with like reference numbers referring to like structures across the views.
DETAILED DESCRIPTION OF THE INVENTION
[0024] The following description is of exemplary embodiments only and is not intended to limit the scope, applicability or configuration of the invention in any way. Rather, the following description provides a convenient illustration for implementing exemplary embodiments of the invention. Various changes to the described embodiments may be made in the function and arrangement of the elements described without departing from the scope of the invention.
[0025] While the present invention is described herein by way of example using embodiments and illustrative drawings, those skilled in the art will recognize that the invention is not limited to the embodiments or drawings described, and the drawings are not intended to represent the scale of the various components. Further, some components that may form a part of the invention may not be illustrated in certain figures, for ease of illustration, and such omissions do not limit the embodiments outlined in any way. It should be understood that the drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents, and alternatives falling within the scope of the present invention as defined by the appended claims. As used throughout this description, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Further, the words “a” or “an” mean “at least one” and the word “plurality” means “one or more” unless otherwise mentioned. Furthermore, the terminology and phraseology used herein is solely used for descriptive purposes and should not be construed as limiting in scope. Language such as “including,” “comprising,” “having,” “containing,” or “involving,” and variations thereof, is intended to be broad and encompass the subject matter listed thereafter, equivalents, and additional subject matter not recited, and is not intended to exclude other additives, components, integers or steps. Likewise, the term “comprising” is considered synonymous with the terms “including” or “containing” for applicable legal purposes.
[0027] In accordance with an embodiment of the present invention, the novel EdgeGenAI (Edge Artificial Intelligence and Generative Artificial Intelligence) based sustainable GPS navigated wearable device (100) for blind and visually impaired people is provided. Edge generative AI is a technology that combines the power of edge AI and generative AI to produce new creative output from existing data. Edge AI incorporates AI capabilities on edge devices, while generative AI is a type of artificial intelligence that focuses on creating new data, such as images, videos, audio, or text, that resembles human-made content. EdgeGenAI can run deep-learning algorithms locally, allowing for real-time decisions, and can generate new digital images, video, audio, and text using generative AI. Envision.ai is a harness worn on the shoulders, equipped with ultra-wide-angle cameras on the left of the chest, a battery behind the neck, and a small computer with an Edge AI processor on the right of the chest. Envision.ai has built-in 3D cameras that can be paired with headphones to warn users about the position of obstacles around them. Placed on the wearer's shoulders, it can predict the trajectories of obstacles around the wearer, similar to an autonomous vehicle, and then provides feedback to the wearer through sound. It can be used for up to six hours at a time and also works in dark locations. It uses wide-angle cameras and AI to generate short sounds to warn blind people about the position of important obstacles, such as branches, holes, vehicles or pedestrians. It also provides GPS instructions.
[0028] In accordance with an embodiment of the present invention, the features and applications are mentioned below:
[0029] The device (100) identifies objects and predicts obstacle positions, so the user never hits an obstacle again.
[0030] It identifies more than 20 classes of objects; the user is warned through spatial sounds of any important object that is in the way.
[0031] The user is guided in new environments with simple audio feedback, and can also connect a smartphone's GPS for guidance to new places.
[0032] It even works at night by using its infrared cameras, avoiding any obstacle at any time.
[0033] It can provide a Virtual Volunteer, which can read text in multiple languages and provide text-to-speech and speech-to-speech conversion.
[0034] It can recognize fire or smoke for early warning.
[0035] It helps in fall detection and provides an alarm, which is sent to relatives.
[0036] In accordance with an embodiment of the present invention, the intention behind developing this system is to create a device (100) that is based on edge computing and generative AI for helping blind and visually impaired people using a natural-language-powered computer vision AI service. The main components of the device (100) system are a smart harness with a bone conduction headset and six software modules, namely obstacle detection, distance estimation, position estimation, motion detection, scene recognition and Virtual Volunteer. The system uses two relatively lightweight hardware components: the smart harness device (100) for capturing and processing information and the bone conduction headset for outputting navigation. The software behind this device (100) helps in the following areas:
[0037] 1. Avoid obstacles: The device (100) identifies and predicts obstacle positions. Never hit an obstacle again.
[0038] 2. Identify important objects: It identifies more than 20 classes of objects. Get warned through spatial sounds of any important object that is in your way.
[0039] 3. Navigate unknown places: Get guided in new environments with simple audio feedback. You can also connect your smartphone's GPS.
[0040] 4. Safety: It even works at night due to its infrared cameras. Avoid any obstacle, any time.
[0041] 5. Intuitiveness: It is intuitive to use from the start, and comes with a short training provided via an app.
[0042] 6. Discover: It connects to the smartphone's GPS to guide you to new places.
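The detection, distance-estimation and position-estimation modules named above can be sketched as a simple pipeline. All function names, the bounding-box format and the distance heuristic below are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch of three of the software modules described above;
# every name and heuristic here is hypothetical.

def obstacle_detection(frame):
    # Placeholder: a real module would run an object detector on the frame.
    return [{"label": "pedestrian", "bbox": (120, 80, 200, 240)}]

def distance_estimation(detection):
    # Placeholder heuristic: estimate distance (metres) from bbox height.
    _, y1, _, y2 = detection["bbox"]
    return round(300.0 / max(y2 - y1, 1), 1)

def position_estimation(detection, frame_width=640):
    # Left / centre / right relative to the wearer, from the bbox midpoint.
    x1, _, x2, _ = detection["bbox"]
    mid = (x1 + x2) / 2
    if mid < frame_width / 3:
        return "left"
    if mid > 2 * frame_width / 3:
        return "right"
    return "centre"

def navigate(frame):
    # Pipeline: detect -> estimate distance and position -> spoken cue text.
    cues = []
    for det in obstacle_detection(frame):
        d = distance_estimation(det)
        pos = position_estimation(det)
        cues.append(f"{det['label']} {d} metres to your {pos}")
    return cues
```

In a real device the cue strings would be rendered as spatial audio through the bone conduction headset rather than returned as text.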
[0043] In accordance with an embodiment of the present invention, if the user wearing the system stumbles over, an alarm is triggered to alert people in the surroundings, and an SMS is sent to the family and caregivers reporting the incident. Likewise, if the user requires help, she can say the word “Help” for the system to trigger the corresponding alarm, which alerts people in the surroundings and sends an SMS to the family and caregivers with information on the user's location, asking them to contact her. Moreover, the family and caregivers can request the system's location by sending an SMS, thus allowing them to locate and track the device (100) when needed, such as when the user should be located or when the system is stolen or lost.
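The fall, “Help” and location-request behaviours just described amount to a small event handler. The sketch below assumes hypothetical `sound_alarm()` and `send_sms()` wrappers for the device's speaker and GSM modem; the patent does not specify these interfaces.

```python
# Hedged sketch of the alert logic described above; sound_alarm() and
# send_sms() are hypothetical stand-ins for the real speaker and SMS modem.

CAREGIVER_NUMBERS = ["+0000000000"]  # hypothetical contact list

def sound_alarm():
    # Placeholder for driving the on-device buzzer/speaker.
    return "ALARM"

def send_sms(number, text):
    # Placeholder for the GSM modem call; here it just echoes the message.
    return f"SMS to {number}: {text}"

def handle_event(event, location):
    """Trigger the alarm and notify caregivers on a fall, a spoken 'Help',
    or a caregiver's location-request SMS."""
    messages = []
    if event == "fall":
        sound_alarm()
        body = f"Fall detected at {location}."
    elif event == "help":
        sound_alarm()
        body = f"User requested help at {location}; please contact her."
    elif event == "locate":
        body = f"Device location: {location}."
    else:
        return messages
    for number in CAREGIVER_NUMBERS:
        messages.append(send_sms(number, body))
    return messages
```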
[0044] In accordance with an embodiment of the present invention, the AI-based camera systems use video-based fire detection to quickly identify smoldering and small fires directly at the source. This means the fire alarm does not have to wait for smoke to physically reach its sensors, wasting valuable time before alerting safety teams. The goal is to utilize a state-of-the-art deep neural network for detecting fire and smoke in outdoor and indoor environments using cameras on an embedded system.
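A key advantage of video-based detection is temporal confirmation: the alarm fires only after several consecutive high-confidence frames, suppressing single-frame false positives. The sketch below assumes a per-frame fire/smoke score in [0, 1] coming from some deep network; the network itself is not specified in the patent, so only the alarm logic is shown.

```python
# Minimal sketch of video-based fire/smoke alerting. frame_scores is a
# sequence of per-frame classifier scores from an assumed deep network.

def fire_alarm(frame_scores, threshold=0.8, consecutive=3):
    """Return the index of the first frame confirming a fire, else -1.

    A fire is 'confirmed' once `consecutive` frames in a row score at or
    above `threshold`, which filters out one-off misclassifications.
    """
    run = 0
    for i, score in enumerate(frame_scores):
        run = run + 1 if score >= threshold else 0
        if run >= consecutive:
            return i
    return -1
```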
[0045] In accordance with an embodiment of the present invention, AI processing is used to train on the object images in the processor. If any objects or obstacles approach the blind person, the processor alerts the person with a voice message. This makes the blind person more cautious and thereby lowers the possibility of accidents.
[0046] In accordance with an embodiment of the present invention, the Edge AI system consists of a camera, AI processing, a controller and a voice alert. The ultimate goal of this smart navigation system is to detect obstacles coming in front of the visually impaired person and to inform them about the object. A camera is used to capture indoor and outdoor images, which help the smart navigation system detect the obstacle.
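The camera, AI processing, controller and voice-alert chain can be sketched as a single control loop. `capture_frame()`, `detect()` and `speak()` below are hypothetical wrappers for the camera driver, the trained detector and the text-to-speech output; none of them are named in the patent.

```python
# Illustrative control loop for the camera -> AI processing -> controller ->
# voice-alert chain; all three helpers are hypothetical placeholders.

def capture_frame(source):
    # Placeholder for reading one frame from the camera (here, an iterator).
    return next(source, None)

def detect(frame):
    # Placeholder for the trained object detector; returns obstacle labels.
    return frame.get("obstacles", [])

def speak(text):
    # Placeholder for text-to-speech output through the headset.
    return text

def run(source, max_frames=100):
    """Read frames, detect obstacles, and issue one voice alert per obstacle."""
    spoken = []
    for _ in range(max_frames):
        frame = capture_frame(source)
        if frame is None:
            break
        for label in detect(frame):
            spoken.append(speak(f"Caution: {label} ahead"))
    return spoken
```

On real hardware the loop would run continuously rather than over a bounded frame count, and `speak()` would be rate-limited so repeated detections of the same obstacle do not flood the user with audio.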
[0047] While considerable emphasis has been placed herein on the specific features of the preferred embodiment, it will be appreciated that many additional features can be added and that many changes can be made in the preferred embodiment without departing from the principles of the disclosure. These and other changes in the preferred embodiment of the disclosure will be apparent to those skilled in the art from the disclosure herein, whereby it is to be distinctly understood that the foregoing descriptive matter is to be interpreted merely as illustrative of the disclosure and not as a limitation.