REMINDER OR GESTURE CONTROL DEVICE WEARABLE BY A USER

20190179421 · 2019-06-13

    Abstract

    A wearable reminder or gesture control device is provided using two different types of gestures obtained from two different types of sensors. The device outputs audio to a user but does not have a display, a keypad, or speech recognition software, thereby significantly reducing size, storage requirements and power consumption. Hence the device is suitable for use by the blind, persons with a speech impediment, visually impaired persons, and others who are unable to read small fonts.

    Claims

    1. A method for gesture recognition, comprising: receiving from a first gesture sensor a first type of gesture; analyzing with a gesture recognition module the first type of gesture to recognize a state machine command; receiving from an external and separate second gesture sensor a second type of gesture; analyzing with the gesture recognition module the second type of gesture to recognize letters, numbers and symbols; and upon recognizing the first or second type of gesture, providing feedback to a user indicating a recognition of a gesture.

    2. The method as set forth in claim 1, further comprising providing audio feedback to the user.

    3. The method as set forth in claim 1, further comprising wirelessly providing visual feedback to a separate device or display.

    4. The method as set forth in claim 1, further comprising the gesture recognition module determining that the second type of gesture was not recognized; and providing feedback to the user to reenter the gesture using the first or the second gesture sensor.

    5. The method as set forth in claim 1, further comprising determining the recognition determined by the second type of gesture to be incorrect, and using the first gesture sensor to reenter the gesture.

    6. The method as set forth in claim 1, further comprising the user entering audio recordings of reminders.

    7. A system wearable by a user, comprising: (a) a gesture control device comprising a first gesture sensor for contactless detecting a first type of gestures, the first gesture sensor recognizing gestures made by the user's hand, wherein the control device does not have a keypad, display or speech recognition, and wherein the control device does not have inertial sensors; (b) an external and separate sensing device with a second gesture sensor for detecting a second type of gestures, and the gesture control device wirelessly receiving data from the external and separate sensing device and recognizing a second type of gestures made by the user using the external and separate sensing device; (c) a gesture recognition module executable by a processor on the gesture control device, the gesture recognition module receiving the sensor data for the first and second types of gestures, wherein the gesture recognition module is programmed to recognize gestures based on letters, numbers and symbols; and (d) a state machine executable by a processor on the gesture control device programmed to receive the recognized gestures from the gesture recognition module, wherein the state machine changes state based on the recognized gesture, and wherein the state machine depending on its current state outputs the recognized gesture.

    8. The system as set forth in claim 7, wherein the control device provides audio feedback to the user.

    9. The system as set forth in claim 7, wherein the control device wirelessly provides visual feedback to a display which is separately located from the control device.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0020] FIG. 1 shows components of the reminder device according to an exemplary embodiment of the invention.

    [0021] FIG. 2 shows a schematic of a circuit of a processor according to an exemplary embodiment of the invention.

    [0022] FIG. 3 shows a state machine according to an exemplary embodiment of the invention with transitions between alarm and non-alarm states. When gesture 311 is performed, the machine transitions to the Set time state. In this state, the user can program the current date and time using either the gesture sensor or the digital pen. The mode of programming the time is similar to that shown in the flowchart of FIG. 4 for setting an Alarm, except no audio reminder is recorded.

    [0023] FIG. 4 shows a flow chart for recording a new reminder using the device and according to an exemplary embodiment of the invention.

    [0024] FIG. 5 shows a flow chart describing how a user can get help according to an exemplary embodiment of the invention.

    [0025] FIG. 6 shows a top-view of the reminder device according to an exemplary embodiment of the invention.

    [0026] FIG. 7 shows a flow chart involving a gesture detected to delete an alarm according to an exemplary embodiment of the invention. The user is prompted to confirm or cancel the operation, and the appropriate gestures are described to the user. The device translates gestures into prompts informing the user of the impending action, and deletes the alarm if the appropriate gesture is performed. This process is repeated for each alarm.

    DETAILED DESCRIPTION

    [0027] FIG. 1 shows an example of components of reminder device 100. Each component may have hardware, software and/or firmware. The components include: gesture sensor 101, a speaker 102, a microphone 103, a Wi-Fi network interface 104, a clock 105 for providing time-based reminders, software and hardware for recognizing gestures 106, software and hardware for recording and playing audio 107, a storage element 108, and a controller 109.

    [0028] The gesture sensor 101 is used for detecting a specific set of gestures and is located on the top of the device (FIG. 6). The speaker 102 provides voice prompts to the user, for example, describing the gesture to be performed next to program the device or the current alarm time selected. The microphone 103 is used for inputting voice reminders. The network interface 104 allows the device to connect to a Wi-Fi network so that it can display calendar and alarm information on a mobile device or computer and store compliance information in another machine. The clock 105 is used for providing time-based reminders. When the device is connected to the network, the clock 105 is automatically synchronized with a clock server. The time zone information may be entered directly through gestures. The device has hardware and software elements for detecting and analyzing gestures 106, such as signal processing elements. In addition, hardware and software elements 107 (such as filters and analog-to-digital converters) are used for recording audio reminders on the device. The storage unit 108 stores voice reminders, voice prompts, and other programs. The RFID sensor 110 can be configured as a receiver to enable an alternate means of creating a calendar schedule. The controller 109 can contain one or more processors and is responsible for interfacing with the sensors and actuators to perform the desired functions.

    [0029] FIG. 2 shows the schematic of the reminder device. It has a microcontroller 201 (CC3200 breakout board) connected to a gesture sensor 202 (APDS-9960 breakout board), an RFID sensor 203 (nRF24L01+ breakout board) and an SD-card or other memory 204. The APDS-9960 is a digital proximity and gesture sensor with an I²C-compatible interface and a detection distance of 100 mm. It has four photodiodes for gesture sensing and can detect several gestures such as UP, DOWN, RIGHT, LEFT, among others. The gesture sensor communicates with the microcontroller through an I²C interface, and its interrupt pin triggers an interrupt on the microcontroller when a gesture is performed. The microcontroller (CC3200) has integrated Wi-Fi network connectivity. It contains an 80 MHz ARM Cortex-M4 core, 256 KB of RAM, SPI and I²C interfaces, and several general-purpose timers and GPIO pins. In addition, the microcontroller (CC3200) has a real-time clock. The nRF24L01+ is a 2 Mbps ultra-low-power RF transceiver for the 2.4 GHz ISM band. When the RFID receiver receives any data, it generates an interrupt on the microcontroller. The RFID sensor and SD-card use the SPI interface to communicate with the microcontroller. Apart from low power consumption, the RFID sensor and gesture sensor have a small form factor. There is additional circuitry for recording spoken audio and playing it back. The audio playback is done through a filter 205, audio amplifier 206 and a speaker 207. The audio recording circuit includes a microphone 208, a built-in analog-to-digital converter (ADC), and an ADC prescaler 209.

    [0030] The user can program the device either through the gesture sensor APDS-9960 or by using a mobile device (such as a digital pen or wand) that transmits motion trajectory information of a traced number or shape to the device for further processing. The gesture sensor can recognize several simple motions such as UP, DOWN, LEFT, NEAR, and FAR. To start recording an alarm, a user can perform a gesture such as FAR. This triggers an interrupt on the microcontroller and it then prompts the user to enter a date for the alarm. Alternately, if the user is using a digital pen, the RFID receiver on the device triggers an interrupt, and the microcontroller stores the motion trajectory information received by the RFID receiver (nRF24L01+) in memory for further analysis and number/character recognition. Otherwise, if the user makes an UP or DOWN gesture on the gesture sensor, the device starts to count up or down, starting from the current date, until the user makes a selection using a FAR gesture when the appropriate date is reached. This process is continued until the user has entered all the parameters (such as date and time) for the alarm. The user is then prompted to start recording the voice memo, and the audio data is stored in the SD-card. To conserve power, peripheral devices such as the amplifier and speaker are turned off when they are not used. The RFID receiver is also placed in deep sleep and is only woken up when there is data to be received.
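The two input paths described above (a simple gesture from the APDS-9960 interrupt, or trajectory data received over the RFID link) can be sketched as a single dispatch routine. This is a minimal Python illustration; the actual device runs firmware on the CC3200, and the event shapes and function names here are assumptions, not the device's real data format.

```python
def recognize_trace(samples):
    """Stand-in for the number/character recognition that the device
    runs on stored trajectory samples; the real analysis is not
    specified here, so this returns a placeholder label."""
    return "TRACE(%d samples)" % len(samples)

def route_input(event):
    """Route a wake-up event to the matching input path.

    Hypothetical event shapes: ("gesture", name) models an APDS-9960
    interrupt; ("trajectory", samples) models motion data stored from
    the RFID receiver (nRF24L01+) for further analysis.
    """
    kind, payload = event
    if kind == "gesture":
        return payload  # simple gestures (UP, DOWN, FAR, ...) pass through
    if kind == "trajectory":
        return recognize_trace(payload)
    raise ValueError(f"unknown event type: {kind}")
```

Either path ends with a recognized token that the state machine of FIG. 3 can consume.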

    [0031] FIG. 3 shows the state machine of the reminder device. The device is initially in an Idle State 305 during which it consumes very little power. When an alarm is triggered internally, the device goes into an Alarm State 301. In this state, the device plays the audio reminder for the corresponding time to the user. The reminder is turned off and the device returns to the Idle State 305 when the user indicates compliance through a gesture or a predetermined time period has elapsed 306. In the Alarm State 301, the device provides a voice reminder of the gesture used to turn off the reminder. A simple gesture 308 can transition the device into the Help State 304, in which the device outputs voice prompts describing the gestures needed for creating or deleting an alarm and synchronizing the device. When a gesture 309 is performed, the device moves into the Create Alarm state 302, and the gesture 310 transitions it into the Delete Alarm state 303. The device returns to the Idle State 305 when the desired operation is completed or canceled by the user. When the device detects a network, mobile device or computer, it can perform a clock synchronization operation and/or display its preset alarms on the other devices by moving into a synchronization state 311.
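The states and transitions of FIG. 3 can be expressed as a small transition table. This is an illustrative Python sketch only; the event names are placeholders for the gestures and triggers described above, and the real device implements this in firmware.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()          # Idle State 305: low-power wait state
    ALARM = auto()         # Alarm State 301: playing an audio reminder
    CREATE_ALARM = auto()  # Create Alarm state 302
    DELETE_ALARM = auto()  # Delete Alarm state 303
    HELP = auto()          # Help State 304
    SYNC = auto()          # synchronization state 311

# Transition table keyed by (current state, event); event names are
# stand-ins for gestures 308-310 and the internal triggers of FIG. 3.
TRANSITIONS = {
    (State.IDLE, "alarm_triggered"): State.ALARM,
    (State.ALARM, "compliance_gesture"): State.IDLE,
    (State.ALARM, "timeout"): State.IDLE,           # elapsed period 306
    (State.IDLE, "help_gesture"): State.HELP,       # gesture 308
    (State.IDLE, "create_gesture"): State.CREATE_ALARM,  # gesture 309
    (State.IDLE, "delete_gesture"): State.DELETE_ALARM,  # gesture 310
    (State.IDLE, "network_detected"): State.SYNC,
    (State.CREATE_ALARM, "done"): State.IDLE,
    (State.DELETE_ALARM, "done"): State.IDLE,
    (State.HELP, "done"): State.IDLE,
    (State.SYNC, "done"): State.IDLE,
}

def step(state, event):
    """Return the next state; unrecognized events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

Returning to Idle on completion or cancellation, as the text describes, falls out of the `"done"` entries.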

    [0032] FIG. 4 shows the flowchart of the device for recording a reminder when the device is in the Create Alarm state 302. The device prompts the user to set some information for when the alarm is to be set 4-1. For example, the device could produce the voice output "Set the date for the alarm" when the user performs the gesture FAR. If the gesture sensor is triggered, the device then produces a sequence of voice prompts informing the user of the selection 4-2. For example, if the current date is the 10th, the device produces a sequence of audio prompts counting up from 10 when the user performs a second gesture UP. If a DOWN gesture is detected, the audio prompts count in reverse. When the user performs the gesture FAR, the counting stops and the user is informed of the selection: "You have selected 12." The duration or frequency of the gestures can be used to control the frequency of the audio prompts. Alternately, the user may trace a number in the air using a digital pen, and the latter transmits information via an RFID transmitter to the device. When the RFID sensor on the device is triggered, it stores the trajectory motion information in the memory of the device 4-6. This data is processed and analyzed, and the number traced by the user is recognized 4-7. The process to store the alarm time and other information pertinent to the alarm is continued 4-3. After the user has completed entering alarm parameters, the device prompts the user to record the voice memo 4-4 and records it 4-5. The device may optionally include a security check before the user can program it. It may also detect if another reminder was scheduled for the same time, inform the user and take appropriate action desired by the user.
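The count-up/count-down selection loop of step 4-2 can be sketched as follows. This is a simplified Python illustration: the real device counts continuously after one UP or DOWN gesture and speaks each value aloud, whereas here each gesture steps the value once; the function names are illustrative.

```python
def select_number(start, gestures):
    """Step a value with UP/DOWN gestures; FAR confirms the selection.

    `start` is the initial value (e.g. the current date); `gestures` is
    an iterable of gesture names as they arrive from the sensor. In the
    real device each step would be accompanied by an audio prompt.
    """
    value = start
    for gesture in gestures:
        if gesture == "UP":
            value += 1
        elif gesture == "DOWN":
            value -= 1
        elif gesture == "FAR":
            return value  # counting stops; this value is announced
    return value  # no confirmation seen; caller keeps the current value
```

With the example above, starting from the 10th, two UP steps followed by FAR yield the announced selection 12.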

    [0033] FIG. 5 depicts a flowchart of how the user can obtain assistance with device operation. In 5-1, the user performs a device-recognizable HELP gesture (for example, NEAR). The device then lists topics in areas where help can be given, such as how to set or delete an alarm (5-2). In this step, the device also indicates the appropriate gesture (NEAR) to be performed to select the HELP topic of choice. Once the user has completed the gesture, the device confirms the user's selection through a prompt (e.g., "Select Topic X?"). If the correct topic has been selected, the user performs the NEAR gesture again, and the device proceeds to play a list of instructions on the HELP topic (5-3a). If the device does not detect the aforementioned gesture within the first five seconds after the prompt delivery, the device returns to the HELP MENU, in which it lists topics in areas where help can be given (5-3b). Finally, after step 5-3a has been executed, the device returns to its Idle State, as seen in 305 of FIG. 3.
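The five-second confirmation window in the HELP flow amounts to a poll-with-timeout. A minimal Python sketch, assuming a `poll` callback that returns a gesture name or `None` (the real firmware would use the sensor's interrupt line and a hardware timer instead of polling):

```python
import time

def wait_for_gesture(poll, timeout=5.0, interval=0.05):
    """Poll for a gesture for up to `timeout` seconds.

    Returns the gesture name, or None on timeout (in FIG. 5 a timeout
    sends the device back to the HELP MENU, step 5-3b).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        gesture = poll()
        if gesture is not None:
            return gesture
        time.sleep(interval)
    return None
```

The caller treats a `None` result as "no confirmation" and re-lists the help topics.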

    [0034] FIG. 6 shows one possible top view of the device with an array of sensors 601, a microphone 603 and a speaker 602.

    [0035] Embodiments of the invention could be varied or complemented by:

    [0036] Using data mining techniques to create future reminders (prediction).

    [0037] Using GPS to obtain location information (geographical coordinates) of the user.

    [0038] Using real-time updates on a user's mobile phone or computer.

    [0039] Reminding pet caregivers when it is time to give their pet a meal.

    [0040] Integrating with a smart home to remind people of everyday tasks such as locking doors and turning off the stove and other appliances.

    [0041] Integrating with mobile devices to provide real-time updates to the user on the phone.

    [0042] Wirelessly outputting video data from the reminder device to a separate (remote) device or display.

    [0043] Embodiments of the invention can be structurally enabled as one or more chips or processors in a wearable device (e.g. at the wrist, hip or where the user prefers) executing computer programs, methods or code defining the state machine, the sensory recognition and detection, the output data and/or the objective/goals defined for the wearable device. The embodiments could be envisioned as devices, methods, systems, computer programs and/or products.

    [0044] Embodiments of the invention can further be varied as follows, as shown in FIG. 4. When a gesture is input by a user using the second gesture sensor, one of the following three cases may occur, wherein the gesture is: [0045] (a) not recognized by the device, or [0046] (b) recognized incorrectly by the device, or [0047] (c) recognized correctly by the device.

    [0048] In case of (a), when the device fails to recognize the input letter, it prompts the user to reenter the gesture through an alternate method. For example, the device informs the user the gesture could not be recognized and gives instructions on how to make a selection using the first gesture sensor that accepts device commands. It then speaks out numbers/letters/symbols and the user can perform a gesture, such as NEAR, when the correct number or letter is spoken to select it. Other gestures, such as move up (move down), can be used to speed up (slow down) the rate at which the numbers or letters are spoken out. As another example, the device may speak out the gestures that are nearest matches to the gesture performed. Suppose that the user traced the number 2, and the closest matches are 7, 1, L, and 2. The device announces each of the closest matches, and the user can make a gesture in the air, such as NEAR, to select the desired number/letter/symbol when it is announced.
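The nearest-match fallback of case (a) can be sketched as an announce-and-select loop. This is an illustrative Python sketch only: `speak` stands in for the audio prompt hardware, and `gestures` models the gesture (or absence of one) observed after each announcement; both names are assumptions.

```python
def announce_matches(candidates, gestures, speak):
    """Announce nearest-match candidates one at a time; a NEAR gesture
    on the first sensor selects the candidate just announced.

    `candidates` are the closest matches to the unrecognized trace
    (e.g. ["7", "1", "L", "2"] for a traced 2); `gestures` yields the
    observed gesture, or None, after each prompt.
    """
    for candidate, gesture in zip(candidates, gestures):
        speak(f"Did you mean {candidate}?")  # illustrative prompt wording
        if gesture == "NEAR":
            return candidate
    return None  # nothing selected; caller prompts the user to reenter
```

In the example above, the user lets 7, 1 and L pass and performs NEAR when 2 is announced.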

    [0049] In case of (b), when the gesture recognition is incorrect, the user can use the first type of sensor to enter a device command gesture, such as by the gesture FAR, to indicate the gesture has to be cancelled. The device asks the user to confirm that the gesture should be cancelled and then prompts the user to reenter it.

    [0050] In other words, the reminder/control device can use an alternate method for gesture identification using the first gesture sensor if the second gesture sensor does not provide correct identification of the gesture.

    [0051] The first gesture sensor can be infrared, photoelectric, capacitive, electric-field based, or RFID tag array.

    [0052] The second gesture sensor may be a camera that has the capability to recognize the gesture from a distance or an inertial sensor.

    [0053] In some embodiments, the invention can be used to support the control of electronic devices with gestures. We will refer to the Reminder Device as the Gesture Control Device when it is used in this embodiment. Cell phones and watches can be used to control TVs and home appliances today through apps with speech recognition or a graphical user interface. The Gesture Control Device may be located in a TV remote control, cell phone, watch, TV or elsewhere to further support this functionality via gestures. The first gesture sensor on the Gesture Control Device detects device commands. The second gesture sensor is the external sensor that detects the user's gesture; for example, it could be a TV camera or an inertial sensor held in the user's hand. When the user enters a gesture using the second gesture sensor, and the gesture is not recognized or is recognized incorrectly by the Control Device, the latter provides an alternate method for the user to reenter the gesture using the first gesture sensor and provides audio feedback to guide the user through the process. The user may use gestures to enter a channel number for viewing, search for a particular show by entering its name with gestures, record a show, and more. For each of these operations, the Gesture Control Device is used to improve the gesture recognition.

    [0054] The state machine of this device is modified with new states depending on the application. For example, consider a Gesture Control Device in a smart watch or phone to further support the control of a TV using gestures. The user can use the camera in a TV as the external sensor to enter the second type of gestures, such as the name of a show to watch. Data from each gesture will be transmitted from the TV to the Gesture Control Device in the smart watch, and spoken out to the user. The Gesture Control Device will provide audio prompts of the gestures made by the user and directions to complete the process of entering the name of the show via gestures. The gesture information is then provided to the smart watch app by the Gesture Control Device, and the appropriate command is then sent from the smart watch to the TV to switch to the new show. In another embodiment of the invention, the user may use gestures to trace commands such as "Switch channel to 1," "increase volume," and "off" that are detected by the second gesture sensor (TV camera) and used by the smart watch or cell phone and the Gesture Control Device to control the TV.

    [0055] In another embodiment, the Gesture Control Device is used in a smart watch or cell phone to support the control of a smart home system using gestures. The smart watch may perform operations in the home, such as turning appliances on or off through the graphical user interface, and the Gesture Control Device can enhance this functionality by supporting the control operations via gestures. Suppose that the user wants to turn off a light. The user moves a device with an inertial sensor (second type of sensor) and traces the command OFF. This gesture information is transmitted to the Gesture Control Device. The Gesture Control Device could then communicate with the smart home controller application in the smart watch, and provide audio feedback to the user. For example, the audio feedback could be provided to the user as follows:

    [0056] "Turn off the living room lights? If yes, move your hand from R to L across the Gesture Control Device."

    [0057] "Turn off the stove? If yes, move your hand from R to L across the Gesture Control Device."

    [0058] After the user performs the desired control gesture by moving the hand in the desired manner on the first type of sensor to select the desired operation, this information is transmitted by the smart watch to the smart home controller, which can turn off the appliance. In another embodiment of the device, the visual feedback to the user can be projected from the device onto a separate display, say, the user's arm, clothing, or a wall. In another embodiment, the functionality of the Gesture Control Device can be implemented through a software application in a smart watch, cell phone or other device.
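The confirm-and-execute exchange of paragraphs [0055] through [0058] can be sketched as follows. A minimal Python illustration only: `speak` and `execute` stand in for the audio feedback path and the smart home controller, and the gesture label `"R_TO_L"` is an assumed name for the right-to-left swipe; none of these names come from the actual device.

```python
def confirm_and_execute(pending_actions, gestures, speak, execute):
    """Offer each candidate smart-home action; a right-to-left swipe
    on the first sensor confirms the action just announced.

    `pending_actions` are the candidate operations (e.g. from a traced
    OFF command); `gestures` yields the gesture, or None, observed
    after each audio prompt.
    """
    for action, gesture in zip(pending_actions, gestures):
        speak(f"{action}? If yes, move your hand from R to L")
        if gesture == "R_TO_L":
            execute(action)  # forwarded to the smart home controller
            return action
    return None  # no confirmation; nothing is executed
```

If the user lets the first prompt pass and swipes on the second, only that action reaches the controller.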