Realtime AI Sign Language Recognition
20220327961 · 2022-10-13
Assignee
Inventors
CPC classification
G06F40/58
PHYSICS
International classification
G06F40/58
PHYSICS
Abstract
A real time sign language recognition method that allows Deaf and Hard of Hearing individuals to sign into any apparatus with a camera to extract target information (such as a translation in a target language) is proposed.
Claims
1. A method for converting from a sign language to a target output space (e.g. interpreting to a target second language such as English), comprising the steps of: (a) Capturing an image or a sequence of images on an input device (minimally a single lens camera). (b) Optionally extracting out initial features. (c) Transmitting said features or image or sequence of features or images to a server or some external computation device. (d) Optionally extracting out additional features on this separate computing device. (e) Executing an algorithm on the resultant features on this separate computing device. (f) Optionally transferring the output to one or more recipient devices. Where the method is capable of utilizing any standard computation device equipped with a camera (such as a smartphone or tablet) as a capture device.
2. A method as in claim 1, wherein the output comprises a value indicating if the individual is signing.
3. A method as in claim 1, wherein the output comprises the translation of the signed information contained within the signed input.
4. A method as in claim 3, where the input is streamed through the system in real time thereby producing real time captioning of the signing.
5. A method as in claim 3, where the input is sent after the individual is finished signing through the system thereby producing a translation of the signed input.
6. A method as in claim 1, where the output comprises the most likely translations of the signed information selected from a list of possible translations.
7. A method as in claim 5 in which the user is then prompted to confirm the automated translation of their input.
8. A method as in claim 6 in which the user is then prompted to confirm the automated translation of their input.
9. A method as in claim 1, where the output comprises an automated response to the user's input.
10. An apparatus for converting from a sign language to a target output space (e.g. interpreting to a target second language such as English), configured to perform the steps of: (g) Capturing an image or a sequence of images on an input device (minimally a single lens camera). (h) Optionally extracting out initial features. (i) Transmitting said features or image or sequence of features or images to a server or some external computation device. (j) Optionally extracting out additional features on this separate computing device. (k) Executing an algorithm on the resultant features on this separate computing device. (l) Optionally transferring the output to one or more recipient devices. Where the apparatus is capable of utilizing any standard computation device equipped with a camera (such as a smartphone or tablet) as a capture device.
11. An apparatus as in claim 10, wherein the output comprises a value indicating if the individual is signing.
12. An apparatus as in claim 10, wherein the output comprises the translation of the signed information contained within the signed input.
13. An apparatus as in claim 12, where the input is streamed through the system in real time thereby producing real time captioning of the signing.
14. An apparatus as in claim 12, where the input is sent after the individual is finished signing through the system thereby producing a translation of the signed input.
15. An apparatus as in claim 10, where the output comprises the most likely translations of the signed information selected from a list of possible translations.
16. An apparatus as in claim 14 in which the user is then prompted to confirm the automated translation of their input.
17. An apparatus as in claim 15 in which the user is then prompted to confirm the automated translation of their input.
18. An apparatus as in claim 10, where the output comprises an automated response to the user's input.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] A signer is brought into focus when they are signing
[0010] A signer is brought out of focus when they are not signing
DETAILED DESCRIPTION
[0016] Generalized Architecture
[0017] The generalized architecture is depicted in
[0018] Note that our embodiments do not require any specialized hardware besides a camera and Wi-Fi connection (and therefore would be suitable to run on any smartphone or camera-enabled device). Note further that our embodiments do not require personalization on a per-user basis, but rather function for all users of a particular dialect of sign language. Finally, note that our embodiments are live, producing a real time output.
[0019] Our generalized architecture is as follows. A signer signs into 11 an input device (e.g. minimally a single lens camera). In real time, or after the signing is completed, the sign language information is sent to 12, which extracts out features (e.g. body pose keypoints, hand keypoints, hand pose, thresholded image, etc. . . . ). The features produced by 12 are then transmitted to component 13 which extracts sign language information (e.g. detecting if an individual is signing, transcribing that signing into gloss, or translating that signing into a target language) from a sequence of these per-frame features. Finally, the output is displayed on 14.
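The four-stage flow of paragraph [0019] can be sketched as follows. This is a minimal illustrative sketch only: the function names and the stand-in feature computation are hypothetical, not part of the disclosure, and a real system would run a pose estimator in component 12.

```python
# Illustrative sketch of the generalized architecture (components 11-14).
# All function names here are hypothetical placeholders.

def extract_features(frame):
    """Component 12: per-frame features (e.g. body/hand keypoints).
    A pixel-mean stand-in replaces a real pose estimator."""
    return [sum(frame) / len(frame)]

def extract_sign_information(feature_sequence):
    """Component 13: map a sequence of per-frame features to sign
    language information (here, a trivial signing/not-signing flag)."""
    return "signing" if feature_sequence else "not signing"

def pipeline(frames):
    """Components 11 -> 14: capture, featurize, interpret, display."""
    features = [extract_features(f) for f in frames]  # component 12
    output = extract_sign_information(features)       # component 13
    return output                                     # shown on 14

print(pipeline([[1, 2, 3], [4, 5, 6]]))  # signing
```

In a deployed variant, `extract_features` may run on the capture device while `extract_sign_information` runs on the cloud computation device, matching the split described in paragraph [0020].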
[0020] In our generalized architecture, at least 12 or 13 must reside (at least in part) on a cloud computation device. This allows for real time feedback to the user during signing, enabling more natural interactions.
[0021] Real Time Interpreter Embodiment
[0022] An example embodiment of this is presented in
[0023] Our method for producing this translation is contained within
[0024] Setting the head coordinates to be (0, 0) in the pose and both shoulders to be an average of one unit away via an affine transform.
[0025] Setting the mean coordinates of each hand to be (0, 0, 0) and the standard deviation in each dimension for the coordinates of each hand to be an average of 1 unit via an affine transformation.
[0026] The feature vectors for a certain time period are collected and combined into a single feature vector using exponential smoothing. The smoothed and normalized feature vectors are then sent to the processing module in 204.
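The normalization steps of paragraphs [0024]-[0025] and the exponential smoothing of paragraph [0026] can be sketched as below. The keypoint layout (head at index 0, shoulders at indices 1 and 2) and the smoothing constant `alpha` are assumptions for illustration, not values taken from the disclosure.

```python
import numpy as np

def normalize_pose(pose):
    """Affine-normalize body pose: translate so the head keypoint is
    (0, 0), then scale so the two shoulders sit an average of one unit
    from the head. Assumes rows [head, left_shoulder, right_shoulder, ...]."""
    head, l_sh, r_sh = pose[0], pose[1], pose[2]
    centered = pose - head
    scale = (np.linalg.norm(l_sh - head) + np.linalg.norm(r_sh - head)) / 2
    return centered / scale

def normalize_hand(hand):
    """Affine-normalize hand keypoints: zero the mean coordinate and
    scale so the per-dimension standard deviations average one unit."""
    centered = hand - hand.mean(axis=0)
    return centered / centered.std(axis=0).mean()

def exponential_smooth(vectors, alpha=0.5):
    """Exponentially smooth a time-ordered sequence of feature vectors."""
    smoothed = [vectors[0]]
    for v in vectors[1:]:
        smoothed.append(alpha * v + (1 - alpha) * smoothed[-1])
    return np.stack(smoothed)
```

For example, a pose with head at (1, 1) and shoulders two units away is mapped so the head sits at the origin and each shoulder one unit out.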
[0027] Note that in the real time translation variant, for each new frame received, that frame is appended to the feature queue, and the resultant feature queue is smoothed and sent to the processing module 204 to be reprocessed.
[0028] In the processing module 202, the feature train is split into each individual sign via the sign-splitting component 209 via a 1D Convolutional Neural Network which highlights the sign transition periods. Note that this CNN additionally locates non-signing regions by outputting a special flag value (i.e. 0=intrasign region, 1=intersign region, 2=nonsigning region). The comparator 211 then first determines if the entire signing region of the feature vector is contained within the list of pre-recorded sentences in the sentence base 214 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature vector does not correspond to a sentence, the comparator 211 then goes through each sign's corresponding region in the feature queue and determines if that sign was fingerspelled (done through a binary classifier). If so, the sign is processed by the fingerspelling module in 210 (done through a seq2seq RNN model). If not, the sign is determined by comparing with signs in the signbase in 213 (a database of individual signs) and choosing the most likely candidate (done through KNN with a DTW distance metric). Finally, a string of sign language gloss is output (the signs which constituted the feature queue). As the sign transcribed output is not yet in English, the grammar module translates the gloss to English via a Seq2Seq RNN. The resulting English text is returned to the device for visual display 201.
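The comparator's KNN lookup with a DTW distance metric can be sketched as follows. This is an illustrative 1-nearest-neighbor sketch over toy one-dimensional feature sequences; the gloss labels and signbase contents are hypothetical, and a real signbase would store multi-dimensional feature trains.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) Dynamic Time Warping distance between
    two feature sequences, with absolute difference as the local cost."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def nearest_sign(query, signbase):
    """1-NN lookup: return the gloss whose stored exemplar sequence is
    DTW-closest to the query feature sequence."""
    return min(signbase, key=lambda gloss: dtw_distance(query, signbase[gloss]))

signbase = {"HELLO": [0.0, 1.0, 2.0], "THANKS": [2.0, 1.0, 0.0]}
print(nearest_sign([0.1, 1.1, 1.9], signbase))  # HELLO
```

DTW rather than plain Euclidean distance lets two productions of the same sign match even when one is signed faster than the other, since the warping path can stretch or compress time.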
[0029] Signing Detection Embodiment
[0030] An example embodiment for signing detection of this is presented in
[0031] Our method for performing signing detection utilizes a subset of the components of the real time interpreter embodiment and is illustrated in
[0032] The feature vectors for a certain time period are collected and combined into a single feature vector using exponential smoothing. The smoothed and normalized feature vectors are then sent to the processing module in 304. Additionally, for each new frame received, that frame is appended to the feature queue, and the resultant feature queue is smoothed and sent to the processing module 304 to be reprocessed.
[0033] In the processing module, the feature train is split into each individual sign via the sign-splitting component 307 via a 1D Convolutional Neural Network which highlights the sign transition periods. Note that this CNN additionally locates non-signing regions by outputting a special flag value (i.e. 0=intrasign region, 1=intersign region, 2=nonsigning region). Finally, this system collects all users whose signing detection is currently either 0 or 1 (i.e. is signing). This is sent to all other conference call participants 308 so that the specified individuals can be spotlit.
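The spotlight step of paragraph [0033] can be sketched as below: participants whose latest flag from the sign-splitting CNN is 0 (intrasign) or 1 (intersign) are collected as actively signing. The flag values follow the scheme stated in the text; the participant identifiers are illustrative only.

```python
# Flag values output by the sign-splitting CNN, per the disclosure:
INTRASIGN, INTERSIGN, NONSIGNING = 0, 1, 2

def signing_participants(latest_flags):
    """Return the IDs of participants whose current flag is 0 or 1,
    i.e. who are actively signing and should be spotlit."""
    return [pid for pid, flag in latest_flags.items()
            if flag in (INTRASIGN, INTERSIGN)]

flags = {"alice": INTRASIGN, "bob": NONSIGNING, "carol": INTERSIGN}
print(signing_participants(flags))  # ['alice', 'carol']
```

The resulting list would then be broadcast to the other conference call participants (component 308) so their clients can spotlight the signers.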
[0034] Few Option Sign Language Translation Embodiment
[0035] It is desirable to limit the possible choices of the signed output to improve accuracy. An example embodiment of few-option sign language translation is shown in
[0036] The architecture for achieving this is included in
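Restricting the comparator to a small candidate list can be sketched as below. This is a hypothetical illustration: a Euclidean distance stands in for the DTW metric, and the option labels and feature values are invented for the example.

```python
# Sketch of few-option translation: score the signed input only against
# a small list of allowed options, improving accuracy by shrinking the
# search space. Euclidean distance stands in for the DTW metric here.

def restricted_match(query, option_features):
    """Return the allowed option whose stored features are closest to
    the query features."""
    def dist(feats):
        return sum((q - f) ** 2 for q, f in zip(query, feats)) ** 0.5
    return min(option_features, key=lambda opt: dist(option_features[opt]))

options = {"YES": [1.0, 0.0], "NO": [0.0, 1.0], "MAYBE": [0.5, 0.5]}
print(restricted_match([0.9, 0.1], options))  # YES
```

Because only a handful of candidates are scored, confusable signs outside the option list can never be returned, which is the accuracy benefit paragraph [0035] describes.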
[0037] Question Answering System Embodiment
[0038] In the question answering system embodiment, a user is prompted to sign a question to the system in 81. They then sign into the capture system in 82. The sign language is translated into gloss or English via the Real Time Interpreter embodiment presented in the disclosure above. Finally, the output is sent through an off-the-shelf question answering system to produce the output 83.
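The composition in paragraph [0038] can be sketched as a simple pipeline of two components. Both callables below are hypothetical stubs standing in for the Real Time Interpreter embodiment and an off-the-shelf question answering system.

```python
def answer_signed_question(frames, translate, answer):
    """Compose translation (steps 81-82) with question answering (83)."""
    question = translate(frames)  # sign language frames -> English text
    return answer(question)      # English question -> answer text

# Stub components for illustration only:
result = answer_signed_question(
    frames=["frame1", "frame2"],
    translate=lambda frames: "WHAT TIME IS IT",
    answer=lambda question: "It is noon.",
)
print(result)  # It is noon.
```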