Realtime AI Sign Language Recognition

20220327961 · 2022-10-13

Abstract

A real time sign language recognition method is proposed that allows Deaf and Hard of Hearing individuals to sign into any camera-equipped apparatus and have target information (such as a translation into a target language) extracted.

Claims

1. A method for converting from a sign language to a target output space (e.g. interpreting into a target second language such as English), comprising the steps of:
(a) capturing an image or a sequence of images on an input device (minimally a single-lens camera);
(b) optionally extracting initial features;
(c) transmitting said features or image, or sequence of features or images, to a server or some external computation device;
(d) optionally extracting additional features on this separate computing device;
(e) executing an algorithm on the resultant features on this separate computing device;
(f) optionally transferring the output to one or more recipient devices;
wherein the method is capable of utilizing any standard computation device equipped with a camera (such as a smartphone or tablet) as a capture device.

2. A method as in claim 1, wherein the output comprises a value indicating if the individual is signing.

3. A method as in claim 1, wherein the output comprises the translation of the signed information contained within the signed input.

4. A method as in claim 3, where the input is streamed through the system in real time thereby producing real time captioning of the signing.

5. A method as in claim 3, where the input is sent after the individual is finished signing through the system thereby producing a translation of the signed input.

6. A method as in claim 1, where the output comprises the most likely translations of the signed information selected from a list of possible translations.

7. A method as in claim 5 in which the user is then prompted to confirm the automated translation of their input.

8. A method as in claim 6 in which the user is then prompted to confirm the automated translation of their input.

9. A method as in claim 1, where the output comprises an automated response to the user's input.

10. An apparatus for converting from a sign language to a target output space (e.g. interpreting into a target second language such as English), comprising:
(g) capturing an image or a sequence of images on an input device (minimally a single-lens camera);
(h) optionally extracting initial features;
(i) transmitting said features or image, or sequence of features or images, to a server or some external computation device;
(j) optionally extracting additional features on this separate computing device;
(k) executing an algorithm on the resultant features on this separate computing device;
(l) optionally transferring the output to one or more recipient devices;
wherein the apparatus is capable of utilizing any standard computation device equipped with a camera (such as a smartphone or tablet) as a capture device.

11. An apparatus as in claim 10, wherein the output comprises a value indicating if the individual is signing.

12. An apparatus as in claim 10, wherein the output comprises the translation of the signed information contained within the signed input.

13. An apparatus as in claim 12, where the input is streamed through the system in real time, thereby producing real time captioning of the signing.

14. An apparatus as in claim 12, where the input is sent through the system after the individual is finished signing, thereby producing a translation of the signed input.

15. An apparatus as in claim 10, where the output comprises the most likely translations of the signed information selected from a list of possible translations.

16. An apparatus as in claim 14 in which the user is then prompted to confirm the automated translation of their input.

17. An apparatus as in claim 15 in which the user is then prompted to confirm the automated translation of their input.

18. An apparatus as in claim 10, where the output comprises an automated response to the user's input.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] FIG. 1 is a block diagram for the generalized architecture of the disclosure capable of processing sign language to some target output.

[0007] FIG. 2 is a block diagram of our embodiment for Sign Language Translation, which takes as input a video stream and outputs (either simultaneously while receiving the video stream, or after the video stream input has finished) a translation of what was signed into a target language.

[0008] FIG. 3 is a block diagram of our embodiment for Sign Language Detection, in which the user is captured via an input device, and is:

[0009] brought into focus when they are signing;

[0010] brought out of focus when they are not signing.

[0011] FIG. 4 is a block diagram of our embodiment for Sign Language Information Retrieval, which takes as input a video stream and outputs the most likely sentence selected from a sentence bank after the user is finished signing (this is called an ASL menu).

[0012] FIG. 5 presents a User Interface schematic for a real time interpretation. This real time interpreter not only translates from a sign language to a target language, but also detects when the user is signing.

[0013] FIG. 6 presents a User Interface schematic for a conference call with a D/HH user where the user that is signing is focused.

[0014] FIG. 7 presents a User Interface schematic for a sign language translation device in which the user signs into the device and the most likely sentences are selected from a sentence bank and presented to the user for confirmation.

[0015] FIG. 8 presents a User Interface schematic for a question answering system in which the user is prompted to sign a question into the device and an automated response is produced.

DETAILED DESCRIPTION

[0016] Generalized Architecture

[0017] The generalized architecture is depicted in FIG. 1 with example embodiments depicted in FIGS. 2-4.

[0018] Note that our embodiments do not require any specialized hardware besides a camera and a Wi-Fi connection (and are therefore suitable to run on any smartphone or camera-enabled device). Note further that our embodiments do not require personalization on a per-user basis, but rather function for all users of a particular dialect of sign language. Finally, note that our embodiments are live, producing real time output.

[0019] Our generalized architecture is as follows. A signer signs into an input device 11 (e.g. minimally a single-lens camera). In real time, or after the signing is completed, the sign language information is sent to 12, which extracts features (e.g. body pose keypoints, hand keypoints, hand pose, thresholded image, etc.). The features produced by 12 are then transmitted to component 13, which extracts sign language information (e.g. detecting if an individual is signing, transcribing that signing into gloss, or translating that signing into a target language) from a sequence of these per-frame features. Finally, the output is displayed on 14.
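The data flow through components 11-14 above can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names and placeholder bodies are not part of the disclosure, and in practice 12 and 13 would be the feature extractor and recognition model described later.

```python
# Minimal sketch of the generalized pipeline (components 11-14).
# All function bodies are illustrative placeholders.

def extract_features(frame):
    """Component 12: compute per-frame features (e.g. pose keypoints)."""
    return [float(v) for v in frame]  # placeholder feature vector

def extract_sign_information(feature_sequence):
    """Component 13: map a sequence of per-frame features to an output
    (here, a toy signing/not-signing decision)."""
    return "signing" if any(sum(f) > 0 for f in feature_sequence) else "idle"

def run_pipeline(frames):
    features = [extract_features(f) for f in frames]   # component 12
    output = extract_sign_information(features)        # component 13
    return output                                      # displayed on 14
```

The same skeleton serves all three embodiments below; only the feature extractor and the processing module change.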

[0020] In our generalized architecture, at least one of 12 and 13 must reside (at least in part) on a cloud computation device. This allows for real time feedback to the user during signing, enabling more natural interactions.

[0021] Real Time Interpreter Embodiment

[0022] An example embodiment of this is presented in FIG. 5. A signing user 53 is displayed on the output device 51. Via the presented system, it is automatically determined whether the user is signing. When the user is signing, they are brought into focus via 52, a border around their video stream. Simultaneously, live captioning is produced in a target language (e.g. English) and displayed on 54.

[0023] Our method for producing this translation is contained within FIG. 2. An image train is captured on 201 and streamed, either in real time or after capturing is finished. Specifically, within our embodiment of 12, our system performs pose detection via Convolutional Pose Machines (CPMs) in 206 and hand localization via an R-CNN in 205. These results are combined to find the bounding box of both the dominant and non-dominant hand by iterating through all bounding boxes found from 205 and finding the one closest to each wrist joint produced by 206. A CPM extracts the hands' poses from the dominant and non-dominant hands' bounding boxes in 207. Finally, all this information is merged into a flattened feature vector. These feature vectors are then normalized in 208 by:

[0024] setting the head coordinates to be (0, 0) in the pose and both shoulders to be an average of one unit away via an affine transform;

[0025] setting the mean coordinates of each hand to be (0, 0, 0) and the standard deviation in each dimension for the coordinates of each hand to be an average of 1 unit via an affine transformation.
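The normalization in 208 can be sketched as follows. This is a minimal sketch under stated assumptions: the keypoint indices (`head_idx`, shoulder indices) are illustrative placeholders, and the scale factor is applied uniformly, which is one simple realization of the affine transforms described above.

```python
import numpy as np

def normalize_pose(pose, head_idx=0, l_shoulder_idx=1, r_shoulder_idx=2):
    """Translate so the head lies at (0, 0), then scale so the two
    shoulders are on average one unit from the head.
    Keypoint indices are illustrative assumptions."""
    pose = np.asarray(pose, dtype=float)
    centered = pose - pose[head_idx]
    mean_dist = (np.linalg.norm(centered[l_shoulder_idx]) +
                 np.linalg.norm(centered[r_shoulder_idx])) / 2.0
    return centered / mean_dist

def normalize_hand(hand):
    """Zero-mean each hand's keypoints and scale so the per-dimension
    standard deviations average to one unit."""
    hand = np.asarray(hand, dtype=float)
    centered = hand - hand.mean(axis=0)
    return centered / centered.std(axis=0).mean()
```

Both functions return arrays in the normalized coordinate frame expected by the downstream comparator.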

[0026] The feature vectors for a certain time period are collected and smoothed using exponential smoothing into a feature vector. The smoothed and normalized feature vectors are then sent to the processing module in 204.
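The exponential smoothing step can be sketched as below; the smoothing factor `alpha` is an illustrative assumption, not a value specified in the disclosure.

```python
import numpy as np

def smooth_features(feature_queue, alpha=0.5):
    """Exponentially smooth a queue of per-frame feature vectors.
    Each output frame blends the new frame with the running average:
    s_t = alpha * x_t + (1 - alpha) * s_{t-1}."""
    smoothed = []
    prev = np.asarray(feature_queue[0], dtype=float)
    for vec in feature_queue:
        prev = alpha * np.asarray(vec, dtype=float) + (1 - alpha) * prev
        smoothed.append(prev)
    return np.stack(smoothed)
```

In the real time variant, this function would simply be re-run on the feature queue each time a new frame is appended.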

[0027] Note that in the real time translation variant, for each new frame received, that frame is appended to the feature queue, and the resultant feature queue is smoothed and sent to the processing module 204 to be reprocessed.

[0028] In the processing module 202, the feature train is split into each individual sign via the sign-splitting component 209, a 1D Convolutional Neural Network which highlights the sign transition periods. Note that this CNN additionally locates non-signing regions by outputting a special flag value (i.e. 0=intrasign region, 1=intersign region, 2=nonsigning region). The comparator in 211 then first determines if the entire signing region of the feature vector is contained within the list of pre-recorded sentences in the sentence base 214 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature vector does not correspond to a sentence, the comparator 211 then goes through each sign's corresponding region in the feature queue and determines if that sign was fingerspelled (done through a binary classifier). If so, the sign is processed by the fingerspelling module in 210 (done through a seq2seq RNN model). If not, the sign is determined by comparing with signs in the signbase in 213 (a database of individual signs) and choosing the most likely candidate (done through KNN with a DTW distance metric). Finally, a string of sign language gloss is output (the signs which constituted the feature queue). As the sign transcribed output is not yet in English, the grammar module in 212 translates the gloss to English via a Seq2Seq RNN. The resulting English text is returned to the device for visual display 201.
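One way the KNN-with-DTW comparison in 211 could be realized is sketched below (shown as a 1-nearest-neighbour lookup for brevity; the sentence labels and sequences are hypothetical examples, and a production system would use a tuned K and distance threshold).

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences,
    computed by classic dynamic programming over a cost matrix."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def nearest_sentence(query, sentence_base):
    """Nearest-neighbour lookup of a feature sequence in a sentence
    base mapping label -> reference feature sequence."""
    return min(sentence_base,
               key=lambda label: dtw_distance(query, sentence_base[label]))
```

DTW is a natural fit here because different signers produce the same sentence at different speeds, and the warping path absorbs that temporal variation.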

[0029] Signing Detection Embodiment

[0030] An example embodiment for signing detection is presented in FIG. 6. Specifically, in this scenario, N users connect to a video call, K of whom (where K&lt;N) are signers 63 and N−K of whom are non-signers 64, 65. When a given user is either speaking (detected via a noise threshold) or signing (detected via this embodiment), they are brought into focus (i.e. spotlighted) via a border around their image 62.

[0031] Our method for performing signing detection utilizes a subset of the components of the real time interpreter embodiment and is illustrated in FIG. 3. Specifically, an image train is captured on all signers' devices 301 and streamed to 303, either in real time or after capturing is finished. Within this embodiment of 12, our system only performs pose detection via Convolutional Pose Machines in 305 to form a feature vector. This feature vector is then normalized in 306 by setting the head coordinates to be (0, 0) in the pose and both shoulders to be an average of one unit away via an affine transform.

[0032] The feature vectors for a certain time period are collected and smoothed into a feature vector using exponential smoothing. The smoothed and normalized feature vectors are then sent to the processing module in 304. Additionally, for each new frame received, that frame is appended to the feature queue, and the resultant feature queue is smoothed and sent to the processing module 304 to be reprocessed.

[0033] In the processing module, the feature train is split into each individual sign via the sign-splitting component 307 via a 1D Convolutional Neural Network which highlights the sign transition periods. Note that this CNN additionally locates non-signing regions by outputting a special flag value (i.e. 0=intrasign region, 1=intersign region, 2=nonsigning region). Finally, this system collects all users whose signing detection is currently either 0 or 1 (i.e. is signing). This is sent to all other conference call participants 308 so that the specified individuals can be spotlit.
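The spotlighting decision at the end of this paragraph reduces to filtering on the detector's flag values. A minimal sketch (the participant names and the dictionary interface are illustrative assumptions):

```python
def users_to_spotlight(flags_by_user):
    """Given each participant's latest detector flag
    (0 = intrasign region, 1 = intersign region, 2 = nonsigning region),
    return the participants currently signing, i.e. flag 0 or 1."""
    return [user for user, flag in flags_by_user.items() if flag in (0, 1)]
```

The returned list is what would be broadcast to the other conference call participants 308 so that the listed individuals can be spotlit.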

[0034] Few Option Sign Language Translation Embodiment

[0035] It is desirable to limit the possible choices of the signed output to improve accuracy. An example embodiment of few-option sign language translation is shown in FIG. 7. A user signs into a capture device equipped with several single lens cameras 71. After the user finishes signing, the method processes the input and finds the three most likely translations. These options are then presented to the user in a menu 72 for them to choose from (73, 74, 75).

[0036] The architecture for achieving this is included in FIG. 4. As in the last embodiment, the components used in this embodiment are a strict subset of the real time interpreter embodiment. Specifically, an image train is captured on a specialized device with a setup of several single-lens cameras 401 and streamed, either in real time or after capturing is finished. Each frame goes through the feature extractor 403, which is equivalent to 203 in the unconstrained interpretation embodiment. Then, in the processing module 404, the comparator 409 (equivalent to 211) determines if the feature vector is contained within the list of pre-recorded sentences in the sentence base 410 (a database of sentences) via K Nearest-Neighbors (KNN) with a Dynamic Time Warping (DTW) distance metric. If the feature queue is found, the top three options are sent to the end user for presentation in 72.
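The top-three retrieval in 409 can be sketched as a ranked KNN query over the sentence base. As before, this is an illustrative sketch: the sentence labels are hypothetical, and a compact DTW is included so the example is self-contained.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two feature sequences."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]

def top_k_sentences(query, sentence_base, k=3):
    """Return the k most likely sentence labels, nearest first,
    as presented to the user in menu 72."""
    ranked = sorted(sentence_base,
                    key=lambda label: dtw_distance(query, sentence_base[label]))
    return ranked[:k]
```

Restricting the output to a small ranked menu is what makes this embodiment robust: the user confirms among a few candidates rather than trusting a single unconstrained translation.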

[0037] Question Answering System Embodiment

[0038] In the question answering system embodiment, a user is prompted to sign a question to the system in 81. They then sign into the capture system in 82. The sign language is translated into gloss or English via the Real Time Interpreter embodiment presented in the disclosure above. Finally, the output is sent through an off-the-shelf question answering system to produce the output 83.