Captioned Telephone Services Improvement
20210250441 · 2021-08-12
CPC classification (Electricity): H04M3/436; H04M3/42391; H04M2203/5018; H04M3/42382
International classification (Electricity): H04M3/42
Abstract
Internet Protocol captioned telephone service, often utilizing Automated Speech Recognition (ASR), can be used with conference calls to separate each party's speech as text, such as with text bubbles differentiated by caller on a device of the user. Additionally, a prioritized vocabulary can be maintained for each user and not shared with the public, so that if the user speaks words uncommon among the general public, those words can be identified more accurately by the telephone service. The service may learn and apply that vocabulary, and/or the user may provide words to the service.
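As a hedged illustration of caller-differentiated captions (the party names, colors, and bubble-rendering scheme below are hypothetical assumptions, not details from the disclosure), a transcription stream might tag each text segment with a party identifier so the user's device can render each caller's text as a distinct bubble:

```python
# Hypothetical sketch: tagging caption segments by party so a user's
# device can render differentiated text bubbles. Colors and bubble
# sides are illustrative assumptions, not specified by the disclosure.
from dataclasses import dataclass

@dataclass
class Caption:
    party_id: str   # e.g. "far_end", "third_party_1", "user"
    text: str

# Illustrative per-party display attributes (bubble side and color).
PARTY_STYLE = {
    "far_end":       {"side": "left",  "color": "blue"},
    "third_party_1": {"side": "left",  "color": "green"},
    "user":          {"side": "right", "color": "gray"},
}

def render_bubble(caption: Caption) -> str:
    """Format one caption as a labeled, styled bubble line."""
    style = PARTY_STYLE.get(caption.party_id,
                            {"side": "left", "color": "black"})
    return f"[{style['side']}/{style['color']}] {caption.party_id}: {caption.text}"

stream = [
    Caption("far_end", "Hello, can you hear me?"),
    Caption("third_party_1", "I just joined the call."),
    Caption("user", "Yes, loud and clear."),
]
for c in stream:
    print(render_bubble(c))
```

Any differentiation channel would do here (location, color, or an assigned identifier); the sketch combines all three in one label string.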
Claims
1. A method of audio to text transcription provided by a captioner company, comprising the steps of: a) connecting a far end caller to a user in audio communication via a telephone number managed by a captioned telephone service manager; b) said captioned telephone service manager utilizing a captioner to transcribe audio of the far end caller to text and sending the text to a device of the user in approximately real time; and c) a third party caller, separate from the far end caller and the user, joining in a conference call with the user and the far end caller, with the captioned telephone service manager utilizing the captioner to transcribe audio from the third party caller to text and provide it to the device of the user.
2. The method of claim 1 wherein text of the third party caller is differentiated from text of the far end caller for a duration of the connection as a conference call as provided to the device of the user.
3. The method of claim 2 wherein the text of the user is differentiated from text of the far end caller and the third party for the duration of the connection as a conference call as provided to the device of the user.
4. The method of claim 2 wherein differentiation of text is provided by one of different text bubble locations on the device, differing colors, and assigning identifiers to the far end caller, the third party and the user so the user can easily correlate text to one of the third party, the far end caller and the user.
5. The method of claim 4 wherein one of the far end caller calls the telephone number managed by the captioned telephone service to initiate the connection, and the user joins to the telephone number managed by the captioned telephone service after connection with the far end caller.
6. The method of claim 4 wherein more than one third party joins the conference call and the text of the third parties is differentiated.
7. The method of claim 4 wherein the captioner is automated speech recognition software controlled by the captioned telephone service manager.
8. The method of claim 4 further comprising a prioritized vocabulary for the user which is not provided for use with a similar priority to a public outside of the user.
9. The method of claim 8 wherein the user provides words to the captioned telephone service manager which may be less common than those used by the public.
10. The method of claim 9 wherein the words are one of geographic and technical in nature.
11. The method of claim 9 wherein the user ascribes a priority to words likely to be used in conversations with the user.
12. The method of claim 8 wherein the captioned telephone service manager provides a prioritized vocabulary for each user based on communications of the user.
13. The method of claim 1 wherein one of the far end caller calls the telephone number managed by the captioned telephone service to initiate the connection, and the user joins to the telephone number managed by the captioned telephone service after connection with the far end caller.
14. The method of claim 1 wherein one of the far end caller calls the telephone number managed by the captioned telephone service to initiate the connection, and the user joins to the telephone number managed by the captioned telephone service after connection with the far end caller.
15. The method of claim 4 wherein the captioner is automated speech recognition software controlled by the captioned telephone service manager.
16. A method of audio to text transcription provided by a captioner company, comprising the steps of: a) connecting a far end caller to a user in audio communication via a telephone number managed by a captioned telephone service manager; b) said captioned telephone service manager utilizing a captioner to transcribe audio of the far end caller to text and sending the text to a device of the user in approximately real time; and c) the captioned telephone service manager maintaining a prioritized vocabulary for the user that is not shared with a larger public.
17. The method of claim 16 further comprising the step of: a third party caller, separate from the far end caller and the user, joining in a conference call with the user and the far end caller, with the captioned telephone service manager utilizing the captioner to transcribe audio from the third party caller to text and provide it to the device of the user.
18. The method of claim 16 wherein the captioned telephone service manager provides a prioritized vocabulary for each user based on communications of the user.
19. The method of claim 16 wherein the user provides words to the captioned telephone service manager which may be less common than those used by the public.
20. The method of claim 19 wherein the user ascribes a priority to words likely to be used in conversations with the user.
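The conference-call flow recited in claims 1-3 can be sketched as a session that registers parties and routes each party's transcribed text, tagged by speaker for differentiation, to the user's device. This is a minimal illustration only; the class and method names are hypothetical and not taken from the disclosure.

```python
# Hypothetical sketch of the claimed flow: a far end caller and a user
# are connected, a third party later joins the conference, and all
# transcribed text is delivered to the user's device tagged by speaker.
class ConferenceSession:
    def __init__(self):
        self.parties = []           # parties in the call, in join order
        self.user_device_feed = []  # (party, text) pairs sent to the user's device

    def join(self, party: str) -> None:
        """Add a party (far end caller, user, or third party) to the call."""
        self.parties.append(party)

    def caption(self, party: str, text: str) -> None:
        """Deliver transcribed text from a party, tagged for differentiation."""
        if party not in self.parties:
            raise ValueError(f"{party} is not in the call")
        self.user_device_feed.append((party, text))

session = ConferenceSession()
session.join("far_end")                          # step a: far end caller connected
session.join("user")
session.caption("far_end", "Hello?")             # step b: captions sent in near real time
session.join("third_party")                      # step c: third party joins the conference
session.caption("third_party", "Hi everyone.")   # third-party text is tagged separately
```

Because each feed entry carries the speaker tag, the device can apply any of the differentiation schemes of claim 4 (bubble location, color, or assigned identifier).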
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0021] The accompanying drawings illustrate preferred embodiments of the invention and, together with the description, serve to explain the invention. The drawings may not show elements to scale. These drawings are offered by way of illustration and not by way of limitation:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0027] A flow chart of this embodiment is shown in
[0031] Accordingly, the database 56 is not passed to other users, as normally happens in ASR technology. One likely word is “deaf,” which such a user employs with great frequency compared to the general public; the pronunciation of this word is often confused with “death,” which might otherwise appear more frequently in the text stream as a miscommunication of the term “deaf” by certain ASR software. Accordingly, the user can help the algorithm identify which words they use more frequently than the general public. How much weight is given to any particular word could be at least partially controlled by the user. Furthermore, these preferences could change over time, daily, or at other periods. Priorities could range from above average to high priority, etc.
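A minimal sketch of how a per-user prioritized vocabulary might bias hypothesis selection follows. The disclosure does not specify an algorithm, so the multiplicative weighting scheme and the score values below are assumptions chosen only to illustrate the “deaf” versus “death” example:

```python
# Hypothetical sketch: rescoring ASR hypotheses with a per-user
# prioritized vocabulary. A user who frequently says "deaf" can boost
# that word so it wins over the acoustically similar "death".
# All boost factors and scores are illustrative assumptions.

USER_VOCAB_BOOST = {"deaf": 2.0}  # per-user database, not shared with other users

def rescore(hypotheses, boost):
    """Pick the hypothesis with the highest boosted score.

    `hypotheses` is a list of (text, acoustic_score) pairs; each
    hypothesis's score is multiplied by the user's boost factor for
    every prioritized word it contains.
    """
    def score(text, base):
        factor = 1.0
        for word in text.lower().split():
            factor *= boost.get(word, 1.0)
        return base * factor

    return max(hypotheses, key=lambda h: score(h[0], h[1]))[0]

# ASR is slightly more confident in "death" acoustically, but the
# user's boost for "deaf" flips the choice:
hyps = [("death", 0.55), ("deaf", 0.45)]
print(rescore(hyps, USER_VOCAB_BOOST))  # → deaf
print(rescore(hyps, {}))                # → death (no user vocabulary)
```

The user-controlled weight of paragraph [0031] maps naturally onto the boost factor, and time-varying preferences could be modeled by swapping in a different boost table per period.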
[0033] Numerous alterations of the structure herein disclosed will suggest themselves to those skilled in the art. However, it is to be understood that the present disclosure relates to the preferred embodiment of the invention which is for purposes of illustration only and not to be construed as a limitation of the invention. All such modifications which do not depart from the spirit of the invention are intended to be included within the scope of the appended claims.