Patent classifications
H04M1/2475
COMMUNICATION DEVICE AND METHODS FOR USE BY HEARING IMPAIRED
A method for maintaining contact information in a hearing impaired assisted user's communication device includes the steps of (a) providing a web site for altering assisted user contact information, (b) linking a proxy device to the web site, (c) receiving an identifier associated with the assisted user's device via the proxy device, (d) identifying an assisted user's device via the received identifier, (e) enabling the proxy device to be used to modify contact information for the assisted user associated with the received identifier, (f) starting a timer to time out a sync timeout period, (g) during the sync timeout period, receiving an indication via the assisted user's device confirming a desire to update the assisted user's contact information, (h) updating the assisted user's contact information, and (i) at the end of the timeout period, ceasing an indication that updated data is ready to be used from the assisted user's device.
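The sync-timeout flow in this abstract can be sketched as follows. This is a minimal sketch, not the patented implementation; the class, method names, and the 60-second window are illustrative assumptions.

```python
import time

SYNC_TIMEOUT_S = 60.0  # assumed length of the sync timeout period

class ContactSync:
    def __init__(self, contacts):
        self.contacts = dict(contacts)  # assisted user's current contacts
        self.pending = None             # edits staged via the proxy device
        self.deadline = None            # end of the sync timeout period

    def stage_update(self, edits):
        """Proxy device submits edits; start the sync timeout timer."""
        self.pending = edits
        self.deadline = time.monotonic() + SYNC_TIMEOUT_S

    def update_ready(self):
        """Indication shown on the assisted user's device until timeout."""
        return self.pending is not None and time.monotonic() < self.deadline

    def confirm(self):
        """Assisted user confirms during the timeout period: apply edits."""
        if self.update_ready():
            self.contacts.update(self.pending)
            self.pending = None
            return True
        self.pending = None  # timed out: cease the indication
        return False
```

After the deadline passes, `update_ready()` returns False, which models step (i): the device stops indicating that updated data is ready.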
SEMIAUTOMATED RELAY METHOD AND APPARATUS
A captioning relay for captioning hearing user (HU) voice signals comprising a plurality of separate captioning resources and a captioning administrator module that receives HU voice signal segments corresponding to a plurality of separate ongoing calls between HUs and assisted users (AUs) and provides the voice signal segments in a first in, first out order to the captioning resources, the administrator module providing each voice signal segment from each call to any one of the captioning resources to be captioned without regard to which captioning resource captioned prior voice signal segments generated during the call, the administrator module further receiving caption segments back from the captioning resources and providing those caption segments to AU devices associated with the calls that generated the corresponding HU voice signal segments, and wherein the number of captioning resources is less than the number of ongoing calls.
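The first-in, first-out dispatch described above can be sketched with a pair of queues; all names here are illustrative, not from the patent, and resources are assumed to caption segments synchronously for simplicity.

```python
from collections import deque

def dispatch(segments, resources):
    """Assign each (call_id, voice_segment) to the next free captioning
    resource in first-in, first-out order, without regard to which
    resource captioned the call's earlier segments."""
    queue = deque(segments)   # segments arriving from many ongoing calls
    free = deque(resources)   # fewer resources than ongoing calls
    assignments = []
    while queue:
        seg = queue.popleft()
        res = free.popleft()  # any free resource will do
        assignments.append((res, seg))
        free.append(res)      # resource returns to the pool
    return assignments
```

Note that consecutive segments from the same call (call "A" below) land on different resources, which is the point of the segment-level, call-agnostic assignment.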
SEMIAUTOMATED RELAY METHOD AND APPARATUS
A system includes a first user device configured to perform captioning session operations, a call-assistant (CA) device remote from the first user device, and a remote relay server separate from the CA device. The relay server initiates a captioning process, receives, from the first user device, a request to initiate a captioning session, establishes the session, assigns the session to the CA, receives first audio data from the first user device derived from a second user device, directs the first audio data to the CA device, receives, from the CA device, second audio data related to the first audio data and derived from CA speech, accesses an ASR engine trained to the CA voice, generates captioned text including a transcription of the second audio data, generates screen information including the transcription, directs the screen information to the CA device, and directs the captioned text to the first user device.
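The relay-server sequence in this abstract can be sketched as a single routing function. The four callables are stand-ins for the CA device, the CA-trained ASR engine, and the two delivery paths, none of which are specified at this level of detail.

```python
def run_captioning_session(hu_audio, revoice, transcribe, to_ca, to_user):
    """Route hearing-user audio through a call assistant (CA) and an ASR
    engine trained to that CA's voice, then deliver the captions."""
    ca_audio = revoice(hu_audio)   # CA listens to and re-speaks the audio
    text = transcribe(ca_audio)    # ASR engine trained to the CA voice
    to_ca(text)                    # screen information to the CA device
    to_user(text)                  # captioned text to the first user device
    return text
```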
TRANSCRIPTION GENERATION FROM MULTIPLE SPEECH RECOGNITION SYSTEMS
A method may include obtaining first audio data originating at a first device during a communication session between the first device and a second device. The method may also include obtaining a first text string that is a transcription of the first audio data, where the first text string may be generated using automatic speech recognition technology using the first audio data. The method may also include obtaining a second text string that is a transcription of second audio data, where the second audio data may include a revoicing of the first audio data by a captioning assistant and the second text string may be generated by the automatic speech recognition technology using the second audio data. The method may further include generating an output text string from the first text string and the second text string and using the output text string as a transcription of the speech.
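The abstract does not specify the fusion rule for the two text strings; the sketch below is one minimal assumption: align the two transcriptions word by word and prefer the revoiced (CA-derived) transcription wherever they disagree.

```python
import difflib

def fuse(asr_text, revoiced_text):
    """Merge a direct ASR transcription with a transcription of the
    captioning assistant's revoicing into one output text string."""
    a, b = asr_text.split(), revoiced_text.split()
    out = []
    sm = difflib.SequenceMatcher(a=a, b=b)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])
        else:
            out.extend(b[j1:j2])  # prefer the revoicing on disagreement
    return " ".join(out)
```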
STORING MESSAGES
A computer-implemented method to store messages is disclosed. The method may include obtaining a minimum message length for stored messages. The minimum message length may be greater than zero. The method may further include determining an amount of available storage space allocated for storage of user messages on a computer-readable medium. The method may also include, in response to a communication session not being established between a user of a first communication device and a second communication device, and in response to the amount of available storage space being greater than zero but insufficient to store a message of the minimum message length, not storing a user message and providing an indication that there is no available storage space.
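The storage decision above can be sketched as follows; the numeric "length" units (e.g. seconds of audio) and all names are assumptions.

```python
def storage_decision(available, min_len, session_active):
    """Return (store, indication) for an incoming user message."""
    if session_active:
        return False, None  # only store when no session is established
    if available >= min_len:
        return True, None
    # Available space may be greater than zero yet too small for even a
    # minimum-length message: do not store, and indicate no space.
    return False, "no available storage space"
```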
HEARING ACCOMMODATION
A method may include obtaining a first audio signal including first speech originating at a remote device during a communication session between the remote device and a communication device and obtaining a second audio signal including second speech originating at the communication device during the communication session between the remote device and the communication device. The method may also include obtaining a characteristic of the communication session from one or more of: the first audio signal, the second audio signal, and settings of the communication device and determining a hearing level of a user of the communication device using the characteristic of the communication session.
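The abstract does not specify how a hearing level is derived from the communication-session characteristic; the toy scoring heuristic below is purely an assumption, illustrating one way device settings and call behavior could feed such a determination.

```python
def estimate_hearing_level(volume_fraction, repeat_requests, captions_enabled):
    """Score 0 (typical) to 3 (likely impaired) from call characteristics:
    the device volume setting, how often the user asked for repetition,
    and whether captions are already enabled in the device settings."""
    score = 0
    if volume_fraction >= 0.8:
        score += 1
    if repeat_requests >= 2:
        score += 1
    if captions_enabled:
        score += 1
    return score
```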
ELECTRONIC DEVICE THAT LIMITS ELECTROMAGNETIC EMISSIONS FROM MULTIPLE BATTERIES
An electronic device, method, and computer program product enable limiting electromagnetic emissions from current between batteries. A first battery is positioned proximal to an earpiece speaker within the electronic device. At least one second battery is positioned at a different location within the electronic device that is not proximal to the earpiece speaker. A controller is electrically connected to a switch that is electrically connected in-line with the first battery. The controller selectively toggles the switch between first and second switch states. The controller initiates activation of a first software mode of the electronic device corresponding to operation of the earpiece speaker. In response to detecting the activation, the controller toggles the switch to the first switch state, in which the switch limits current drawn from the first battery while the electronic device is in the first software mode, thus reducing baseband electromagnetic emissions emanating from the first battery.
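The controller behavior described above can be sketched as follows; the switch driver is a stand-in callable, since the hardware interface is not specified at this level.

```python
class EmissionController:
    def __init__(self, set_switch):
        self.set_switch = set_switch  # drives the switch in-line with the first battery
        self.state = None

    def on_mode_change(self, earpiece_mode_active):
        # While the earpiece-speaker software mode is active, limit current
        # drawn from the battery nearest the earpiece to reduce baseband
        # electromagnetic emissions; otherwise allow normal draw.
        self.state = "limited" if earpiece_mode_active else "normal"
        self.set_switch(self.state)
```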
DEVICE INDEPENDENT TEXT CAPTIONED TELEPHONE SERVICE
A communication system and method for displaying text captions corresponding to voice communications between an assisted user's mobile wireless device and a separate hearing user's device includes an appliance having at least one communication component configured to enable the appliance to communicate with a relay, a display, and a processor operably coupled to the at least one communication component and the display. The processor is configured to enable the assisted user to establish an association between the appliance and the mobile device, receive text originating at the relay, the text corresponding to a transcript of the hearing user's voice signal originating at the hearing user's device, and cause text captions corresponding to the received text to be displayed on the display.
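The appliance-side flow can be sketched as below; pairing by a simple device identifier and the display callable are illustrative assumptions.

```python
class CaptionAppliance:
    def __init__(self, display):
        self.display = display      # renders captions on the appliance display
        self.paired_device = None

    def pair(self, mobile_device_id):
        """Establish the association between the appliance and the
        assisted user's mobile wireless device."""
        self.paired_device = mobile_device_id

    def on_relay_text(self, text):
        """Text from the relay: a transcript of the hearing user's voice."""
        if self.paired_device is not None:
            self.display(text)
```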