H04M2242/12

Applying user preferences, behavioral patterns and/or environmental factors to an automated customer support application
10694036 · 2020-06-23 ·

A method and apparatus of applying user profile information to a customized application are disclosed. One example method of operation may include receiving an inquiry message or call from a user device, identifying and authorizing the user from inquiry message information received from the inquiry message, retrieving a user profile comprising at least one user preference, applying the at least one user preference to a user call processing application, and transmitting menu options to the user device based on the applied at least one user preference.
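The preference-driven menu flow described above can be sketched as follows. This is an illustrative toy, not the patented implementation; the names (`UserProfile`, `build_menu`, the `favorite` preference key) are assumptions introduced for the example.

```python
# Hypothetical sketch: identify the caller, load a stored profile, and
# reorder IVR menu options according to the caller's stored preference.
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    user_id: str
    preferences: dict = field(default_factory=dict)

# Toy profile store keyed by caller ID (illustrative data).
PROFILES = {
    "555-0100": UserProfile("555-0100", {"favorite": "technical support"}),
}

DEFAULT_MENU = ["billing", "technical support", "sales"]

def build_menu(caller_id: str) -> list[str]:
    """Return menu options with the caller's preferred option promoted first."""
    profile = PROFILES.get(caller_id)
    if profile is None:
        return DEFAULT_MENU
    favorite = profile.preferences.get("favorite")
    rest = [m for m in DEFAULT_MENU if m != favorite]
    return ([favorite] if favorite in DEFAULT_MENU else []) + rest
```

An unrecognized caller falls back to the default menu, which mirrors the abstract's split between profile-based and generic call processing.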

CONSISTENT AUDIO GENERATION CONFIGURATION FOR A MULTI-MODAL LANGUAGE INTERPRETATION SYSTEM

A configuration is implemented via a processor to receive a request for spoken language interpretation of a user query from a first spoken language to a second spoken language. The first spoken language is spoken by a user situated at an audio-based device that is remotely situated from the customer care platform. The user query is sent from the audio-based device by the user to the customer care platform. The configuration performs, at a language interpretation platform, a first spoken language interpretation of the user query from the first spoken language to the second spoken language. Further, the configuration transmits, from the language interpretation platform to the customer care platform, the first spoken language interpretation so that a customer care representative speaking the second spoken language understands the first spoken language being spoken by the user.
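The relay described above (receive a query in the user's language, interpret it, forward the result to the customer care side) can be outlined as below. `interpret` is a stand-in for the real language interpretation engine, and the tiny word table is invented for illustration.

```python
# Illustrative sketch of the interpretation relay between an audio-based
# device, the language interpretation platform, and the customer care platform.
def interpret(text: str, src: str, dst: str) -> str:
    # Stand-in: a real platform would invoke a speech/translation engine here.
    lexicon = {("es", "en"): {"hola": "hello", "ayuda": "help"}}
    table = lexicon.get((src, dst), {})
    return " ".join(table.get(word, word) for word in text.split())

def relay_query(query: str, src: str, dst: str) -> dict:
    """Interpret the user query and package it for the customer care platform."""
    return {
        "original": query,
        "interpreted": interpret(query, src, dst),
        "source_language": src,
        "target_language": dst,
    }
```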

Validation of revised IVR prompt translation

An example operation may include one or more of transferring a copy of a plurality of revised translation data sets to be added to an IVR application into a grid structure, each revised translation data set comprising a prompt name in a first field, an IVR prompt in a second field, a translation of the IVR prompt into a different language in a third field, and a timestamp in a fourth field; executing, via a processor, an accuracy validation on the plurality of revised translation data sets, wherein, for each revised translation data set, the processor identifies whether the respective translation in the third field is an accurate translation of the respective IVR prompt in the second field based on attributes of the respective translation and the respective IVR prompt; and displaying results of the accuracy validation via a user interface.
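The four-field grid and per-row validation might look like the sketch below. The length-ratio check is purely an illustrative placeholder for whatever attribute-based accuracy test the system actually applies; the row layout follows the abstract (prompt name, prompt, translation, timestamp).

```python
# Hedged sketch of the revised-translation grid and its accuracy validation.
GRID = [
    ("greeting", "Welcome to support", "Bienvenido al soporte", "2020-01-01T00:00:00"),
    ("goodbye",  "Thank you for calling", "", "2020-01-01T00:05:00"),
]

def validate_row(row: tuple) -> bool:
    """Return True if the translation passes a simple attribute check."""
    _name, prompt, translation, _timestamp = row
    if not translation:
        return False
    # Placeholder heuristic: translations of wildly different length are suspect.
    ratio = len(translation) / len(prompt)
    return 0.5 <= ratio <= 2.0

def validate_grid(grid: list) -> dict:
    """Map each prompt name to the result of its accuracy validation."""
    return {row[0]: validate_row(row) for row in grid}
```

A user interface would then render the returned mapping, flagging rows that failed.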

METHOD FOR SUPPORTING TRANSLATION OF GLOBAL LANGUAGES AND MOBILE PHONE
20200125644 · 2020-04-23 ·

The present disclosure provides a method for supporting translation of global languages, and the product thereof. The method includes the following steps: receiving, by a smart phone, a calling request sent by a terminal, connecting the calling request, and establishing a calling connection; receiving, by the smart phone, first voice information transmitted through the calling connection, identifying a first language and a first dialect that correspond to the first voice information, obtaining a translation model corresponding to the first dialect, and translating the first voice information of the first dialect into second voice information of a second dialect; and playing, by the smart phone, the second voice information of the second dialect by using a speaker device.
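The dialect-specific step (identify the dialect, fetch a translation model keyed by that dialect, apply it) can be sketched on transcribed text as below. The dialect codes, the trivial keyword-based identifier, and the model contents are all invented for illustration.

```python
# Minimal sketch of dialect identification plus dialect-keyed model lookup.
MODELS = {
    # Hypothetical en-US -> en-GB word substitutions.
    "en-US": {"elevator": "lift"},
}

def identify_dialect(text: str) -> str:
    # Stand-in for real dialect identification on the voice signal.
    return "en-US" if "elevator" in text else "en-GB"

def translate_to_dialect(text: str) -> str:
    """Translate text using the model that matches its identified dialect."""
    dialect = identify_dialect(text)
    model = MODELS.get(dialect, {})
    return " ".join(model.get(word, word) for word in text.split())
```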

GLOBAL SIMULTANEOUS INTERPRETATION MOBILE PHONE AND METHOD
20200125645 · 2020-04-23 ·

The present disclosure provides a global simultaneous interpretation method and the product thereof. The method includes the following steps: receiving, by a smart phone, a calling request sent by a terminal, connecting the calling request, and establishing a calling connection; receiving, by the smart phone, first voice information transmitted through the calling connection, and when the first voice information is identified as being in a non-specified language, translating the first voice information into second voice information of a specified language; and playing, by the smart phone, the second voice information by using a speaker device.

Auto-translation for multi user audio and video

The disclosed subject matter provides a system, computer readable storage medium, and a method providing an audio and textual transcript of a communication. A conferencing service may receive audio or audio visual signals from a plurality of different devices that receive voice communications from participants in a communication, such as a chat or teleconference. The audio signals represent voice (speech) communications input into the respective devices by the participants. A translation services server may receive over a separate communication channel the audio signals for translation into a second language. As managed by the translation services server, the audio signals may be converted into textual data. The textual data may be translated into text of different languages based on the language preferences of the end user devices in the teleconference. The translated text may be further translated into audio signals.
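The per-device fan-out step (convert speech to text once, then translate for each device's language preference) can be outlined as below. The `translate` word table is a toy stand-in for the translation service.

```python
# Illustrative fan-out: one utterance, translated once per receiving device
# according to that device's stored language preference.
def translate(text: str, target: str) -> str:
    # Stand-in for the translation services server.
    table = {"fr": {"hello": "bonjour"}, "de": {"hello": "hallo"}}
    words = table.get(target, {})
    return " ".join(words.get(w, w) for w in text.split())

def fan_out(utterance: str, preferences: dict) -> dict:
    """Translate one utterance for every device's preferred language."""
    return {device: translate(utterance, lang) for device, lang in preferences.items()}
```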

Computer-based systems and methods configured for one or more technological applications for the automated assisting of telephone agent services

In at least some embodiments, a system includes a memory and a processor configured to convert an audio stream of a customer's speech during a customer call session into customer-originated text. The customer-originated text is displayed in a first chat interface. A request from a first call center agent is sent to a second call center agent via the first chat interface to interact with the customer during the customer call session and is displayed in a second chat interface. The second agent is allowed to participate in the customer call session when the second call center agent accepts the request from the first call center agent. First agent-originated text and second agent-originated text generated during the customer call session are merged to form combined agent-originated text, which is synthesized into computer-generated agent speech having the voice of a computer-generated agent and communicated to the customer over the voice channel.
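The merge step, where two agents' messages become one transcript voiced by a single synthetic agent, can be sketched as a chronological interleave. The `(timestamp, text)` pair format is an assumption for illustration.

```python
# Sketch of merging first- and second-agent text into one combined transcript
# that a single computer-generated voice would speak to the customer.
def merge_agent_text(first: list, second: list) -> str:
    """Each list holds (timestamp, text) pairs; merge them chronologically."""
    combined = sorted(first + second, key=lambda msg: msg[0])
    return " ".join(text for _timestamp, text in combined)
```

Sorting by timestamp preserves the conversational order regardless of which agent typed each fragment, which is what lets the synthesized voice read the merged text as one coherent reply.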

Conversation assistance system
10560567 · 2020-02-11 ·

Systems and methods for providing conversation assistance include receiving, from at least one user device of a user, conversation information and determining that the conversation information is associated with a conversation involving the user and a first person that is associated with first conversation assistance information in a non-transitory memory. Body measurement data of the user is retrieved from the at least one user device. A need for conversation assistance in the conversation involving the user and the first person is detected using the body measurement data. First conversation assistance information associated with the first person is retrieved from the non-transitory memory. The first conversation assistance information associated with the first person is provided through the at least one user device.
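The detection step, using body measurement data to decide when to surface stored assistance about the other party, might look like the following. The heart-rate field, the threshold, and the stored assistance text are all assumptions for the sake of the sketch, not details from the patent.

```python
# Hedged sketch: a body-measurement threshold triggers retrieval of stored
# conversation assistance information about the person being spoken to.
from typing import Optional

ASSISTANCE = {"alice": "Met at the 2019 conference; prefers email follow-up."}

def maybe_assist(person: str, heart_rate_bpm: int, threshold: int = 100) -> Optional[str]:
    """Return assistance info when the measurement suggests the user needs it."""
    if heart_rate_bpm >= threshold:
        return ASSISTANCE.get(person)
    return None
```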

System and method for controlling client electronic devices in wireless local ad hoc network

Various aspects of a system and method for controlling client electronic devices in a wireless local ad hoc network are disclosed. The system includes one or more circuits in a server configured to receive a plurality of speaking-slot requests from a plurality of client electronic devices for a presented content. A client electronic device is selected based on an acceptance of a corresponding speaking-slot request. The acceptance of the corresponding speaking-slot request is based on at least an analysis of one or more image frames of a user associated with the client electronic device and a context of the presented content. At least an audio stream provided by the user is received from the selected client electronic device. The selected client electronic device is controlled based on one or more parameters associated with at least the selected client electronic device.
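The acceptance decision, combining image-frame analysis with the context of the presented content, can be reduced to a two-condition check like the one below. The score threshold and topic-matching scheme are illustrative placeholders for the real analysis.

```python
# Illustrative acceptance check for a speaking-slot request: the image-frame
# analysis score must clear a threshold AND the requester's topic must match
# the context of the presented content.
def accept_request(frame_score: float, topic: str, presented_topics: set,
                   min_score: float = 0.5) -> bool:
    """Return True if the speaking-slot request should be accepted."""
    return frame_score >= min_score and topic in presented_topics
```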

Method and system for enabling automated audio keyword monitoring with video relay service calls
10547813 · 2020-01-28 ·

A method and system for video relay service calling with audio monitoring is described. The method includes initiating a video relay service call, the video relay service call including a video portion between a sign language interpreter and a user who is deaf, hard-of-hearing, or speech impaired (D-HOH-SI); an audio portion between the sign language interpreter and a called party; and one or more call parameters. The method further includes determining whether one of the call parameters indicates that the video relay service call should be monitored; in response to determining that the video relay service call should be monitored, directing at least the audio portion of the video relay service call to an audio monitoring service so that the audio portion of the video relay service call between the sign language interpreter and the called party can be monitored by the audio monitoring service.
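The routing decision in this abstract, checking a call parameter and directing the audio portion to a monitoring service when indicated, can be sketched as below. The `monitor_audio` parameter name and destination labels are assumptions for illustration.

```python
# Sketch: inspect call parameters and, when a monitoring flag is present,
# direct the audio portion of the video relay service call to an audio
# monitoring service in addition to the called party.
def route_audio(params: dict) -> list[str]:
    """Return the destinations for the audio portion of the call."""
    destinations = ["called_party"]
    if params.get("monitor_audio"):
        destinations.append("audio_monitoring_service")
    return destinations
```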