Patent classifications
H04M2203/255
Systems and methods for facilitating communication between a user and a service provider
Systems and methods are disclosed for generating a dynamic customized script to facilitate communication between a user and a service provider. The method includes receiving, via an interactive voice response (IVR) system, a request from a mobile device associated with at least one user. Contextual information associated with the at least one user is processed, in real-time, based, at least in part, on the request. A dynamic customized script specific to the request is generated, in real-time, based, at least in part, on the processing of the contextual information. The request is routed, via the IVR system, to an agent from a pool of agents of the service provider. A presentation of the dynamic customized script is generated in a user interface of a device associated with the agent, wherein the dynamic customized script provides step-by-step guidance to the agent for handling the request of the at least one user.
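The flow described in the abstract — process context, generate a tailored script, route to an agent, and present the script — can be sketched as follows. All names here (the context store, agent pool, and intent labels) are illustrative assumptions, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_id: str
    intent: str  # e.g. "billing_dispute"

# Hypothetical contextual store and agent pool, for illustration only.
CONTEXT = {"u1": {"plan": "prepaid", "recent_tickets": 2}}
AGENTS = ["agent-7", "agent-9"]

def generate_script(request: Request, context: dict) -> list:
    """Build step-by-step guidance tailored to the request and context."""
    steps = [f"Greet the caller and confirm intent: {request.intent}."]
    if context.get("recent_tickets", 0) > 0:
        steps.append("Acknowledge the caller's recent open tickets.")
    steps.append(f"Resolve the {request.intent} per the {context['plan']} plan policy.")
    return steps

def route(request: Request):
    """Process context, generate the script, and pick an agent."""
    context = CONTEXT[request.user_id]
    script = generate_script(request, context)
    # Deterministic stand-in for the IVR system's agent selection.
    agent = AGENTS[sum(map(ord, request.user_id)) % len(AGENTS)]
    return agent, script

agent, script = route(Request("u1", "billing_dispute"))
```

The script is then rendered in the chosen agent's user interface, one step per line.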
ALTERATION OF SPEECH WITHIN AN AUDIO STREAM BASED ON A CHARACTERISTIC OF THE SPEECH
In some implementations, a system may receive an audio stream associated with a call between a user and an agent. The system may process, using a speech alteration model, speech from a first channel of the audio stream to alter the speech from having a first speech characteristic to having a second speech characteristic, wherein the speech alteration model is trained based on reference audio data associated with the first speech characteristic and the second speech characteristic and based on reference speech data associated with the first speech characteristic and the second speech characteristic. The system may extract the speech from the first channel that has the first speech characteristic. The system may provide, within a second channel of the audio stream, altered speech that corresponds to the speech and that has the second speech characteristic.
System and method for redirecting inbound-voice-interactions to digital channels in a contact center
A computerized-method for redirecting inbound-voice-interactions to digital channels in a contact center, is provided herein. The computerized-method includes: (i) operating a digital-qualifier module to determine a digital-medium-transition-quotient, of an inbound-voice-interaction of a customer in an inbound-queue. The digital-medium-transition-quotient is an indication of a level of suitability of a digital-communication-channel to resolve a customer issue; (ii) operating an interaction-redirection module to determine a digital-communication-channel for redirection of the inbound-voice-interaction, based on customer-preference and the determined digital-medium-transition-quotient; and (iii) forwarding the inbound-voice-interaction and the determined digital-communication-channel to an Automatic Call Distribution (ACD) system to be carried-out by an agent via the determined digital-communication-channel when the determined digital-medium-transition-quotient is above a preconfigured threshold.
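The three-step routing above reduces to a threshold decision combined with customer preference. A minimal sketch, assuming hypothetical issue-to-score mappings and an illustrative threshold value (the patent specifies neither):

```python
THRESHOLD = 0.7  # preconfigured threshold (illustrative value)

def digital_medium_transition_quotient(issue: str) -> float:
    """Step (i): score how suitable a digital channel is for the issue.
    The scores here are made-up placeholders."""
    scores = {"password_reset": 0.9, "address_change": 0.8, "complex_claim": 0.3}
    return scores.get(issue, 0.5)

def select_channel(preferences: list, quotient: float):
    """Step (ii): pick a digital channel from customer preference,
    or None to keep the interaction in the voice queue."""
    channels = ["chat", "email", "sms"]
    if quotient <= THRESHOLD:
        return None
    for pref in preferences:
        if pref in channels:
            return pref
    return channels[0]

# Step (iii): a non-None result would be forwarded to the ACD system.
q = digital_medium_transition_quotient("password_reset")
channel = select_channel(["email", "chat"], q)
```

Interactions scoring at or below the threshold stay in the inbound voice queue unchanged.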
ALERTING A USER TO A CHANGE IN AN AUDIO STREAM
Disclosed are methods and systems for alerting a user to a change in an audio stream. In an aspect, a user device of the user receives the audio stream, detects a change in an audio pattern occurring in the audio stream, wherein the detection of the change in the audio pattern occurs when the audio stream is muted, and in response to the detection of the change in the audio pattern, provides an alert to the user that indicates the change in the audio pattern has occurred.
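One simple way to detect "a change in an audio pattern" while muted is to compare short-term energy against a running baseline. This is a sketch under assumed parameters (window size, jump ratio); the patent does not prescribe a detection method:

```python
def detect_pattern_change(levels, window=3, ratio=2.0):
    """Return the index where the level jumps to at least `ratio` times
    the average of the previous `window` samples, else None."""
    for i in range(window, len(levels)):
        baseline = sum(levels[i - window:i]) / window
        if baseline > 0 and levels[i] / baseline >= ratio:
            return i
    return None

def maybe_alert(levels, muted):
    """Per the abstract, detection runs only while the stream is muted."""
    if not muted:
        return None
    i = detect_pattern_change(levels)
    return f"Audio pattern changed at sample {i}" if i is not None else None

# Quiet hold music followed by a voice coming back on the line.
alert = maybe_alert([0.1, 0.1, 0.12, 0.5, 0.5], muted=True)
```

In practice the alert would be surfaced as a vibration, tone, or on-screen notification on the user device.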
Personalizing the audio visual experience during telecommunications
A method and system are provided. The method includes identifying content in a telecommunication session between a caller and one or more other parties. The method further includes dynamically personalizing media provided to the caller on a telecommunication device during at least a portion of a subsequent telecommunication session between the caller and at least one of the one or more other parties based on the identified content in the telecommunication session. The telecommunication session occurs prior to the subsequent telecommunication session.
Artificial ventriloquist-like contact center agents
The need for efficient and effective communications is of key importance to contact centers. Agent communications with customers are designed to maximize results while minimizing resources, in particular the time required for human agents to be engaged with a particular customer. Often the involvement of two agents in a communication can both improve customer satisfaction and better produce the intended result of the communication. However, engaging two (or more) live agents is resource-intensive. By providing a virtual agent controlled, entirely or in part, by a live agent, the customer may be presented with the appearance of two agents while requiring the human resources of a single agent.
Conveying attention information in virtual conference
A method of executing a virtual conference among a plurality of nodes including a first node, wherein there is a display device associated with the first node that is configured to display a virtual conference window containing images of participants at other nodes of the plurality of nodes, is presented. The method entails activating the virtual conference window in response to receiving a selection in the virtual conference window, wherein the activating of the virtual conference window triggers a process of identifying one of the nodes as an attention recipient and displaying the attention recipient's image differently than images of other nodes. Where a private chat function is available, the attention recipient may be identified based on whom the participant at the first node is chatting with. Images may be augmented to add an illusion of space and distance.
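The chat-based identification of the attention recipient, and the differential rendering that follows, can be sketched as below. The node names and style labels are illustrative assumptions:

```python
def attention_recipient(active_node, chat_sessions):
    """Identify the attention recipient for `active_node` from its
    private chat session, if any. `chat_sessions` maps a node to the
    peer it is currently chatting with."""
    return chat_sessions.get(active_node)

def render_styles(nodes, recipient):
    """Display the attention recipient's image differently from the rest."""
    return {n: ("highlighted" if n == recipient else "normal") for n in nodes}

nodes = ["A", "B", "C"]
recipient = attention_recipient("A", {"A": "C"})
styles = render_styles(nodes, recipient)
```

A real implementation would apply the "highlighted" treatment visually, for example by enlarging or foregrounding the recipient's image to add the illusion of space and distance.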
VIRTUAL VOICE RESPONSE AGENT INDIVIDUALLY CONFIGURED FOR A USER
A call can be received from a user. At least one input can be received from the user. Responsive to receiving the input(s) from the user, a user profile for the user can be identified or created. The user profile can indicate one or more speech traits of the user. A virtual intelligent voice response (VIVR) agent individually configured for the user can be identified or created. The VIVR agent can be configured to include, or identify, one or more VIVR agent features corresponding to the speech trait(s) of the user. The user can be interacted with on the call by generating synthesized speech using parameters specified by the VIVR agent feature(s) included in, or identified by, the VIVR agent individually configured for the user.
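The core of the VIVR configuration is a mapping from detected speech traits to synthesis parameters stored in a per-user profile. A minimal sketch, with made-up trait names and parameter values (the patent names neither):

```python
# Hypothetical mapping from detected speech traits to synthesis parameters.
TRAIT_PARAMS = {
    "fast_speaker": {"rate": 1.2},
    "low_pitch": {"pitch": -2},
}

def build_vivr_agent(detected_traits):
    """Create a per-user VIVR profile: keep recognized traits and
    derive the synthesis parameters they imply, over defaults."""
    traits = [t for t in detected_traits if t in TRAIT_PARAMS]
    params = {"rate": 1.0, "pitch": 0}  # default synthesis parameters
    for t in traits:
        params.update(TRAIT_PARAMS[t])
    return {"traits": traits, "synthesis_params": params}

agent = build_vivr_agent(["fast_speaker", "unknown_trait"])
```

On subsequent calls, the stored profile would be looked up rather than rebuilt, and its parameters fed to the speech synthesizer.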
CONFIGURING OUTPUT CONTROLS ON A PER-ONLINE IDENTITY AND/OR A PER-ONLINE RESOURCE BASIS
A process includes receiving, from a user identity, instructions for output characteristics including one or more of audio characteristics for rendering or capturing audio data or visual characteristics for rendering or capturing visual data. The process also includes determining, in response to the received instructions, output controls which effect the one or more of audio characteristics or visual characteristics, and associating the output controls with an online identity or resource. The process further includes storing the associated output controls and detecting an interaction with the online identity or resource. Moreover, the process includes accessing, in response to the detection of the interaction, the stored output controls, and enabling an effect, based on the output controls, of one or more of the audio characteristics or the visual characteristics with respect to interaction with the online identity or resource.
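The store/detect/apply cycle in this abstract amounts to a keyed lookup of output controls per online identity or resource. A sketch under assumed control names and defaults:

```python
class OutputControlStore:
    """Associate output controls (audio/visual characteristics) with an
    online identity or resource, and apply them when an interaction
    with that identity or resource is detected."""

    def __init__(self):
        self._controls = {}

    def configure(self, identity, controls):
        """Store the association between the identity and its controls."""
        self._controls[identity] = controls

    def on_interaction(self, identity):
        """Access stored controls on interaction; fall back to
        illustrative defaults when none were configured."""
        return self._controls.get(identity, {"volume": 1.0, "video": True})

store = OutputControlStore()
store.configure("alice@example.com", {"volume": 0.5, "video": False})
active = store.on_interaction("alice@example.com")
```

The returned controls would then be handed to the rendering or capture layer to take effect for that interaction only.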