Patent classifications
H04M2203/552
Systems and methods relating to customer experience automation
A method for personalizing a delivery of services to a first customer including: providing a customer profile; updating the customer profile via performing a first process to collect interaction data, the first process including the steps of: monitoring activity on a communication device and, therefrom, detecting a first interaction with a first contact center; identifying data relating to the first interaction for collecting as the interaction data; and updating the customer profile to include the interaction data identified from the first interaction; generating an interaction predictor, the interaction predictor comprising knowledge about the first customer derived, at least in part, from the data stored within the customer profile, the knowledge comprising a behavioral factor attributable to the first customer given a first type of interaction; and augmenting the customer profile by storing therein the interaction predictor.
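The claimed steps can be sketched in code. This is a minimal illustration, not the patented implementation: the `CustomerProfile` fields, the `abandoned` flag, and the abandon-rate "behavioral factor" are all hypothetical stand-ins for whatever interaction data and knowledge a real system would derive.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    customer_id: str
    interactions: list = field(default_factory=list)  # collected interaction data
    predictors: dict = field(default_factory=dict)    # stored interaction predictors

def record_interaction(profile, interaction):
    """First process: collect interaction data and update the profile."""
    profile.interactions.append(interaction)

def generate_interaction_predictor(profile, interaction_type):
    """Derive a behavioral factor for one interaction type, e.g. the
    fraction of past interactions of that type the customer abandoned."""
    relevant = [i for i in profile.interactions if i["type"] == interaction_type]
    if not relevant:
        return None
    abandon_rate = sum(i["abandoned"] for i in relevant) / len(relevant)
    return {"type": interaction_type, "abandon_rate": abandon_rate}

def augment_profile(profile, interaction_type):
    """Final step: store the predictor back into the customer profile."""
    predictor = generate_interaction_predictor(profile, interaction_type)
    if predictor is not None:
        profile.predictors[interaction_type] = predictor
    return profile
```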
AUTOMATED SPEECH-TO-TEXT PROCESSING AND ANALYSIS OF CALL DATA APPARATUSES, METHODS AND SYSTEMS
The present invention discloses a system, apparatus, and method that obtain audio and metadata information from voice calls, generate textual transcripts from those calls, and make the resulting data searchable via a user interface. The system converts audio data from one or more sources (such as a telecommunications provider) into searchable, usable text transcripts. One use is in law enforcement and intelligence work; another is in call centers, to improve quality and track customer service history. Searches can be performed for callers, callees, keywords, and/or other information in calls across the system. The system can also generate automatic alerts based on callers, callees, keywords, phone numbers, and/or other information. Further, the system generates and provides analytic information on the use of the phone system, the semantic content of the calls, and the connections between callers and phone numbers called, which can aid analysts in detecting patterns of behavior and in looking for patterns of equipment use or failure.
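The search-and-alert behavior described above can be sketched with a toy in-memory index. The function names and the dictionary-based store are illustrative assumptions; the patent's actual datastore and query machinery are unspecified here.

```python
def index_call(index, call_id, metadata, transcript):
    """Store a transcribed call so it can be searched by caller, callee,
    or keyword (a simplified stand-in for the system's datastore)."""
    index[call_id] = {"meta": metadata, "text": transcript.lower()}

def search_calls(index, keyword=None, caller=None, callee=None):
    """Return call ids matching every supplied search criterion."""
    hits = []
    for call_id, rec in index.items():
        if keyword and keyword.lower() not in rec["text"]:
            continue
        if caller and rec["meta"].get("caller") != caller:
            continue
        if callee and rec["meta"].get("callee") != callee:
            continue
        hits.append(call_id)
    return hits

def check_alerts(index, call_id, watch_keywords):
    """Generate automatic alerts when an indexed call contains a watched keyword."""
    text = index[call_id]["text"]
    return [kw for kw in watch_keywords if kw.lower() in text]
```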
Coaching in an automated communication link establishment and management system
A contextual lead generation system within an automated communication link establishment and management system may store information related to sales calls. The system may identify strengths and weaknesses of a sales representative. The system may provide training content to the sales representative in real time based on the identified strengths and weaknesses.
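One way to picture the coaching loop is a rule-based mapping from detected weaknesses to training content. The metrics, thresholds, and module names below are invented for illustration; the abstract does not specify how weaknesses are scored.

```python
# Hypothetical mapping from a detected weakness to coaching content.
TRAINING_CONTENT = {
    "talks_over_customer": "Module: active listening",
    "weak_closing": "Module: closing techniques",
    "long_pauses": "Module: call pacing",
}

def assess_representative(call_metrics):
    """Toy scoring: flag a weakness when a call metric crosses a threshold."""
    weaknesses = []
    if call_metrics.get("interruptions", 0) > 3:
        weaknesses.append("talks_over_customer")
    if call_metrics.get("close_attempts", 0) == 0:
        weaknesses.append("weak_closing")
    if call_metrics.get("max_pause_s", 0) > 5:
        weaknesses.append("long_pauses")
    return weaknesses

def coach_in_real_time(call_metrics):
    """Return training content for each weakness identified on the live call."""
    return [TRAINING_CONTENT[w] for w in assess_representative(call_metrics)]
```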
Method and system for accurate automatic call tracking and analysis
There is disclosed a method in a data processing system for automatically and accurately determining an outcome of a phone call by using signifiers or audibles as a way to increase accuracy without altering the flow of the conversation. The disclosed method can be used in any call center, such as high-volume call centers in the financial (banking), insurance, rental (hotel or car), and ticket sales industries. The method comprises receiving voice data of a phone call; transmitting a response communication based on the voice data, wherein the response communication includes at least one audible or signifier; identifying, by at least one processor, the at least one audible or signifier in the response communication; and automatically determining the outcome of the phone call based on the audible or signifier in the response communication. A data processing system and a non-transitory computer-readable medium for storing instructions consistent with the described method are also disclosed.
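The core idea, detecting a known signifier in the response communication and mapping it to an outcome, can be sketched as follows. The phrases and outcome labels are hypothetical examples, not the patent's signifiers.

```python
# Hypothetical signifiers: short phrases embedded in the agent's scripted
# responses that mark an outcome without altering the conversation flow.
SIGNIFIER_OUTCOMES = {
    "we appreciate your business": "sale",
    "sorry we could not help today": "no_sale",
    "a specialist will call you back": "callback",
}

def find_signifier(response_text):
    """Identify the first known signifier in the response communication."""
    text = response_text.lower()
    for phrase in SIGNIFIER_OUTCOMES:
        if phrase in text:
            return phrase
    return None

def determine_outcome(response_text):
    """Automatically classify the phone call from the detected signifier."""
    phrase = find_signifier(response_text)
    return SIGNIFIER_OUTCOMES[phrase] if phrase else "unknown"
```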
Cognitive automation-based engine BOT for processing audio and taking actions in response thereto
Aspects of the disclosure relate to cognitive automation-based engine processing on audio files and streams received from meetings and/or telephone calls. A noise mask can be applied to enhance the audio. Real-time speech analytics separate speech for different speakers into time-stamped streams, which are transcribed and merged into a combined output. The output is parsed by analyzing the combined output for correct syntax, normalized by breaking the parsed data into record groups for efficient processing, validated to ensure that the data satisfies defined formats and input criteria, and enriched to correct for any errors and to augment the audio information. Notifications based on the enriched data may be provided to call or meeting participants. Cognitive automation functions may also identify callers or meeting attendees, identify action items, assign tasks, calendar appointments for future meetings, create email distribution lists, route transcriptions, monitor for legal compliance, and correct for regionalization issues.
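The parse, normalize, validate, and enrich stages described above can be sketched over a toy transcript format. The pipe-delimited line format, the group size, and the substitution-table enrichment are assumptions made for illustration.

```python
def parse(combined_output):
    """Parse the merged, time-stamped transcript into (ts, speaker, text) records."""
    records = []
    for line in combined_output.strip().splitlines():
        ts, speaker, text = line.split("|", 2)
        records.append({"ts": float(ts), "speaker": speaker.strip(), "text": text.strip()})
    return records

def normalize(records, group_size=2):
    """Break parsed records into fixed-size record groups for efficient processing."""
    return [records[i:i + group_size] for i in range(0, len(records), group_size)]

def validate(groups):
    """Ensure each record satisfies the defined format (non-empty speaker and text)."""
    return all(r["speaker"] and r["text"] for g in groups for r in g)

def enrich(groups, corrections):
    """Correct known transcription errors using a substitution table."""
    for g in groups:
        for r in g:
            for wrong, right in corrections.items():
                r["text"] = r["text"].replace(wrong, right)
    return groups
```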
Automated audio-to-text transcription in multi-device teleconferences
A system and method are disclosed for generating a teleconference space for two or more communication devices using a computer coupled with a database and comprising a processor and memory. The computer generates a teleconference space and transmits requests to join the teleconference space to the two or more communication devices. The computer stores in memory identification information, and audiovisual data associated with one or more users, for each of the two or more communication devices. The computer stores audio transcription data, transmitted to the computer by each of the two or more communication devices and associated with one or more communication device users, in the computer memory. The computer merges the audio transcription data from each of the two or more communication devices into a master audio transcript, and transmits the master audio transcript to each of the two or more communication devices.
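The merge step, combining per-device transcription data into a single master transcript, can be sketched as a timestamp-ordered merge. The tuple layout and output formatting are illustrative assumptions.

```python
import heapq

def merge_transcripts(device_transcripts):
    """Merge per-device transcription data into one master transcript.

    Each device contributes a list of (timestamp, user, text) tuples already
    in local time order; heapq.merge interleaves them by timestamp.
    """
    merged = heapq.merge(*device_transcripts, key=lambda entry: entry[0])
    return [f"[{ts:.1f}] {user}: {text}" for ts, user, text in merged]

def broadcast(master_transcript, devices):
    """Transmit the master transcript to each device (modeled as a dict)."""
    return {device: master_transcript for device in devices}
```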
Storing call session information in a telephony system
In an example of this disclosure, a method may include receiving, by a database server, a data write request. The data write request may include authentication information corresponding to a first call session and first additional information. The method may include generating, by the database server, a first unique identifier based on the first additional information. The authentication information may correspond to the first unique identifier. The method may include storing the first unique identifier and the authentication information in a data structure in a memory of the database server.
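The write path can be sketched as follows. The patent leaves the identifier-generation scheme open; a content hash of the additional information is one possible choice, used here purely for illustration, with a plain dict standing in for the database server's data structure.

```python
import hashlib

def generate_unique_identifier(additional_info):
    """Derive a deterministic identifier from the first additional information."""
    return hashlib.sha256(additional_info.encode("utf-8")).hexdigest()[:16]

def handle_data_write_request(store, auth_info, additional_info):
    """Generate the unique identifier and store it with the authentication
    information in the server's data structure."""
    uid = generate_unique_identifier(additional_info)
    store[uid] = auth_info
    return uid
```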
Tool for annotating and reviewing audio conversations
Methods, systems, and computer programs are presented for searching and labeling the content of voice conversations. An Engagement Intelligence Platform (EIP) analyzes conversation transcripts to find states and information for each of the states (e.g., interest rate quoted and value of the interest rate). An annotator User Interface (UI) is provided for performing queries such as: "Find calls where the agent asked the customer for their name and the customer did not answer;" "Find calls where the customer objected after the interest rate for the loan was quoted;" "Find calls where the agent asked for consent for recording the call, but no customer confirmation was received." The EIP analyzes the conversation and labels (e.g., "tags") the text where the conversation associated with the label took place, such as, "An interest rate was provided." The labels are customizable, so each client can define its own labels based on business needs.
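Customizable labeling of conversation states can be sketched with simple phrase-trigger rules. The rule format, label names, and turn structure below are illustrative assumptions; the EIP's actual analysis is not specified in the abstract.

```python
# Hypothetical label rules: a label applies when all of its trigger phrases
# appear in an utterance spoken by the named role.
LABEL_RULES = {
    "interest_rate_quoted": {"role": "agent", "phrases": ["interest rate"]},
    "consent_requested": {"role": "agent", "phrases": ["consent", "record"]},
}

def annotate(transcript):
    """Tag each utterance with every matching label.

    transcript: list of {"role": ..., "text": ...} turns.
    Returns a list of (turn_index, label) annotations marking where the
    conversation associated with each label took place.
    """
    annotations = []
    for i, turn in enumerate(transcript):
        text = turn["text"].lower()
        for label, rule in LABEL_RULES.items():
            if turn["role"] == rule["role"] and all(p in text for p in rule["phrases"]):
                annotations.append((i, label))
    return annotations
```

Because the rules live in a plain dictionary, each client could swap in its own label set, matching the abstract's point that labels are customizable per business need.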
SWITCHING BETWEEN SPEECH RECOGNITION SYSTEMS
A method may include obtaining first audio data originating at a first device during a communication session between the first device and a second device. The method may also include obtaining an availability of revoiced transcription units in a transcription system and in response to establishment of the communication session, selecting, based on the availability of revoiced transcription units, a revoiced transcription unit instead of a non-revoiced transcription unit to generate a transcript of the first audio data. The method may also include obtaining revoiced audio generated by a revoicing of the first audio data by a captioning assistant and generating a transcription of the revoiced audio using an automatic speech recognition system. The method may further include in response to selecting the revoiced transcription unit, directing the transcription of the revoiced audio to the second device as the transcript of the first audio data.
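The selection logic, preferring a revoiced transcription unit when one is available at session establishment, can be sketched as below. The function names and the string placeholder for ASR are hypothetical; real revoicing involves a human captioning assistant repeating the audio.

```python
def select_transcription_unit(revoiced_available):
    """On session establishment, prefer a revoiced unit when one is free."""
    return "revoiced" if revoiced_available > 0 else "non_revoiced"

def run_asr(audio):
    """Placeholder for the automatic speech recognition system."""
    return f"transcript({audio})"

def transcribe(first_audio, unit, captioning_assistant=None):
    """Sketch of the two paths: revoiced audio passes through a captioning
    assistant before ASR; non-revoiced audio goes to ASR directly."""
    if unit == "revoiced":
        revoiced_audio = captioning_assistant(first_audio)
        return run_asr(revoiced_audio)
    return run_asr(first_audio)
```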