Patent classifications
G10L15/04
Inverted Projection for Robust Speech Translation
The technology provides an approach to train translation models that are robust to transcription errors and punctuation errors. The approach includes introducing errors from actual automatic speech recognition and automatic punctuation systems into the source side of the machine translation training data. A method for training a machine translation model includes performing automatic speech recognition on input source audio to generate a system transcript. The method aligns a human transcript of the source audio to the system transcript, including projecting system segmentation onto the human transcript. Then the method performs segment robustness training of a machine translation model according to the aligned human and system transcripts, and performs system robustness training of the machine translation model, e.g., by injecting token errors into training data.
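The core alignment step — projecting the ASR system's segmentation onto the human transcript — can be sketched with a token-level alignment. This is a minimal illustration, not the patented method: it assumes tokenized transcripts and uses `difflib` in place of a real alignment model, and all function names are hypothetical.

```python
import difflib

def project_segmentation(human_tokens, system_segments):
    """Project the segment boundaries of a system (ASR) transcript onto a
    human transcript by aligning the two token sequences.

    human_tokens: list of tokens from the human transcript.
    system_segments: list of token lists as segmented by the ASR system.
    Returns the human tokens re-split to mirror the system segmentation.
    """
    system_tokens = [t for seg in system_segments for t in seg]
    matcher = difflib.SequenceMatcher(a=system_tokens, b=human_tokens,
                                      autojunk=False)
    # Map each matched system-token index to its aligned human-token index.
    sys_to_human = {}
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            sys_to_human[block.a + k] = block.b + k
    projected, start_h, start_s = [], 0, 0
    for seg in system_segments:
        end_s = start_s + len(seg)
        # Segment boundary in the human transcript: one past the last
        # human token aligned to this system segment.
        aligned = [sys_to_human[i] for i in range(start_s, end_s)
                   if i in sys_to_human]
        end_h = (aligned[-1] + 1) if aligned else start_h
        projected.append(human_tokens[start_h:end_h])
        start_h, start_s = end_h, end_s
    # Attach any trailing unaligned human tokens to the final segment.
    if start_h < len(human_tokens) and projected:
        projected[-1].extend(human_tokens[start_h:])
    return projected
```

With a system transcript containing a recognition error ("their" for "there"), the human transcript is still split at the system's segment boundary, which is what lets the MT model train on realistically segmented input.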
SYSTEM AND METHOD FOR AUTOMATED PROCESSING OF NATURAL LANGUAGE USING DEEP LEARNING MODEL ENCODING
Automated systems and methods are provided for processing natural language, comprising obtaining first and second digitally-encoded speech representations, respectively corresponding to an agent script for and a voice recording of a telecommunication interaction; generating a similarity structure based on the speech representations, the similarity structure representing a degree of semantic similarity between the speech representations; matching markers in the first speech representation to markers in the second speech representation based on the similarity structure; and dividing the telecommunication interaction into a plurality of sections based on the matching.
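The matching step can be sketched as follows. This is a simplified stand-in, not the claimed system: the deep-learning encoder is replaced by a bag-of-words embedding so the example stays self-contained, and the similarity structure is an explicit marker-by-sentence cosine matrix. All names are illustrative.

```python
import math
from collections import Counter

def embed(text):
    # Stand-in encoder: bag-of-words counts. The claimed approach would use
    # a deep-learning model encoding here; the matching logic is the same.
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def match_markers(script_markers, transcript_sentences):
    """Build a similarity structure (matrix of semantic similarities) between
    script markers and transcript sentences, then match each marker to the
    most similar sentence. The matched indices can then be used to divide
    the interaction into sections."""
    sims = [[cosine(embed(m), embed(s)) for s in transcript_sentences]
            for m in script_markers]
    return {m: max(range(len(transcript_sentences)), key=lambda j: sims[i][j])
            for i, m in enumerate(script_markers)}
```

Each marker's matched sentence index marks a section boundary in the recording's transcript.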
Speech recognition method, electronic device, and computer storage medium
A speech recognition method includes segmenting captured voice information to obtain a plurality of voice segments, and extracting voiceprint information of the voice segments; matching the voiceprint information of the voice segments with first stored voiceprint information to determine a set of filtered voice segments having voiceprint information that successfully matches the first stored voiceprint information; combining the set of filtered voice segments to obtain combined voice information, and determining combined semantic information of the combined voice information; and using the combined semantic information as a speech recognition result when the combined semantic information satisfies a preset rule.
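The voiceprint-filtering step can be sketched as below. This is a minimal illustration under stated assumptions: voiceprints are represented as plain feature vectors (real systems would use speaker embeddings such as i-vectors or x-vectors), the match is a cosine-similarity threshold, and the downstream semantic check is omitted. All names are hypothetical.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_and_combine(segments, stored_voiceprint, threshold=0.8):
    """Keep only voice segments whose voiceprint matches the stored
    voiceprint, then combine them into a single piece of voice information.

    segments: list of (voiceprint_vector, audio_bytes) pairs.
    """
    kept = [audio for voiceprint, audio in segments
            if cosine(voiceprint, stored_voiceprint) >= threshold]
    return b"".join(kept)  # combined voice information for semantic analysis
```

Segments from other speakers fall below the threshold and are dropped before the combined audio is passed to semantic analysis.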
SYSTEM AND METHOD FOR GENERATING WRAP UP INFORMATION
A system for generating wrap-up information is capable of learning how interactions are transformed into contact notes and outcome codes using natural language processing, and can generate the contact notes and outcome codes for new incoming interactions by applying prediction models trained on interaction data, contact notes and outcome codes. The system receives interaction data, including interaction audio data, interaction transcripts, associated contact notes and associated outcome codes. The interaction transcripts are generated from previous interactions between agents and customers. The contact notes and outcome codes are generated by agents during the associated previous interactions. The system processes and uses the interaction data to train prediction models to analyze interaction audio data and interaction transcripts and predict appropriate contact notes and outcome codes for the interaction. Once trained, the prediction model(s) can generate appropriate contact notes and outcome codes for new interactions.
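A toy version of the outcome-code prediction model can be sketched as a word-count association between past transcripts and their agent-entered outcome codes. This is only an illustration of the learn-then-predict loop the abstract describes; a production system would train a real NLP classifier, and all names here are hypothetical.

```python
from collections import Counter, defaultdict

def train_outcome_model(examples):
    """Learn which words co-occur with each outcome code in past
    interactions. examples: list of (transcript, outcome_code) pairs."""
    word_counts = defaultdict(Counter)
    for transcript, code in examples:
        word_counts[code].update(transcript.lower().split())
    return word_counts

def predict_outcome(model, transcript):
    """Predict the outcome code whose training vocabulary best overlaps
    the new interaction's transcript."""
    words = transcript.lower().split()
    return max(model, key=lambda code: sum(model[code][w] for w in words))
```

The same pattern extends to contact notes, e.g. by retrieving or generating the note associated with the best-matching past interactions.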
ELECTRONIC DEVICE AND METHOD FOR PROCESSING SPEECH BY CLASSIFYING SPEECH TARGET
Various embodiments of the disclosure provide a method and a device that includes multiple cameras arranged at different positions, multiple microphones arranged at different positions, a memory, and a processor operatively connected to at least one of the multiple cameras, the multiple microphones, and the memory, wherein the processor is configured to: determine, using at least one of the multiple cameras, whether at least one of a user wearing the electronic device or a counterpart having a conversation with the user makes an utterance, configure directivity of at least one of the multiple microphones based on the determination, obtain audio from at least one of the multiple microphones based on the configured directivity, obtain an image including a mouth shape of the user or the counterpart from at least one of the multiple cameras, and process speech of an utterance target in a different manner based on the obtained audio and the image.
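The directivity-configuration decision reduces to a small policy over which party the cameras detect speaking. The sketch below is purely illustrative: the mode names and the policy itself are assumptions, and the real device would drive microphone beamforming hardware rather than return a string.

```python
def configure_directivity(user_speaking, counterpart_speaking):
    """Choose a microphone directivity mode from camera-based detection of
    who is making an utterance (hypothetical policy and mode names)."""
    if user_speaking and counterpart_speaking:
        return "omnidirectional"  # capture both talkers
    if user_speaking:
        return "inward"           # beamform toward the wearer's mouth
    if counterpart_speaking:
        return "outward"          # beamform toward the counterpart
    return "idle"                 # nobody speaking; no beam configured
```

Downstream, the obtained audio and mouth-shape image together identify the utterance target so its speech can be processed differently (e.g. the wearer's speech as commands, the counterpart's as input to translation).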
Computing method for populating digital forms from un-parsed data
A computing device is disclosed which includes a processor and non-transient memory operably connected to the processor. The non-transient memory includes instructions that, when executed by the processor, cause the processor to extract a plurality of sub-strings from a character string, analyze each sub-string for compliance with each of several field definitions, where each of the field definitions corresponds to a field in a digital form, and populate some of the fields in the digital form based on the analysis of each sub-string for compliance with the field definitions.
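The extract-analyze-populate flow can be sketched with regular expressions as the field definitions. The field names and patterns below are hypothetical examples, not the patent's definitions; sub-string extraction here is simple whitespace splitting.

```python
import re

# Hypothetical field definitions: each digital-form field has a compliance
# pattern; a sub-string matching the pattern populates that field.
FIELD_DEFINITIONS = {
    "phone": re.compile(r"^\d{3}-\d{3}-\d{4}$"),
    "zip":   re.compile(r"^\d{5}$"),
    "email": re.compile(r"^[\w.]+@[\w.]+\.\w+$"),
}

def populate_form(character_string):
    """Extract sub-strings from an un-parsed character string, analyze each
    for compliance with every field definition, and populate the matching
    fields of the digital form."""
    form = {}
    for sub in character_string.split():
        for field, pattern in FIELD_DEFINITIONS.items():
            if field not in form and pattern.match(sub):
                form[field] = sub
    return form
```

Because every sub-string is tested against every definition, the fields populate correctly regardless of the order in which the values appear in the input string.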
Systems and methods for automatic speech translation
A method for providing automatic interpretation may include receiving, by a processor, audible speech from a speech source, generating, by the processor, in real-time, a speech transcript by applying an automatic speech recognition model on the speech, segmenting, by the processor, the speech transcript into speech segments based on a content of the speech by applying a segmenter model on the speech transcript, compressing, by the processor, the speech segments based on the content of the speech by applying a compressor model on the speech segments, generating, by the processor, a translation of the speech by applying a machine translation model on the compressed speech segments, and generating, by the processor, audible translated speech based on the translation of the speech by applying a text-to-speech model on the translation of the speech.
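The five claimed stages compose into a straight pipeline. The sketch below only shows the plumbing: each model is passed in as a callable with an assumed interface (`asr(audio) -> text`, `segmenter(text) -> segments`, `compressor(segments) -> segments`, `translator(segments) -> text`, `tts(text) -> audio`), so all interfaces here are assumptions, not the claimed models.

```python
def interpret(audio, asr, segmenter, compressor, translator, tts):
    """Staged automatic-interpretation pipeline: ASR -> segmentation ->
    compression -> machine translation -> text-to-speech."""
    transcript = asr(audio)               # speech -> transcript, in real time
    segments = segmenter(transcript)      # split on content boundaries
    compressed = compressor(segments)     # drop fillers / redundancy
    translation = translator(compressed)  # MT on the compressed segments
    return tts(translation)               # translation -> audible speech
```

A quick check with stub models (uppercase "translation", filler-word compression):

```python
asr = lambda a: "uh hello um world"
segmenter = lambda t: t.split()
compressor = lambda segs: [s for s in segs if s not in ("uh", "um")]
translator = lambda segs: " ".join(segs).upper()
tts = lambda t: f"<audio:{t}>"
interpret(b"", asr, segmenter, compressor, translator, tts)
# -> "<audio:HELLO WORLD>"
```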