Patent classifications
G10L17/00
METHODS AND APPARATUS TO DETERMINE AN AUDIENCE COMPOSITION BASED ON VOICE RECOGNITION
Methods, apparatus, systems and articles of manufacture are disclosed. An example apparatus includes a controller to cause a people meter to emit a prompt for input of audience identification information at a first time and determine a first audience count based on the input, an audio detector to determine a second audience count based on signatures generated from audio data captured in the media environment, and a comparator to cause the people meter to not emit the prompt for at least a first time period after the first time when the first audience count is equal to the second audience count.
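The comparator logic described above can be sketched as a small decision function. This is an illustrative reading of the abstract, not the patented implementation; all names (`should_suppress_prompt`, the one-hour default period) are assumptions:

```python
# Hedged sketch of the prompt-suppression logic in the abstract: the people
# meter's prompt is deferred when the count entered by the audience matches
# the count inferred from audio signatures. Names and the default period
# are illustrative assumptions.

def should_suppress_prompt(people_meter_count: int, audio_count: int) -> bool:
    """Suppress the next identification prompt when the two counts agree."""
    return people_meter_count == audio_count

def next_prompt_time(now: float, suppress: bool, period: float = 3600.0) -> float:
    """If the counts agree, defer the next prompt by at least `period` seconds;
    otherwise the meter may prompt again immediately."""
    return now + period if suppress else now
```

Matching counts suppress re-prompting, reducing audience fatigue while the audio-derived count corroborates the last manual entry.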
Universal and user-specific command processing
A system configured to process an incoming spoken utterance and to coordinate among multiple speechlet components to execute an action of the utterance, where the output of one speechlet may be used as the input to another speechlet to ultimately perform the action. The speechlets and intervening actions need not be expressly invoked by the utterance. Rather, the system may determine how best to complete the action and may identify intermediate speechlets that may provide input data to the speechlet that will ultimately perform the action. The speechlets may be configured to recognize a common universe of actions and/or entities rather than have each speechlet or subject matter domain have its own set of recognizable actions and entities.
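The chaining described above, where one speechlet's output becomes another's input, can be sketched as a pipeline over a shared slot dictionary. The speechlet functions and data shapes here are hypothetical illustrations, not the system's actual interfaces:

```python
# Illustrative sketch of speechlet chaining: each speechlet reads the shared
# slots and contributes new slots, so an intermediate speechlet can supply
# the input the final action-performing speechlet needs. All names are
# hypothetical.

from typing import Callable, Dict, List

Speechlet = Callable[[Dict[str, str]], Dict[str, str]]

def run_chain(speechlets: List[Speechlet], slots: Dict[str, str]) -> Dict[str, str]:
    """Run each speechlet in order, merging its output into the shared slots."""
    for speechlet in speechlets:
        slots = {**slots, **speechlet(slots)}
    return slots

# Example: a lookup speechlet resolves data for a playback speechlet,
# even though the user's utterance only expressed the final action.
def find_song(slots):          # intermediate speechlet
    return {"track_id": "id-for-" + slots["song"]}

def play(slots):               # final speechlet that performs the action
    return {"action": "play " + slots["track_id"]}

result = run_chain([find_song, play], {"song": "Hey Jude"})
```

Because every speechlet reads and writes a common slot vocabulary, the chain can be assembled dynamically rather than hard-wired per domain, mirroring the "common universe of actions and/or entities" idea.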
Enrollment with an automated assistant
Techniques are described herein for dialog-based enrollment of individual users for single- and/or multi-modal recognition by an automated assistant, as well as determining how to respond to a particular user's request based on the particular user being enrolled and/or recognized. Rather than requiring operation of a graphical user interface for individual enrollment, dialog-based enrollment enables users to enroll themselves (or others) by way of a human-to-computer dialog with the automated assistant.
Detection of live speech
A method of detecting live speech comprises: receiving a signal containing speech; obtaining a first component of the received signal in a first frequency band, wherein the first frequency band includes audio frequencies; and obtaining a second component of the received signal in a second frequency band higher than the first frequency band. Then, modulation of the first component of the received signal is detected; modulation of the second component of the received signal is detected; and the modulation of the first component of the received signal and the modulation of the second component of the received signal are compared. It may then be determined that the speech may not be live speech if the modulation of the first component of the received signal differs from the modulation of the second component of the received signal.
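The comparison step above can be sketched numerically: estimate each band's amplitude modulation (its envelope) and flag the speech as possibly not live when the two modulation patterns diverge. The envelope estimator, the correlation-based mismatch measure, and the threshold are all illustrative assumptions; the patent does not specify them:

```python
# Minimal sketch, assuming the two band components have already been
# extracted by band-pass filtering. Modulation is approximated by a
# moving-average envelope; divergence by 1 - correlation of envelopes.

import numpy as np

def envelope(x: np.ndarray, win: int = 64) -> np.ndarray:
    """Crude amplitude envelope: moving average of the rectified signal."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def modulation_mismatch(low_band: np.ndarray, high_band: np.ndarray) -> float:
    """1 - correlation between the two bands' envelopes. High values mean the
    high-band modulation does not track the audio-band modulation."""
    e1, e2 = envelope(low_band), envelope(high_band)
    r = np.corrcoef(e1, e2)[0, 1]
    return float(1.0 - r)

def may_not_be_live(low_band: np.ndarray, high_band: np.ndarray,
                    threshold: float = 0.5) -> bool:
    """Flag the speech as possibly not live when the modulations differ."""
    return bool(modulation_mismatch(low_band, high_band) > threshold)
```

Live speech modulates both bands together (the vocal tract shapes the whole spectrum), whereas a loudspeaker replay can alter the high-band modulation, which is the asymmetry this comparison exploits.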
Graph-based approach for voice authentication
Methods for voice authentication include receiving a plurality of mono telephonic interactions between customers and agents; creating a mapping of the plurality of mono telephonic interactions that illustrates which agent interacted with which customer in each of the interactions; determining how many agents each customer interacted with; identifying one or more customers an agent has interacted with that have the fewest interactions with other agents; and selecting a predetermined number of interactions of the agent with each of the identified customers. In some embodiments, the methods further include creating a voice print from first and second speaker components of each interaction; comparing the voice prints of a first selected interaction to the voice prints from a second selected interaction; calculating a similarity score between the voice prints; aggregating scores; and identifying the voice prints that are associated with the agent.
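The mapping and selection steps above amount to building a bipartite agent-customer graph and ranking an agent's customers by how few other agents they have spoken with (so the agent's voice is easiest to isolate). This is a hedged sketch of that graph step only; the call-record shape and function names are assumptions:

```python
# Illustrative sketch of the interaction-mapping step: build a bipartite
# agent-customer graph from mono call records, then pick, for a given agent,
# the customers with the fewest interactions with other agents.

from collections import defaultdict

def build_mapping(calls):
    """calls: iterable of (agent_id, customer_id) pairs, one per interaction."""
    agent_to_customers = defaultdict(set)
    customer_to_agents = defaultdict(set)
    for agent, customer in calls:
        agent_to_customers[agent].add(customer)
        customer_to_agents[customer].add(agent)
    return agent_to_customers, customer_to_agents

def best_customers_for(agent, agent_to_customers, customer_to_agents, k=2):
    """Customers of `agent`, ranked by fewest interactions with other agents."""
    customers = agent_to_customers[agent]
    return sorted(customers, key=lambda c: len(customer_to_agents[c]) - 1)[:k]
```

Selecting such customers means that, across the chosen interactions, the recurring voice is most likely the agent's, which is what the later voice-print aggregation relies on.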
System and method to determine outcome probability of an event based on videos
System and method for determining an outcome probability of an event based on videos are disclosed. The method includes receiving the videos of an event, creating a building block model, extracting at least one of audio content or video content from the videos, analysing the extracted content, generating an analysis result, analysing engagement between the speaker and participants of the event, generating a data lake comprising a keyword library, computing the outcome probability of the event, enabling the building block model to learn from the data lake and the computed outcome probability, and representing the at least one outcome probability in a pre-defined format.
System and method for communication analysis for use with agent assist within a cloud-based contact center
Methods to reduce agent effort and improve customer experience quality through artificial intelligence. The Agent Assist tool provides contact centers with an innovative tool designed to reduce agent effort, improve quality and reduce costs by minimizing search and data entry tasks. The Agent Assist tool is natively built and fully unified within the agent interface while keeping all data internally protected from third-party sharing.