G10L17/06

DETECTION OF SPEECH

A method of own voice detection is provided for a user of a device. A first signal, representing air-conducted speech, is detected using a first microphone of the device. A second signal, representing bone-conducted speech, is detected using a bone-conduction sensor of the device. The first signal is filtered to obtain a component of the first signal at a speech articulation rate, and the second signal is filtered to obtain a component of the second signal at the speech articulation rate. The two components are compared, and it is determined that the speech has not been generated by the user of the device if a difference between them exceeds a threshold value.
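
The articulation-rate comparison above can be sketched as follows; the ~4 Hz rate, the envelope-based filtering, and the similarity threshold are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def own_voice_detected(air_sig, bone_sig, fs, rate_hz=4.0, threshold=0.5):
    """Compare the articulation-rate (~4 Hz) modulation of an air-conducted
    and a bone-conducted signal; a large divergence suggests the speech did
    not come from the device wearer. Threshold value is hypothetical."""
    def articulation_component(x):
        # Envelope via rectification, then a crude low-pass around rate_hz
        env = np.abs(x)
        win = max(1, int(fs / rate_hz))
        kernel = np.ones(win) / win
        smooth = np.convolve(env, kernel, mode="same")
        # Remove DC so only the modulation near the articulation rate remains
        return smooth - smooth.mean()

    a = articulation_component(np.asarray(air_sig, dtype=float))
    b = articulation_component(np.asarray(bone_sig, dtype=float))
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return False
    # Normalised similarity of the two modulation envelopes
    similarity = float(np.dot(a, b) / denom)
    return (1.0 - similarity) <= threshold
```

A real implementation would use a proper band-pass filter around the articulation rate rather than a moving-average envelope.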

SYSTEM AND METHOD FOR DETECTING FRAUD RINGS

A system and method may identify a fraud ring based on call or interaction data by: analyzing, by a computer processor, interaction data including audio recordings to identify clusters of interactions suspected of involving fraud, each cluster including the same speaker; analyzing, by the computer processor, the clusters in combination with metadata associated with the interaction data to identify fraud rings, each fraud ring describing a plurality of different speakers and defined by a set of speakers and a set of metadata corresponding to interactions including those speakers; and, for each fraud ring, creating a relevance value defining the relative relevance of the fraud ring.
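
Under strong simplifying assumptions (speaker clustering and fraud scoring already done upstream, a single metadata value per interaction, ring relevance equal to ring size), the three steps of the abstract can be sketched as:

```python
from collections import defaultdict

def identify_fraud_rings(interactions):
    """Each interaction is a dict with a 'speaker' label (standing in for
    voice-clustering output), a 'metadata' value (e.g. a device or phone
    number), and a 'suspected' flag from an upstream fraud detector."""
    # Step 1: cluster suspected interactions by speaker
    clusters = defaultdict(list)
    for it in interactions:
        if it["suspected"]:
            clusters[it["speaker"]].append(it)
    # Step 2: link clusters whose interactions share metadata into rings
    by_meta = defaultdict(set)
    for speaker, items in clusters.items():
        for it in items:
            by_meta[it["metadata"]].add(speaker)
    rings = [speakers for speakers in by_meta.values() if len(speakers) > 1]
    # Step 3: score each ring; here relevance is simply the ring's size
    return [{"speakers": sorted(r), "relevance": len(r)} for r in rings]
```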

Method and device for user registration, and electronic device

Provided in embodiments of the present application are a method and apparatus for user registration, and an electronic device. The method includes: after obtaining a wake-up voice of a user each time, extracting and storing a first voiceprint feature corresponding to the wake-up voice; clustering the stored first voiceprint features to divide them into at least one category, wherein each category includes at least one first voiceprint feature belonging to the same user; assigning one category identifier to each category; and storing each category identifier in correspondence with the at least one first voiceprint feature corresponding to that category identifier to complete user registration. The embodiments of the present application can simplify user operation and improve the user experience.
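
A minimal sketch of the clustering-based registration, assuming voiceprint features are plain numeric vectors and using a greedy centroid threshold in place of a real clustering algorithm:

```python
import uuid

def register_users(voiceprints, distance_threshold=0.5):
    """Group wake-word voiceprints (feature vectors) so that vectors
    closer than distance_threshold are assumed to belong to one user,
    then assign each cluster a category identifier. The threshold and
    the greedy strategy are illustrative choices."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    categories = []  # list of (category_id, [member voiceprints])
    for vp in voiceprints:
        for cat_id, members in categories:
            # Compare against the cluster centroid
            centroid = [sum(dim) / len(members) for dim in zip(*members)]
            if dist(vp, centroid) < distance_threshold:
                members.append(vp)
                break
        else:
            categories.append((uuid.uuid4().hex, [vp]))
    # Store category ID -> voiceprints to complete registration
    return dict(categories)
```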

VOICE ASSISTANT SYSTEM AND METHOD FOR PERFORMING VOICE ACTIVATED MACHINE TRANSLATION

A method for performing a query based on a natural language voice input is provided. The method includes receiving, via a microphone, a voice input of a user, and converting the voice input into a first text data object. The method further includes converting the first text data object into a first technical language object using AI, and submitting a query based on the first technical language object. A query result in a second technical language object is retrieved in response to the query, and the query result is converted into a second text data object using AI. The method further converts the second text data object into a voice data object indicating the query result, and outputs a voice signal to provide the information of the query result in natural-language form to the user.
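
The conversion chain can be sketched as a pipeline of injected stand-ins; none of the callables below correspond to a real ASR, AI-translation, query, or TTS API:

```python
def answer_voice_query(voice_input, asr, nl_to_query, run_query,
                       result_to_nl, tts):
    """Run the abstract's five-stage pipeline; every callable is a
    caller-supplied stand-in for the corresponding component."""
    first_text = asr(voice_input)        # voice -> first text data object
    technical = nl_to_query(first_text)  # text -> technical language (AI)
    result = run_query(technical)        # submit query, retrieve result
    second_text = result_to_nl(result)   # result -> natural-language text (AI)
    return tts(second_text)              # text -> voice data object
```

Keeping each stage behind an interface like this makes it straightforward to swap, say, one speech recognizer for another without touching the rest of the pipeline.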

APPLIED BEHAVIORAL THERAPY APPARATUS AND METHOD
20230238114 · 2023-07-27

An apparatus for providing automated analysis and monitoring of an ABT session is presented herein. The apparatus may include a display configured to present material for the ABT session to a patient, at least one video capture device configured to capture video data for the ABT session related to at least one of first facial features of the patient, second facial features of a therapist, or a response to the material presented on the display, at least one audio capture device configured to capture audio data for the ABT session related to at least one of a first voice of the patient or a second voice of the therapist, and at least one processor configured to analyze, for the ABT session, data regarding the material presented on the display, the captured video data, and the captured audio data to produce an analysis of the ABT session.
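
A toy sketch of the per-session data the processor would aggregate across the three streams (displayed material, video, audio); all field names and the count-based summary are illustrative, not the apparatus's actual analysis:

```python
from dataclasses import dataclass, field

@dataclass
class ABTSessionAnalyzer:
    """Collect displayed material plus per-frame video events and
    per-utterance audio events for one ABT session, then summarise."""
    material: list = field(default_factory=list)
    video_events: list = field(default_factory=list)
    audio_events: list = field(default_factory=list)

    def analyze(self):
        # A real processor would correlate facial and voice features with
        # the presented material; here we only report counts per stream.
        return {
            "materials_shown": len(self.material),
            "video_events": len(self.video_events),
            "audio_events": len(self.audio_events),
        }
```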

SYSTEMS AND METHODS FOR COHERENT AND TIERED VOICE ENROLLMENT

Computer-implemented methods and systems include enrolling a user at a first security tier, from a plurality of security tiers, based on user risk criteria and call risk criteria applied to one or more historical calls; storing voice calibration information for the enrolled user based on the one or more historical calls; monitoring for a call and receiving data associated with the call, the data having a voice component captured using a microphone; authenticating the call as originating from the enrolled user by matching the voice component to the voice calibration information; granting the enrolled user account access in accordance with the first security tier, during the call, based on the enrolling at the first security tier and the authenticating of the call; and updating the voice calibration information based on the voice component.
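
The enrollment and authentication flow can be sketched as follows; the tier-selection rule, the caller-supplied similarity function, and the 0.8 match threshold are hypothetical stand-ins for the claimed risk criteria and voice matching:

```python
class TieredVoiceEnrollment:
    """Enroll users at a security tier from historical-call risk, then
    authenticate later calls against stored voice calibration data."""

    def __init__(self, tiers=("low", "medium", "high")):
        self.tiers = tiers
        self.users = {}  # user_id -> {"tier": ..., "calibration": [...]}

    def enroll(self, user_id, historical_calls, risk_score):
        # Map a combined user/call risk score in [0, 1) onto a tier
        idx = min(int(risk_score * len(self.tiers)), len(self.tiers) - 1)
        tier = self.tiers[idx]
        calibration = [c["voice"] for c in historical_calls]
        self.users[user_id] = {"tier": tier, "calibration": calibration}
        return tier

    def authenticate(self, user_id, voice, similarity, threshold=0.8):
        user = self.users.get(user_id)
        if user is None:
            return None
        if max(similarity(voice, v) for v in user["calibration"]) >= threshold:
            # Grant access at the enrolled tier and refine the calibration
            # with the newly matched voice component
            user["calibration"].append(voice)
            return user["tier"]
        return None
```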