G10L17/26

VOICE-BASED CONTROL OF SEXUAL STIMULATION DEVICES
20230210716 · 2023-07-06

A system and method for voice-based control of sexual stimulation devices. In some configurations, the system and method involve receiving voice data, analyzing the voice data to detect spoken commands, and generating control signals based on the commands. In other configurations, the system and method involve receiving voice data, analyzing the voice data for non-speech vocalizations, detecting voice stress patterns, and generating control signals based on the detected patterns. In some configurations, the analyses of the voice data are performed by machine learning algorithms, which may be trained on associations between a user's speech and non-speech vocalizations and controls of the sexual stimulation device, learned while the user engages in one or more voice-based training tasks. In some configurations, data from other biometric sensors is included in the associations.
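
The abstract describes two signal paths: explicit spoken commands and an implicit voice-stress channel. A minimal sketch of how the two might combine into one control signal, assuming a hypothetical command vocabulary and using pitch variability as a crude stand-in for the patent's stress detection (neither is specified in the abstract):

```python
import statistics

# Hypothetical command vocabulary mapping words to base intensity levels.
COMMANDS = {"stop": 0, "slow": 3, "fast": 8}

def control_signal(transcript, pitch_samples_hz):
    """Map a recognized spoken command, adjusted by a simple voice-stress
    proxy, to a device intensity level in the range 0..10."""
    base = COMMANDS.get(transcript.lower())
    if base is None:
        return None  # no recognized command in this utterance
    # Crude stress proxy: pitch variability relative to the mean pitch.
    if len(pitch_samples_hz) > 1:
        jitter = statistics.stdev(pitch_samples_hz) / statistics.mean(pitch_samples_hz)
    else:
        jitter = 0.0
    # Higher apparent stress scales intensity down (an illustrative heuristic).
    return max(0, min(10, round(base * (1.0 - min(jitter, 0.5)))))
```

A trained model would replace both the fixed vocabulary and the jitter heuristic; the point is only that command detection and stress detection feed the same control output.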

DYNAMIC SIDE-TONE TO CONTROL VOICE CATEGORY
20230215455 · 2023-07-06

A method for providing sidetone adjustment comprises generating an audio signal representing user speech, determining a spectral distribution of the audio signal, determining a voice category from the spectral distribution of the audio signal, applying an adjustment to the audio signal based on the determined voice category to generate an adjusted audio signal, and providing audio output based on the adjusted audio signal to the user as sidetone. The adjustment to the audio signal may comprise adjustments to a plurality of frequency bands in the audio signal. The adjustments may further comprise boosting the levels of frequency bands in a high frequency speech band.
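
The steps above amount to: measure the spectral distribution, classify the voice, then shape the sidetone accordingly. A sketch under assumed simplifications: the spectral distribution is reduced to two band levels, the voice category to a low-minus-high tilt test, and the boost to a fixed gain on the high band (the thresholds and the 6 dB figure are illustrative, not from the patent):

```python
def categorize_voice(band_levels_db):
    """Classify a voice category from spectral tilt: the difference between
    low-band and high-band energy, standing in for the patent's fuller
    spectral-distribution analysis."""
    tilt = band_levels_db["low"] - band_levels_db["high"]
    return "soft" if tilt > 12.0 else "normal"

def adjust_sidetone(band_levels_db):
    """Apply a category-dependent adjustment: boost the high-frequency
    speech band when the voice is categorized as soft, and return the
    adjusted band levels to be rendered as sidetone."""
    category = categorize_voice(band_levels_db)
    adjusted = dict(band_levels_db)
    if category == "soft":
        adjusted["high"] += 6.0  # illustrative high-band boost
    return category, adjusted
```

A real implementation would operate on many bands per frame and smooth the gains over time rather than switching them per classification.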

Detecting deep-fake audio through vocal tract reconstruction

A method is provided for identifying synthetic “deep-fake” audio samples versus organic audio samples. Methods may include: generating a model of a vocal tract using one or more organic audio samples from a user; identifying a set of bigram-feature pairs from the one or more audio samples; estimating the cross-sectional area of the vocal tract of the user when speaking the set of bigram-feature pairs; receiving a candidate audio sample; identifying bigram-feature pairs of the candidate audio sample that are in the set of bigram-feature pairs; calculating a cross-sectional area of a theoretical vocal tract when speaking the identified bigram-feature pairs; and identifying the candidate audio sample as a deep-fake audio sample in response to the calculated cross-sectional area of the theoretical vocal tract failing to correspond, within a predetermined measure, to the estimated cross-sectional area of the vocal tract of the user.
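
The core decision step is a tolerance check: for bigram-feature pairs present in both the enrollment set and the candidate, compare the candidate's reconstructed cross-sectional areas against the enrolled speaker's estimates. A sketch of just that check, assuming the area estimates have already been computed and treating the "predetermined measure" as a relative tolerance (the 25% figure and the pair names are assumptions):

```python
def is_deepfake(enrolled_areas, candidate_areas, tolerance=0.25):
    """Flag a candidate sample as deep-fake when its estimated vocal-tract
    cross-sectional areas deviate from the enrolled speaker's estimates by
    more than a relative tolerance, over the bigram-feature pairs both share.
    Inputs map pair identifiers to area estimates (e.g., cm^2)."""
    shared = set(enrolled_areas) & set(candidate_areas)
    if not shared:
        return None  # no comparable pairs; no decision possible
    for pair in shared:
        expected = enrolled_areas[pair]
        deviation = abs(candidate_areas[pair] - expected) / expected
        if deviation > tolerance:
            return True  # outside the predetermined measure: likely synthetic
    return False
```

The hard part of the method, estimating areas from audio via a vocal-tract model, is upstream of this function and not sketched here.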

Processing speech signals in voice-based profiling
11538472 · 2022-12-27

This document describes a data processing system for processing a speech signal for voice-based profiling. The data processing system segments the speech signal into a plurality of segments, with each segment representing a portion of the speech signal. For each segment, the data processing system generates a feature vector comprising data indicative of one or more features of the portion of the speech signal represented by that segment and determines whether the feature vector comprises data indicative of one or more features with a threshold amount of confidence. For each of a subset of the generated feature vectors, the system processes data in that feature vector to generate a prediction of a value of a profile parameter and transmits an output responsive to machine executable code that generates a visual representation of the prediction of the value of the profile parameter.
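
The pipeline is segment, featurize, confidence-gate, then predict. A toy sketch of that flow, with assumed stand-ins throughout: the feature vector is just a mean amplitude, and confidence is mocked as a function of segment length (the patent does not specify either):

```python
def profile_predictions(segments, confidence_threshold=0.8):
    """For each segment (a list of samples), build a toy feature vector and
    a mock confidence score; keep only vectors meeting the threshold, and
    emit a prediction record for one illustrative profile parameter."""
    predictions = []
    for seg in segments:
        mean_amp = sum(abs(s) for s in seg) / len(seg)
        # Mock confidence: longer segments yield more reliable features.
        confidence = min(1.0, len(seg) / 100.0)
        if confidence < confidence_threshold:
            continue  # feature vector fails the confidence gate
        predictions.append({"mean_amplitude": mean_amp, "confidence": confidence})
    return predictions
```

In the described system the retained feature vectors would feed a trained predictor, and the outputs would drive a visual representation of the profile parameter.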

Processing and visualising audio signals
11538473 · 2022-12-27

The present disclosure relates to methods, computer programs, and computer-readable media for processing a voice audio signal. A method includes receiving, at an electronic device, a voice audio signal, identifying spoken phrases within the voice audio signal based on the detection of voice activity or inactivity, dividing the voice audio signal into a plurality of segments based on the identified spoken phrases, and in accordance with a determination that a selected segment of the plurality of segments has a duration, T_seg, longer than a threshold duration, T_thresh, identifying a most likely location of a breath in the audio associated with the selected segment and dividing the selected segment into sub-segments based on the identified most likely location of a breath.
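
The over-length split can be sketched compactly. Assuming durations are counted in frames and taking the quietest interior frame as a proxy for the most likely breath location (the abstract does not say how that location is found), a segment longer than T_thresh is divided there:

```python
def split_long_segment(energies, t_thresh):
    """If a segment (a list of per-frame energies) exceeds t_thresh frames,
    split it into two sub-segments at the quietest interior frame, used here
    as a stand-in for the most likely breath location."""
    if len(energies) <= t_thresh:
        return [energies]  # short enough; keep the segment whole
    interior = energies[1:-1]
    breath_idx = 1 + interior.index(min(interior))
    return [energies[:breath_idx], energies[breath_idx:]]
```

A full implementation would apply this recursively until every sub-segment satisfies the threshold, and would use a dedicated breath detector rather than an energy minimum.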

Apparatus, systems and methods for determining a commentary rating

Commentary rating determination systems and methods determine a commentary rating for commentary about a subject media content event that has been generated by a community member. An exemplary embodiment receives video information acquired by a 360° video camera and identifies a physical object from the received video information. It determines a physical attribute associated with the identified physical object, where the determined attribute describes a characteristic of that object, and compares the determined attribute with a plurality of predefined physical object attributes stored in a database. In response to identifying one of the predefined attributes that matches the determined attribute, the embodiment associates the quality value of that predefined attribute with the identified physical object. The commentary rating is then determined for the commentary based on the associated quality value.
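
The rating step reduces to a lookup-and-aggregate: match detected object attributes against a database of predefined attributes carrying quality values, then combine the matched values. A sketch assuming a simple average and invented attribute names (the abstract specifies neither the aggregation rule nor the scale):

```python
def commentary_rating(detected_attributes, attribute_db):
    """Match attributes of objects seen in the 360-degree video against a
    table of predefined attributes with quality values, and average the
    matched quality values into a commentary rating."""
    matched = [attribute_db[a] for a in detected_attributes if a in attribute_db]
    if not matched:
        return None  # no predefined attribute matched; no rating
    return sum(matched) / len(matched)
```

Weighted combinations, per-object confidence, or recency of the commentary could all replace the plain average without changing the overall shape of the method.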