G10L25/63

Narrative authentication

Authentication is performed based on a user narrative. A narrative, such as a personal story, can be requested during a setup process. Content, voice signature, and emotion can be determined or inferred by analyzing the narrative. Subsequently, a user can provide vocal input associated with the narrative, such as by retelling the narrative or answering questions about it. The vocal input can be analyzed for content, voice signature, and emotion, and compared with the initial narrative. An authentication score can then be generated based on the comparison.
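The abstract describes a three-way comparison (content, voice signature, emotion) that is folded into a single authentication score. A minimal sketch of such scoring, assuming a hypothetical profile schema, Jaccard similarity for content, cosine similarity for the voice signature, and illustrative weights (none of which are specified in the patent):

```python
from dataclasses import dataclass


@dataclass
class NarrativeProfile:
    """Features extracted from a narrative (hypothetical schema)."""
    content: set          # key facts/keywords from the story
    voice_signature: list  # e.g. an embedding of vocal characteristics
    emotion: str          # dominant inferred emotion label


def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0


def authentication_score(enrolled: NarrativeProfile,
                         attempt: NarrativeProfile,
                         weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted comparison of content, voice signature, and emotion."""
    union = enrolled.content | attempt.content
    content_sim = len(enrolled.content & attempt.content) / len(union) if union else 0.0
    voice_sim = cosine(enrolled.voice_signature, attempt.voice_signature)
    emotion_sim = 1.0 if enrolled.emotion == attempt.emotion else 0.0
    w_c, w_v, w_e = weights
    return w_c * content_sim + w_v * voice_sim + w_e * emotion_sim
```

A matching retelling scores near 1.0; a stranger with unrelated content, voice, and emotion scores near 0.0, so a threshold on the score decides authentication.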

EMOTION TYPE CLASSIFICATION FOR INTERACTIVE DIALOG SYSTEM
20180005646 · 2018-01-04 ·

Techniques for selecting an emotion type code associated with semantic content in an interactive dialog system. In an aspect, fact or profile inputs are provided to an emotion classification algorithm, which selects an emotion type based on the specific combination of fact or profile inputs. The emotion classification algorithm may be rules-based or derived from machine learning. A previous user input may be further specified as input to the emotion classification algorithm. The techniques are especially applicable in mobile communications devices such as smartphones, wherein the fact or profile inputs may be derived from usage of the diverse function set of the device, including online access, text or voice communications, scheduling functions, etc.
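The rules-based variant of the emotion classification algorithm can be pictured as a cascade of conditions over fact or profile inputs. A toy sketch, where all field names and rules are illustrative rather than taken from the patent, and a previous user input is simply another entry in the fact dictionary:

```python
def classify_emotion_type(facts: dict) -> str:
    """Toy rules-based emotion type selection over fact/profile inputs
    (field names and rules are illustrative, not from the patent)."""
    # Facts may come from device usage: calendar, calls, online access, etc.
    if facts.get("calendar_event") == "birthday" and facts.get("contact") == "friend":
        return "happy"
    # A previous user input can also drive the selection.
    if facts.get("previous_user_input") == "frustrated_tone":
        return "empathetic"
    if facts.get("online_status") == "offline" and facts.get("missed_calls", 0) > 3:
        return "concerned"
    return "neutral"
```

A machine-learned variant would replace this cascade with a model trained on (facts, emotion) pairs, but the interface, facts in and an emotion type code out, stays the same.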

Information-processing device, vehicle, computer-readable storage medium, and information-processing method
11710499 · 2023-07-25 ·

An information-processing device includes a first feature-value information acquiring unit for acquiring an acoustic feature-value vector and a language feature-value vector extracted from a user's spoken voice. The information-processing device includes a second feature-value information acquiring unit for acquiring an image feature-value vector extracted from the user's facial image. The information-processing device includes an emotion estimating unit including a learned model including: a first attention layer using, as inputs, a first vector generated from the acoustic feature-value vector and a second vector generated from the image feature-value vector; and a second attention layer using, as an input, an output vector from the first attention layer and a third vector generated from the language feature-value vector, wherein the emotion estimating unit is for estimating the user's emotion based on the output vector from the second attention layer.
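The two-stage attention fusion described above can be sketched with minimal single-head dot-product attention: the first layer fuses acoustic and image features, and the second layer fuses that output with language features. The dimensions, random inputs, and mean-pooled "logits" below are illustrative assumptions, not details from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)


def attention(query: np.ndarray, key_value: np.ndarray) -> np.ndarray:
    """Minimal single-head dot-product attention: each query row
    attends over the rows of key_value."""
    scores = query @ key_value.T / np.sqrt(key_value.shape[1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ key_value


d = 8  # shared feature dimension (illustrative)
acoustic = rng.standard_normal((4, d))  # first vector: from the spoken voice
image = rng.standard_normal((4, d))     # second vector: from the facial image
language = rng.standard_normal((4, d))  # third vector: from the transcript

# First attention layer fuses acoustic and image features;
# second layer fuses that output with language features.
fused_av = attention(acoustic, image)
fused_all = attention(fused_av, language)
emotion_logits = fused_all.mean(axis=0)  # a classifier head would map this to emotions
```

In the learned model the three input vectors would be produced by trained encoders and the attention layers would carry learned projection weights; this sketch keeps only the data flow.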

Message delivery apparatus and methods
11707694 · 2023-07-25 ·

The present disclosure provides a more adaptive and accessible messaging system. In some aspects, the present disclosure relates to a messaging system that allows users to prerecord messages for future or real-time use, enabling users to communicate emotional messages both visually and audibly to relay feedback between users. In some embodiments, the system may be useful in addressing mental-health issues, self-esteem problems, and other personal issues, as non-limiting examples. In some implementations, the device may provide instant messages from one message device to another message device.

Memory retention system

The present disclosure generally relates to a computer-implemented system for intelligently retaining and recalling memory data. An exemplary method comprises receiving, via a microphone of an electronic device, a speech input of the user; receiving a text input of the user; constructing a first instance of a memory data structure based on the speech input; constructing a second instance of the memory data structure based on the text input; adding the first instance and the second instance of the memory data structure to a memory stack of the user; displaying a user interface for retrieving memory data of the user; receiving, via the user interface, a beginning of a statement from the user; retrieving a particular instance of the memory data structure from the memory stack based on the beginning of the statement; and automatically displaying a completion of the statement.
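The method above boils down to a memory stack of structured records, built from speech and text inputs, plus prefix-based retrieval that completes a partial statement. A minimal sketch under that reading, with an invented record schema (the patent does not specify the fields):

```python
from dataclasses import dataclass, field


@dataclass
class MemoryRecord:
    """One instance of the memory data structure (illustrative fields)."""
    statement: str
    source: str  # "speech" or "text"


@dataclass
class MemoryStack:
    """The user's memory stack: records are pushed as they are captured."""
    records: list = field(default_factory=list)

    def add(self, record: MemoryRecord) -> None:
        self.records.append(record)

    def complete(self, beginning: str):
        """Return the completion of the most recent record whose
        statement starts with the given beginning, or None."""
        for record in reversed(self.records):
            if record.statement.startswith(beginning):
                return record.statement[len(beginning):]
        return None
```

A user interface would then display `complete(...)` as the automatic completion once the user starts typing or speaking the beginning of a remembered statement.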

ENHANCED VIRTUAL AND/OR AUGMENTED COMMUNICATIONS INTERFACE
20230239436 · 2023-07-27 ·

The present invention provides systems and methods employing a conferencing system for facilitating enhanced communication between users. In certain embodiments, the conferencing system comprises a communication interface configured to, during a conference session, provide a virtual and/or augmented conference between multiple users having access to multi-channel, multi-access, always-on, and non-blocking communication. In particular embodiments, the communication interface is in communication with at least one additional component selected from: a video component, a data component (e.g., one that provides non-audio data to one or more of said users), an audio/video ambience component, and a whiteboard component.

SYSTEMS AND METHODS FOR GENERATING EMOTIONALLY-ENHANCED TRANSCRIPTION AND DATA VISUALIZATION OF TEXT
20230237242 · 2023-07-27 ·

Generating an emotionally enhanced transcription of non-textual data and an enriched visualization of the transcribed data by capturing non-textual data of a speaker using bio-feedback technology, transcribing it into a textual format, combining the transcribed textual data with the emotional state of the speaker to generate the emotionally enhanced transcribed textual data, and presenting the emotionally enhanced transcribed textual data through an enriched visualization, including color-coding the transcribed textual data to identify mistakes in the transcribed data.
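The combination step, pairing transcribed segments with emotion labels and color-coding them, can be sketched as follows. ANSI terminal colors stand in for the richer visualization, and the (text, emotion, confidence) input triples are an assumed interface to the upstream transcription and bio-feedback components, not one defined in the patent:

```python
# ANSI color codes as a stand-in for the enriched visualization.
EMOTION_COLORS = {"joy": "\033[92m", "anger": "\033[91m", "neutral": "\033[0m"}
RESET = "\033[0m"


def enhance_transcript(segments) -> str:
    """segments: (text, emotion_label, confidence) triples, e.g. from an
    ASR engine plus a bio-feedback emotion estimator (both assumed here).
    Low-confidence segments are flagged as likely transcription mistakes."""
    lines = []
    for text, emotion, confidence in segments:
        color = EMOTION_COLORS.get(emotion, RESET)
        flag = " [check: low confidence]" if confidence < 0.5 else ""
        lines.append(f"{color}{text}{RESET} ({emotion}){flag}")
    return "\n".join(lines)
```

Here the 0.5 confidence threshold and the color palette are arbitrary choices; the point is that the emotion label colors each segment while the confidence score drives the mistake-identification markup.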