Patent classifications
G10L17/12
ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR
Disclosed is an electronic device that provides a parenting guide based on voice recognition. The electronic device includes a parenting assistance agent operated in one of a standby mode and an active mode, a microphone that senses sound, and a controller that, when characteristic information corresponding to sound sensed through the microphone in the standby mode satisfies a preset crying-sound criterion, switches the operation mode of the parenting assistance agent to the active mode, analyzes situation information related to the sound, extracts parenting data corresponding to the analyzed situation information, and outputs parenting guide information corresponding to the extracted parenting data. A user interface to which artificial intelligence (AI) is applied is provided through the electronic device.
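The standby-to-active flow described in this abstract could be sketched as follows. This is a minimal illustration, not the patented implementation: the pitch range, loudness threshold, situation labels, and guide messages are all assumed placeholder values.

```python
from dataclasses import dataclass

# Assumed crying-sound criteria (illustrative values only).
CRYING_PITCH_HZ = (350.0, 600.0)   # pitch range taken as typical of infant cries
CRYING_MIN_LEVEL = 0.2             # normalized loudness threshold

# Hypothetical parenting data keyed by analyzed situation.
GUIDES = {"hunger": "Try feeding the baby.", "discomfort": "Check the diaper."}

@dataclass
class ParentingAgent:
    mode: str = "standby"

    def on_sound(self, pitch_hz, level):
        """Switch to the active mode when sound features meet the crying
        criteria, then return guide information for the analyzed situation."""
        if self.mode == "standby":
            low, high = CRYING_PITCH_HZ
            if low <= pitch_hz <= high and level >= CRYING_MIN_LEVEL:
                self.mode = "active"
        if self.mode == "active":
            # Stand-in for situation analysis: classify by pitch alone.
            situation = "hunger" if pitch_hz < 450 else "discomfort"
            return GUIDES[situation]
        return None
```

In the standby mode the agent ignores non-crying sounds entirely; only a sound matching the criteria wakes it and triggers the situation analysis.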
INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM
The information processing apparatus (2000) computes a first score representing a degree of similarity between the input sound data (10) and the registrant sound data (22) of the registrant (20). The information processing apparatus (2000) obtains a plurality of pieces of segmented sound data (12) by segmenting the input sound data (10) in the time direction. The information processing apparatus (2000) computes, for each piece of segmented sound data (12), a second score representing the degree of similarity between the segmented sound data (12) and the registrant sound data (22). The information processing apparatus (2000) makes a first determination of whether the number of speakers of the sound included in the input sound data (10) is one or more than one, using at least the second scores. The information processing apparatus (2000) makes a second determination of whether the input sound data (10) includes the sound of the registrant (20), based on the first score, the second scores, and a result of the first determination.
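The two-stage decision described in this abstract could be illustrated as below. The embedding representation, cosine similarity, and all thresholds are assumptions for illustration; the abstract does not specify how the scores or determinations are computed.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(input_emb, segment_embs, registrant_emb,
           single_thresh=0.8, spread_thresh=0.3, match_thresh=0.6):
    """Illustrative sketch of the first/second determination logic."""
    # First score: whole input vs. registrant sound data.
    first_score = cosine(input_emb, registrant_emb)
    # Second scores: each time-direction segment vs. registrant sound data.
    second_scores = [cosine(s, registrant_emb) for s in segment_embs]
    # First determination: a wide spread of per-segment scores suggests
    # more than one speaker is present in the input.
    spread = max(second_scores) - min(second_scores)
    multiple_speakers = spread > spread_thresh
    # Second determination: with one speaker, trust the whole-input score;
    # with several, accept if any single segment matches the registrant strongly.
    if not multiple_speakers:
        return first_score >= match_thresh
    return max(second_scores) >= single_thresh
```

The point of segmenting in the time direction is that a multi-speaker recording can still contain segments spoken purely by the registrant, which the whole-input first score alone would dilute.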
ELECTRONIC DEVICE AND METHOD FOR OPERATION THEREOF
Various embodiments of the present disclosure relate to a method for providing an intelligent assistance service and an electronic device for performing the same. According to an embodiment, an electronic device comprises at least one communication circuit, at least one microphone, at least one speaker, at least one processor operatively connected to the communication circuit, the microphone, and the speaker, and at least one memory electrically connected to the processor, wherein the memory has instructions stored therein which, when executed, cause the processor to receive a wake-up utterance calling a voice-based intelligent assistance service, to identify, in response to the wake-up utterance, a session in progress with the voice-based intelligent assistance service, and, upon receiving a control command, to provide the control command to an external device through the identified session. Other embodiments are also possible.
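The wake-up-then-route flow in this abstract could be sketched as follows. The wake phrase, the session store, and the device identifier are hypothetical; the abstract leaves these details to the embodiments.

```python
class IntelligentAssistant:
    """Minimal sketch of the session-routing flow described above."""

    def __init__(self):
        self.sessions = {}    # device_id -> list of commands routed so far
        self.current = None   # the session identified after wake-up, if any
        self.awake = False

    def on_utterance(self, text, device_id="speaker-1"):
        if text.lower().startswith("hi assistant"):  # assumed wake-up phrase
            self.awake = True
            # Identify a session already in progress for this device, if any.
            self.current = self.sessions.get(device_id)
            return "listening"
        if self.awake and self.current is not None:
            # Provide the control command to the external device
            # through the identified session.
            self.current.append(text)
            return "forwarded"
        return "ignored"
```

Commands received before the wake-up utterance are ignored; only after the session is identified are control commands routed through it to the external device.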
USER ACCOUNT MATCHING BASED ON A NATURAL LANGUAGE UTTERANCE
Techniques are described for user account matching based on natural language utterances. In an example, a computer system receives a set of words, a voice print, and offer data about an offer based at least in part on a natural language utterance at a user device. The computer system determines a set of user accounts based at least in part on the set of words and determines, from this set, a first user account based at least in part on the voice print. The first user account is associated with a first user identifier. The computer system determines that the offer is associated with a second user account that is further associated with a second user identifier. The computer system generates associations of the user accounts with user identifiers and with the offer.
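The two-step account match (words narrow the candidates, the voice print picks one, the offer identifies a second account) could be illustrated as below. The account schema, field names, and matching rules here are assumptions; the abstract does not define them.

```python
def match_accounts(words, voice_print, offer, accounts):
    """Illustrative account-matching flow; the dict schema is assumed."""
    # Narrow candidate accounts using the set of words from the utterance
    # (e.g. an account holder's name spoken aloud).
    spoken = " ".join(words).lower()
    candidates = [a for a in accounts if a["name"].lower() in spoken]
    # Determine the first user account from the candidates via the voice print.
    first = next((a for a in candidates
                  if a["voice_print"] == voice_print), None)
    # The offer is associated with a (possibly different) second account.
    second = next((a for a in accounts
                   if a["id"] == offer["account_id"]), None)
    if first is None or second is None:
        return None
    # Associate both user identifiers with the offer.
    return {"sender": first["user_id"],
            "recipient": second["user_id"],
            "offer": offer["id"]}
```

Using the voice print only after the word-based narrowing keeps the biometric comparison restricted to a small candidate set rather than the whole account database.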
Reverberation compensation for far-field speaker recognition
Techniques are provided for reverberation compensation for far-field speaker recognition. A methodology implementing the techniques according to an embodiment includes receiving an authentication audio signal associated with speech of a user and extracting features from the authentication audio signal. The method also includes scoring results of application of one or more speaker models to the extracted features. Each of the speaker models is trained based on a training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with that speaker model. The method further includes selecting one of the speaker models, based on the score, and mapping the selected speaker model to a known speaker identification or label that is associated with the user.
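The train-with-simulated-reverberation, score-and-select flow could be sketched as follows. The impulse-response shape, the mean-absolute-level "feature", and the decay values are stand-ins for whatever the real reverberation simulator and speaker models would use.

```python
def reverberate(signal, decay=0.6, taps=3):
    """Toy reverberation simulator: convolve with a short train of
    decaying echoes standing in for a far-field room response."""
    ir = [decay ** k for k in range(taps)]
    out = [0.0] * (len(signal) + taps - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

def train_models(clean_signal, environments):
    """One 'speaker model' per simulated far-field environment. Here a
    model is just the mean absolute level of the reverberated training
    signal, a stand-in for real extracted features."""
    models = {}
    for name, decay in environments.items():
        rev = reverberate(clean_signal, decay=decay)
        models[name] = sum(abs(x) for x in rev) / len(rev)
    return models

def recognize(auth_signal, models, labels):
    """Score each model against features extracted from the
    authentication signal, select the best-scoring model, and map it
    to the known speaker label associated with the user."""
    feat = sum(abs(x) for x in auth_signal) / len(auth_signal)
    best = min(models, key=lambda m: abs(models[m] - feat))
    return labels[best]
```

Training each model on simulated reverberation means an authentication signal recorded far from the microphone scores best against the model whose simulated environment it resembles, instead of simply mismatching a clean-speech model.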