Patent classifications
G10L17/04
Passenger Assistant for a Shared Mobility Vehicle
A shared mobility vehicle hosts a moving “info kiosk” that provides information assistance to potential passengers (or other individuals) and to on-board passengers. The approach is applicable to human-operated vehicles, and is particularly applicable to autonomous vehicles, where no human operator is available to provide assistance.
Device and method for muffling, communication device and wearable device
A muffling device includes an acquisition circuit, configured to obtain reference sound wave information of a user. The muffling device includes a modulation circuit, configured to analyze an acoustic wave characteristic of the reference sound wave information to obtain a characteristic parameter of the reference sound wave information. The muffling device includes a muffling circuit, configured to generate compensated sound wave information according to the characteristic parameter of the reference sound wave information. The muffling device includes a correction circuit, configured to compare muffled sound wave information, formed by superimposing the compensated sound wave information and the reference sound wave information, with the reference sound wave information, and feed back a comparison result to the muffling circuit. The muffling circuit can adjust the compensated sound wave information according to the fed-back comparison result. The muffling device includes an output circuit, configured to output the adjusted compensated sound wave information.
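The acquire-compensate-compare-feedback loop described above can be sketched as a toy feedback canceller. This is an illustrative sketch only: the choice of RMS as the "characteristic parameter", the inverted-and-scaled compensation wave, and the gain-update rule are all assumptions, since the abstract does not specify them.

```python
import numpy as np

def characteristic_parameter(reference: np.ndarray) -> float:
    # Modulation step: extract a toy "characteristic parameter"
    # (assumed here to be RMS amplitude; the abstract does not say).
    return float(np.sqrt(np.mean(reference ** 2)))

def compensate(reference: np.ndarray, gain: float) -> np.ndarray:
    # Muffling step: generate a phase-inverted wave scaled by the gain.
    return -gain * reference

def muffle(reference: np.ndarray, iterations: int = 20) -> np.ndarray:
    # Correction loop: superimpose the compensation on the reference,
    # compare the resulting "muffled" signal with the reference, and
    # feed the comparison result back to adjust the compensation.
    gain = 0.5
    comp = compensate(reference, gain)
    for _ in range(iterations):
        residual = reference + comp              # muffled signal
        error = np.mean(residual * reference) / np.mean(reference ** 2)
        gain += 0.5 * error                      # feedback adjustment
        comp = compensate(reference, gain)
    return comp

tone = np.sin(np.linspace(0, 2 * np.pi * 5, 1000))
out = muffle(tone)
residual_rms = np.sqrt(np.mean((tone + out) ** 2))
```

With this update rule the residual shrinks geometrically, so after a handful of iterations the superimposed output nearly cancels the reference tone.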
METHODS FOR IMPROVING THE PERFORMANCE OF NEURAL NETWORKS USED FOR BIOMETRIC AUTHENTICATION
A method of generating a biometric signature of a user for use in authentication using a neural network, the method comprising: receiving (110) a plurality of biometric samples from a user; extracting at least one feature vector using the plurality of biometric samples; using the elements of the at least one feature vector as inputs for a neural network; extracting the corresponding activations from an output layer of the neural network; and generating a biometric signature of the user using the extracted activations, such that a single biometric signature represents multiple biometric samples from the user.
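The claimed pipeline (samples → feature vectors → network activations → one signature) can be sketched as follows. The single-layer tanh network and the averaging of activations into one signature are assumptions for illustration; the abstract fixes neither the architecture nor the combination rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained model: one dense layer with tanh output
# (assumption; the abstract does not fix an architecture).
W = rng.normal(size=(16, 8))

def activations(feature_vector: np.ndarray) -> np.ndarray:
    # Feed the feature vector through the network and read the
    # output-layer activations.
    return np.tanh(feature_vector @ W)

def biometric_signature(samples: list) -> np.ndarray:
    # One signature for many samples: average the output activations
    # across all enrollment samples (averaging is one simple way to
    # make a single signature represent multiple samples).
    acts = np.stack([activations(s) for s in samples])
    return acts.mean(axis=0)

samples = [rng.normal(size=16) for _ in range(5)]
signature = biometric_signature(samples)
```

At verification time, a fresh sample's activations would be compared against this stored signature rather than against every raw enrollment sample.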
User authentication as a service
Systems, methods, and devices for adaptably authenticating a user are disclosed. A device captures a user input, and sends data corresponding thereto to a system. The system determines natural language understanding (NLU) results representing the user input. A user authentication component of the system receives the NLU results and determines a skill configured to perform an action responsive to the user input. The user authentication component adaptably performs user authentication based on a user authentication condition associated with the skill. If the user can be authenticated to the satisfaction of the condition, the NLU results data are sent to the skill, along with an indicator representing the user was authenticated by the system.
USING SPEECH MANNERISMS TO VALIDATE AN INTEGRITY OF A CONFERENCE PARTICIPANT
Techniques are provided to validate a digitized audio signal that is generated by a conference participant. Reference speech features of the conference participant are obtained, either via samples provided explicitly by the participant, or collected passively via prior conferences. The speech features include one or more of word choices, filler words, common grammatical errors, idioms, common phrases, pace of speech, or other features. The reference speech features are compared to features observed in the digitized audio signal. If the reference speech features are sufficiently similar to the observed speech features, the digitized audio signal is validated and the conference participant is allowed to remain in the conference. If the validation is not successful, a variety of possible actions are taken, including alerting an administrator and/or terminating the participant's attendance in the conference.
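One of the listed feature types, word choice, is enough to sketch the validation step: build a frequency profile from reference speech, build one from the observed audio's transcript, and compare. Cosine similarity and the 0.5 threshold are assumptions; the abstract does not fix a similarity measure.

```python
from collections import Counter

def mannerism_profile(transcript: str) -> Counter:
    # Reference features as word-choice frequencies (one of the
    # feature types the abstract lists; pace, idioms, filler-word
    # rates, etc. are omitted from this sketch).
    return Counter(transcript.lower().split())

def similarity(ref: Counter, obs: Counter) -> float:
    # Cosine similarity between the two frequency profiles.
    common = set(ref) & set(obs)
    dot = sum(ref[w] * obs[w] for w in common)
    norm = (sum(v * v for v in ref.values()) ** 0.5 *
            sum(v * v for v in obs.values()) ** 0.5)
    return dot / norm if norm else 0.0

def validate(ref_text: str, observed_text: str, threshold: float = 0.5) -> bool:
    # Allow the participant to remain only if the observed features
    # are sufficiently similar to the reference features.
    return similarity(mannerism_profile(ref_text),
                      mannerism_profile(observed_text)) >= threshold

valid = validate("um so basically we should um sync on this",
                 "so um basically let's sync on the plan")
imposter = validate("um so basically we should um sync on this",
                    "greetings colleagues I propose an immediate transfer")
```

A failed validation here would trigger the escalation paths the abstract mentions, such as alerting an administrator or dropping the participant.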
Multi-user personalization at a voice interface device
A method at an electronic device with one or more microphones and a speaker includes receiving a first voice input; comparing the first voice input to one or more voice models; based on the comparing, determining whether the first voice input corresponds to any of a plurality of occupants, and according to the determination, authenticating an occupant and presenting a response, or restricting functionality of the electronic device.
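The compare-then-authenticate-or-restrict branch can be sketched with embedding-based voice models. Representing each occupant's voice model as a unit vector and matching by cosine score against a fixed threshold are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical enrolled voice models, one embedding per occupant
# (assumption: models are unit vectors compared by cosine score).
voice_models = {
    "alice": rng.normal(size=32),
    "bob": rng.normal(size=32),
}
voice_models = {k: v / np.linalg.norm(v) for k, v in voice_models.items()}

THRESHOLD = 0.8

def handle_voice_input(embedding: np.ndarray) -> str:
    # Compare the first voice input against each occupant's model;
    # authenticate the best match, or restrict functionality if no
    # model matches well enough.
    embedding = embedding / np.linalg.norm(embedding)
    scores = {name: float(model @ embedding)
              for name, model in voice_models.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= THRESHOLD:
        return f"personalized response for {best}"
    return "functionality restricted"

known = handle_voice_input(voice_models["alice"] + 0.05 * rng.normal(size=32))
unknown = handle_voice_input(rng.normal(size=32))
```

A slightly noisy sample of an enrolled occupant is matched and answered personally; an unrecognized voice falls into the restricted branch.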
Text independent speaker recognition
Text independent speaker recognition models can be utilized by an automated assistant to verify a particular user spoke a spoken utterance and/or to identify the user who spoke a spoken utterance. Implementations can include automatically updating a speaker embedding for a particular user based on previous utterances by the particular user. Additionally or alternatively, implementations can include verifying a particular user spoke a spoken utterance using output generated by both a text independent speaker recognition model as well as a text dependent speaker recognition model. Furthermore, implementations can additionally or alternatively include prefetching content for several users associated with a spoken utterance prior to determining which user spoke the spoken utterance.
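Two of the implementations above, automatically updating a speaker embedding from previous utterances and combining text-independent with text-dependent scores, can be sketched together. The running-mean update and the simple averaging of the two scores are assumptions; the abstract specifies neither.

```python
import numpy as np

rng = np.random.default_rng(2)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class SpeakerProfile:
    # Stored speaker embedding, updated automatically from prior
    # utterances. A running mean is used here as one plausible
    # update rule (an assumption).
    def __init__(self, embedding: np.ndarray):
        self.embedding = embedding
        self.count = 1

    def update(self, utterance_embedding: np.ndarray) -> None:
        self.count += 1
        self.embedding += (utterance_embedding - self.embedding) / self.count

def verify(profile, ti_embedding, td_embedding, td_reference,
           threshold: float = 0.7) -> bool:
    # Combine the text-independent score (against the stored profile)
    # with the text-dependent score (against a fixed-phrase reference).
    # Simple averaging of the two scores is an assumption.
    ti_score = cosine(profile.embedding, ti_embedding)
    td_score = cosine(td_reference, td_embedding)
    return (ti_score + td_score) / 2 >= threshold

base = rng.normal(size=16)
profile = SpeakerProfile(base.copy())
profile.update(base + 0.1 * rng.normal(size=16))
accepted = verify(profile, base + 0.1 * rng.normal(size=16),
                  td_embedding=base, td_reference=base)
rejected = verify(profile, rng.normal(size=16),
                  td_embedding=rng.normal(size=16), td_reference=base)
```

The prefetching idea from the abstract would slot in before `verify`: content for every candidate speaker is fetched while the scores are still being computed, then only the verified speaker's content is delivered.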