Patent classification: G06F16/636
MULTI-USER AUTHENTICATION ON A DEVICE
In some implementations, processor(s) can receive an utterance from a speaker, and determine whether the speaker is a known user of a user device or not a known user of the user device. The user device can be shared by a plurality of known users. Further, the processor(s) can determine whether the utterance corresponds to a personal request or a non-personal request. Moreover, in response to determining that the speaker is not a known user of the user device and in response to determining that the utterance corresponds to a non-personal request, the processor(s) can cause a response to the utterance to be provided for presentation to the speaker at the user device, or can cause an action responsive to the utterance to be performed by the user device.
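The decision logic in the abstract above can be sketched as a small routing function. This is an illustrative assumption, not the patent's implementation: the names `handle_utterance`, `is_known_user`-style membership checks, and the keyword-based "personal request" classifier are all placeholders.

```python
def handle_utterance(utterance: str, speaker_id: str, known_users: set,
                     personal_keywords: set) -> str:
    """Decide how a shared device responds to an utterance (hypothetical sketch)."""
    known = speaker_id in known_users
    # Naive placeholder classifier: a request is "personal" if it mentions
    # any personal-data keyword (e.g. "my calendar", "my email").
    personal = any(kw in utterance.lower() for kw in personal_keywords)

    if known:
        return "respond"   # known users get both personal and non-personal requests
    if not personal:
        return "respond"   # non-personal request from an unknown speaker: serve it
    return "decline"       # personal request from an unknown speaker: refuse


print(handle_utterance("what is the weather", "guest-1",
                       {"alice"}, {"my calendar"}))  # respond
```

The key branch is the one the abstract singles out: an unknown speaker making a non-personal request still receives a response or device action.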
Audio Content Serving and Creation Based on Modulation Characteristics and Closed Loop Monitoring
Disclosed systems and methods include determining a desired trajectory within a multi-dimensional mental state space based on a path from an initial position within the multi-dimensional mental state space to a target position within the multi-dimensional mental state space, where the initial position corresponds to an initial mental state of the user, and where the target position corresponds to a target mental state of the user. Some embodiments include selecting a first media item that has an expected trajectory within the multi-dimensional mental state space that approximates the desired trajectory, and causing playback of the first media item.
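One way to read "selecting a first media item that has an expected trajectory ... that approximates the desired trajectory" is nearest-trajectory search under a distance metric. The sketch below assumes a 2-D mental state space (e.g. valence/arousal axes) and mean pointwise Euclidean distance; both the space and the metric are illustrative assumptions, not details from the disclosure.

```python
import math


def trajectory_distance(a, b):
    """Mean pointwise Euclidean distance between two equal-length trajectories."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)


def select_media_item(desired, catalog):
    """Pick the catalog item whose expected trajectory best approximates the desired one."""
    return min(catalog, key=lambda item: trajectory_distance(item["trajectory"], desired))


desired = [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)]  # initial -> target path
catalog = [
    {"id": "calm-track",  "trajectory": [(0.0, 0.0), (0.1, 0.1), (0.2, 0.2)]},
    {"id": "focus-track", "trajectory": [(0.0, 0.1), (0.5, 0.4), (0.9, 1.0)]},
]
print(select_media_item(desired, catalog)["id"])  # focus-track
```

The closed-loop monitoring in the title would presumably re-estimate the user's position after playback and repeat the selection, which this static sketch omits.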
Multi-user authentication on a device
In some implementations, a set of audio recordings capturing utterances of a user is received by a first speech-enabled device. Based on the set of audio recordings, the first speech-enabled device generates a first user voice recognition model for use in subsequently recognizing a voice of the user at the first speech-enabled device. Further, a particular user account associated with the first voice recognition model is determined, and an indication that a second speech-enabled device that is associated with the particular user account is received. In response to receiving the indication, the set of audio recordings is provided to the second speech-enabled device. Based on the set of audio recordings, the second speech-enabled device generates a second user voice recognition model for use in subsequently recognizing the voice of the user at the second speech-enabled device.
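A notable detail of this flow is that the raw recordings, not the trained model, are forwarded to the second device, which trains its own local model. A minimal sketch of that hand-off follows; the class and method names are hypothetical, and "training" is reduced to hashing the recordings purely for illustration.

```python
import hashlib


class SpeechDevice:
    """Hypothetical speech-enabled device holding per-account voice models."""

    def __init__(self, name: str):
        self.name = name
        self.voice_models = {}  # account -> model

    def train_model(self, account: str, recordings: list) -> str:
        # Stand-in for real voice-model training on the audio recordings.
        model = hashlib.sha256(b"".join(recordings)).hexdigest()
        self.voice_models[account] = model
        return model


def share_enrollment(account, recordings, first, second):
    """First device trains, then forwards the recordings so the second
    device can generate its own model, as the abstract describes."""
    first.train_model(account, recordings)
    second.train_model(account, recordings)


kitchen, bedroom = SpeechDevice("kitchen"), SpeechDevice("bedroom")
share_enrollment("user@example.com", [b"hotword-1", b"hotword-2"], kitchen, bedroom)
print(kitchen.voice_models == bedroom.voice_models)  # True
```

Sharing recordings rather than a model lets each device train with its own microphone characteristics and model architecture.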
Headset playback acoustic dosimetry
In-ear sound pressure level, SPL, is determined that is caused by output audio being converted into sound by a headset worn by a user. The in-ear SPL is converted into a sound sample having units that are suitable for evaluating sound noise exposure. These operations are repeated to produce a sequence of sound samples during playback. This sequence of sound samples is then written to a secure database. Access to the database is authorized by the user. Other aspects are also described and claimed.
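Converting a sequence of SPL samples into "units that are suitable for evaluating sound noise exposure" is commonly done as a fractional noise dose. The sketch below uses a simplified NIOSH-style dose formula (85 dB criterion over 8 hours, 3 dB exchange rate); these parameter values are illustrative assumptions, not taken from the patent.

```python
def noise_dose_percent(spl_samples_db, sample_seconds,
                       criterion_db=85.0, criterion_hours=8.0, exchange_db=3.0):
    """Accumulate fractional dose: each sample contributes
    (sample duration) / (allowed duration at that SPL)."""
    dose = 0.0
    for spl in spl_samples_db:
        # Halve the allowed time for every `exchange_db` above the criterion level.
        allowed_hours = criterion_hours / 2 ** ((spl - criterion_db) / exchange_db)
        dose += (sample_seconds / 3600.0) / allowed_hours
    return dose * 100.0  # percent of the daily allowance


# One hour at 85 dB with 1-second samples -> 12.5% of the daily dose.
samples = [85.0] * 3600
print(round(noise_dose_percent(samples, 1.0), 1))  # 12.5
```

Writing each sample (or the running dose) to the user-authorized secure database, as the abstract describes, would happen after this conversion step.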
System for managing transitions between media content items
A system for playing media content items operates to provide smooth transitions between the media content items to continuously support a user's repetitive motion activity. The system can generate crossfade data containing information for transitions between media content items. The mix-in and mix-out points for the transitions are calculated to eliminate one or more portions of media content items that have lower musical energy than a majority portion of the items, and to maintain substantially consistent and/or stable musical energy (e.g., audio power or sound power) throughout the media content items including transitions therebetween.
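The mix-in/mix-out calculation can be pictured as trimming the low-energy edges of a per-frame energy profile. In the sketch below, frames below a fraction of the track's median energy are excluded at each end; the threshold ratio and framing are assumptions for illustration, not the system's actual method.

```python
def crossfade_points(energy, threshold_ratio=0.5):
    """Return (mix_in, mix_out) frame indices bracketing the high-energy core."""
    median = sorted(energy)[len(energy) // 2]
    threshold = median * threshold_ratio

    mix_in = 0
    while mix_in < len(energy) and energy[mix_in] < threshold:
        mix_in += 1          # skip the low-energy intro
    mix_out = len(energy) - 1
    while mix_out > mix_in and energy[mix_out] < threshold:
        mix_out -= 1         # skip the low-energy outro
    return mix_in, mix_out


# Quiet intro (frames 0-2) and quiet outro (frames 8-9) are excluded.
energy = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 1.0, 0.9, 0.2, 0.1]
print(crossfade_points(energy))  # (3, 7)
```

Crossfading from one track's mix-out point into the next track's mix-in point keeps the perceived musical energy roughly constant across the transition, which is what supports the repetitive-motion use case.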
Sleep assistance device
A sleep assistance device includes a contactless biometric sensor, a processor, memory, and a speaker. The processor detects a user's sleep state by reading signals from the contactless biometric sensor. The processor may then initiate a wind-down routine upon detecting a sleep-readiness state, including playing relaxing sounds or playing a respiration entrainment sound. The processor may also play noise-masking sounds upon detecting that a user has fallen asleep and seamlessly transition between the sounds played during the wind-down routine and the noise-masking sounds.
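The routine described above amounts to a small state machine from detected sleep state to audio mode. The state names and mapping below are hypothetical placeholders, not the device's actual states.

```python
def next_audio_mode(sleep_state: str) -> str:
    """Map a detected sleep state to the audio the device should play (sketch)."""
    transitions = {
        "awake": "silence",
        "sleep_ready": "wind_down",     # relaxing / respiration-entrainment sounds
        "asleep": "noise_masking",      # seamless hand-off from the wind-down audio
    }
    return transitions.get(sleep_state, "silence")


print([next_audio_mode(s) for s in ("awake", "sleep_ready", "asleep")])
```

In the device itself, the current state would be driven by readings from the contactless biometric sensor rather than passed in directly.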
METHODS AND SYSTEMS FOR PROCESSING AUDIO SIGNALS CONTAINING SPEECH DATA
Methods and systems for processing audio signals containing speech data are disclosed. Biometric data associated with at least one speaker are extracted from an audio input. A correspondence is determined between the extracted biometric data and stored biometric data associated with a consenting user profile, where a consenting user profile is a user profile that indicates consent to store biometric data. If no correspondence is determined, the speech data is discarded, optionally after having been processed.
NETWORK-ASSISTED REMOTE MEDIA LISTENING
Improved approaches for media listening amongst different users are disclosed. For example, methods, systems or computer program code can enable users to have a remote listening experience in real time. Advantageously, a remote user at a remote client device can in effect listen to a particular digital media asset that is being played at a local client device of a local user. Users can also provide media information and/or user profiles about themselves and share them with other users.
Methods and systems for processing audio signals containing speech data
Methods and systems for processing audio signals containing speech data are disclosed. Biometric data associated with at least one speaker are extracted from an audio input. A match is determined between the extracted biometric data and stored biometric data associated with a consenting user profile, where a consenting user profile is a user profile associated with a record indicating consent to store biometric data. If a match is determined to exist with such a profile, the speech data is stored in an archive after processing. If no such match is determined, or if the extracted biometric data includes data from a speaker not having a consenting user profile, the speech data is discarded, optionally after having been processed. The system and method provide a safeguard against transferring to storage the data of users, particularly minors or children, for whom a verified and valid consent has not been obtained from an authorised adult.
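The consent gate described above can be sketched as a routing function over the set of speakers detected in a segment. The function and variable names are hypothetical, and biometric extraction is abstracted into pre-identified speaker IDs.

```python
def route_speech(segment_speakers, consenting_profiles, archive, discard_log):
    """Archive speech only if every detected speaker has a consenting profile."""
    if all(spk in consenting_profiles for spk in segment_speakers):
        archive.append(segment_speakers)
    else:
        # Any speaker without a consenting profile (e.g. a minor) causes the
        # whole segment to be discarded rather than transferred to storage.
        discard_log.append(segment_speakers)


archive, discarded = [], []
route_speech(["alice"], {"alice", "bob"}, archive, discarded)
route_speech(["alice", "child-1"], {"alice", "bob"}, archive, discarded)
print(len(archive), len(discarded))  # 1 1
```

Note the `all(...)` check: a segment containing even one non-consenting speaker is discarded, matching the abstract's handling of mixed-speaker audio.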