G06F16/436

Method and apparatus for gaze detection

A method and apparatus for determining gaze direction information include a light source for directing illuminating light toward an eye region of a user, and one or more optical elements configured to guide the illuminating light from the light source to the eye region. The illuminating light is dynamically adjustable to generate a dynamic light beam on the eye region, and a sensor is configured to capture light reflected from the eye region and generate reflection eye data. The apparatus can maintain user profile information, adjust the spectral power distribution of the light source based on the user profile information, receive the reflection eye data, and generate the gaze direction information based on the reflection eye data.
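As an illustrative sketch of the claimed flow (all names, wavelengths, and weights below are hypothetical and not taken from the patent), the profile-driven illumination adjustment and a simple pupil-glint gaze estimate might look like:

```python
# Hypothetical sketch: adjust spectral power from a user profile, then
# estimate gaze direction from pupil and glint positions in reflection data.
from dataclasses import dataclass


@dataclass
class UserProfile:
    iris_pigmentation: str  # e.g. "light" or "dark" (illustrative attribute)


def spectral_power_distribution(profile: UserProfile) -> dict:
    """Choose per-wavelength power weights based on the stored user profile."""
    if profile.iris_pigmentation == "dark":
        # Darker irises reflect less near-IR; shift power accordingly
        # (weights are illustrative, not measured values).
        return {"850nm": 0.7, "940nm": 0.3}
    return {"850nm": 0.4, "940nm": 0.6}


def gaze_direction(pupil_center, glint_center):
    """Approximate gaze as the 2D pupil-glint displacement vector."""
    return (pupil_center[0] - glint_center[0],
            pupil_center[1] - glint_center[1])
```

A real implementation would calibrate the pupil-glint vector per user; the sketch only shows how profile data could feed the light source and how reflection data could yield a direction.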

MODIFIED MEDIA DETECTION
20230053277 · 2023-02-16 ·

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for detecting modified media are disclosed. In one aspect, a method includes the actions of receiving an item of media content. The actions further include providing the item as an input to a model that is configured to determine whether the item likely includes audio of a user's voice that was not spoken by the user or likely includes video of the user that depicts actions of the user that were not performed by the user. The actions further include receiving, from the model, data indicating whether the item likely includes audio of the user's voice that was not spoken by the user or likely includes video of the user that depicts actions of the user that were not performed by the user. The actions further include determining whether the item likely includes deepfake content.
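A minimal sketch of the receive-score-decide loop described above (the model, score names, and threshold are assumptions for illustration; the patent does not specify them):

```python
# Hypothetical wrapper around a modified-media model: the model returns
# per-modality likelihoods, and the caller thresholds them into a decision.
def detect_modified_media(item, model, threshold=0.5):
    """Run the media item through a model and decide whether it likely
    contains synthetic audio or video of the user."""
    scores = model(item)  # e.g. {"fake_audio": 0.92, "fake_video": 0.08}
    likely_deepfake = any(s >= threshold for s in scores.values())
    return {"scores": scores, "likely_deepfake": likely_deepfake}


def toy_model(item):
    """Stand-in for a trained classifier, used only for demonstration."""
    return {"fake_audio": 0.92, "fake_video": 0.08}
```

The design choice here is to keep the model opaque behind a callable so the decision logic (thresholding each modality's likelihood) stays independent of how the model is trained.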

Content explanation method and apparatus

A content explanation method and apparatus include identifying, by a content explanation apparatus, an emotion of the user. When a negative emotion is identified showing that the user is confused about delivered multimedia information, the content explanation apparatus obtains a target representation manner of target content in a target intelligence type. The target content is the content about which the user is confused in the multimedia information delivered to the user by an information delivery apparatus associated with the content explanation apparatus. The content explanation apparatus then presents the target content to the user in the target representation manner.
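The emotion-gated re-presentation step can be sketched as follows (the intelligence types, representation manners, and the `"confused"` label are illustrative assumptions, not taken from the patent):

```python
# Hypothetical mapping from intelligence type to representation manner.
REPRESENTATIONS = {
    "visual": "diagram",
    "verbal": "plain-text summary",
    "logical": "step-by-step derivation",
}


def explain_if_confused(emotion: str, intelligence_type: str, content: str):
    """If a negative 'confused' emotion is identified, re-present the target
    content in a manner matching the user's target intelligence type."""
    if emotion != "confused":
        return None  # no re-presentation needed
    manner = REPRESENTATIONS.get(intelligence_type, "plain-text summary")
    return f"{content} rendered as {manner}"
```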

Method, apparatus and computer device for searching audio, and storage medium

The present disclosure relates to a method for searching for audio, pertaining to the technical field of electronics. The method includes: detecting a predetermined trigger event in response to receiving a trigger instruction for searching for audio; recording a time point each time the predetermined trigger event is detected, until a predetermined end event is detected, and acquiring the recorded time points to obtain a time point sequence; selecting a target reference time sequence matching the time point sequence from pre-stored reference time sequences; and determining target audio data corresponding to the target reference time sequence based on a pre-stored correspondence between audio data and reference time sequences.
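One plausible way to match a tapped time point sequence against stored reference sequences is to normalize the inter-tap intervals so the same rhythm matches at any tapping speed; the distance metric and normalization below are assumptions for illustration, not the patent's specified matching method:

```python
# Hypothetical rhythm matcher: compare normalized inter-tap intervals.
def intervals(time_points):
    """Convert absolute tap times into inter-tap intervals, normalized so
    that the same rhythm matches regardless of overall tapping speed."""
    gaps = [b - a for a, b in zip(time_points, time_points[1:])]
    total = sum(gaps) or 1.0
    return [g / total for g in gaps]


def match_audio(time_points, references):
    """references: {audio_id: reference_time_sequence}. Returns the audio id
    whose reference rhythm is closest (squared error) to the tapped rhythm."""
    tapped = intervals(time_points)

    def distance(ref):
        r = intervals(ref)
        if len(r) != len(tapped):
            return float("inf")  # different tap counts cannot match
        return sum((a - b) ** 2 for a, b in zip(tapped, r))

    return min(references, key=lambda k: distance(references[k]))
```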

CONTENT DELIVERY TECHNIQUES FOR CONTROLLING BIOMETRIC PARAMETERS

Methods, systems, and devices for content delivery are described. A device may receive biometric data associated with a user from a wearable device. The device may determine that a biometric parameter of a set of biometric parameters associated with the biometric data satisfies a threshold during an occasion. The device may select media content from a set of media content for recommending to the user. Each respective media content of the set of media content may be scored based on a respective effectiveness associated with each respective media content for controlling a value of the biometric parameter. The selecting may be triggered based on the biometric parameter satisfying the threshold. The media content may be selected based on a score associated with the media content. The device may output the media content via a graphical user interface (GUI) of the device during the occasion.
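The trigger-then-select logic can be sketched in a few lines (the biometric parameter, threshold direction, and effectiveness scores are illustrative assumptions):

```python
# Hypothetical sketch: when a biometric parameter (e.g. heart rate) crosses
# its threshold, pick the media item with the highest effectiveness score
# for controlling that parameter.
def select_content(biometric_value, threshold, scored_content):
    """scored_content: {media_id: effectiveness_score}. Returns the best
    media id when the threshold is satisfied, otherwise None."""
    if biometric_value < threshold:
        return None  # threshold not satisfied; no recommendation triggered
    return max(scored_content, key=scored_content.get)
```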

Activities data modeling in human internet-of-things platforms

A platform models and correlates physical activities based on users' interactions with a simple grip-metaphor design, enabling multi-dimensional actionable information to improve the health, performance, and well-being of connected-grip users within like-minded communities. For example, the platform captures multi-dimensional datasets generated from activities of each of a plurality of users on the online human internet-of-things platform, where the activities include physical interactions with connected grip systems connected to the online human internet-of-things platform. The platform then filters the captured multi-dimensional datasets into a plurality of categories and scores the filtered multi-dimensional data. Finally, the platform generates a multi-dimensional information model for each user based on the scored multi-dimensional data.

Memorial facility with memorabilia, meeting room, secure memorial database, and data needed for an interactive computer conversation with the deceased
11635929 · 2023-04-25 ·

The embodied invention is a multi-vault memorial facility with a meeting room and an interactive system for storing and providing memorial information about the deceased. The meeting room includes memorabilia and access to a secure personal biographical database containing historical and personal information about the deceased. An interactive user interface is used by a visiting person while at the facility. Security for the personal biographical database is provided by administrative control of access, which identifies who can modify the database. The interactive user interface utilizes visual and audio presentations, and a conversational user interface with a likeness and a digital memory of the deceased. A computer projection talks with a visitor and mimics responses of the deceased.

Displaying augmented reality content with tutorial content

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for displaying augmented reality content. The program and method provide for receiving, by a device, user input selecting an augmented reality content item for display, the augmented reality content item corresponding to a tutorial with augmented reality content; causing, in response to receiving the user input, a camera of the device to activate to capture an image feed; displaying tutorial content in conjunction with the image feed; and modifying the image feed with augmented reality content that corresponds to the tutorial content.

MUSIC RECOMMENDATION METHOD AND APPARATUS
20230206093 · 2023-06-29 ·

A music recommendation method and apparatus are provided, to determine an attention mode of a user in a complex environment by using viewpoint information of the user, thereby matching music more precisely. According to a first aspect, a music recommendation method is provided. The method includes: receiving visual data of a user (S501); obtaining at least one attention unit and attention duration of the at least one attention unit based on the visual data (S502); determining an attention mode of the user based on the attention duration of the at least one attention unit (S503); and determining recommended music information based on the attention mode (S504).
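Steps S502-S504 can be sketched as follows (the mode labels, dominance threshold, and catalog mapping are illustrative assumptions, not taken from the patent):

```python
# Hypothetical sketch: classify attention mode from per-unit gaze dwell
# durations, then map the mode to recommended music.
def attention_mode(unit_durations, focus_threshold=0.6):
    """unit_durations: {attention_unit: dwell_seconds}. Returns 'focused'
    with the dominant unit if one unit dominates total viewing time,
    otherwise 'browsing'."""
    total = sum(unit_durations.values()) or 1.0
    top_unit, top_dur = max(unit_durations.items(), key=lambda kv: kv[1])
    if top_dur / total >= focus_threshold:
        return ("focused", top_unit)
    return ("browsing", None)


def recommend_music(unit_durations, catalog):
    """catalog maps attention modes to music tags (illustrative)."""
    mode, _ = attention_mode(unit_durations)
    return catalog.get(mode)
```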