G06V40/174

Therapeutic smile detection systems
11568680 · 2023-01-31 ·

Systems for detecting when a person exhibits a smile with therapeutic benefits, including a facial expression detection device and a system processor. The facial expression detection device is configured to acquire facial expression data. The system processor is in data communication with the facial expression detection device and is configured to execute stored computer executable system instructions. The computer executable system instructions include the steps of receiving facial expression parameter data establishing target facial expression criteria, receiving current facial expression data from the facial expression detection device, comparing the current facial expression data to the target facial expression criteria of the facial expression parameter data, and identifying whether the current facial expression data satisfies the target facial expression criteria. The target facial expression criteria define a smile with therapeutic benefits.
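The comparison step can be sketched as a range check of measured parameters against target criteria. The parameter names (`mouth_corner_lift`, `duration_s`) and ranges are illustrative assumptions; the patent does not name specific parameters:

```python
# Sketch of comparing current facial expression data against target criteria.
# Parameter names and ranges are illustrative, not taken from the patent.

def satisfies_criteria(current: dict, criteria: dict) -> bool:
    """Return True if every target parameter is present and within its range."""
    for name, (lo, hi) in criteria.items():
        value = current.get(name)
        if value is None or not (lo <= value <= hi):
            return False
    return True

# Hypothetical criteria defining a "smile with therapeutic benefits".
target = {"mouth_corner_lift": (0.6, 1.0), "duration_s": (2.0, 60.0)}
current = {"mouth_corner_lift": 0.7, "duration_s": 3.5}
print(satisfies_criteria(current, target))  # True
```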

Content clustering of new photographs for digital picture frame display

A method for automated routing of pictures taken on mobile electronic devices to a digital picture frame, including a camera integrated with the frame and a network connection module that allows the frame to directly contact electronic devices, or the photo collections of community members, and upload photos from them. The integrated camera is used to automatically determine the identity of a frame viewer and can capture gesture-based feedback. The displayed photos are automatically shown and/or changed according to the detected viewers. The photos can be filtered and cropped on the receiver side. Clustering photos by content is used to improve display and to respond to photo viewers' desires.
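The content-clustering step could be realized by grouping photo feature vectors; a minimal k-means sketch over hypothetical 2-D embeddings follows (the patent does not specify a clustering algorithm, so k-means and the embeddings are assumptions):

```python
# Illustrative content clustering of photo feature vectors with plain k-means.
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # random initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each photo to nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [                          # recompute centers from assignments
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return clusters

# Hypothetical photo embeddings: two obvious content groups.
photos = [(0.1, 0.2), (0.12, 0.21), (0.9, 0.8), (0.88, 0.82)]
groups = kmeans(photos, k=2)
```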

ARTIFICIAL INTELLIGENCE-DRIVEN AVATAR-BASED PERSONALIZED LEARNING TECHNIQUES
20230237922 · 2023-07-27 ·

Methods, apparatus, and processor-readable storage media for artificial intelligence-driven avatar-based personalized learning techniques are provided herein. An example computer-implemented method includes obtaining multiple forms of input data from user devices associated with a user in a virtual learning environment; determining status information for user variables by processing at least a portion of the multiple forms of input data using a first set of artificial intelligence techniques; determining instruction-related modifications for the user by processing, using a second set of artificial intelligence techniques, at least a portion of the multiple forms of input data and at least a portion of the determined status information; implementing, based on the determined instruction-related modifications, modifications to an instructor avatar with respect to the user in the virtual learning environment; and performing one or more automated actions based on user response to the implemented modifications to the instructor avatar.
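The two-stage pipeline (status determination, then instruction-related modifications) can be sketched as below; the stand-in rule-based functions and the `gaze_on_screen` signal are assumptions replacing the patent's unspecified AI techniques:

```python
# Sketch of the two-stage pipeline: input data -> status -> avatar modifications.
# The rule-based "models" are placeholders for trained AI models (assumption).

def determine_status(inputs: dict) -> dict:
    # First set of AI techniques: derive user-variable status from input data.
    return {"engagement": "low" if inputs.get("gaze_on_screen", 1.0) < 0.5 else "high"}

def determine_modifications(inputs: dict, status: dict) -> list:
    # Second set of AI techniques: choose instructor-avatar modifications.
    return ["slow_down", "add_example"] if status["engagement"] == "low" else []

def run_step(inputs: dict) -> list:
    status = determine_status(inputs)
    return determine_modifications(inputs, status)

print(run_step({"gaze_on_screen": 0.3}))  # ['slow_down', 'add_example']
```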

METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING AVATAR
20230237723 · 2023-07-27 ·

Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for generating an avatar. The method includes generating an indication of correlation among image information, audio information, and text information of a video. The method may further include generating, based on the indication of the correlation, a first feature set and a second feature set representing features of a target object in the video, wherein the first feature set represents invariant features of the target object in the video, and the second feature set represents equivariant features of the target object in the video. The method may further include generating the avatar based on the first feature set and the second feature set. With this method, the generated avatar can be made more accurate and vivid with a better effect, while also reducing data annotation cost, improving operation efficiency, and enhancing user experience.

Facial beauty prediction method and device based on multi-task migration

Disclosed are a facial beauty prediction method and device based on multi-task migration. The method includes: performing graph-structure-based similarity measurement on a plurality of tasks to obtain an optimal combination of the tasks; constructing a facial beauty prediction model including a feature sharing layer based on the optimal combination; migrating feature parameters of an existing large-scale facial image network to the feature sharing layer of the facial beauty prediction model; inputting facial images for training to pre-train the facial beauty prediction model; and inputting a facial image to be tested into the trained facial beauty prediction model to obtain a facial beauty prediction result.
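The feature-migration step amounts to copying pretrained parameters into the shared layer and freezing them during pre-training. A toy sketch with dict-based "weights" follows; the layer names and values are illustrative assumptions, not the patent's architecture:

```python
# Sketch of migrating pretrained feature parameters into a shared layer.
# Layer names and weight values are illustrative assumptions.

pretrained = {"conv1": [0.2, -0.1], "conv2": [0.5, 0.3], "head": [0.9]}
beauty_model = {"shared": {}, "task_heads": {"beauty": [0.0], "gender": [0.0]}}

SHARED_LAYERS = ["conv1", "conv2"]   # layers to migrate (assumption)
frozen = set()
for name in SHARED_LAYERS:
    beauty_model["shared"][name] = list(pretrained[name])  # copy weights over
    frozen.add(name)                                       # freeze during pre-training
```

Only the shared feature layers receive pretrained parameters; the task-specific heads remain randomly initialized and trainable.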

METHOD AND DATA PROCESSING APPARATUS

A method of generating an emotion descriptor icon includes receiving input content comprising video information, and performing analysis on the input content to produce information representing the video information with respect to a plurality of characteristics. The method also includes determining, based on a comparison of the information representing the video information at a temporal position in the video information and a set of information items respectively representing an emotion state, a relative likelihood of association between the input content and at least some of a plurality of emotion states, selecting an emotion state based on the outcome of the determination, and outputting an emotion descriptor icon selected from an emotion descriptor icon set comprising a plurality of emotion descriptor icons. The outputted emotion descriptor icon is associated with the selected emotion state.
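The selection step, comparing a frame's representation against per-emotion reference items and picking the most likely state, can be sketched as follows. Cosine similarity, the 2-D reference vectors, and the icon mapping are assumptions; the patent only requires a comparison yielding relative likelihoods:

```python
# Illustrative emotion-state selection: compare a frame's feature vector to
# per-emotion reference vectors, then emit the icon for the most likely state.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

EMOTION_REFS = {"joy": (0.9, 0.1), "sadness": (0.1, 0.9)}      # illustrative
EMOTION_ICONS = {"joy": "\N{SMILING FACE WITH SMILING EYES}",
                 "sadness": "\N{CRYING FACE}"}

def select_icon(frame_features):
    likelihoods = {e: cosine(frame_features, ref) for e, ref in EMOTION_REFS.items()}
    state = max(likelihoods, key=likelihoods.get)               # most likely state
    return EMOTION_ICONS[state]
```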

OBJECT REPLACEMENT SYSTEM
20230230292 · 2023-07-20 ·

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for performing operations comprising: receiving an image that includes a depiction of a real-world environment; processing the image to obtain data indicating presence of a real-world object in the real-world environment; receiving input that selects an AR experience comprising an AR object; determining, based on the obtained data, that the real-world object detected in the real-world environment depicted in the image corresponds to the AR object; applying a machine learning technique to the image to generate a new image that depicts the real-world environment without the real-world object; and applying the AR object to the new image to generate a modified new image that depicts the real-world environment including the AR object in place of the real-world object.
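The remove-then-overlay flow can be illustrated on a tiny pixel grid, with a flat background fill standing in for the learned inpainting step (the patent uses a machine learning technique; the fill, box coordinates, and sprite here are simplifications):

```python
# Toy sketch: erase the detected object's region (flat fill standing in for
# learned inpainting), then paste the AR object sprite into the new image.

def replace_object(image, obj_box, ar_sprite, ar_pos, background):
    """obj_box is (x0, y0, x1, y1), half-open; ar_pos is (row, col)."""
    x0, y0, x1, y1 = obj_box
    out = [row[:] for row in image]
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = background       # simplified "inpainting"
    py, px = ar_pos
    for dy, row in enumerate(ar_sprite):
        for dx, v in enumerate(row):
            out[py + dy][px + dx] = v    # overlay the AR object
    return out

img = [[1] * 4 for _ in range(4)]
img[1][1] = img[1][2] = 9                # "real-world object" pixels
new = replace_object(img, (1, 1, 3, 2), [[7, 7]], (1, 1), background=1)
```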

METHOD OF EVALUATION OF SOCIAL INTELLIGENCE AND SYSTEM ADOPTING THE METHOD
20230225652 · 2023-07-20 ·

Provided is a method of evaluating emotional intelligence, the method including: evaluating the accuracy of subjects' emotional recognition of emotional image stimuli; evaluating the accuracy of subjects' interpersonal emotional recognition of a facial image stimulus with a certain emotion; a classification step of presenting a video stimulus of a certain emotion to subjects selected for high accuracy; extracting heart rate variability (HRV) from the subjects; forming a classification model by classifying the video stimulus and applying machine learning to the HRV through a model generating unit; and evaluating the emotional intelligence of a subject exposed to the image stimulus using the classification model.
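The classification-model step could take the form of any classifier over HRV features; a nearest-centroid sketch follows, with the feature vectors, labels, and values all illustrative assumptions:

```python
# Sketch: classify the emotion of a stimulus from HRV features with a
# nearest-centroid model. Feature values and labels are illustrative.

def fit_centroids(samples):
    """samples: list of (hrv_vector, emotion_label) training pairs."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(s / counts[lab] for s in acc) for lab, acc in sums.items()}

def classify(centroids, vec):
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(vec, centroids[lab])))

train = [((55.0, 0.12), "calm"), ((52.0, 0.11), "calm"),
         ((20.0, 0.30), "fear"), ((22.0, 0.28), "fear")]
model = fit_centroids(train)
print(classify(model, (50.0, 0.13)))  # calm
```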

Object tracking and best shot detection system

A method and system using face tracking and object tracking is disclosed. The method and system use face tracking, location, and/or recognition to enhance object tracking, and use object tracking and/or location to enhance face tracking.

System and method for emotion detection and inter-vehicle communication
11564073 · 2023-01-24 ·

A computer-implemented method for emotion detection and communication includes receiving host passenger data for a host passenger of a host vehicle and determining an emotion of the host vehicle based on the host passenger data. The method includes communicating the emotion of the host vehicle to one or more remote vehicles and an emotion of the one or more remote vehicles to the host vehicle. Further, the method includes generating an output based on the emotion of the host vehicle and the emotion of the one or more remote vehicles. The output is an interactive user interface providing an indication of the emotion of the host vehicle and an indication of the emotion of the one or more remote vehicles. The method includes rendering the output to a human machine interface device.
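The derive-then-combine flow can be sketched as follows; the passenger signals (heart rate, horn presses), the scoring rule, and the output payload shape are illustrative assumptions, since the patent does not specify how passenger data maps to a vehicle emotion:

```python
# Sketch: derive a host-vehicle emotion from passenger data, then combine it
# with received remote-vehicle emotions into a UI payload. Rules are assumed.

def vehicle_emotion(passenger_data: dict) -> str:
    # Toy rule: elevated heart rate plus repeated honking -> "agitated".
    if (passenger_data.get("heart_rate", 70) > 100
            and passenger_data.get("horn_presses", 0) > 2):
        return "agitated"
    return "calm"

def build_output(host_emotion: str, remote_emotions: dict) -> dict:
    # Payload for the interactive user interface (shape is an assumption).
    return {"host_emotion": host_emotion, "remote_emotions": remote_emotions}

host = vehicle_emotion({"heart_rate": 110, "horn_presses": 3})
print(build_output(host, {"vehicle_42": "calm"}))
```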