G06V40/175

METHOD OF FACE EXPRESSION RECOGNITION

The present invention provides a method of facial expression recognition comprising three steps: step 1: collecting facial expression data, which helps solve the problems of scarce, disparate, and biased data that cause overfitting when training a deep learning model; step 2: designing a new deep learning network able to focus on salient regions of the face to extract and learn the important features of facial expressions, by integrating ensemble attention modules into a basic deep network architecture such as ResNet; step 3: training the ensemble attention deep learning model of step 2 on the dataset collected in step 1, using a combination of two loss functions, ArcFace and Softmax, to reduce overfitting.
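The combined loss of step 3 can be sketched as follows. This is a minimal NumPy illustration, not the patent's implementation; the scale `s`, margin `m`, and mixing weight `alpha` are assumed hyperparameters.

```python
import numpy as np

def arcface_softmax_loss(features, weights, labels, s=30.0, m=0.50, alpha=0.5):
    """Sketch of a combined ArcFace + softmax cross-entropy loss.
    s (scale), m (angular margin), and alpha (mixing weight) are
    illustrative values, not taken from the patent."""
    # L2-normalize features (rows) and class weights (columns) so the
    # logits become cosine similarities.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=0, keepdims=True)
    cos = f @ w                                   # (batch, classes)
    idx = np.arange(len(labels))

    def cross_entropy(logits):
        logits = logits - logits.max(axis=1, keepdims=True)
        p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
        return -np.log(p[idx, labels]).mean()

    # Plain softmax cross-entropy on scaled cosines.
    softmax_loss = cross_entropy(s * cos)
    # ArcFace: add the angular margin m to the target-class angle only.
    theta = np.arccos(np.clip(cos, -1.0, 1.0))
    margin_cos = cos.copy()
    margin_cos[idx, labels] = np.cos(theta[idx, labels] + m)
    arcface_loss = cross_entropy(s * margin_cos)
    return alpha * arcface_loss + (1.0 - alpha) * softmax_loss
```

With `m = 0` the two terms coincide, so the margin is the only source of the ArcFace term's extra penalty on the target class.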

Methods and Systems for Opening of a Vehicle Access Point Using Audio or Video Data Associated with a User

Methods and systems for opening an access point of a vehicle. A system and a method may involve receiving wirelessly a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating the user approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle responsive to the determining of the intention of the user to access the interior of the vehicle.
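The decision flow above can be reduced to a simple predicate. This is a hedged sketch; the intent score and its 0.8 threshold are illustrative assumptions, not values from the patent.

```python
def should_open_access_point(remote_signal_ok, approach_detected, intent_score,
                             threshold=0.8):
    """Open the vehicle access point only when the remote-controller signal
    is present, audio/video shows the user approaching, and the inferred
    intent score clears an (assumed) threshold."""
    return bool(remote_signal_ok and approach_detected
                and intent_score >= threshold)
```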

COLLECTION OF MACHINE LEARNING TRAINING DATA FOR EXPRESSION RECOGNITION

Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions, according to cues or goals. The cues or goals may be to mimic an expression or appear in a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by expert(s). The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
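The two-stage filter (crowd rating, then expert vetting) can be sketched as below; the `crowd_rating` field and the `expert_vet` callable are stand-ins for the patent's rating and review steps.

```python
def crowdsource_pipeline(images, quality_threshold, expert_vet):
    """Two-stage filter sketch: keep images whose crowd rating meets the
    first quality criterion, then pass survivors to expert vetting.
    `expert_vet` is an assumed callable standing in for human review."""
    candidates = [img for img in images
                  if img["crowd_rating"] >= quality_threshold]
    return [img for img in candidates if expert_vet(img)]
```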

Determining a mood for a group
11710323 · 2023-07-25

A system and method for determining a mood for a crowd is disclosed. In example embodiments, a method includes identifying an event that includes two or more attendees, receiving at least one indicator representing emotions of attendees, determining a numerical value for each of the indicators, and aggregating the numerical values to determine an aggregate mood of the attendees of the event.
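The aggregation step can be sketched as a score-and-average over indicators. The emotion-to-value mapping below is an illustrative assumption; the patent does not specify these scores.

```python
from statistics import mean

# Illustrative mapping from emotion indicators to numerical values
# (an assumption, not taken from the patent).
EMOTION_SCORES = {"happy": 1.0, "neutral": 0.0, "sad": -1.0, "angry": -0.8}

def aggregate_mood(indicators):
    """Map each attendee's emotion indicator to a numerical value and
    average the values into an aggregate crowd mood."""
    return mean(EMOTION_SCORES[i] for i in indicators)
```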

Therapeutic smile detection systems
11568680 · 2023-01-31

Systems for detecting when a person exhibits a smile with therapeutic benefits including a facial expression detection device and a system processor. The facial expression detection device is configured to acquire facial expression data. The system processor is in data communication with the facial expression detection device and is configured to execute stored computer executable system instructions. The computer executable system instructions include the steps of receiving facial expression parameter data establishing target facial expression criteria, receiving current facial expression data from the facial expression detection device, comparing the current facial expression data to the target facial expression criteria of the facial expression parameter data, and identifying whether the current facial expression data satisfies the target facial expression criteria. The target facial expression criteria define a smile with therapeutic benefits.
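The comparison step can be sketched as a threshold check of current expression data against the target criteria. The parameter names and thresholds below are assumptions for illustration, not the patent's criteria.

```python
# Illustrative target criteria for a smile with therapeutic benefits;
# names and thresholds are assumed, not taken from the patent.
TARGET_CRITERIA = {"mouth_corner_lift": 0.6, "cheek_raise": 0.4, "duration_s": 2.0}

def satisfies_target(current, criteria=TARGET_CRITERIA):
    """Compare current facial expression data to the target criteria,
    treating each criterion value as a minimum threshold."""
    return all(current.get(k, 0.0) >= v for k, v in criteria.items())
```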

Intelligent mixing and replacing of persons in group portraits
11551338 · 2023-01-10

The present disclosure is directed toward intelligently mixing and matching faces and/or people to generate an enhanced image that reduces or minimizes artifacts and other defects. For example, the disclosed systems can selectively apply different alignment models to determine a relative alignment between a reference image and a target image having an improved instance of the person. Upon aligning the digital images, the disclosed systems can intelligently identify a replacement region based on a boundary that includes the target instance and the reference instance of the person without intersecting other objects or people in the image. Using the size and shape of the replacement region around the target instance and the reference instance, the systems replace the instance of the person in the reference image with the target instance. The alignment of the images and the intelligent selection of the replacement region minimize inconsistencies and/or artifacts in the final image.

Information processing system for extracting images, image capturing apparatus, information processing apparatus, control methods therefor, and storage medium
11544834 · 2023-01-03

An information processing system comprises an image capturing apparatus and an information processing apparatus. The image capturing apparatus includes an image capturing device; a first evaluation unit configured to perform first evaluation on a captured image; and a first transmission unit configured to transmit the captured image to the information processing apparatus. The information processing apparatus includes a reception unit configured to receive the captured image; a second evaluation unit configured to perform second evaluation on the captured image; and a second transmission unit configured to transmit an evaluation result to the image capturing apparatus. The image capturing apparatus further includes a sorting unit configured to receive the evaluation result of the second evaluation, and sort the captured image using the evaluation results of the first and second evaluations.
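The sorting unit's use of both evaluation results can be sketched as a weighted ranking. The equal weights and the `eval1`/`eval2` field names are illustrative assumptions.

```python
def sort_captured_images(images, w_first=0.5, w_second=0.5):
    """Sort captured images by a weighted combination of the camera-side
    (first) and server-side (second) evaluation scores. The weights are
    assumed, not specified by the patent."""
    return sorted(images,
                  key=lambda im: w_first * im["eval1"] + w_second * im["eval2"],
                  reverse=True)
```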

Emoji recording and sending

The present disclosure generally relates to generating and modifying virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.

Oral Care Based Digital Imaging Systems And Methods For Determining Perceived Attractiveness Of A Facial Image Portion

Oral care based imaging computer-implemented systems and methods for determining perceived attractiveness of a facial image portion of at least one person depicted in a digital image. The method has the following steps: a) obtaining a digital image comprising at least one oral feature of at least one person, wherein the digital image includes a facial image portion of the at least one person, the facial image portion having both positive and negative attributes as defined by pixel data of the digital image; b) analyzing the facial image portion; c) generating an Attractiveness Score indicative of a perceived attractiveness of the facial image portion based on the analyzed facial image portion in the obtained digital image; d) further generating an image description that identifies at least one area in said facial image portion based on the Attractiveness Score; and e) presenting the image description to a user.
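Steps c) through e) can be sketched as follows. The score formula (positive minus negative attribute weights) and the attribute names are assumptions for illustration; the patent does not disclose a specific formula here.

```python
def attractiveness_report(positive, negative):
    """Sketch of steps c-e: compute an Attractiveness Score as positive
    minus negative attribute weights (an assumed formula), then build an
    image description identifying the most negative area."""
    score = sum(positive.values()) - sum(negative.values())
    description = f"Attractiveness Score: {score:.2f}"
    if negative:
        worst = max(negative, key=negative.get)
        description += f"; area identified: {worst}"
    return score, description
```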

System and method for biometric monitoring and educational analysis optimization using artificial intelligence

Systems and methods for educational analysis optimization. The system includes a camera, a processor and memory. The memory stores instructions to execute a method. The method begins with receiving a request from a user at a client device to begin a stimulus session. Then, video recording of the user for the stimulus session is initialized. Next, calibrations for emotions and gaze are set. Then, one or more stimuli are presented to the user. Cues and reactions are recorded and mapped to content that was displayed during the times of recorded reactions and cues. The recordings are post-processed for educational analysis and feedback is provided to the user. The feedback and analysis can be optimized using a predictive artificial intelligence model.
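The step that maps recorded reactions and cues to the content displayed at those times can be sketched as an interval lookup. The tuple layout of the stimulus schedule is an assumed representation.

```python
def map_reactions_to_content(stimuli, reactions):
    """Map each recorded reaction (timestamp, cue) to the stimulus shown
    at that time. Stimuli are assumed to be (start_s, end_s, content_id)
    tuples; this representation is illustrative."""
    mapped = []
    for t, cue in reactions:
        for start, end, content in stimuli:
            if start <= t < end:
                mapped.append((content, cue))
                break
    return mapped
```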