G06V40/175

Collection of machine learning training data for expression recognition

Apparatus, methods, and articles of manufacture for implementing crowdsourcing pipelines that generate training examples for machine learning expression classifiers. Crowdsourcing providers actively generate images with expressions according to cues or goals. The cues or goals may be to mimic an expression, to appear a certain way, or to “break” an existing expression recognizer. The images are collected and rated by the same or different crowdsourcing providers, and the images that meet a first quality criterion are then vetted by one or more experts. The vetted images are then used as positive or negative examples in training machine learning expression classifiers.
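
The two-stage filtering described in the abstract (crowd rating, then expert vetting) can be sketched roughly as below; the rating threshold, record shapes, and function names are illustrative assumptions, not taken from the patent.

```python
# Stage 1: keep only crowd-rated images meeting a first quality criterion.
def filter_training_candidates(images, rating_threshold=4.0):
    return [img for img in images if img["rating"] >= rating_threshold]

# Stage 2: keep only candidates an expert has approved.
def vet_by_expert(candidates, expert_approved_ids):
    return [img for img in candidates if img["id"] in expert_approved_ids]

images = [
    {"id": 1, "rating": 4.5},
    {"id": 2, "rating": 3.0},
    {"id": 3, "rating": 4.8},
]
candidates = filter_training_candidates(images)
vetted = vet_by_expert(candidates, expert_approved_ids={3})
```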

Computer-implemented method of recognizing facial expression, apparatus for recognizing facial expression, method of pre-training apparatus for recognizing facial expression, computer-program product for recognizing facial expression

A computer-implemented method of recognizing a facial expression of a subject in an input image is provided. The method includes filtering the input image to generate a plurality of filter response images; inputting the input image into a first neural network; processing the input image using the first neural network to generate a first prediction value; inputting the plurality of filter response images into a second neural network; processing the plurality of filter response images using the second neural network to generate a second prediction value; weighted averaging the first prediction value and the second prediction value to generate a weighted average prediction value; and generating an image classification result based on the weighted average prediction value.
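
The fusion step described above can be sketched as a simple weighted average of the two networks' prediction values; the weight, the stand-in prediction values, and the thresholding rule are illustrative assumptions, not details from the patent.

```python
# Combine the predictions of the two networks into one weighted average.
def weighted_average(p1, p2, w=0.5):
    return w * p1 + (1 - w) * p2

# Derive a classification result from the averaged prediction value.
def classify(avg, threshold=0.5):
    return "expression_present" if avg >= threshold else "expression_absent"

# p1 from the network fed the raw image, p2 from the network fed the
# filter response images (values here are made up for illustration).
avg = weighted_average(0.8, 0.4, w=0.5)
result = classify(avg)
```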

METHODS AND SYSTEMS FOR OPENING OF A VEHICLE ACCESS POINT

Methods and systems for opening an access point of a vehicle. A system and a method may involve wirelessly receiving a signal from a remote controller carried by a user. The system and the method may further involve receiving audio or video data indicating that the user is approaching the vehicle. The system and the method may also involve determining an intention of the user to access an interior of the vehicle based on the audio or video data. The system and the method may also involve opening an access point of the vehicle in response to determining the user's intention to access the interior of the vehicle.

Systems and methods for analyzing facial expressions within an online classroom to gauge participant attentiveness
09734410 · 2017-08-15

Systems, methods, and non-transitory computer readable media for analyzing facial expressions within an interactive online event to gauge participant attentiveness are provided. Facial expressions from a plurality of participants accessing an interactive online event may be analyzed to determine each participant's facial expression. The determined expressions may be analyzed to determine an overall level of attentiveness. The level of attentiveness may be relayed to the host of the interactive online event to inform him or her how the participants are reacting to the interactive online event. If participants are not paying attention or are confused, the host may modify the presentation to increase the attentiveness of the students.
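
The aggregation described above can be sketched as mapping each participant's determined expression to a score and averaging; the score table and the expression labels are illustrative assumptions, not from the patent.

```python
# Assumed mapping from a determined expression to an attentiveness score.
SCORES = {"focused": 1.0, "neutral": 0.6, "confused": 0.3, "distracted": 0.0}

def overall_attentiveness(expressions):
    """Average the per-participant scores into one overall level."""
    return sum(SCORES[e] for e in expressions) / len(expressions)

level = overall_attentiveness(["focused", "confused", "neutral", "focused"])
```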

DETERMINING POSITION OF A PERSONAL CARE DEVICE RELATIVE TO BODY SURFACE
20220309269 · 2022-09-29

A computer system obtains a digital 3-dimensional model of a body surface, such as a human face. Portions of the surface correspond to reference points in the model. The system obtains image data of a personal care device in use as well as image data of the surface on which the model is based. The system calculates a location of the device relative to the reference points based on the image data of the surface. The system determines a true position of the device with respect to the surface based on the image data of the device in use, known physical dimensions of the device, and its relative location, even when the device is partially occluded from view. The system may, based on the true position of the device, cause the device to perform actions such as applying a cosmetic or administering a skin therapy.

METHOD AND DEVICE FOR GENERATING EMOTICON, AND STORAGE MEDIUM

A method and a device for generating an emoticon are provided. A first expression tag list corresponding to a face image in a portrait is acquired by inputting the face image into an expression recognition model. Additionally, at least one label text corresponding to the face image is determined based on the first expression tag list and a correspondence between a preset text and a second expression tag list. Furthermore, an expression image corresponding to the portrait is determined, where the face image is a part of the expression image. Moreover, an emoticon is generated by labelling the expression image with the at least one label text.
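
The correspondence step described above, matching a first expression tag list against a preset text's second tag list, might be sketched as below; the tag names, preset texts, and matching rule (any shared tag) are illustrative assumptions, not from the patent.

```python
# Assumed correspondence between preset texts and second expression tag lists.
PRESET = [
    ("Feeling great!", {"smile", "joy"}),
    ("So confused!", {"frown", "puzzled"}),
]

def label_texts_for(first_tags):
    """Return every preset text whose tag list overlaps the first tag list."""
    first = set(first_tags)
    return [text for text, second_tags in PRESET if first & second_tags]

labels = label_texts_for(["smile"])
```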

Identifying facial expressions in acquired digital images
09818024 · 2017-11-14

A face is detected and identified within an acquired digital image. One or more features of the face is/are extracted from the digital image, including two independent eyes or subsets of features of each of the two eyes, or lips or partial lips or one or more other mouth features and one or both eyes, or both. A model including multiple shape parameters is applied to the two independent eyes or subsets of features of each of the two eyes, and/or to the lips or partial lips or one or more other mouth features and one or both eyes. One or more similarities between the one or more features of the face and a library of reference feature sets is/are determined. A probable facial expression is identified based on the determining of the one or more similarities.

Expression recognition device

An expression recognition device includes processing circuitry to acquire an image; extract a face area of a person from the acquired image and obtain a face image added with information of the face area; extract one or more face feature points on the basis of the face image; determine a face condition representing a state of a face in the face image depending on reliability of each of the extracted face feature points; determine a reference point for extraction of a feature amount used for expression recognition from among the extracted face feature points depending on the determined face condition; extract the feature amount on the basis of the determined reference point; recognize a facial expression of the person in the face image using the extracted feature amount; and output information related to a recognition result of the facial expression of the person in the face image.
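
The reliability-dependent reference point selection described above can be sketched as below; the point format, the reliability threshold, and the choice of the most reliable point are illustrative assumptions, not details from the patent.

```python
# feature_points: list of (name, (x, y), reliability) tuples.
def choose_reference_point(feature_points, min_reliability=0.8):
    """Pick a reference point from the feature points that meet a
    reliability threshold; return None if no point is reliable enough."""
    reliable = [p for p in feature_points if p[2] >= min_reliability]
    if not reliable:
        return None
    # Use the most reliable point as the reference for feature extraction.
    return max(reliable, key=lambda p: p[2])

pts = [("nose_tip", (50, 60), 0.95), ("left_eye", (30, 40), 0.7)]
ref = choose_reference_point(pts)
```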

AUGMENTED REALITY SPEECH BALLOON SYSTEM
20210407533 · 2021-12-30

Disclosed is an augmented reality system to generate and cause display of an augmented reality interface at a client device. Various embodiments may detect speech, identify a source of the speech, transcribe the speech to a text string, generate a speech bubble based on properties of the speech and that includes a presentation of the text string, and cause display of the speech bubble at a location in the augmented reality interface based on the source of the speech.

Audio control based on determination of physical space surrounding vehicle
11202150 · 2021-12-14

An audio control device for a vehicle is provided. The audio control device includes control circuitry that is communicatively coupled to one or more audio reproduction devices of the vehicle. From one or more sensors associated with the vehicle, the control circuitry receives one or more signals corresponding to a detection of a plurality of objects in a physical space surrounding the vehicle. The control circuitry determines a scene of the physical space based on the plurality of objects. The control circuitry determines a distance between a first object of the detected plurality of objects and the vehicle, based on the received one or more signals. The control circuitry adjusts one or more audio parameters of the audio reproduction devices based on the determined scene and the determined distance. The control circuitry controls the audio reproduction devices to reproduce a first audio output based on the adjusted audio parameters.
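
The adjustment step described above can be sketched as a gain derived from the determined scene and object distance; the scene labels, gain values, and distance rule are illustrative assumptions, not from the patent.

```python
# Assumed per-scene gains applied to the base volume.
SCENE_GAIN = {"highway": 1.2, "residential": 0.8, "school_zone": 0.5}

def adjusted_volume(base_volume, scene, distance_m):
    """Adjust an audio parameter (volume) from scene and object distance."""
    gain = SCENE_GAIN.get(scene, 1.0)
    if distance_m < 5.0:  # object very close: attenuate further
        gain *= 0.5
    return round(base_volume * gain, 2)

vol = adjusted_volume(10.0, "school_zone", 3.0)
```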