Patent classifications
G06V40/175
Control data creation device, component control device, control data creation method, component control method and computer program
A control data creation device is provided that has an acquisition part, a creation part and an evaluation part. The acquisition part acquires input information concerning traveling of a human-powered vehicle. The creation part uses a learning algorithm to create a learning model that, based on input information acquired by the acquisition part, outputs output information concerning control of a component of the human-powered vehicle. The evaluation part evaluates output information output from the learning model. The creation part updates the learning model based on training data that includes the evaluation from the evaluation part, the input information that corresponds to the output, and the output information itself.
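The create/evaluate/update loop described in the abstract can be sketched roughly as below. Every name and the toy "model" (a single speed threshold mapping to a gear command) are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the create/evaluate/update loop: a model produces
# a component command from travel input, an evaluator scores it, and the
# model is updated from (input, output, evaluation) training data.
from dataclasses import dataclass

@dataclass
class TrainingSample:
    input_info: float      # e.g. travel speed of the human-powered vehicle
    output_info: int       # e.g. a proposed gear number
    evaluation: float      # evaluator's score for that output

class ControlDataCreator:
    def __init__(self):
        # toy "learning model": a speed threshold that selects the gear
        self.threshold = 10.0

    def create_output(self, input_info: float) -> int:
        # learning model: map input (speed) to a component command (gear)
        return 2 if input_info >= self.threshold else 1

    def update(self, sample: TrainingSample) -> None:
        # nudge the model using training data that bundles the input,
        # the produced output, and the evaluation of that output
        if sample.evaluation < 0.5:
            self.threshold += 0.5 if sample.output_info == 2 else -0.5

creator = ControlDataCreator()
out = creator.create_output(12.0)                       # model proposes gear 2
creator.update(TrainingSample(12.0, out, evaluation=0.2))  # poor score: adjust
```

A real system would replace the threshold with a trained model, but the data flow (acquire, create, evaluate, update) is the same.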
Input polarity of computing device
A computing device is described. In an example implementation, the computing device includes a housing with a display screen on its front surface, the housing and display screen being collectively positionable in a plurality of physical orientations. The device also includes an input device with a first selection mechanism and a second selection mechanism, each actuatable to adjust a setting of an output of an application displayed on the display screen. An orientation sensor is configured to determine which of the plurality of physical orientations the display screen is positioned in, and to change a first input polarity of the first selection mechanism to correspond to the determined physical orientation of the display screen.
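The polarity change can be illustrated with a minimal sketch: the same physical press should move a setting in the direction the user sees, regardless of how the housing is held. The orientation names and sign convention below are assumptions for illustration only.

```python
# Sketch of orientation-dependent input polarity: flip the sign of a
# selection mechanism's raw input when the display is inverted, so "up"
# on screen always means "increase".
def input_delta(raw_press: int, orientation: str) -> int:
    # raw_press is +1 or -1 from the selection mechanism hardware
    polarity = -1 if orientation == "inverted" else 1
    return raw_press * polarity

d_upright = input_delta(+1, "upright")    # setting increases
d_flipped = input_delta(+1, "inverted")   # same press, polarity reversed
```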
Information processing apparatus and information processing method
Provided is an information processing apparatus that includes an acquisition unit and a generation control unit. The acquisition unit acquires input information including at least one of an image or audio of a first user. On the basis of the acquired input information and request information that includes a request to output information comprising at least one of an image or audio, the generation control unit controls generation of output information related to the first user, to be output by a terminal of a second user who is a communication partner of the first user.
Multimodal sentiment detection
Described herein is a system for improving sentiment detection and/or recognition using multiple inputs. For example, an autonomously motile device is configured to generate audio data and/or image data and perform sentiment detection processing. The device may process the audio data and the image data using a multimodal temporal attention model to generate sentiment data that estimates a sentiment score and/or a sentiment category. In some examples, the device may also process language data (e.g., lexical information) using the multimodal temporal attention model. The device can adjust its operations based on the sentiment data. For example, the device may improve an interaction with the user by estimating the user's current emotional state, or can change a position of the device and/or sensor(s) of the device relative to the user to improve an accuracy of the sentiment data.
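The multimodal fusion in this abstract can be sketched with a toy softmax temporal attention: per-frame audio and image scores are averaged, then combined across time with attention weights. The patent does not specify the model's architecture, so every detail below is an assumption.

```python
# Illustrative sketch of attention-weighted fusion of per-frame audio and
# image sentiment scores over time; real models learn the attention logits.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_sentiment(audio_scores, image_scores, attn_logits):
    # per-frame modality average, then attention-weighted sum over time
    frame_scores = [(a + v) / 2 for a, v in zip(audio_scores, image_scores)]
    weights = softmax(attn_logits)
    return sum(w * f for w, f in zip(weights, frame_scores))

# equal attention logits reduce to a plain mean of the frame scores
score = fuse_sentiment([0.2, 0.8], [0.4, 0.6], attn_logits=[0.0, 0.0])
```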
DURESS-BASED USER ACCOUNT DATA PROTECTION
One embodiment provides a method, including: receiving, at an information handling device, a login request to a user account; identifying, using a processor, that the login request was provided by an authorized user; determining, subsequent to the identifying, whether the authorized user provided the login request under duress; and performing, responsive to determining that the authorized user provided the login request under duress, an action that protects one or more data sources contained within the user account. Other aspects are described and claimed.
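The control flow in the abstract (authenticate first, then check for duress, then act) can be sketched as below. The duress signal used here, a designated panic PIN, and the protective action are illustrative assumptions; the patent covers the general method, not these specifics.

```python
# Hedged sketch of the duress-check login flow: identify the authorized
# user, determine duress, and protect data sources if duress is detected.
NORMAL_PIN = "1234"
DURESS_PIN = "9999"   # hypothetical panic code

def handle_login(pin: str, data_sources: dict) -> str:
    if pin not in (NORMAL_PIN, DURESS_PIN):
        return "denied"                      # not an authorized user
    if pin == DURESS_PIN:                    # authorized, but under duress
        for name in data_sources:
            data_sources[name] = "<hidden>"  # protective action on each source
        return "granted-duress"
    return "granted"

sources = {"email": "inbox", "photos": "album"}
state = handle_login("9999", sources)        # duress login hides the data
```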
Automated presentation contributions
A system can monitor a presentation and an audience. While monitoring the presentation and audience, the system can determine questions that would be beneficial for the audience to ask. The system can also ask these questions on behalf of the audience or wait to allow the audience the opportunity to ask the questions themselves. The system can learn over time to better determine which questions are suited for which audiences and presentations.
EMOJI RECORDING AND SENDING
The present disclosure generally relates to generating and modifying virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
Multi-modal emotion recognition device, method, and storage medium using artificial intelligence
A multi-modal emotion recognition system is disclosed. The system includes a data input unit for receiving video data and voice data of a user; a data pre-processing unit including a voice pre-processing unit for generating voice feature data from the voice data and a video pre-processing unit for generating one or more face feature data from the video data; and a preliminary inference unit for generating, based on the video data, situation determination data as to whether or not the user's situation changes over a temporal sequence. The system further comprises a main inference unit for generating at least one sub feature map based on the voice feature data or the face feature data, and for inferring the user's emotion state based on the sub feature map and the situation determination data.
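The unit structure above can be sketched as a tiny pipeline: pre-process each modality into features, build a sub feature map, detect a situation change, and combine both for the final inference. All of the numeric logic below is an invented placeholder for the real learned models.

```python
# Rough sketch of the pipeline: voice/video pre-processing units feed a
# main inference unit, which also consumes situation-change data from a
# preliminary inference unit.
def voice_features(voice):           # voice pre-processing unit
    return [v / 10 for v in voice]

def face_features(video):            # video pre-processing unit
    return [f / 10 for f in video]

def situation_changed(video):        # preliminary inference unit
    return video[-1] != video[0]

def infer_emotion(voice, video):     # main inference unit
    # sub feature map: elementwise combination of the modality features
    sub_map = [a + b for a, b in zip(voice_features(voice), face_features(video))]
    score = sum(sub_map) / len(sub_map)
    if situation_changed(video):
        score *= 1.5                 # weight a changing situation more heavily
    return "positive" if score > 1.0 else "neutral"

emotion = infer_emotion(voice=[6, 6], video=[4, 8])
```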
THERAPEUTIC SMILE DETECTION SYSTEMS
Systems for detecting when a person exhibits a smile with therapeutic benefits, including a facial expression detection device and a system processor. The facial expression detection device is configured to acquire facial expression data. The system processor is in data communication with the facial expression detection device and is configured to execute stored computer executable system instructions. The computer executable system instructions include the steps of: receiving facial expression parameter data establishing target facial expression criteria; receiving current facial expression data from the facial expression detection device; comparing the current facial expression data to the target facial expression criteria of the facial expression parameter data; and identifying whether the current facial expression data satisfies the target facial expression criteria. The target facial expression criteria define a smile with therapeutic benefits.
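The compare-and-identify step reduces to checking current expression data against target criteria. The specific parameters below (mouth-corner lift, duration) are illustrative assumptions; the patent's actual criteria are not given in the abstract.

```python
# Minimal sketch of comparing current facial expression data to target
# facial expression criteria for a "therapeutic smile".
def is_therapeutic_smile(current: dict, criteria: dict) -> bool:
    # every target parameter must be met or exceeded by the current data
    return all(current.get(k, 0) >= v for k, v in criteria.items())

criteria = {"mouth_corner_lift": 0.6, "duration_s": 3.0}   # assumed targets
current = {"mouth_corner_lift": 0.7, "duration_s": 4.2}    # from the detector
hit = is_therapeutic_smile(current, criteria)
```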
Usage control of personal data
Examples of a system for usage control of personal data are described. The system may obtain an input image including a first face of a person. The system may then compute a usage control matrix based on the input image, at least one usage control function, and predefined criteria. The predefined criteria may be associated with at least one of: a data usage policy, a face matching probability related to matching of the face present in the input image, and a face recognition probability related to recognition of the identity of the person. Using the input image and the usage control matrix, the system may transform the input image into a usage-controlled image. The system may verify a match between the face present in the input image and a second face present in the usage-controlled image. The system may further recognize the identity of the person in the input image and provide feedback indicative of a failure to verify the identity of the person from the usage-controlled image.
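The matrix-then-transform structure can be sketched as below: compute a per-pixel control matrix from a policy, then attenuate the image by it. The goal stated in the abstract, matching still works while identity recognition fails, depends on real vision models; the pixel math and policy value here are invented purely for illustration.

```python
# Speculative sketch of a usage-control transform: a per-pixel "usage
# control matrix" derived from a policy is applied to produce a
# usage-controlled image with reduced identifying detail.
def usage_control_matrix(image, blur_strength):
    # uniform matrix for simplicity; a real system would vary values
    # per region according to the usage control function and criteria
    return [[blur_strength for _ in row] for row in image]

def transform(image, matrix):
    # attenuate pixel detail according to the control matrix
    return [[px * (1 - m) for px, m in zip(row, mrow)]
            for row, mrow in zip(image, matrix)]

img = [[1.0, 0.5], [0.25, 0.0]]                     # toy grayscale image
mat = usage_control_matrix(img, blur_strength=0.5)  # assumed policy value
out = transform(img, mat)                           # usage-controlled image
```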