Patent classifications
B25J11/001
Interaction system, apparatus, and non-transitory computer readable storage medium
An interaction system that provides an interaction interface, comprising a computer and a device that obtains information. The computer stores management data in which each type of second feature value is associated with a listening pattern that defines a tendency of a response action performed by the interaction interface toward a user; calculates a first feature value, an index for evaluating a change in the user's state during speech, on the basis of information obtained by the device; calculates second feature values on the basis of the first feature value; selects a target second feature value from among the second feature values; selects the listening pattern corresponding to the target second feature value; and generates output information for controlling the interaction interface on the basis of the selected listening pattern.
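The claimed flow (first feature value → second feature values → target selection → listening pattern) could be sketched roughly as below. All type names, the variance-based feature, and the baseline are illustrative assumptions, not taken from the patent:

```python
# Management data: each second-feature-value type maps to a listening
# pattern that defines the interface's response tendency (names assumed).
LISTENING_PATTERNS = {
    "pitch_rise": "active_nodding",      # frequent backchannels
    "volume_drop": "gentle_prompting",   # encourage the user to continue
    "pause_increase": "quiet_waiting",   # give the user time to think
}

def first_feature_value(samples):
    """Index of state change during speech: here, simple sample variance."""
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

def second_feature_values(fv, baseline):
    """Derive typed second feature values from the first feature value."""
    return {
        "pitch_rise": fv / baseline,
        "volume_drop": baseline / (fv + 1e-9),
        "pause_increase": abs(fv - baseline),
    }

def select_listening_pattern(samples, baseline=1.0):
    fv = first_feature_value(samples)
    seconds = second_feature_values(fv, baseline)
    target = max(seconds, key=seconds.get)  # pick the dominant type
    return LISTENING_PATTERNS[target]
```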
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
There is provided an information processing apparatus and an information processing method capable of intuitively and easily indicating a state of feeling of an autonomous moving body to the user. The information processing apparatus includes a control unit that causes a display screen to display a figure corresponding to the autonomous moving body, changing the state of the figure according to a change in the position of the autonomous moving body, sensed by a sensing unit, and a feeling parameter indicating the feeling of the autonomous moving body. The present disclosure can be applied to, for example, an apparatus that controls an autonomous moving body.
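A minimal sketch of how such a control unit might map a position change plus a feeling parameter to a display state for the figure; the state names and thresholds are purely illustrative assumptions:

```python
def figure_state(prev_pos, new_pos, feeling):
    """Map position change + feeling parameter (0..1) to a display state
    for the on-screen figure (state names are assumptions)."""
    moved = prev_pos != new_pos
    if feeling > 0.7:                       # strongly positive feeling
        return "bouncing" if moved else "glowing"
    if feeling < 0.3:                       # negative feeling
        return "drooping"
    return "drifting" if moved else "idle"  # neutral feeling
```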
Robot, robot control method, and storage medium
In a robot, an actuator causes the robot to operate. A processor is configured to acquire, when a holding portion is held by a predetermined target, physical information on a physical function of the predetermined target, and cause, by controlling the actuator depending on the acquired physical information, the robot to perform at least one of an examination operation for examining the physical function of the predetermined target and a training support operation for training the physical function of the predetermined target.
Socially assistive robot
A companion robot is disclosed. In some embodiments, the companion robot may include a head having a facemask and a projector configured to project facial images onto the facemask; a facial camera; a microphone configured to receive audio signals from the environment; a speaker configured to output audio signals; and a processor electrically coupled with the projector, the facial camera, the microphone, and the speaker. In some embodiments, the processor may be configured to receive facial images from the facial camera; receive speech input from the microphone; determine an audio output based on the facial images and/or the speech input; determine a facial projection output based on the facial images and/or the speech input; output the audio output via the speaker; and project the facial projection output on the facemask via the projector.
VIRTUAL-LIFE-BASED HUMAN-MACHINE INTERACTION METHODS, APPARATUSES, AND ELECTRONIC DEVICES
Disclosed herein are methods, systems, and apparatus, including computer programs encoded on computer storage media, for virtual-life-based human-machine interaction. One of the methods includes obtaining cognitive data for a target user in response to a human-machine interaction from the target user by performing perception analysis on the target user. Target response content is identified based on the cognitive data and sent to the target user. A virtual interactive animation that comprises a virtual life image is dynamically generated based on the cognitive data and the target response content, where the virtual life image has an animation effect that matches the human-machine interaction performed by the target user.
SYSTEMS AND METHODS FOR MULTIMODAL BOOK READING
Systems and methods to process reading articles for a multimodal book application are disclosed. Exemplary implementations may: identify a title of a reading article; store the title of the reading article in a database; scan two or more pages of the reading article and generate text representing its content; analyze the generated text to identify characteristics of the reading article; store the identified characteristics in the database; associate the identified characteristics with the reading article title; generate augmented content files for one or more portions of the reading article based at least in part on the identified characteristics; and store the augmented content files in the database, associating them with different portions of the reading article.
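The listed steps could be sketched as a small pipeline. Here a plain dict stands in for the database, and the "characteristics" are just the most frequent words; both are illustrative assumptions, not the patent's method:

```python
import re

def process_reading_article(title, pages, db):
    """Illustrative pipeline: store title, 'scan' pages into text,
    extract characteristics, generate augmented content per portion."""
    db.setdefault("titles", []).append(title)      # store the title
    text = " ".join(pages)                         # scan + generate text
    # identify characteristics: here, the three most frequent words
    words = re.findall(r"[a-z']+", text.lower())
    freq = {}
    for w in words:
        freq[w] = freq.get(w, 0) + 1
    characteristics = sorted(freq, key=freq.get, reverse=True)[:3]
    # associate characteristics with the title, then generate and store
    # augmented content files keyed to portions of the article
    db[title] = {
        "characteristics": characteristics,
        "augmented": {i: f"augment:{title}:{c}"
                      for i, c in enumerate(characteristics)},
    }
    return db
```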
Conversation output system, conversation output method, and non-transitory recording medium
First information is acquired that is at least one of information about a user of a robot or situation information about the situation around the robot. Conversation data is generated on the basis of the acquired first information. The conversation data creates an impression on the user that the robot and a predetermined target are having a conversation that corresponds to at least the first information. An outputter is controlled to output information based on the generated conversation data, thereby creating that impression on the user. The robot itself does not include a function that executes a conversation at a level greater than or equal to that of the conversation based on the conversation data.
SYSTEMS AND METHODS FOR EMOTIONAL-IMAGING COMPOSER
Systems and methods for Emotional-Imaging Composer are disclosed. The method may include recording a real-time biosignal from a plurality of biosignal sensors. The method may further include determining an emotion that is associated with the real-time biosignal. The method may further include outputting a display feature corresponding to the emotion, wherein the display feature is a lighting effect on a graphical user interface.
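The described chain (biosignal → emotion → display feature) could be sketched as follows. The two biosignals, the thresholds, and the lighting values are all illustrative assumptions:

```python
def classify_emotion(heart_rate, skin_conductance):
    """Toy classifier on two assumed biosignals (bpm, microsiemens)."""
    if heart_rate > 100 and skin_conductance > 5.0:
        return "excited"
    if heart_rate < 60:
        return "calm"
    return "neutral"

# Display feature: a lighting effect on a graphical user interface,
# keyed by the determined emotion (colors/rates are assumptions).
EMOTION_LIGHTING = {
    "excited": {"color": "#ff4500", "pulse_hz": 2.0},
    "calm":    {"color": "#4682b4", "pulse_hz": 0.2},
    "neutral": {"color": "#cccccc", "pulse_hz": 0.5},
}

def display_feature(heart_rate, skin_conductance):
    return EMOTION_LIGHTING[classify_emotion(heart_rate, skin_conductance)]
```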
INFINITE ROBOT PERSONALITIES
Aspects of the present disclosure generally relate to providing a large variety of robot personalities. In certain aspects, a robot personality may be represented as a personality location in a personality space, which may be a continuous unidimensional or multidimensional space. The dimensions of the personality space may be based on one or more factors. Based on the personality location, an affective state may be maintained for the robot, which may be represented as an affect location in an affect space. The affect location may be updated based on one or more inputs. Accordingly, robot expressions may be influenced based upon the affect location, which in turn is affected by the personality of the robot in the personality space.
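One way to picture the personality-space/affect-space relationship is a fixed personality point that biases how an affect point moves in response to inputs. The axis names, the 2-D spaces, and the blending rule below are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Robot:
    personality: tuple          # (extraversion, warmth) in [0, 1]^2
    affect: tuple = (0.5, 0.5)  # (arousal, valence) in [0, 1]^2

    def update_affect(self, stimulus, weight=0.3):
        """Blend the affect location toward a stimulus, with the
        personality location scaling each axis's responsiveness."""
        a, v = self.affect
        sa, sv = stimulus
        ex, wa = self.personality
        # assumption: extraversion amplifies the arousal response,
        # warmth amplifies the valence response
        self.affect = (a + weight * ex * (sa - a),
                       v + weight * wa * (sv - v))
        return self.affect
```

Expressions would then be driven by `affect`, so two robots at different personality locations drift to different affect locations under the same inputs.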
Autonomously acting robot that performs a greeting action
Empathy toward a robot is increased by the robot emulating human-like or animal-like behavior. A robot includes a movement determining unit that determines a direction of movement, an action determining unit that selects a gesture from multiple kinds of gestures, and a drive mechanism that executes the specified movement and gesture. When a user enters a hall, an external sensor installed in advance detects the return home and notifies the robot, via a server, that the user has returned home. The robot heads to the hall and welcomes the user home by performing a gesture indicating goodwill, such as sitting down and raising an arm.