Patent classifications
B25J11/001
Robot and method of recognizing mood using the same
A robot includes an output unit including at least one of a display or a speaker, a camera, and a processor configured to control the output unit to output content, to acquire an image including a plurality of users through the camera while the content is output, to determine a mood of a group including the plurality of users based on the acquired image, and to control the output unit to output feedback based on the determined mood.
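The abstract describes aggregating individual users into a single group mood and mapping that mood to output-unit feedback. A minimal sketch of that idea, assuming a hypothetical per-face emotion detector has already produced one label per user (the labels, mapping, and function names are illustrative, not from the patent):

```python
# Hedged sketch (not the patent's implementation): determine a group mood
# by taking the most common of hypothetical per-user emotion labels
# detected in one camera frame, then pick feedback for the output unit.
from collections import Counter

def group_mood(per_user_emotions):
    """per_user_emotions: list of emotion labels, one per detected user."""
    counts = Counter(per_user_emotions)
    mood, _ = counts.most_common(1)[0]
    return mood

def feedback_for(mood):
    # Hypothetical mapping from group mood to (output device, feedback).
    return {
        "happy": ("display", "smile_animation"),
        "bored": ("speaker", "suggest_new_content"),
    }.get(mood, ("display", "neutral_face"))

detected = ["happy", "happy", "bored"]   # e.g. from a face-emotion model
mood = group_mood(detected)
print(mood, feedback_for(mood))
```

A real system would weight users by detection confidence rather than counting labels equally, but majority voting is the simplest group-level aggregation.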
Robot and controlling method thereof
A robot includes a display configured to display a face image indicating a face of the robot, an input unit configured to receive a customizing request for the face of the robot, and a processor configured to acquire customizing data based on the received customizing request, to generate a face design based on the acquired customizing data, and to control the display to display a face image based on the generated face design.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
Provided are an information processing apparatus and an information processing method that can provide more useful information for an action plan of an autonomous mobile body, the information processing apparatus including an action recommendation unit configured to present a recommended action to an autonomous mobile body that performs an action plan based on situation estimation. The action recommendation unit determines the recommended action on the basis of an action history collected from a plurality of the autonomous mobile bodies, and on the basis of a situation summary received from a target autonomous mobile body that is a target of recommendation. The information processing method includes presenting, by a processor, a recommended action to an autonomous mobile body that performs an action plan based on situation estimation.
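The recommendation step described here, determining an action from a pooled action history plus a situation summary from the target body, can be sketched under strong simplifying assumptions (the situation summary is reduced to a hashable key; all class and method names are illustrative):

```python
# Speculative sketch: recommend the action most often taken by other
# autonomous mobile bodies in the same (simplified) situation.
from collections import Counter, defaultdict

class ActionRecommender:
    def __init__(self):
        self.history = defaultdict(Counter)  # situation -> action counts

    def record(self, situation, action):
        # Action history collected from many autonomous mobile bodies.
        self.history[situation][action] += 1

    def recommend(self, situation_summary):
        # Situation summary received from the target body.
        counts = self.history.get(situation_summary)
        if not counts:
            return None  # no history for this situation
        return counts.most_common(1)[0][0]

rec = ActionRecommender()
rec.record("owner_nearby", "approach")
rec.record("owner_nearby", "approach")
rec.record("owner_nearby", "bark")
print(rec.recommend("owner_nearby"))  # most frequent recorded action
```

A production version would need fuzzy matching between situation summaries rather than exact key equality.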
Robot
A robot for performing an expression by a non-verbal reaction, includes a body including a lower part provided so as to be capable of panning and tilting with respect to a support point coupled to a placement surface; a pair of arms provided to side parts of the body so as to be capable of moving up and down; and a head provided to an upper part of the body so as to be capable of panning and tilting, wherein the non-verbal reaction includes a combination of the tilting and the panning of the body with respect to the support point and movement of the pair of arms or the head or any combination thereof.
System and method for dynamic program configuration
The present teaching relates to method, system, medium, and implementations for configuring an animatronic device. Information is obtained about a user for whom an animatronic device is to be configured to carry out a dialogue, and is used to select, from a plurality of selectable programs, a program related to a topic to be covered in the dialogue, where the program is to be used by the animatronic device to drive the dialogue with the user. The animatronic device is then configured based on the program for carrying out the dialogue with the user.
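The selection step, choosing a dialogue program whose topic fits the user's information, can be illustrated with a trivial matcher (field names such as "interests" and "topic" are assumptions for the sketch, not from the patent):

```python
# Hedged sketch: select a dialogue-driving program by matching user
# interests against program topics; fall back to the first program.
def select_program(user_info, programs):
    """programs: list of dicts, each with a 'topic' key."""
    for interest in user_info.get("interests", []):
        for program in programs:
            if program["topic"] == interest:
                return program
    return programs[0] if programs else None  # default program, if any

programs = [{"topic": "dinosaurs", "script": "dino_v1"},
            {"topic": "space", "script": "space_v2"}]
user = {"age": 6, "interests": ["space", "trains"]}
print(select_program(user, programs))
```

The chosen program would then parameterize the animatronic device's dialogue engine.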
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Provided is an information processing apparatus including a communication unit that transmits sensing data collected by an autonomous mobile object to a server, in which the communication unit transmits the sensing data related to a predetermined learning target to the server, and receives a dictionary for recognition generated by recognition learning using the sensing data collected by a plurality of the autonomous mobile objects and related to the learning target. In addition, provided is an information processing apparatus including a control unit that controls presentation of a progression status related to recognition learning for generating a dictionary for recognition used for an autonomous mobile object, in which the recognition learning is executed by using sensing data collected by a plurality of the autonomous mobile objects and related to a predetermined learning target.
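The server-side learning described here, pooling sensing data from many autonomous mobile objects to build one shared recognition dictionary, can be sketched with a deliberately trivial model (a nearest-centroid table; the real "dictionary for recognition" would be a trained recognizer):

```python
# Assumption-laden sketch: pool labeled sensing data collected by many
# autonomous mobile objects and build a per-label centroid "dictionary"
# that the server returns to each object for recognition.
def build_dictionary(samples):
    """samples: list of (label, feature_vector) pairs from multiple objects."""
    sums, counts = {}, {}
    for label, vec in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # One mean feature vector (centroid) per learning target.
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

pooled = [("ball", [1.0, 0.0]), ("ball", [0.8, 0.2]), ("shoe", [0.0, 1.0])]
print(build_dictionary(pooled))
```

The point of the pattern is that each object benefits from data it did not collect itself, which is why the patent also covers presenting the learning's progression status.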
Condition-Based Robot Audio Techniques
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for audio augmentation of physical robot sounds. A robot can determine that a first physically moveable component of the robot is to be actuated and in response, obtain a conditional state of the robot. The robot can obtain an audio object that generates an audio enhancement for the first physically moveable component being actuated, the audio enhancement having one or more characteristics that match the obtained conditional state of the robot. The robot can output the audio enhancement while actuating the first physically moveable component.
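The core lookup, finding an audio object whose characteristics match the robot's conditional state before a component is actuated, can be sketched as follows (the audio library, clip names, and state keys are invented for illustration):

```python
# Illustrative sketch: choose an audio enhancement whose required
# characteristics match the robot's current conditional state.
AUDIO_OBJECTS = [  # hypothetical library of per-component enhancements
    {"component": "arm", "state": {"speed": "fast"}, "clip": "whoosh_loud.wav"},
    {"component": "arm", "state": {"speed": "slow"}, "clip": "whoosh_soft.wav"},
]

def pick_audio(component, conditional_state):
    for obj in AUDIO_OBJECTS:
        if obj["component"] == component and all(
                conditional_state.get(k) == v for k, v in obj["state"].items()):
            return obj["clip"]
    return None  # actuate without augmentation if nothing matches

print(pick_audio("arm", {"speed": "slow", "battery": "low"}))
```

The clip would then be played concurrently with the physical actuation so the enhancement overlays the component's mechanical sound.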
Automated Multi-Persona Response Generation
A system for performing automated multi-persona response generation includes processing hardware, a display, and a memory storing a software code. The processing hardware executes the software code to receive input data describing an action and identifying a multiple interaction profiles corresponding respectively to multiple participants in the action, obtain the interaction profiles, and simulate execution of the action with respect to each of the participants. The processing hardware is further configured to execute the software code to generate, using the interaction profiles, a respective response to the action for each of the participants to provide multiple responses. In various implementations, one or more of those multiple responses may be used to train additional artificial intelligence (AI) systems, or may be rendered to an output device in the form of one or more of a display, an audio output device, or a robot, for example.
PROVIDING DISPOSITION-DRIVEN RESPONSES TO STIMULI WITH AN ARTIFICIAL INTELLIGENCE-BASED SYSTEM
An artificial intelligence-based computer-implemented method and system are provided for performing an action with a machine. Input is received from one or more sensors and used to determine a stimulus, which is used with a waveform to select a set of actions to perform. A mood of the machine is determined from a second waveform generated by a mood mechanism, and the mood is used to select an action from the set. The mood may activate one or more reaction mechanisms to provide a physiological response. The machine then initiates the action.
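One speculative reading of this two-waveform scheme: a fast waveform gates which candidate actions a stimulus maps to, and a slower "mood" waveform picks within that set. The waveforms, periods, and action names below are assumptions made purely to make the flow concrete:

```python
# Speculative sketch of disposition-driven action selection: the first
# waveform trims the stimulus's candidate actions; the mood waveform
# (slower oscillation) chooses among the survivors.
import math

def mood_value(t):
    return math.sin(t / 10.0)  # slow mood oscillation, illustrative only

def actions_for_stimulus(stimulus, t):
    subset = {"touch": ["purr", "nuzzle", "withdraw"]}.get(stimulus, ["idle"])
    if math.sin(t) < 0:        # first (fast) waveform gates the set
        subset = subset[:2]
    return subset

def select_action(stimulus, t):
    subset = actions_for_stimulus(stimulus, t)
    # Positive mood -> affiliative (first) action, negative -> last.
    return subset[0] if mood_value(t) >= 0 else subset[-1]

print(select_action("touch", 1.0))
```

Because both gates are time-varying, the same stimulus can yield different actions at different times, which is the behavioral variability the abstract is after.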
Object control system and object control method
A feeling deduction unit 100 deduces a user's feeling. An internal state management unit 110 manages an internal state of an object and an internal state of a user on the basis of the deduced user's feeling. An action management unit 120 determines an action of the object on the basis of the internal state of the object. An output processing unit 140 causes the object to perform the action determined by the action management unit 120.
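The four-unit pipeline above (feeling deduction, internal state management, action management, output processing) can be mirrored in a minimal controller sketch; the unit names follow the abstract, but every internal rule and threshold below is an assumption:

```python
# Minimal pipeline sketch mirroring the abstract's units; internals are
# invented for illustration, only the unit structure comes from the text.
class ObjectController:
    def __init__(self):
        self.object_state = {"affinity": 0.0}
        self.user_state = {"feeling": "neutral"}

    def deduce_feeling(self, sensor_input):     # feeling deduction unit (100)
        return "pleased" if sensor_input.get("smiling") else "neutral"

    def update_states(self, feeling):           # internal state management unit (110)
        self.user_state["feeling"] = feeling
        self.object_state["affinity"] += 0.1 if feeling == "pleased" else -0.05

    def decide_action(self):                    # action management unit (120)
        return "approach" if self.object_state["affinity"] > 0 else "wait"

    def step(self, sensor_input):               # output processing unit (140)
        self.update_states(self.deduce_feeling(sensor_input))
        return self.decide_action()

ctrl = ObjectController()
print(ctrl.step({"smiling": True}))
```

The key structural point is that the object's action depends only on its own internal state, which the user's deduced feeling influences indirectly.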