Patent classifications
B25J11/0005
Multimodal sentiment detection
Described herein is a system for improving sentiment detection and/or recognition using multiple inputs. For example, an autonomously motile device is configured to generate audio data and/or image data and perform sentiment detection processing. The device may process the audio data and the image data using a multimodal temporal attention model to generate sentiment data that estimates a sentiment score and/or a sentiment category. In some examples, the device may also process language data (e.g., lexical information) using the multimodal temporal attention model. The device can adjust its operations based on the sentiment data. For example, the device may improve an interaction with the user by estimating the user's current emotional state, or can change a position of the device and/or sensor(s) of the device relative to the user to improve the accuracy of the sentiment data.
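The fusion step described above can be sketched as follows. This is a minimal illustration, not the patented model: it assumes each modality (audio, image, lexical) has already been reduced to a time series of scalar sentiment scores and fuses them with softmax-normalized attention weights; the function names and the final thresholding into categories are invented for the example.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_sentiment(audio_scores, image_scores, lexical_scores, attn_logits):
    """Fuse per-modality sentiment scores with attention weights.

    attn_logits holds one (hypothetically learned) logit per modality.
    Returns a fused scalar sentiment score and a coarse category.
    """
    w_audio, w_image, w_lex = softmax(attn_logits)
    fused = [w_audio * a + w_image * i + w_lex * l
             for a, i, l in zip(audio_scores, image_scores, lexical_scores)]
    mean = sum(fused) / len(fused)
    # Illustrative category thresholds; the patent leaves these unspecified.
    if mean > 0.2:
        category = "positive"
    elif mean < -0.2:
        category = "negative"
    else:
        category = "neutral"
    return mean, category
```

In a real system the attention weights would vary over time and be produced by a trained model; here they are fixed logits purely to show the fusion arithmetic.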
MOVING ROBOT AND METHOD OF CONTROLLING THE SAME
According to a moving robot and a method of controlling the same of the present disclosure, the moving robot detects a sound generated in its area, moves to the sound generation point according to the type of the sound and its operation mode, analyzes an image of the sound generation point, and determines the indoor situation in order to perform a corresponding operation. By detecting the sound, the moving robot can determine that an accident has occurred at the location at which the sound was generated, and can automatically perform a specified operation corresponding to the accident even without a control command from a user, making it possible to respond rapidly to the accident. The moving robot can classify the object generating the sound as a person, a companion animal, or another type of object, and can perform different operations according to the classification.
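The dispatch from (sound type, operation mode) to a robot operation can be sketched as a lookup table. The specific sound types, modes, and operations below are hypothetical; the abstract only states that the operation depends on the type of sound and the operation mode.

```python
def respond_to_sound(sound_type, operation_mode):
    """Select an operation for a detected sound.

    The table entries are illustrative placeholders, not values from
    the patent; unknown combinations fall through to "ignore".
    """
    operations = {
        ("glass_break", "security"): "move_to_source_and_capture_image",
        ("bark", "monitoring"): "move_to_source_and_stream_video",
        ("speech", "monitoring"): "await_user_command",
    }
    return operations.get((sound_type, operation_mode), "ignore")
```

A production system would classify the sound source (person, companion animal, or other object) with a model rather than a table, but the control flow is the same: classification first, then mode-dependent dispatch.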
Communication robot and control program of communication robot
A communication robot includes: an operation part; and a communication arbitration unit configured to exhibit a robot mode, in which the operation part operates autonomously by applying a first operational criterion, and an avatar mode, in which the operation part operates based on an operation instruction sent from a remote operator by applying a second operational criterion, so as to arbitrate communication among three parties: the robot mode, the avatar mode, and a service user.
ROBOT, ROBOT CONTROL METHOD, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM
The robot includes a storage unit and a control unit. The control unit acquires outside stimulus feature amounts that are feature amounts of an outside stimulus acting from outside, stores the acquired outside stimulus feature amounts in the storage unit as a history, compares outside stimulus feature amounts acquired at a certain timing with outside stimulus feature amounts stored in the storage unit to calculate a first similarity degree, and controls operations based on the calculated first similarity degree.
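The comparison of a newly acquired stimulus feature vector against the stored history can be sketched as follows. The abstract does not fix the similarity metric, so cosine similarity against the closest stored vector is an assumption; `first_similarity` is an invented name for one plausible reading of the "first similarity degree".

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def first_similarity(history, current):
    """Return the highest similarity between the current stimulus
    feature vector and any vector stored in the history."""
    if not history:
        return 0.0
    return max(cosine_similarity(h, current) for h in history)
```

The robot would then branch on this value, e.g. treating a high similarity as a familiar stimulus and a low one as novel, and adjusting its operation accordingly.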
TECHNIQUE FOR ACTUATING A ROBOTIC APPARATUS
A technique for actuating a robotic apparatus is disclosed. In one particular embodiment, the technique may be realized as an apparatus for providing controlled movement of a robotic appendage, comprising a digit, wherein the digit comprises a first joint and a second joint, and an actuator configured to control a degree of freedom of the digit. The actuator causes the first joint to bend at a first rate from a first position to a second position and the second joint to bend at a second rate from a third position to a fourth position. The first rate is faster than the second rate.
METHOD, SYSTEM, AND NON-TRANSITORY COMPUTER-READABLE RECORDING MEDIUM FOR PROVIDING A SERVICE USING A ROBOT
A method for providing a service using a robot is provided. The method includes the steps of: acquiring information associated with a serving place of the robot and information on a customer visiting the serving place; determining service guide information including information on at least one of a travel route to a table to be provided to the customer, among a plurality of tables in the serving place, and a conversation scenario to be provided to the customer during travel to the table to be provided to the customer, with reference to the information associated with the serving place and the information on the customer; and providing the customer with a service associated with the serving place by the robot with reference to the service guide information.
Flux sensing system
A flux sensing system includes a memory and a processor in communication with the memory and at least one sensing device, the memory storing a plurality of capabilities and a plurality of semantic fluxes associated with the plurality of capabilities. Based on inputs from the at least one sensing device, the system is configured to determine an active servicing capability associated with a first semantic flux and/or a consumer interest associated with a second semantic flux and match the interest with the capability based on semantic drift inference.
Dialogue apparatus and control program for dialogue apparatus
A dialogue apparatus includes a display unit, a first dialogue control unit configured to display a first character on the display unit and simulate a speech function of an external communication robot capable of having a dialogue to conduct the dialogue with a user, a second dialogue control unit configured to display a second character on the display unit and conduct the dialogue so as to mediate the dialogue between the user and the first dialogue control unit, and a transmission unit configured to transmit, to the external communication robot, dialogue information about the dialogue conducted by the first dialogue control unit and the second dialogue control unit.
Robotic control using profiles
Techniques for robotic control using profiles are disclosed. Cognitive state data for an individual is obtained. A cognitive state profile for the individual is learned using the cognitive state data that was obtained. Further cognitive state data for the individual is collected. The further cognitive state data is compared with the cognitive state profile. Stimuli are provided by a robot to the individual based on the comparing. The robot can be a smart toy. The cognitive state data can include facial image data for the individual. The further cognitive state data can include audio data for the individual. The audio data can be voice data. The voice data augments the cognitive state data. Cognitive state data for the individual is obtained using another robot. The cognitive state profile is updated based on input from either of the robots.
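The learn-then-compare loop described above can be sketched with a toy profile learner. Representing the cognitive state profile as a running mean feature vector, scoring new observations by Euclidean distance, and the distance threshold are all assumptions for illustration; the abstract does not specify the learning method or the stimulus selection rule.

```python
import math

class CognitiveStateProfile:
    """Toy cognitive state profile: a running mean over feature vectors."""

    def __init__(self, dim):
        self.mean = [0.0] * dim
        self.count = 0

    def learn(self, features):
        """Incorporate one observation into the running mean."""
        self.count += 1
        self.mean = [m + (f - m) / self.count
                     for m, f in zip(self.mean, features)]

    def compare(self, features):
        """Euclidean distance between an observation and the profile."""
        return math.sqrt(sum((f - m) ** 2
                             for f, m in zip(self.mean, features)))

    def stimulus_for(self, features, threshold=1.0):
        # A large deviation from the learned profile triggers a calming
        # stimulus; otherwise the robot keeps its engagement stimulus.
        # Both stimulus names and the threshold are hypothetical.
        return "calming" if self.compare(features) > threshold else "engaging"
```

The same profile object could be updated from either robot's observations, matching the abstract's note that the profile is updated based on input from multiple robots.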
SYSTEMS AND METHODS TO ADAPT AND OPTIMIZE HUMAN-MACHINE INTERACTION USING MULTIMODAL USER-FEEDBACK
Systems and methods for human-machine interaction. An adaptive behavioral control system of a human-machine interaction system controls an interaction sub-system to perform a plurality of actions for a first action type in accordance with a computer-behavioral policy, each action being a different alternative action for the action type. The adaptive behavioral control system detects a human reaction of an interaction participant to the performance of each action of the first action type from data received from a human reaction detection sub-system. The adaptive behavioral control system stores information indicating each detected human reaction in association with information identifying the associated action. In a case where the stored information indicating detected human reactions for the first action type satisfies an update condition, the adaptive behavioral control system updates the computer-behavioral policy for the first action type.
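The try-alternatives, log-reactions, update-on-condition loop can be sketched as follows. The abstract leaves the update condition and the policy representation unspecified, so a minimum sample count and a dict mapping action types to the best-received alternative are assumptions for illustration.

```python
def update_policy(policy, action_type, reaction_log, min_samples=5):
    """Update a behavioral policy for one action type.

    reaction_log maps each alternative action name to a list of
    numeric reaction scores (higher = better-received). The policy is
    updated only once at least min_samples reactions are logged in
    total; both the condition and the scoring are illustrative.
    """
    total = sum(len(scores) for scores in reaction_log.values())
    if total < min_samples:
        return policy  # update condition not yet satisfied
    best = max(reaction_log,
               key=lambda a: sum(reaction_log[a]) / len(reaction_log[a]))
    updated = dict(policy)  # leave the original policy untouched
    updated[action_type] = best
    return updated
```

Keeping the reaction log per action type lets the same condition-checked update run independently for each action type the policy covers.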