Patent classifications
B25J11/001
Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
Systems and methods for human-machine interaction. An adaptive behavioral control system of a human-machine interaction system controls an interaction sub-system to perform a plurality of actions for a first action type in accordance with a computer-behavioral policy, each action being a different alternative action for the action type. The adaptive behavioral control system detects a human reaction of an interaction participant to the performance of each action of the first action type from data received from a human reaction detection sub-system. The adaptive behavioral control system stores information indicating each detected human reaction in association with information identifying the associated action. In a case where the stored information indicating detected human reactions for the first action type satisfies an update condition, the adaptive behavioral control system updates the computer-behavioral policy for the first action type.
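The mechanism above can be pictured as a small feedback loop: reactions are stored per alternative action, and once enough samples exist (one possible "update condition"), the policy's preferred action is re-selected. This is a minimal, hypothetical sketch; the class, the scalar reaction scores, and the sample-count condition are illustrative assumptions, not the patent's actual implementation:

```python
from collections import defaultdict


class AdaptivePolicy:
    """Hypothetical policy for one action type with several alternative actions."""

    def __init__(self, actions, min_samples=3):
        self.actions = list(actions)        # alternative actions for this action type
        self.min_samples = min_samples      # assumed update condition: samples per action
        self.reactions = defaultdict(list)  # action -> recorded reaction scores
        self.preferred = self.actions[0]

    def record_reaction(self, action, score):
        """Store a detected human reaction (here a scalar score) with its action."""
        self.reactions[action].append(score)
        # Update condition: every alternative has been observed enough times.
        if all(len(self.reactions[a]) >= self.min_samples for a in self.actions):
            self.update_policy()

    def update_policy(self):
        """Prefer the alternative with the best average observed reaction."""
        self.preferred = max(
            self.actions,
            key=lambda a: sum(self.reactions[a]) / len(self.reactions[a]),
        )
```

For example, after recording better reactions to one alternative than another, the policy switches its preferred action to the better-received one.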
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
An information processing device (10) according to the present disclosure includes an operation controller (175) that controls a moving operation of an autonomous mobile body (10) that travels while maintaining an inverted state, and controls a posture operation in which the posture of the autonomous mobile body changes over time from a reference posture in the inverted state. Furthermore, the information processing device (10) according to the present disclosure includes an acquisition unit (174) that acquires motion data corresponding to a posture operation of the autonomous mobile body (10). The operation controller (175) controls a posture operation of the autonomous mobile body (10) based on the motion data acquired by the acquisition unit (174).
Apparatus and method for generating robot interaction behavior
Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user; generating a nonverbal behavior of the robot, which is a sequence of next joint positions of the robot estimated from the joint positions of the user and the current joint positions of the robot by a pre-trained neural network model for robot pose estimation; and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
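The two generation paths above can be sketched in a few lines. This is an illustrative stand-in only: the blend function below substitutes for the patent's pre-trained neural network, and the speaking/non-speaking selection rule is an assumption about how a "final behavior" might combine the two sources:

```python
import numpy as np


def estimate_next_joints(user_joints, robot_joints, weight=0.5):
    """Stand-in for the pre-trained pose-estimation model: nudges the robot's
    current joint positions toward the user's (a simple mirroring blend)."""
    return (1 - weight) * np.asarray(user_joints) + weight * np.asarray(robot_joints)


def final_behavior(co_speech_gesture, nonverbal_behavior, speaking):
    """Assumed selection rule: use the co-speech gesture while responding to an
    utterance, otherwise fall back to the estimated nonverbal behavior."""
    return co_speech_gesture if speaking else nonverbal_behavior
```

With equal weight, the estimated next joints land halfway between the user's pose and the robot's current pose.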
System and method of selling goods or services, or collecting recycle refuse using mechanized mobile merchantry
The present invention relates to a system and method of selling goods or services, or collecting recyclable refuse, using mechanized mobile merchantry, comprising positioning, by self-propelling, at least one mechanized mobile merchantry within a geographical boundary; allowing interaction with consumers; and effectuating the sale of goods or services, or the collection of recyclable refuse, with consumers. Other exemplary embodiments can include signaling a mechanized mobile merchantry with a consumer's mobile device to direct the merchantry to self-propel to the consumer's location, and utilizing usage logs and algorithms to optimize the functionality of a fleet of merchantry and reposition the merchantry, as necessary, within a geographical boundary, to increase sales and consumer convenience. The present invention also relates to a waste collection system, with an option to deliver new food and beverage items, that includes a customer service robot comprising a slave computer and one or more waste receptacles to collect waste material. The customer service robot includes at least one robotic arm having at least three degrees of motion to facilitate collection from, e.g., tables and the like. The customer service robot is also capable of interfacing with a recycling unit and a master computer to control disposal of the collected waste.
Generative design techniques for robot behavior
An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
Robot control method and companion robot
The present invention provides a robot control method, and the method includes: collecting interaction information of a companion target and obtaining digital person information of a companion person (101), where the interaction information includes information about a sound or an action directed by the companion target toward the robot, and the digital person information includes a set of digitized information about the companion person; determining, by using the interaction information and the digital person information, a manner of interacting with the companion target (103); generating, based on the digital person information of the companion person and by using a machine learning algorithm, interaction content corresponding to the interaction manner (105); and generating a response action toward the companion target based on the interaction manner and the interaction content (107).
Artificial intelligence (AI) robot and control method thereof
Disclosed is a method of controlling a robot, comprising: switching to a surrounding environment concentration mode according to a sound of the surrounding environment in a display off mode; searching for a user in the surrounding environment concentration mode and switching to a user concentration mode when the user is found; switching from the user concentration mode to a user conversation mode according to a sound received from the user; and entering the display off mode again, after passing through a play alone mode, when the user is not found in the surrounding environment concentration mode. Accordingly, by defining these various modes, the robot can operate in an optimal mode as the surrounding environment changes.
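The mode transitions described above form a small state machine. The following is a hypothetical sketch under the stated transitions; the mode names and event methods are assumptions made for illustration:

```python
class RobotModes:
    """Hypothetical state machine for the mode transitions described above."""

    def __init__(self):
        self.mode = "display_off"

    def on_ambient_sound(self):
        # Display off -> surrounding environment concentration on ambient sound.
        if self.mode == "display_off":
            self.mode = "surrounding_concentration"

    def on_search_result(self, user_found):
        # Found user -> user concentration; no user -> play alone.
        if self.mode == "surrounding_concentration":
            self.mode = "user_concentration" if user_found else "play_alone"

    def on_user_sound(self):
        # Sound from the user -> user conversation mode.
        if self.mode == "user_concentration":
            self.mode = "user_conversation"

    def on_play_alone_done(self):
        # After playing alone, return to display off.
        if self.mode == "play_alone":
            self.mode = "display_off"
```

Walking through both branches: ambient sound wakes the robot, a found user leads to conversation, while an absent user leads through play-alone back to display off.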
Robot and method of controlling same
Disclosed herein is a robot including an output interface including at least one of a display or a speaker, and a processor configured to acquire output data at a predetermined playback time point of content output via the robot or an external device, recognize a first emotion corresponding to the acquired output data, and control the output interface to output an expression based on the recognized first emotion.
Entertainment system, robot device, and server device
An entertainment system includes: a robot device capable of acting in an autonomous action mode in a real world; a server device configured to cause a virtual robot associated with the robot device to act in a virtual world; and a terminal device capable of displaying an image of the virtual world in which the virtual robot acts. The server device provides the image of the virtual world to the terminal device. The server device transmits a request from the virtual robot to the robot device. When the robot device acquires the request from the virtual robot, the robot device acts in a collaboration action mode in which collaboration is made with the virtual robot.