Generative design techniques for robot behavior

An automated robot design pipeline facilitates the overall process of designing robots that perform various desired behaviors. The disclosed pipeline includes four stages. In the first stage, a generative engine samples a design space to generate a large number of robot designs. In the second stage, a metric engine generates behavioral metrics indicating a degree to which each robot design performs the desired behaviors. In the third stage, a mapping engine generates a behavior predictor that can predict the behavioral metrics for any given robot design. In the fourth stage, a design engine generates a graphical user interface (GUI) that guides the user in performing behavior-driven design of a robot. One advantage of the disclosed approach is that the user need not have specialized skills in either graphic design or programming to generate designs for robots that perform specific behaviors or express various emotions.
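
Read as a data flow, the four stages compose directly. The Python sketch below is a minimal illustration under assumed names (`generative_engine`, `metric_engine`, `mapping_engine`, `design_engine`): a random sampler, toy behavior scores, and a nearest-neighbor predictor stand in for whatever models the patent actually contemplates, and the GUI of the fourth stage is reduced to a single best-design suggestion.

```python
import random

# Hypothetical design space and behaviors; names and ranges are illustrative only.
DESIGN_SPACE = {"arm_length": (0.1, 1.0), "head_tilt": (-0.5, 0.5), "eye_size": (0.2, 0.8)}

def generative_engine(n=500):
    """Stage 1: sample the design space to produce many candidate designs."""
    return [{k: random.uniform(lo, hi) for k, (lo, hi) in DESIGN_SPACE.items()}
            for _ in range(n)]

def metric_engine(design):
    """Stage 2: score each desired behavior (a stand-in for real evaluation)."""
    return {"wave": design["arm_length"],            # say longer arms wave better
            "nod": 1.0 - abs(design["head_tilt"])}   # say level heads nod better

def mapping_engine(designs, metrics):
    """Stage 3: build a behavior predictor for ANY design; 1-nearest-neighbor
    lookup stands in for whatever learned model the patent contemplates."""
    def predict(query):
        def dist(d):
            return sum((d[k] - query[k]) ** 2 for k in DESIGN_SPACE)
        i = min(range(len(designs)), key=lambda i: dist(designs[i]))
        return metrics[i]
    return predict

def design_engine(predict, behavior, candidates):
    """Stage 4: behavior-driven guidance (the GUI reduced to a best pick)."""
    return max(candidates, key=lambda d: predict(d)[behavior])

designs = generative_engine()
metrics = [metric_engine(d) for d in designs]
predict = mapping_engine(designs, metrics)
best = design_engine(predict, "wave", generative_engine(50))
print("suggested design for 'wave':", best)
```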

SYSTEM AND METHOD TO EMULATE HUMAN COGNITION IN ARTIFICIAL INTELLIGENCE USING BIO-INSPIRED PHYSIOLOGY SIMULATION
20230278200 · 2023-09-07

An AI-enabled human emulation system, a method, and a computer program product may be provided for embodied cognition with humanoid robot hardware (robot) to emulate human behavior. The system may include a memory configured to store computer program code and a processor configured to execute the computer program code to employ common-sense reasoning in much the same holistic way that humans do. The processor may be configured to obtain a trained AI model and sensor data from a surrounding environment of the robot. The AI model is trained using a brain emulation system and a human body physiology simulation system. The processor may be further configured to generate novel emergent pattern data to self-regulate the robot. The processor may also be configured to control the robot as it interacts with a user by expressing a behavior to the user.
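
The self-regulation loop can be pictured as a simulated physiology that sensor data perturbs and that in turn selects the behavior the robot expresses. The sketch below is a toy illustration only; the homeostatic variables (`energy`, `arousal`) and the thresholds are assumptions, not the patent's simulation.

```python
from dataclasses import dataclass

@dataclass
class PhysiologyState:
    """Hypothetical homeostatic variables; the names are assumptions."""
    energy: float = 1.0
    arousal: float = 0.5

def self_regulate(state: PhysiologyState, sensor_reading: float) -> str:
    """One regulation step: sensor data perturbs the simulated physiology,
    and the resulting state selects an expressed behavior."""
    state.arousal = min(1.0, max(0.0, state.arousal + 0.2 * (sensor_reading - 0.5)))
    state.energy = max(0.0, state.energy - 0.01)  # acting costs simulated energy
    if state.energy < 0.2:
        return "rest"
    return "engage" if state.arousal > 0.6 else "observe"

state = PhysiologyState()
for reading in (0.9, 0.9, 0.1):
    print(self_regulate(state, reading))  # observe, engage, observe
```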

ROBOT, ROBOT CONTROL METHOD, AND RECORDING MEDIUM
20230016899 · 2023-01-19

A robot is equipped with a processor. The processor detects the external appearance or audio of a living being and, by controlling the robot, causes the robot to execute an operation in accordance with both liking data, which indicates the robot's preferences regarding external appearance or audio, and the detected external appearance or audio of the living being.
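
A minimal sketch of the claimed control flow, assuming the liking data is a table of preference scores keyed by perceived attribute; the modality names, scores, and operations are all illustrative, not the patent's data format.

```python
# Hypothetical liking data: preference scores keyed by perceived attributes.
LIKING_DATA = {"appearance": {"cat": 0.9, "dog": 0.4},
               "audio": {"purr": 0.8, "bark": 0.2}}

def select_operation(modality: str, percept: str) -> str:
    """Choose an operation from the robot's preference for what it detected."""
    preference = LIKING_DATA.get(modality, {}).get(percept, 0.5)
    if preference > 0.7:
        return "approach"
    if preference < 0.3:
        return "retreat"
    return "idle"

print(select_operation("appearance", "cat"))  # -> approach
print(select_operation("audio", "bark"))      # -> retreat
```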

Expression feedback method and smart robot

An expression feedback method and a smart robot, belonging to the field of smart devices. The method comprises: step S1, the smart robot uses an image collection apparatus to collect image information; step S2, the smart robot detects whether human face information representing a human face exists in the image information; if so, it acquires position information and size information associated with the human face information and proceeds to step S3; if not, it returns to step S1; step S3, according to the position information and the size information, a plurality of pieces of feature point information in the human face information are obtained by prediction and output; step S4, a first identification model formed through pre-training determines, according to the feature point information, whether the human face information represents a smiling face; if so, the smart robot outputs preset expression feedback information and exits this step; if not, it exits this step directly. The beneficial effect of the method is that it enriches the information interaction content between the smart robot and the user, thereby improving the user's experience.
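
The S1-S4 loop translates almost line for line into code. In the sketch below, every callable (`capture`, `detect_face`, `predict_landmarks`, `is_smiling`, `output`) is a hypothetical stand-in for the corresponding apparatus or model; only the control flow follows the claim.

```python
EXPRESSION_FEEDBACK = "smile_back"  # hypothetical preset feedback payload

def expression_feedback_step(capture, detect_face, predict_landmarks, is_smiling, output):
    """Steps S1-S4 as a single pass; every callable is a hypothetical stand-in."""
    image = capture()                                     # S1: collect image information
    face = detect_face(image)                             # S2: detect a human face
    if face is None:
        return                                            # S2: no face -> caller returns to S1
    position, size = face                                 # S2: position/size information
    landmarks = predict_landmarks(image, position, size)  # S3: predict feature points
    if is_smiling(landmarks):                             # S4: pre-trained first identification model
        output(EXPRESSION_FEEDBACK)                       # smiling face -> preset feedback
    # not smiling -> simply exit this step

# Toy run with stub components: one "face" that is smiling.
expression_feedback_step(
    capture=lambda: "frame",
    detect_face=lambda img: ((10, 20), (64, 64)),
    predict_landmarks=lambda img, pos, size: [("mouth_corner", 1.0)],
    is_smiling=lambda lm: True,
    output=print,
)  # prints: smile_back
```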

CONTROL OF SOCIAL ROBOT BASED ON PRIOR CHARACTER PORTRAYAL

A method and apparatus for controlling a social robot includes providing a set of quantitative personality trait values, also called a “personality profile,” to a decision engine of the social robot. The personality profile is derived from a character portrayal in a fictional work or dramatic performance, or from a portrayal by a real-life person (any one of these sometimes referred to herein as a “source character”). The decision engine controls social responses of the social robot to environmental stimuli, based in part on the set of personality trait values. The social robot thereby behaves in a manner consistent with the personality profile of the profiled source character.
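
One plausible reading of the decision engine is trait-weighted scoring of candidate responses. The sketch below assumes a simple linear scoring rule; the trait names, weights, and responses are illustrative, not the patent's actual profile format.

```python
# Hypothetical trait vector derived from a source character (values illustrative).
PERSONALITY_PROFILE = {"extraversion": 0.8, "agreeableness": 0.6, "neuroticism": 0.2}

# Candidate social responses, each weighted by how strongly a trait favors it.
RESPONSES = {
    "greet_enthusiastically": {"extraversion": 1.0},
    "comfort_quietly": {"agreeableness": 1.0, "extraversion": -0.3},
    "withdraw": {"neuroticism": 1.0},
}

def decide(stimulus_salience: float) -> str:
    """Pick the response whose trait-weighted score is highest for the stimulus."""
    def score(weights):
        return stimulus_salience * sum(PERSONALITY_PROFILE.get(t, 0.0) * w
                                       for t, w in weights.items())
    return max(RESPONSES, key=lambda r: score(RESPONSES[r]))

print(decide(stimulus_salience=1.0))  # -> greet_enthusiastically for this profile
```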

Systems and methods for emotional-imaging composer

Systems and methods for Emotional-Imaging Composer are disclosed. The method may include recording a real-time biosignal from a plurality of biosignal sensors. The method may further include determining an emotion that is associated with the real-time biosignal. The method may further include outputting a display feature corresponding to the emotion, wherein the display feature is a lighting effect on a graphical user interface.
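
The claimed method is a three-step pipeline: record, classify, display. A minimal sketch follows, assuming a single normalized biosignal channel and toy thresholds; the emotion labels and lighting-effect names are illustrative, not the patent's mapping.

```python
def classify_emotion(samples):
    """Map a real-time biosignal window to an emotion label (toy thresholds)."""
    mean_level = sum(samples) / len(samples)
    if mean_level > 0.7:
        return "excited"
    if mean_level < 0.3:
        return "calm"
    return "neutral"

# Hypothetical mapping from emotion to a GUI lighting effect.
LIGHTING_EFFECTS = {"excited": "pulse_red", "calm": "slow_blue", "neutral": "soft_white"}

def compose(samples):
    """Record -> classify -> output display feature, per the claimed method."""
    emotion = classify_emotion(samples)
    return LIGHTING_EFFECTS[emotion]

print(compose([0.8, 0.9, 0.75]))  # -> pulse_red
```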

Information processing apparatus, information processing method, and program

There is provided an information processing apparatus and an information processing method for increasing the movement patterns of an autonomous mobile body more easily, the information processing apparatus including an operation control unit configured to control an operation of a driving unit. The operation control unit generates, on the basis of a teaching movement, control sequence data for causing a driving unit of an autonomous mobile body to execute an autonomous movement corresponding to the teaching movement, and causes the driving unit to execute the autonomous movement according to the control sequence data, on the basis of an action plan determined by situation estimation. The information processing method includes controlling, by a processor, an operation of a driving unit, and the controlling further includes generating control sequence data and causing the driving unit to execute an autonomous movement according to the control sequence data.
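
The core idea is turning a recorded teaching movement into replayable control sequence data. A minimal sketch, assuming the teaching movement arrives as a list of joint-angle snapshots sampled while the body is moved by hand; the fixed timing model and the `drive` callable are stand-ins.

```python
def generate_control_sequence(teaching_movement, interval_s=0.1):
    """Turn a recorded teaching movement (a list of joint-angle snapshots)
    into timestamped control sequence data."""
    return [{"t": i * interval_s, "joints": pose}
            for i, pose in enumerate(teaching_movement)]

def execute(sequence, drive):
    """Replay the sequence through a driving unit; `drive` is a stand-in callable."""
    for step in sequence:
        drive(step["t"], step["joints"])

taught = [{"neck": 0.0, "tail": 0.1}, {"neck": 0.2, "tail": 0.0}]
execute(generate_control_sequence(taught),
        drive=lambda t, joints: print(f"t={t:.1f}s -> {joints}"))
```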

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20220288791 · 2022-09-15

An information processing device including: an output control unit that controls an output from an interaction device to a user; an action evaluation unit that determines an action of the user performed in correspondence with an output of the interaction device; an emotion estimation unit that estimates an emotion of the user corresponding to the action of the user; and an information accumulation unit that accumulates the output of the interaction device, the action of the user, and the emotion of the user in association with each other as interaction information, in which the output control unit controls the output from the interaction device to the user based on the accumulated interaction information.
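
The accumulation unit can be pictured as a store of (output, action, emotion) triples that biases future output selection. The sketch below assumes a simple frequency-of-positive-emotion score; the record format and scoring rule are illustrative, not the patent's method.

```python
import collections

# Each record associates an output with the user's action and estimated emotion.
Interaction = collections.namedtuple("Interaction", "output action emotion")

class InteractionStore:
    """Accumulates interaction information and biases future output selection."""
    def __init__(self):
        self.records = []

    def accumulate(self, output, action, emotion):
        self.records.append(Interaction(output, action, emotion))

    def choose_output(self, candidates):
        """Prefer the candidate that historically produced positive emotions."""
        def score(c):
            return sum(1 for r in self.records
                       if r.output == c and r.emotion == "positive")
        return max(candidates, key=score)

store = InteractionStore()
store.accumulate("tell_joke", "laughed", "positive")
store.accumulate("play_song", "walked_away", "negative")
print(store.choose_output(["tell_joke", "play_song"]))  # -> tell_joke
```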

INFORMATION PROCESSING APPARATUS, CONTROL METHOD, AND PROGRAM
20220291665 · 2022-09-15

Communication with a user is realized more naturally and effectively. An information processing apparatus includes a first sensor (1101) that detects an object present in a first direction with respect to an autonomous mobile body; second sensors (1102, 1103) that detect the object present in the first direction with respect to the autonomous mobile body by a system different from that of the first sensor; and an operation control unit (230) that controls an operation of the autonomous mobile body based on a detection result acquired by the first sensor and a detection result acquired by the second sensors.
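
A plausible reading of the two-system arrangement is classic sensor fusion over a shared direction: agreement between the heterogeneous detectors raises confidence, disagreement degrades it. The sketch below uses assumed thresholds and a binary hit model; none of it is the patent's actual fusion logic.

```python
def fuse_detections(first_hit: bool, second_hit: bool,
                    first_range_m: float, second_range_m: float):
    """Combine two heterogeneous detectors covering the same direction.
    Agreement yields high confidence; a single-system hit keeps its reading
    with reduced confidence. Values are illustrative."""
    if first_hit and second_hit:
        return min(first_range_m, second_range_m), 0.95
    if first_hit or second_hit:
        return (first_range_m if first_hit else second_range_m), 0.5
    return None, 0.0

def operation_control(fused, act):
    """Stand-in for the operation control unit (230): stop on a confident, close hit."""
    distance, confidence = fused
    if distance is not None and confidence >= 0.9 and distance < 0.3:
        act("stop")
    else:
        act("proceed")

operation_control(fuse_detections(True, True, 0.25, 0.28), act=print)  # -> stop
```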