G05B2219/40391

Imitation Learning in a Manufacturing Environment

A computing system identifies a trajectory example generated by a human operator. The trajectory example includes trajectory information of the human operator while performing a task to be learned by a control system of the computing system. Based on the trajectory example, the computing system trains the control system to perform the task exemplified in the trajectory example. Training the control system includes generating an output trajectory of a robot performing the task. The computing system identifies an updated trajectory example generated by the human operator based on the trajectory example and the output trajectory of the robot performing the task. Based on the updated trajectory example, the computing system continues to train the control system to perform the task exemplified in the updated trajectory example.
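The train–execute–relabel cycle described above resembles dataset-aggregation imitation learning (DAgger-style). The following Python sketch is illustrative only: the nearest-neighbor "policy", the toy 1-D states, and the `expert` callable standing in for the human operator are all assumptions, not anything specified in the abstract.

```python
def train_policy(dataset):
    """Fit a trivial nearest-neighbor policy to (state, action) pairs.

    A stand-in for the control system's learner; any regressor would do.
    """
    def policy(state):
        # Return the action demonstrated at the closest known state.
        nearest = min(dataset, key=lambda pair: abs(pair[0] - state))
        return nearest[1]
    return policy

def imitation_loop(expert, states, rounds=3):
    """Train on expert labels, roll out, and relabel the visited states.

    `expert` plays the role of the human operator; in this 1-D toy an
    action is treated as the next state the robot visits.
    """
    dataset = [(s, expert(s)) for s in states]        # initial trajectory example
    policy = train_policy(dataset)
    for _ in range(rounds):
        visited = [policy(s) for s in states]         # robot's output trajectory
        dataset += [(s, expert(s)) for s in visited]  # operator's updated example
        policy = train_policy(dataset)                # continue training
    return policy
```

After a few rounds the policy has expert labels for states its own rollouts actually reach, which is the point of the update cycle in the abstract.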

SYSTEM(S) AND METHOD(S) OF USING IMITATION LEARNING IN TRAINING AND REFINING ROBOTIC CONTROL POLICIES

Implementations described herein relate to training and refining robotic control policies using imitation learning techniques. A robotic control policy can be initially trained based on human demonstrations of various robotic tasks. Further, the robotic control policy can be refined based on human interventions while a robot is performing a robotic task. In some implementations, the robotic control policy may determine whether the robot will fail in performance of the robotic task, and prompt a human to intervene in performance of the robotic task. In additional or alternative implementations, a representation of the sequence of actions can be visually rendered for presentation to the human so that the human can proactively intervene in performance of the robotic task.
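A minimal sketch of the intervention loop described above, assuming the policy's failure prediction is exposed as a separate `risk_estimator` callable and the human correction as a `human` callable (both hypothetical names):

```python
def run_with_interventions(policy, risk_estimator, human, states, threshold=0.5):
    """Execute a policy, handing control to the human when predicted risk is high.

    `risk_estimator` stands in for the policy's own failure prediction;
    intervened (state, action) pairs are collected for later refinement.
    """
    corrections = []
    actions = []
    for s in states:
        if risk_estimator(s) > threshold:   # predicted failure: prompt the human
            a = human(s)
            corrections.append((s, a))      # data for refining the policy
        else:
            a = policy(s)
        actions.append(a)
    return actions, corrections
```

The returned `corrections` are exactly the human-intervention (state, action) pairs the abstract says are used to refine the policy.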

Systems, devices, articles, and methods for using trained robots

Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions which causes a robot to autonomously perform at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. Execution of the second set of autonomous robot control instructions causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
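One plausible reading of generating the second instruction set from the first set plus sensor data is a per-instruction feedback correction. The rule below (a proportional nudge with a hypothetical `gain` parameter) is an assumption; the abstract does not specify how the two inputs are combined.

```python
def refine_instructions(instructions, sensor_data, gain=0.5):
    """Produce a second instruction set from the first plus environmental feedback.

    Each instruction is a target value; this hypothetical rule nudges it by
    the observed error in the corresponding sensor reading.
    """
    return [cmd + gain * (cmd - obs) for cmd, obs in zip(instructions, sensor_data)]
```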

ROBOT CONTROL DEVICE, ROBOT SYSTEM, AND ROBOT CONTROL METHOD

A robot control device includes: a trained model built by being trained on work data; a control data acquisition section which acquires control data of the robot based on data from the trained model; base trained models built for each of a plurality of simple operations by being trained on work data; an operation label storage section which stores operation labels corresponding to the base trained models; a base trained model combination information acquisition section which acquires combination information when the trained model is represented by a combination of a plurality of the base trained models, by acquiring a similarity between the trained model and the respective base trained models; and an information output section which outputs the operation label corresponding to each of the base trained models which represent the trained model.
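Assuming each trained model can be reduced to a feature vector (an assumption the abstract does not make explicit), the similarity acquisition and operation-label output could be sketched as:

```python
def label_trained_model(model_vec, base_models, labels, top_k=2):
    """Represent a trained model as a combination of base trained models.

    Cosine similarity ranks the base models against the trained model's
    feature vector, and the operation labels of the closest ones are output.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)

    sims = [(cosine(model_vec, base), lab) for base, lab in zip(base_models, labels)]
    sims.sort(reverse=True)                 # most similar base models first
    return [lab for _, lab in sims[:top_k]]
```

The base-model vectors and the labels ("grasp", "rotate", and so on) would come from the base trained model store and the operation label storage section, respectively.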

DATA PROCESSING DEVICE AND DATA PROCESSING METHOD

There is provided a data processing device and a data processing method capable of improving reproducibility when a cooking robot reproduces a dish cooked by a cook. The data processing device according to one aspect of the present technology generates recipe data including a data set used when a cooking robot performs a cooking operation, the data set linking cooking operation data, in which information regarding an ingredient of a dish and information regarding an operation of the cook in a cooking process using the ingredient are described, with sensation data indicating a sensation of the cook measured in conjunction with progress of the cooking process. The present technology can be applied to a computer that controls cooking in a cooking robot.
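The linked data set can be sketched as plain Python dataclasses; the field names (`flavor_score`, `step`) are illustrative stand-ins for whatever the actual sensation measurements are:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CookingOperation:
    ingredient: str      # information regarding an ingredient of the dish
    action: str          # the cook's operation using that ingredient

@dataclass
class SensationSample:
    step: int            # cooking-process step the measurement belongs to
    flavor_score: float  # hypothetical reading standing in for the cook's sensation

@dataclass
class RecipeData:
    operations: List[CookingOperation] = field(default_factory=list)
    sensations: List[SensationSample] = field(default_factory=list)

    def link(self, op: CookingOperation, sense: SensationSample):
        """Append one linked (operation, sensation) pair to the data set."""
        self.operations.append(op)
        self.sensations.append(sense)
```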

MULTI-SENSOR ARRAY INCLUDING AN IR CAMERA AS PART OF AN AUTOMATED KITCHEN ASSISTANT SYSTEM FOR RECOGNIZING AND PREPARING FOOD AND RELATED METHODS

An automated kitchen assistant system inspects a food preparation area in the kitchen environment using a novel sensor combination. The combination of sensors includes an Infrared (IR) camera that generates IR image data and at least one secondary sensor that generates secondary image data. The IR image data and secondary image data are processed to obtain combined image data. A trained convolutional neural network is employed to automatically compute an output based on the combined image data. The output includes information about the identity and the location of the food item. The output may further be utilized to command a robotic arm, kitchen worker, or otherwise assist in food preparation. Related methods are also described.
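Obtaining combined image data can be as simple as stacking the IR channel with the secondary sensor's channels per pixel before the CNN sees them. The sketch below assumes aligned, equal-sized images and omits the network itself:

```python
def combine_images(ir_image, rgb_image):
    """Stack IR and secondary (RGB) pixel data into combined per-pixel features.

    Images are nested lists of equal height and width; the real system would
    feed the result to a trained CNN, which is out of scope for this sketch.
    """
    combined = []
    for ir_row, rgb_row in zip(ir_image, rgb_image):
        combined.append([[ir_px] + list(rgb_px)   # 1 IR + 3 RGB channels per pixel
                         for ir_px, rgb_px in zip(ir_row, rgb_row)])
    return combined
```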

ROBOT SYSTEM AND ROBOT CONTROL METHOD

A robot system (1) includes the robot (10), a motion sensor (11), a surrounding environment sensor (12, 13), an operation apparatus (21), a learning control section (41), and a relay apparatus (30). The robot (10) performs work based on an operation command. The operation apparatus (21) detects and outputs an operator-operating force applied by the operator. The learning control section (41) outputs a calculation operating force. The relay apparatus (30) outputs the operation command based on the operator-operating force and the calculation operating force. The learning control section (41) estimates and outputs the calculation operating force using a model constructed by machine learning of the operator-operating force, the surrounding environment data, the operation data, and the operation command, based on the operation data and the surrounding environment data output by the sensors (11 to 13) and the operation command output by the relay apparatus (30).
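The relay apparatus (30) merges two force signals into one operation command. A weighted sum is the simplest assumption; the abstract does not state the actual combination rule:

```python
def relay_command(operator_force, computed_force, alpha=0.5):
    """Blend the operator-operating force with the calculation operating force.

    The weighted sum and the `alpha` parameter are assumptions standing in
    for whatever rule the relay apparatus actually applies.
    """
    return alpha * operator_force + (1.0 - alpha) * computed_force
```

With `alpha=1.0` the operator is fully in control; with `alpha=0.0` the learned model drives the robot alone.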

Methods and systems for food preparation in a robotic cooking kitchen
11117253 · 2021-09-14

The present disclosure is directed to methods, computer program products, and computer systems for instructing a robot to prepare a food dish by replacing the human chef's movements and actions. Monitoring a human chef is carried out in an instrumented application-specific setting, a standardized robotic kitchen in this instance, and involves using sensors and computers to watch, monitor, record and interpret the motions and actions of the human chef, in order to develop a robot-executable set of commands robust to variations and changes in the environment, capable of allowing a robotic or automated system in a robotic kitchen to prepare the same dish to the standards and quality as the dish prepared by the human chef.

ROBOT SYSTEM AND SUPPLEMENTAL LEARNING METHOD

A robot system includes a robot, state detection sensors, a timekeeping unit, a learning control unit, a determination unit, an operation device, an input unit, and an additional learning unit. The determination unit determines, based on the state values detected by the state detection sensors, whether or not the work of the robot can be continued under the control of the learning control unit, and outputs a determination result. The additional learning unit performs additional learning based on the determination result indicating that the work of the robot cannot be continued, the operator operating force and work state output by the operation device and the input unit, and the timer signal output by the timekeeping unit.
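A compact sketch of the determination-plus-additional-learning flow, with `can_continue`, `operator_force`, and `clock` as hypothetical callables standing in for the determination unit, operation device, and timekeeping unit:

```python
def supervise_and_collect(state_values, can_continue, operator_force, clock):
    """Determination unit plus additional-learning data collection.

    For each detected state, decide whether work can continue; when it
    cannot, record the failure state, the operator's corrective force, and
    a timestamp, as the data set for later additional learning.
    """
    extra_training_data = []
    for s in state_values:
        if not can_continue(s):
            extra_training_data.append((s, operator_force(s), clock()))
    return extra_training_data
```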

ACTION IMITATION METHOD AND ROBOT AND COMPUTER READABLE MEDIUM USING THE SAME

The present disclosure provides an action imitation method as well as a robot and a computer readable storage medium using the same. The method includes: collecting a plurality of action images of a to-be-imitated object; processing the action images through a pre-trained convolutional neural network to obtain a position coordinate set of position coordinates of a plurality of key points for each of the action images; calculating a rotational angle of each linkage of the to-be-imitated object based on the position coordinate sets of the action images; and controlling a robot to move according to the rotational angle of each linkage of the to-be-imitated object. In the above-mentioned manner, the rotational angle of each linkage of the to-be-imitated object can be obtained by just analyzing and processing images collected by an ordinary camera, without the help of a high-precision depth camera.
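For two key points joined by a linkage, the rotational angle reduces to a 2-D `atan2` on the image coordinates (a simplification; the patented method may also handle depth):

```python
import math

def linkage_angle(joint_a, joint_b):
    """Rotational angle, in degrees, of the linkage from joint_a to joint_b.

    Key points are (x, y) image coordinates from the CNN; the angle is
    measured from the horizontal image axis, a 2-D simplifying assumption.
    """
    dx = joint_b[0] - joint_a[0]
    dy = joint_b[1] - joint_a[1]
    return math.degrees(math.atan2(dy, dx))
```

Applying this to successive key-point pairs (shoulder to elbow, elbow to wrist, and so on) yields one angle per linkage for the robot controller to track.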