Patent classifications
G05B2219/36184
SYSTEMS, DEVICES, ARTICLES, AND METHODS FOR USING TRAINED ROBOTS
Robotic systems, methods of operation of robotic systems, and storage media including processor-executable instructions are disclosed herein. The system may include a robot, at least one processor in communication with the robot, and an operator interface in communication with the robot and the at least one processor. The method may include executing a first set of autonomous robot control instructions which causes a robot to autonomously perform the at least one task in an autonomous mode, and generating a second set of autonomous robot control instructions from the first set of autonomous robot control instructions and a first set of environmental sensor data received from a sensor. Execution of the second set of autonomous robot control instructions causes the robot to autonomously perform the at least one task. The method may include producing at least one signal that represents the second set of autonomous robot control instructions.
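The step of generating a second instruction set from the first set plus environmental sensor data can be pictured, in minimal form, as correcting commanded way-points with sensed position errors. The function below is a hypothetical toy illustration, not the patented method; the names and the simple additive correction are assumptions.

```python
import numpy as np

def refine_waypoints(first_set, sensed_offsets):
    """Toy sketch: derive a second set of control way-points by correcting
    the first set with per-way-point position errors that an environmental
    sensor reported while the first set executed.

    first_set:      (N, 3) commanded way-points from the first instruction set
    sensed_offsets: (N, 3) position errors observed by the sensor
    returns:        (N, 3) corrected way-points for the second instruction set
    """
    return np.asarray(first_set, float) + np.asarray(sensed_offsets, float)

second_set = refine_waypoints([[0.0, 0.0, 0.5], [0.2, 0.1, 0.5]],
                              [[0.01, 0.0, 0.0], [0.0, -0.02, 0.0]])
print(second_set.shape)  # (2, 3)
```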
MULTI-SENSOR ARRAY INCLUDING AN IR CAMERA AS PART OF AN AUTOMATED KITCHEN ASSISTANT SYSTEM FOR RECOGNIZING AND PREPARING FOOD AND RELATED METHODS
An automated kitchen assistant system inspects a food preparation area in the kitchen environment using a novel sensor combination. The combination of sensors includes an Infrared (IR) camera that generates IR image data and at least one secondary sensor that generates secondary image data. The IR image data and secondary image data are processed to obtain combined image data. A trained convolutional neural network is employed to automatically compute an output based on the combined image data. The output includes information about the identity and the location of the food item. The output may further be utilized to command a robotic arm, kitchen worker, or otherwise assist in food preparation. Related methods are also described.
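The "combined image data" fed to the convolutional neural network can be obtained by channel-wise fusion of the two modalities. The sketch below is one plausible interpretation, assuming the IR frame is already spatially registered to the secondary (RGB) frame; the function name and normalization scheme are assumptions, not details from the patent.

```python
import numpy as np

def fuse_ir_and_rgb(ir_frame, rgb_frame):
    """Channel-wise fusion of a registered IR frame with an RGB frame.

    ir_frame:  (H, W)    single-channel thermal intensities
    rgb_frame: (H, W, 3) colour image from the secondary sensor
    returns:   (H, W, 4) combined image suitable as CNN input
    """
    if ir_frame.shape != rgb_frame.shape[:2]:
        raise ValueError("IR and RGB frames must be spatially registered")
    # Normalize each modality to [0, 1] before stacking so neither dominates.
    ir = (ir_frame - ir_frame.min()) / (np.ptp(ir_frame) or 1.0)
    rgb = rgb_frame.astype(np.float64) / 255.0
    return np.dstack([rgb, ir])

combined = fuse_ir_and_rgb(np.random.rand(48, 64) * 300,
                           np.zeros((48, 64, 3), dtype=np.uint8))
print(combined.shape)  # (48, 64, 4)
```

A four-channel tensor like this can be passed to any standard CNN whose first convolution accepts four input channels.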
Methods and systems for food preparation in a robotic cooking kitchen
The present disclosure is directed to methods, computer program products, and computer systems for instructing a robot to prepare a food dish by replicating the human chef's movements and actions. Monitoring of a human chef is carried out in an instrumented, application-specific setting (a standardized robotic kitchen in this instance) and involves using sensors and computers to watch, monitor, record, and interpret the motions and actions of the human chef, in order to develop a robot-executable set of commands that is robust to variations and changes in the environment and that allows a robotic or automated system in a robotic kitchen to prepare the same dish to the same standards and quality as the dish prepared by the human chef.
ROBOT CONTROL METHOD AND APPARATUS AND ROBOT USING THE SAME
The present disclosure provides a robot control method, an apparatus, and a robot using the same. The method includes: obtaining a human pose image; obtaining pixel information of preset key points in the human pose image; obtaining three-dimensional positional information of key points of a human arm according to the pixel information of the preset key points; obtaining a robotic arm kinematics model of a robot; obtaining an angle of each joint in the robotic arm kinematics model according to the three-dimensional positional information of the key points of the human arm and the robotic arm kinematics model; and controlling an arm of the robot to perform a corresponding action according to the angle of each joint. The control method does not require a three-dimensional stereo camera to collect three-dimensional coordinates of a human body, which reduces the cost to a certain extent.
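The core geometric step (turning three-dimensional key points of a human arm into joint angles) reduces, for a single joint, to the angle between two limb vectors. The helper below is a minimal sketch of that computation under the assumption that shoulder, elbow, and wrist key points are already available in a common 3-D frame; it is not the patent's full kinematics model.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in radians) formed by 3-D key points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow angle."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against arccos domain errors from floating-point noise.
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A fully extended arm: shoulder, elbow, wrist collinear -> pi radians.
elbow_angle = joint_angle([0, 0, 0], [0.3, 0, 0], [0.6, 0, 0])
print(round(elbow_angle, 3))  # 3.142
```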
TELEMETRY HARVESTING AND ANALYSIS FROM EXTENDED REALITY STREAMING
A method for producing an optimized instruction set for guiding a robot through a service procedure includes fitting human operators with XR headsets and controllers, instructing the human operators to perform the same service procedure through a series of individual steps, monitoring each operator's movements, and recording the XR telemetry data produced by the headset and the controllers as each operator performs the series of steps within the service procedure. The XR telemetry data is analyzed, optimized, and translated into an optimized set of instructions to enable a robot to perform the service procedure. In some aspects, machine learning and neural networks are used to acquire, aggregate, analyze, and optimize the XR telemetry data.
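One simple way to aggregate telemetry from several operators performing the same steps is to resample each operator's controller trace to a common length and average point-wise. The sketch below illustrates that idea only; the function name, the linear-interpolation resampling, and plain averaging are assumptions, not the patent's machine-learning pipeline.

```python
import numpy as np

def average_trajectories(traces, n_samples=50):
    """Aggregate several operators' controller traces into one reference path.

    traces: list of (T_i, 3) sequences of controller positions, one per
            operator; traces may have different lengths.
    Each trace is resampled to n_samples points by linear interpolation
    over normalized time, then the traces are averaged point-wise.
    """
    resampled = []
    for t in traces:
        t = np.asarray(t, float)
        src = np.linspace(0.0, 1.0, len(t))   # normalized time of samples
        dst = np.linspace(0.0, 1.0, n_samples)
        resampled.append(
            np.column_stack([np.interp(dst, src, t[:, k]) for k in range(3)])
        )
    return np.mean(resampled, axis=0)

ref_path = average_trajectories([
    [[0, 0, 0], [1, 0, 0]],                  # operator 1: two samples
    [[0, 0, 0], [0.5, 0, 0], [1, 0, 0]],     # operator 2: three samples
])
print(ref_path.shape)  # (50, 3)
```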
Teleoperating of robots with tasks by mapping to human operator pose
A system enables teleoperation of a robot based on a pose of a subject. The system includes an image capturing device and an operator system controller that are remotely located from a robotic system controller and a robot. The image capturing device captures images of the subject. The operator system controller maps a processed version of the captured image to a three-dimensional skeleton model of the subject and generates body pose information of the subject in the captured image. The robotic system controller communicates with the operator system controller over a network. The robotic system controller generates a plurality of kinematic parameters for the robot and causes the robot to take a pose corresponding to the pose of the subject in the captured image.
AUGMENTED REALITY-ENHANCED FOOD PREPARATION SYSTEM AND RELATED METHODS
A food preparation system is configured to enhance the efficiency of food preparation operations in a commercial kitchen by displaying instructions on a surface in the kitchen work area. The food preparation system includes a plurality of cameras aimed at a kitchen workspace for preparing the plurality of food items and a processor operable to compute an instruction for a kitchen worker to perform a food preparation step based on one or more types of information selected from order information, recipe information, kitchen equipment information, data from the cameras, and food item inventory information. A projector in communication with the processor visually projects the instruction onto a location in the kitchen workspace for the kitchen worker to observe. Related methods for projecting food preparation instructions are described.