
ACTION IMITATION METHOD AND ROBOT AND COMPUTER READABLE STORAGE MEDIUM USING THE SAME

The present disclosure provides an action imitation method as well as a robot and a computer readable storage medium using the same. The method includes: collecting at least a two-dimensional image of a to-be-imitated object; obtaining two-dimensional coordinates of each key point of the to-be-imitated object in the two-dimensional image and a pairing relationship between the key points; converting the two-dimensional coordinates of the key points in the two-dimensional image into corresponding spatial three-dimensional coordinates through a pre-trained first neural network model; and generating an action control instruction of a robot based on the spatial three-dimensional coordinates of the key points and the pairing relationship between them, where the action control instruction is for controlling the robot to imitate an action of the to-be-imitated object.
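As an illustrative sketch of the final step, generating a control instruction from the lifted three-dimensional key points and their pairing relationship amounts to computing joint angles at intermediate key points. The function below is a minimal reading of that step, not the patent's implementation; all names are hypothetical:

```python
import math

def joint_angle(p_a, p_b, p_c):
    """Angle at key point p_b formed by the paired links p_b->p_a and
    p_b->p_c (all points are 3D tuples), in radians."""
    v1 = [a - b for a, b in zip(p_a, p_b)]
    v2 = [c - b for c, b in zip(p_c, p_b)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Hypothetical lifted coordinates: shoulder, elbow, wrist.
shoulder, elbow, wrist = (0, 0, 0), (0.3, 0, 0), (0.3, 0.25, 0)
angle = joint_angle(shoulder, elbow, wrist)  # pi/2, i.e. a 90-degree elbow
```

An angle like this, computed for each paired triple of key points, could be mapped directly onto the corresponding servo of the robot.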

Motion transfer of highly dimensional movements to lower dimensional robot movements

Techniques for transferring highly dimensional movements to lower dimensional robot movements are described. In an example, a reference motion of a target is used to train a non-linear approximator of a robot to learn how to perform the motion. The robot and the target are associated with a robot model and a target model, respectively. Features related to the positions of the robot joints are input to the non-linear approximator. During the training, movement of a robot joint is simulated, which results in different directions of a robot link connected to that joint. The robot link is mapped to a link of the target model, and the directions of the robot link are compared to the direction of the target link to learn the best movement of the robot joint. The training is repeated for the different links and for different phases of the reference motion.
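The direction comparison above can be sketched as a cosine-similarity score between a simulated robot link and its mapped target link, with the candidate joint movement scoring highest being kept. This is an illustrative reading under assumed data shapes, not the patent's training procedure:

```python
import math

def link_direction(joint_from, joint_to):
    """Unit direction vector of a link from one joint position to another."""
    v = [b - a for a, b in zip(joint_from, joint_to)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def direction_score(robot_dir, target_dir):
    # Cosine similarity of unit vectors: 1.0 when the robot link points
    # the same way as the mapped target-model link.
    return sum(a * b for a, b in zip(robot_dir, target_dir))

def best_movement(candidate_links, target_dir):
    """Pick the simulated joint movement whose resulting link
    (a (joint_from, joint_to) pair) best matches the target direction."""
    return max(
        candidate_links,
        key=lambda link: direction_score(link_direction(*link), target_dir),
    )
```

In a learning setting the score would serve as a reward signal for the non-linear approximator rather than a hard argmax, but the comparison itself is the same.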

ROBOT CONTROL METHOD AND APPARATUS AND ROBOT USING THE SAME
20210197384 · 2021-07-01

The present disclosure discloses a robot control method as well as an apparatus, and a robot using the same. The method includes: obtaining a human pose image; obtaining pixel information of preset key points in the human pose image; obtaining three-dimensional positional information of key points of a human arm according to the pixel information of the preset key points; obtaining a robotic arm kinematics model of a robot; obtaining an angle of each joint in the robotic arm kinematics model according to the three-dimensional positional information of the key points of the human arm and the robotic arm kinematics model; and controlling an arm of the robot to perform a corresponding action according to the angle of each joint. The control method does not require a three-dimensional stereo camera to collect three-dimensional coordinates of a human body, which reduces the cost to a certain extent.
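The joint-angle step can be illustrated with the classic two-link planar inverse kinematics solution. The patent's kinematics model may well differ; the function name, parameterization, and the planar simplification below are assumptions:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Joint angles (shoulder, elbow) in radians that place a planar
    two-link arm's end effector at (x, y), given link lengths l1 and l2.
    Raises ValueError when the target is out of reach."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to target minus the offset the bent
    # elbow introduces.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow
```

A forward-kinematics check (x = l1·cos(s) + l2·cos(s + e), and likewise for y) confirms a returned solution reproduces the target position.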

Robotic System Having Shuttle

A robotic system includes a robot having a picking arm to grasp an inventory item and a shuttle. The shuttle includes a platform adapted to receive the inventory item from the picking arm of the robot. The platform is moveable between a pick-up location located substantially adjacent to the robot and an end location spaced a distance apart from the pick-up location. The system improves efficiency as transportation of the item from the pick-up location to the end location is divided between the robot and the shuttle.

System and method for instructing a device
11014243 · 2021-05-25

A system and method of instructing a device is disclosed. The system includes a signal source for providing at least one visual signal where the at least one visual signal is substantially indicative of at least one activity to be performed by the device. A visual signal capturing element captures the at least one visual signal and communicates the at least one visual signal to the device where the device interprets the at least one visual signal and performs the activity autonomously and without requiring any additional signals or other information from the signal source.

TELEMETRY HARVESTING AND ANALYSIS FROM EXTENDED REALITY STREAMING

A method for producing an optimized instruction set for guiding a robot through a service procedure includes fitting human operators with an XR headset and controllers, instructing the human operators to perform the same service procedure through a series of individual steps, monitoring each operator's movements, and recording the XR telemetry data produced by the headset and the controllers as the operator performs the series of steps within the service procedure. The XR telemetry data is analyzed, optimized and translated into an optimized set of instructions to enable a robot to perform the service procedure. In some aspects, machine learning and neural networks are used to acquire, aggregate, analyze and optimize the XR telemetry data.

METHOD AND DEVICE FOR TRAINING MANIPULATION SKILLS OF A ROBOT SYSTEM
20210122036 · 2021-04-29

A method of training a robot system for manipulation of objects, the robot system being able to perform a set of skills, wherein each skill is learned as a skill model. The method comprises: receiving physical input, in the form of a set of kinesthetic demonstrations, from a human trainer regarding the skill to be learned by the robot; determining for the skill model a set of task parameters, including determining, for each task parameter of the set, whether it is an attached task parameter, which is related to an object that is part of a kinesthetic demonstration, or a free task parameter, which is not related to a physical object; obtaining data for each task parameter of the set from the set of kinesthetic demonstrations; and training the skill model with the set of task parameters and the data obtained for each task parameter.

Multi-sensor array including an IR camera as part of an automated kitchen assistant system for recognizing and preparing food and related methods

An automated kitchen assistant system inspects a food preparation area in the kitchen environment using a novel sensor combination. The combination of sensors includes an Infrared (IR) camera that generates IR image data and at least one secondary sensor that generates secondary image data. The IR image data and secondary image data are processed to obtain combined image data. A trained convolutional neural network is employed to automatically compute an output based on the combined image data. The output includes information about the identity and the location of the food item. The output may further be utilized to command a robotic arm, kitchen worker, or otherwise assist in food preparation. Related methods are also described.
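One plausible way to obtain the "combined image data" is per-pixel channel concatenation of the IR image with the secondary (e.g. RGB) image before it is fed to the convolutional neural network. The sketch below assumes nested-list images and is not the patent's actual preprocessing:

```python
def fuse_channels(ir_image, secondary_image):
    """Combine per-pixel IR and secondary sensor data into one
    multi-channel image for a CNN: each pixel becomes [R, G, B, IR].

    ir_image:        H x W grid of scalar IR intensities
    secondary_image: H x W grid of [R, G, B] channel lists
    """
    return [
        [rgb + [ir] for rgb, ir in zip(rgb_row, ir_row)]
        for rgb_row, ir_row in zip(secondary_image, ir_image)
    ]
```

In practice the same fusion is a one-line array concatenation along the channel axis in any tensor library, after registering the two sensors to a common pixel grid.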

AUGMENTED REALITY-ENHANCED FOOD PREPARATION SYSTEM AND RELATED METHODS

A food preparation system is configured to enhance the efficiency of food preparation operations in a commercial kitchen by displaying instructions on a surface in the kitchen work area. The food preparation system includes a plurality of cameras aimed at a kitchen workspace for preparing the plurality of food items and a processor operable to compute an instruction for a kitchen worker to perform a food preparation step based on one or more types of information selected from order information, recipe information, kitchen equipment information, data from the cameras, and food item inventory information. A projector in communication with the processor visually projects the instruction onto a location in the kitchen workspace for the kitchen worker to observe. Related methods for projecting food preparation instructions are described.

BACKUP CONTROL BASED CONTINUOUS TRAINING OF ROBOTS
20210031364 · 2021-02-04

Provided are systems and methods for training a robot. The method commences with collecting, by the robot, sensor data from a plurality of sensors of the robot. The sensor data may be related to a task being performed by the robot based on an artificial intelligence (AI) model. The method may further include determining, based on the sensor data and the AI model, that a probability of completing the task is below a threshold. The method may continue with sending a request for operator assistance to a remote computing device and receiving, in response to sending the request, teleoperation data from the remote computing device. The method may further include causing the robot to execute the task based on the teleoperation data. The method may continue with generating training data based on the sensor data and results of execution of the task for updating the AI model.
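A minimal sketch of that control flow, assuming a model that returns an action together with an estimated probability of completing the task, plus hypothetical `teleop` and `execute` callables (none of these names come from the patent):

```python
def run_task(sensor_data, model, teleop, execute, threshold=0.8):
    """One task attempt: fall back to teleoperation when the AI model's
    estimated probability of completing the task is below the threshold."""
    action, p_success = model(sensor_data)   # AI model proposes an action
    if p_success < threshold:
        action = teleop(sensor_data)         # request operator assistance
        source = "teleoperation"
    else:
        source = "autonomous"
    result = execute(action)                 # run the action on the robot
    # The returned record is the training datum used to update the AI model.
    return {"source": source, "action": action, "result": result}
```

Accumulating these records, especially the teleoperated ones, yields exactly the sensor-data/outcome pairs the abstract describes for continuously updating the AI model.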