G05B2219/39473

DELIVERY VEHICLES FOR EN ROUTE FOOD PRODUCT PREPARATION

Technologies are generally described for delivery vehicles and containers for en route food product preparation. Modular food product preparation systems, which may be housed in trucks, railway cars, watercraft, and similar vehicles, may receive food items and supplies and prepare food product(s) en route such that the food product(s) are ready by the time the system reaches a delivery destination. Food preparation process steps and timing may be determined based on travel information (e.g., delivery destination, routes, etc.), as well as food item and food product information. An on-board controller may determine the process steps and timing(s) and control the operations of robotic devices arranged modularly in a container or vehicle to execute steps of the food preparation process. Alternatively or additionally, the on-board controller may receive instructions from a remote controller. Travel parameters of the vehicle may also be adjusted based on the food preparation process and/or travel information.

EN ROUTE FOOD PRODUCT PREPARATION

Technologies are generally described for en route food product preparation. Food product preparation process steps and timing may be determined based on travel information (e.g., starting point, intermediate waypoints, delivery destination, routes, etc.), as well as food item and food product information. Instructions specifying the steps of the food product preparation process and their timing may be transmitted to a controller managing the operations of robotic devices arranged modularly in a container or truck. Instructions may be updated en route based on changing travel or other conditions.
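The timing determination described in these two abstracts can be sketched as a simple backward-scheduling calculation: given an estimated travel time and the duration of each sequential preparation step, work backward from arrival so the food product finishes exactly on delivery. All names (`plan_prep_schedule`, the step list) are illustrative assumptions, not details from the patents.

```python
# Hypothetical sketch of en route preparation scheduling: start the
# sequential steps late enough that the last one ends at arrival.
def plan_prep_schedule(travel_minutes, steps):
    """Return (step_name, start_offset) pairs, offsets in minutes after departure.

    steps: list of (name, duration_minutes) executed sequentially.
    Raises ValueError if the steps cannot fit in the travel time.
    """
    total = sum(duration for _, duration in steps)
    if total > travel_minutes:
        raise ValueError("preparation does not fit in travel time")
    start = travel_minutes - total  # idle until preparation must begin
    schedule = []
    for name, duration in steps:
        schedule.append((name, start))
        start += duration
    return schedule

schedule = plan_prep_schedule(45, [("assemble", 10), ("bake", 25), ("box", 5)])
# [('assemble', 5), ('bake', 15), ('box', 40)]
```

An en route update (changed route, new ETA) would simply rerun the calculation with the remaining steps and the revised travel time.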

Robotic grasping prediction using neural networks and geometry aware object representation

Deep machine learning methods and apparatus are described, some of which relate to determining a grasp outcome prediction for a candidate grasp pose of an end effector of a robot. Some implementations are directed to training and utilization of both a geometry network and a grasp outcome prediction network. The trained geometry network can be utilized to generate, based on two-dimensional or two-and-a-half-dimensional image(s), geometry output(s) that are geometry-aware and that represent (e.g., high-dimensionally) three-dimensional features captured by the image(s). In some implementations, the geometry output(s) include at least an encoding that is generated by a trained encoding neural network trained to generate encodings that represent three-dimensional features (e.g., shape). The trained grasp outcome prediction network can be utilized to generate, based on applying the geometry output(s) and additional data as input(s) to the network, a grasp outcome prediction for the candidate grasp pose.
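The two-network pipeline the abstract describes can be sketched structurally: a "geometry network" encodes an image into a latent vector, and a "grasp outcome prediction network" scores a candidate grasp pose against that encoding. The toy numeric stand-ins below are illustrative only; the patent's networks are trained deep models, not fixed formulas.

```python
import math

def geometry_encoding(depth_image):
    # Stand-in for the trained encoder: summarize a 2.5D image
    # (list of rows of depth values) into a small feature vector.
    flat = [v for row in depth_image for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return [mean, var]

def grasp_outcome_prediction(encoding, grasp_pose):
    # Stand-in for the trained prediction network: a fixed linear score
    # squashed to (0, 1), read as a grasp success probability.
    score = 0.5 * encoding[0] - 0.2 * encoding[1] - 0.1 * sum(grasp_pose)
    return 1.0 / (1.0 + math.exp(-score))

encoding = geometry_encoding([[0.4, 0.6], [0.5, 0.5]])
probability = grasp_outcome_prediction(encoding, (0.1, 0.2, 0.0))
```

The key structural point is that the encoding is computed once per scene and reused to score many candidate grasp poses.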

Hybrid Machine Learning-Based Systems and Methods for Training an Object Picking Robot with Real and Simulated Performance Data

For training an object picking robot with real and simulated grasp performance data, grasp locations on an object are assigned based on the object's physical properties. A simulation experiment for robot grasping is performed using a first set of the assigned locations. Based on data from the simulation, a simulated object grasp quality is evaluated for each of the assigned locations. A first set of candidate grasp locations on the object is determined from the simulated grasp quality data. Based on sensor data from an actual grasping experiment performed at each of the candidate grasp locations, an actual object grasp quality is evaluated for each candidate location.
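The hybrid pipeline above can be sketched as two stages: simulated grasp quality filters the assigned locations down to candidates, which are then re-scored with real-sensor measurements. The function names, quality scores, and threshold below are assumptions for illustration, not values from the patent.

```python
def filter_by_simulation(assigned_locations, sim_quality, threshold):
    """First stage: keep locations whose simulated grasp quality passes."""
    return [loc for loc in assigned_locations if sim_quality[loc] >= threshold]

def rank_by_real_trials(candidates, real_quality):
    """Second stage: order candidates by measured grasp quality, best first."""
    return sorted(candidates, key=lambda loc: real_quality[loc], reverse=True)

# Mock quality scores standing in for simulation data and robot sensor data.
sim_quality = {"rim": 0.9, "center": 0.4, "handle": 0.8}
real_quality = {"rim": 0.6, "handle": 0.95}

candidates = filter_by_simulation(["rim", "center", "handle"], sim_quality, 0.7)
best = rank_by_real_trials(candidates, real_quality)[0]  # 'handle'
```

The design point is cost: cheap simulated trials prune the search so that expensive physical trials run only on promising candidates.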

DEPTH PERCEPTION MODELING FOR GRASPING OBJECTS
20200215685 · 2020-07-09 ·

Methods, grasping systems, and computer-readable mediums storing computer executable code for grasping an object are provided. In an example, a depth image of the object may be obtained by a grasping system. A potential grasp point of the object may be determined by the grasping system based on the depth image. A tactile output corresponding to the potential grasp point may be estimated by the grasping system based on data from the depth image. The grasping system may be controlled to grasp the object at the potential grasp point based on the estimated tactile output.
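A toy sketch of the idea above: estimate a tactile signal at a candidate grasp point from the depth image alone, then grasp only when the estimate clears a threshold. The local-flatness heuristic standing in for the learned tactile model, and all names and thresholds, are assumptions for illustration.

```python
def estimated_tactile_output(depth_image, point, window=1):
    """Estimate contact quality at (row, col): flatter local depth -> better contact."""
    r, c = point
    patch = [depth_image[i][j]
             for i in range(r - window, r + window + 1)
             for j in range(c - window, c + window + 1)]
    spread = max(patch) - min(patch)  # depth variation across the patch
    return 1.0 / (1.0 + spread)      # 1.0 for perfectly flat contact

def should_grasp(depth_image, point, threshold=0.8):
    """Control decision: grasp only if the estimated tactile output is high enough."""
    return estimated_tactile_output(depth_image, point, window=1) >= threshold

flat_region = [[0.50] * 3 for _ in range(3)]
assert should_grasp(flat_region, (1, 1))
```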

ROBOT SYSTEM FOR PROCESSING AN OBJECT AND METHOD OF PACKAGING AND PROCESSING THE SAME
20200094414 · 2020-03-26 ·

A robot system for processing an object to be packaged as a product, a packaging method, and a method of processing the same are provided. The object has multiple surfaces, and multiple e-package information tags are provided on the surfaces of the object for storing information of the product. Each surface is provided with one of the e-package information tags. The information of the product includes information of a location, an orientation and physical features of the object. In operation, the robot system controls a sensing device to detect and capture one of the e-package information tags on the object to obtain a captured image, and processes the captured image to obtain the information of the product. Based on the information of the product, the robot system controls a robotic grasping device to perform a robotic manipulation for handling the object.
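A hypothetical sketch of the tag-decoding step described above: assuming the captured tag image has already been decoded into a key=value payload, parsing it yields the location, orientation, and physical-feature information the robot needs. The payload format is an illustrative assumption, not a format specified by the patent.

```python
def parse_epackage_tag(payload):
    """Parse 'key=value;key=value' text from a decoded tag into a dict."""
    info = {}
    for field in payload.strip().split(";"):
        if not field.strip():
            continue  # tolerate empty fields from trailing separators
        key, _, value = field.partition("=")
        info[key.strip()] = value.strip()
    return info

tag = parse_epackage_tag("loc=12.0,3.5,0.0; ori=0,0,90; mass_kg=1.2")
# tag["ori"] == "0,0,90"
```

Because every surface carries a tag, whichever tag the sensing device captures first suffices to recover the object's pose for the grasping device.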

SYSTEM AND METHOD FOR ROBOTIC BIN PICKING

A method and computing system are provided for identifying one or more candidate objects for selection by a robot. A path to the one or more candidate objects may be determined based upon, at least in part, a robotic environment and at least one robotic constraint. The feasibility of grasping a first candidate object of the one or more candidate objects may be validated. If the feasibility is validated, the robot may be controlled to physically select the first candidate object. If the feasibility is not validated, at least one of a different grasping point of the first candidate object, a second path, or a second candidate object may be selected.
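The selection loop above can be sketched compactly: plan a path to a candidate, validate grasp feasibility, and fall back to another grasp point or another object when validation fails. All callables below are placeholders standing in for the real path planner and feasibility validator.

```python
def pick_first_feasible(candidates, plan_path, grasp_points, is_feasible):
    """Return (object, grasp_point, path) for the first validated grasp, else None."""
    for obj in candidates:
        path = plan_path(obj)  # may be None when no path respects the constraints
        if path is None:
            continue  # fall back to the next candidate object
        for grasp in grasp_points(obj):
            if is_feasible(obj, grasp, path):
                return obj, grasp, path
        # no feasible grasp point on this object: try the next one
    return None  # nothing graspable this cycle

choice = pick_first_feasible(
    ["bolt", "plate"],
    plan_path=lambda obj: None if obj == "bolt" else ["approach", "descend"],
    grasp_points=lambda obj: ["edge", "center"],
    is_feasible=lambda obj, grasp, path: grasp == "center",
)
# choice == ('plate', 'center', ['approach', 'descend'])
```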

Workpiece picking system
10434652 · 2019-10-08 ·

A workpiece picking system including: a robot; a hand, attached to a hand tip portion of the robot, for picking workpieces; a three-dimensional sensor, attached to the hand tip portion, for acquiring positional information of a three-dimensional point group in a partial region in a container; a workpiece state calculation unit which calculates a position and posture of a workpiece based on positional information of a three-dimensional point group in an acquired first partial region; a data acquisition position calculation unit which calculates a robot position corresponding to a second partial region where positional information is to be acquired next, based on the positional information of the three-dimensional point group in the acquired first partial region; and a control unit which controls the robot and the hand based on the calculated position and posture of the workpiece and based on the calculated robot position corresponding to the second partial region.
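The two calculations the abstract names can be sketched as follows: estimate a workpiece position from the 3D points acquired so far, and choose where to move the hand-mounted sensor next. Using the sparsest known region as the next acquisition target is an assumption made here for illustration; the patent does not specify that policy.

```python
def workpiece_position(points):
    """Centroid of the 3D point group as a crude workpiece position estimate."""
    n = len(points)
    return tuple(sum(p[axis] for p in points) / n for axis in range(3))

def next_scan_region(region_point_counts):
    """Pick the region with the fewest acquired points to scan next."""
    return min(region_point_counts, key=region_point_counts.get)

pos = workpiece_position([(0.0, 0.0, 0.1), (0.2, 0.0, 0.1), (0.1, 0.3, 0.1)])
# approximately (0.1, 0.1, 0.1)
region = next_scan_region({"A": 1200, "B": 150, "C": 900})  # 'B'
```

A real system would fit a full 6-DoF pose (position and posture) rather than a centroid, but the dataflow (points in the first region drive both the pick pose and the next sensor placement) matches the abstract.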