Patent classifications
G05B2219/40571
ROBOTIC MULTI-ITEM TYPE PALLETIZING & DEPALLETIZING
Techniques are disclosed for using a robotic arm to palletize or depalletize diverse items. In various embodiments, data associated with a plurality of items to be stacked on or in a destination location is received. A plan to stack the items on or in the destination location is generated based at least in part on the received data. The plan is implemented at least in part by controlling a robotic arm to pick up the items and stack them on or in the destination location according to the plan. For each item, one or more first-order sensors are used to move the item to a first approximation of its destination position at the destination location, and one or more second-order sensors are used to snug the item into its final position.
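The two-stage placement this abstract describes (coarse approach with first-order sensors, fine "snugging" with second-order sensors) can be sketched as a pair of feedback loops. The 1-D position model, sensor functions, and tolerances below are illustrative assumptions, not the patent's implementation:

```python
def place_item(target, read_coarse, read_fine, move, coarse_tol=0.5, fine_tol=0.01):
    # Stage 1: a first-order sensor drives the item to a first
    # approximation of its destination position.
    while abs(read_coarse() - target) > coarse_tol:
        move(target - read_coarse())
    # Stage 2: a second-order sensor snugs the item into its final
    # position with small, damped corrections.
    while abs(read_fine() - target) > fine_tol:
        move(0.8 * (target - read_fine()))

# Minimal 1-D simulation: the coarse sensor carries a fixed bias, the
# fine sensor is exact.
pos = 0.0
def move(delta):
    global pos
    pos += delta

place_item(10.0, read_coarse=lambda: pos + 0.3, read_fine=lambda: pos, move=move)
print(round(pos, 3))
```

The coarse loop alone would leave the item offset by the sensor bias; the fine loop removes that residual, which is the point of the two-tier sensing scheme.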
ROBOTIC SYSTEMS AND METHODS FOR ROBUSTLY GRASPING AND TARGETING OBJECTS
Embodiments are generally directed to generating a training dataset of labelled examples of sensor images and grasp configurations using a set of three-dimensional (3D) models of objects, one or more analytic mechanical representations of either or both of grasp forces and grasp torques, and statistical sampling to model uncertainty in either or both sensing and control. Embodiments can also include using the training dataset to train a function approximator that takes as input a sensor image and returns data that is used to select grasp configurations for a robot grasping or targeting mechanism.
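The dataset-generation idea above (analytic grasp metrics plus statistical sampling over sensing and control uncertainty) can be illustrated with a toy example. The quality metric, noise model, and threshold below are stand-ins chosen for this sketch, not the disclosed system:

```python
import math
import random

def grasp_quality(angle):
    # Toy analytic metric standing in for a wrench-space (force/torque)
    # analysis: grasps aligned with the object's minor axis (angle near 0)
    # score highest.
    return math.cos(angle) ** 2

def robust_label(angle, n_samples=200, noise_std=0.2, threshold=0.7):
    # Statistical sampling over sensing/control uncertainty: perturb the
    # nominal grasp and label it robust only if the mean quality under
    # perturbation clears the threshold.
    mean_q = sum(grasp_quality(angle + random.gauss(0.0, noise_std))
                 for _ in range(n_samples)) / n_samples
    return 1 if mean_q >= threshold else 0

random.seed(0)
# Each training example pairs a (placeholder) rendered sensor image with a
# candidate grasp and its sampled robustness label.
angles = [0.0, 0.3, 1.0, 1.5]
dataset = [{"image": f"render_{a}", "grasp": a, "label": robust_label(a)}
           for a in angles]
print([ex["label"] for ex in dataset])
```

A function approximator trained on such (image, grasp, label) triples can then rank grasp candidates from a raw sensor image at run time.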
Assembling parts in an assembly line
A method for assembling parts in an assembly line, such as an automotive final assembly line, is disclosed. The method includes advancing a part along the assembly line with an Automated Guided Vehicle (AGV), arranging a first real-time vision system to monitor the position of the AGV in at least two directions, and providing the readings of the first real-time vision system to a controller arranged to control an assembly unit of the assembly line to perform an automated operation on the part that is advanced or supported by the AGV. An assembly line is also disclosed.
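The control idea here is that the assembly unit servos on vision measurements of the AGV's position in two directions rather than assuming the AGV is exactly on schedule. A minimal sketch, assuming a simple proportional tracker and a straight-line AGV path (both illustrative choices):

```python
def track_agv(agv_readings, gain=0.6):
    # Proportional tracker: the assembly unit's tool follows the
    # vision-measured AGV position in two directions
    # (x along the line, y lateral drift).
    tool = [0.0, 0.0]
    for measured in agv_readings:      # one (x, y) reading per vision frame
        for i in (0, 1):
            tool[i] += gain * (measured[i] - tool[i])
    return tool

# The AGV advances along x with a slight lateral drift; the tool should
# converge close to the last measured AGV position, trailing only by a
# small ramp-tracking lag.
readings = [(0.1 * k, 0.01 * k) for k in range(1, 51)]
tool = track_agv(readings)
print([round(v, 2) for v in tool])
```

The residual lag on a moving target is what a real controller would remove with feed-forward from the AGV's known conveyance speed.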
ROBOT SYSTEM, LEARNING APPARATUS, INFORMATION PROCESSING APPARATUS, LEARNED MODEL, CONTROL METHOD, INFORMATION PROCESSING METHOD, METHOD FOR MANUFACTURING PRODUCT, AND RECORDING MEDIUM
A robot system includes a robot and an information processing portion. The information processing portion is configured to obtain a learned model by learning first force information about a force applied by a worker to a workpiece, first position information about a position of a first portion of the worker, and first workpiece information about a state of the workpiece, and to control the robot on the basis of output data of the learned model.
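The learning scheme pairs worker-side observations (force, position, workpiece state) with robot commands. As a hedged illustration only, a nearest-neighbour lookup over recorded demonstrations can stand in for the learned model; the feature values and command names are invented for this sketch:

```python
import math

class DemoModel:
    # Toy "learned model": 1-nearest-neighbour over worker demonstrations.
    # Each example pairs (force info, worker-position info, workpiece info)
    # with the command the robot should reproduce in that situation.
    def __init__(self):
        self.examples = []

    def learn(self, force, position, workpiece, command):
        self.examples.append(((force, position, workpiece), command))

    def predict(self, force, position, workpiece):
        query = (force, position, workpiece)
        # Return the command of the closest stored demonstration.
        return min(self.examples, key=lambda ex: math.dist(ex[0], query))[1]

model = DemoModel()
model.learn(force=2.0, position=0.1, workpiece=0.0, command="press")
model.learn(force=0.2, position=0.8, workpiece=1.0, command="release")
print(model.predict(1.8, 0.2, 0.1))
```

A production system would replace the lookup with a trained function approximator, but the input/output contract (worker observations in, robot control data out) is the same.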
TRAINED MODEL GENERATION METHOD, TRAINED MODEL GENERATION DEVICE, TRAINED MODEL, AND HOLDING MODE INFERENCE DEVICE
A trained model includes a class inference model, a first holding mode inference model, and a second holding mode inference model. The class inference model is configured to infer, based on an inference image of a holding target object, a classification result obtained by classifying the holding target object into a predetermined holding category. The first holding mode inference model is configured to infer, based on the classification result and the inference image, a first holding mode for the holding target object. The second holding mode inference model is configured to infer, based on the first holding mode and the inference image, a second holding mode for the holding target object. A trained model generation method includes generating a trained model by performing learning using learning data including a learning image of a learning target object corresponding to a holding target object.
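The three models form a cascade: category first, then a coarse holding mode conditioned on the category, then a refined holding mode conditioned on the coarse one. A minimal sketch with stub heuristics in place of the trained networks (all thresholds and field names are assumptions):

```python
def infer_class(image):
    # Stage 1 (class inference model): classify the holding target into a
    # holding category. Stub heuristic: flat objects -> "pinch", else "wrap".
    return "pinch" if image["height"] < 0.05 else "wrap"

def infer_first_mode(category, image):
    # Stage 2 (first holding mode inference model): a coarse holding mode
    # from the classification result and the image.
    return {"category": category,
            "approach": "top" if category == "pinch" else "side"}

def infer_second_mode(first_mode, image):
    # Stage 3 (second holding mode inference model): refine the coarse mode
    # into a final grasp parameter using the image again.
    return {**first_mode, "width": image["width"] * 0.9}

image = {"height": 0.02, "width": 0.10}
mode = infer_second_mode(infer_first_mode(infer_class(image), image), image)
print(mode)
```

Feeding the image into every stage, not just the first, is what lets later stages correct for detail the category label alone cannot carry.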
Autonomous mobile grasping method for a mechanical arm based on visual-haptic fusion under complex illumination conditions
The present disclosure describes an autonomous mobile grasping method for a mechanical arm based on visual-haptic fusion under complex illumination conditions, comprising approach control toward a target position and feedback control based on environment information. Under complex illumination, weighted fusion is applied to visible-light and depth images of a preselected region, the target object is identified and localized by a deep neural network, and the mobile mechanical arm is driven to continuously approach the target object. The pose of the mechanical arm is adjusted according to contact-force information from a sensor module, the external environment, and the target object. Finally, visual and haptic information about the target object are fused to select an optimal grasping pose and an appropriate grasping force. The method improves object-positioning precision and grasping accuracy, effectively prevents collision damage and instability of the mechanical arm, and reduces harmful deformation of the grasped object.
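The first step of the method, weighted fusion of visible-light and depth images, can be sketched as follows. The brightness-driven weighting ramp and the pixel values are illustrative assumptions; the disclosure does not specify this exact rule:

```python
def fuse(visible, depth, brightness, lo=40.0, hi=200.0):
    # Weighted fusion of a visible-light (grayscale) image and a depth
    # image. Under poor illumination the visible channel is unreliable,
    # so its weight w ramps from 0 (dark) to 1 (bright) between lo and hi.
    w = min(max((brightness - lo) / (hi - lo), 0.0), 1.0)
    return [[w * v + (1.0 - w) * d for v, d in zip(row_v, row_d)]
            for row_v, row_d in zip(visible, depth)]

visible = [[100.0, 120.0], [110.0, 130.0]]
depth = [[40.0, 42.0], [41.0, 43.0]]
dark = fuse(visible, depth, brightness=20.0)     # w = 0 -> depth only
bright = fuse(visible, depth, brightness=220.0)  # w = 1 -> visible only
print(dark[0][0], bright[0][0])
```

The fused image is what the deep network would consume for identification and positioning, so the weighting directly controls how much the pipeline leans on depth when lighting degrades.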
Program generation device and program generation method for generating a route program that returns the tip of a robot from an end point by a prescribed distance
Provided is a program generation device capable of automatically generating a route program which takes into account the amount of bending when the tip of a robot abuts against a workpiece. This program generation device is provided with: an acquisition unit that acquires route data indicating a route to be followed by the tip of the robot with respect to an object; a detection unit that detects a pressing force for pressing the tip of the robot to the object; a calculation unit that calculates the amount of misalignment of the followed route caused by bending of the tip of the robot, on the basis of the pressing force detected by the detection unit and a prescribed constant; and a generation unit that automatically generates a route program for controlling a moving route of the tip of the robot, on the basis of the route data acquired by the acquisition unit and the amount of misalignment calculated by the calculation unit.
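The calculation unit's job reduces to turning a detected pressing force and a prescribed constant into a route offset. A minimal sketch, assuming a linear-spring deflection model (offset = force / stiffness, applied along the pressing axis) as a stand-in for the patent's prescribed constant:

```python
def compensate_route(route, pressing_force, stiffness):
    # Shift each taught point to cancel tip bending. The linear model
    # offset = force / stiffness is an assumed stand-in for the patent's
    # "prescribed constant"; the pressing axis is taken as y.
    offset = pressing_force / stiffness
    return [(x, y + offset) for x, y in route]

# The taught route presses downward (negative-y force) against a workpiece
# surface at y = 0; the tip bends away by force/stiffness, so the generated
# program over-travels by the same amount to keep the tip on the surface.
taught = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
corrected = compensate_route(taught, pressing_force=-5.0, stiffness=100.0)
print(corrected)
```

This mirrors the device's pipeline: route data in (acquisition unit), force in (detection unit), misalignment computed (calculation unit), corrected route program out (generation unit).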