Patent classifications
G05B2219/39543
Method and system for detecting and picking up objects
A method includes steps of: capturing an image of a container; recognizing at least one object in the container based on the image; determining at least one first coordinate set corresponding to the at least one object; determining at least one second coordinate set that corresponds to target one(s) of the at least one first coordinate set and that relates to a fixed picking device of a robotic arm; adjusting position(s) of unfixed picking device(s) of the robotic arm if necessary; and controlling the robotic arm to pick up one(s) of the at least one object that correspond(s) to the at least one second coordinate set with the fixed picking device and/or at least one unfixed picking device.
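As a rough illustration of the capture-recognize-map-pick pipeline this abstract describes, here is a minimal Python sketch. Every function name and the simplified coordinate transform are hypothetical placeholders, not the patent's implementation:

```python
# A minimal sketch of the detect-and-pick flow; every function here is a
# hypothetical stub, not the patent's implementation.
import numpy as np

def detect_objects(image: np.ndarray) -> list[np.ndarray]:
    """Stub recognizer: one camera-frame coordinate set per detected object."""
    return [np.array([120.0, 340.0, 0.45])]          # (u, v, depth)

def camera_to_robot(coord: np.ndarray, extrinsics: np.ndarray) -> np.ndarray:
    """Map a first (camera-frame) set to a second (robot-frame) set.
    Simplified: a real system would deproject pixels with the camera
    intrinsics before applying the 4x4 extrinsic transform."""
    return (extrinsics @ np.append(coord, 1.0))[:3]

def pick_all(image: np.ndarray, extrinsics: np.ndarray, n_unfixed: int = 1) -> None:
    first_sets = detect_objects(image)                        # camera frame
    second_sets = [camera_to_robot(c, extrinsics) for c in first_sets]
    for i, target in enumerate(second_sets):
        # Alternate between the fixed device and the unfixed one(s).
        device = "fixed" if i % (n_unfixed + 1) == 0 else "unfixed"
        print(f"pick object at {np.round(target, 3)} with {device} device")

pick_all(np.zeros((480, 640, 3)), np.eye(4))
```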
AUTONOMOUS MOBILE GRABBING METHOD FOR MECHANICAL ARM BASED ON VISUAL-HAPTIC FUSION UNDER COMPLEX ILLUMINATION CONDITION
The present disclosure describes an autonomous mobile grabbing method for a mechanical arm based on visual-haptic fusion under complex illumination conditions, which mainly comprises approach control toward a target position and feedback control based on environment information.
According to the method, under complex illumination conditions, weighted fusion is conducted on visible-light and depth images of a preselected region, identification and positioning of a target object are completed by a deep neural network, and a mobile mechanical arm is driven to continuously approach the target object. In addition, the pose of the mechanical arm is adjusted according to contact-force information from a sensor module interacting with the external environment and the target object. Meanwhile, visual and haptic information about the target object are fused, and the optimal grabbing pose and an appropriate grabbing force for the target object are selected.
By adopting the method, object positioning precision and grabbing accuracy are improved, collision damage to and instability of the mechanical arm are effectively prevented, and harmful deformation of the grabbed object is reduced.
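The weighted-fusion step lends itself to a short sketch. The per-pixel convex combination and the fixed weight below are illustrative assumptions; the patent does not disclose its weighting scheme:

```python
# A minimal sketch of weighted visible-light/depth fusion: blend a grayscale
# view of the RGB image with a normalized depth map. The fixed weight w is an
# illustrative assumption.
import numpy as np

def fuse_rgb_depth(rgb: np.ndarray, depth: np.ndarray, w: float = 0.7) -> np.ndarray:
    """Per-pixel convex combination of intensity and normalized depth."""
    gray = rgb.mean(axis=2) / 255.0                          # (H, W) intensity
    span = depth.max() - depth.min() + 1e-9
    depth_norm = (depth - depth.min()) / span                # (H, W) in [0, 1]
    return w * gray + (1.0 - w) * depth_norm                 # fused (H, W) map

rgb = np.random.randint(0, 256, (480, 640, 3)).astype(np.float64)
depth = np.random.rand(480, 640)
fused = fuse_rgb_depth(rgb, depth)
print(fused.shape, float(fused.min()), float(fused.max()))
```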
Object manipulation apparatus, handling method, and program product
An object manipulation apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: calculate, based on an image containing one or more objects to be grasped, an evaluation value of a first behavior manner of grasping the one or more objects; generate information representing a second behavior manner based on the image and a plurality of evaluation values of the first behavior manner; and control actuation for grasping the object to be grasped in accordance with the generated information.
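A minimal sketch of the evaluate-then-generate flow, with a toy scoring heuristic standing in for the patent's evaluation function; the Behavior fields and score_behavior are hypothetical:

```python
# Score candidate "first behavior" grasps from an image, then derive a
# "second behavior" from the evaluation values. The heuristic is a stub.
import numpy as np
from dataclasses import dataclass

@dataclass
class Behavior:
    approach_angle: float  # radians from vertical
    gripper_width: float   # meters

def score_behavior(image: np.ndarray, b: Behavior) -> float:
    """Stub evaluation value: prefer vertical approaches and narrow grips."""
    return -abs(b.approach_angle) - 0.5 * b.gripper_width

def second_behavior(image: np.ndarray, candidates: list[Behavior]) -> Behavior:
    scores = np.array([score_behavior(image, b) for b in candidates])
    best = candidates[int(scores.argmax())]
    # Second behavior: refine the best-scoring first behavior, e.g. tighten.
    return Behavior(best.approach_angle, best.gripper_width * 0.9)

img = np.zeros((480, 640, 3))
cands = [Behavior(a, 0.08) for a in (-0.3, 0.0, 0.3)]
print(second_behavior(img, cands))
```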
Grasp generation using a variational autoencoder
In at least one embodiment, a system determines a set of possible grasp poses that allow a robot to successfully grasp an object by generating a set of potential grasp poses, and then evaluating the performance of each potential grasp pose. In at least one embodiment, the system performs a refinement operation on the grasp poses, and based on an evaluation of the poses, creates an improved set of possible grasps for the object.
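The sample-evaluate-refine loop can be sketched with a stand-in linear decoder in place of a trained variational autoencoder; the weights, scoring function, and finite-difference refinement below are all illustrative assumptions:

```python
# Sample latent vectors, decode them to grasp poses, refine each pose uphill
# on a score, and keep the best. The linear "decoder" stands in for a VAE.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(7, 4))          # stand-in decoder: latent (4,) -> pose (7,)

def decode(z: np.ndarray) -> np.ndarray:
    """Map a latent sample to a grasp pose (x, y, z, quaternion)."""
    return W @ z

def score(pose: np.ndarray) -> float:
    """Stub grasp-success evaluator; a learned critic in practice."""
    return -np.linalg.norm(pose[:3])  # prefer grasps near the object origin

def refine(pose: np.ndarray, steps: int = 10, lr: float = 0.05) -> np.ndarray:
    """Nudge a pose toward higher score by finite-difference ascent."""
    for _ in range(steps):
        grad = np.array([(score(pose + lr * e) - score(pose)) / lr
                         for e in np.eye(len(pose))])
        pose = pose + lr * grad
    return pose

poses = [decode(rng.normal(size=4)) for _ in range(32)]   # sampled grasp set
refined = [refine(p) for p in poses]                      # refinement pass
best = max(refined, key=score)                            # evaluated winner
print("best grasp pose:", np.round(best, 3))
```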
Generating a model for an object encountered by a robot
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize using its existing models. The model is generated based on vision sensor data that captures the object from multiple vantage points and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.
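A minimal sketch of merging captures from multiple vantage points into one model, assuming known camera poses; real systems would additionally register, filter, and mesh the points:

```python
# Transform each view's point cloud into a common world frame and merge.
# The camera poses and synthetic points are illustrative assumptions.
import numpy as np

def to_world(points_cam: np.ndarray, pose_cam_in_world: np.ndarray) -> np.ndarray:
    """Apply a 4x4 camera-to-world transform to an (N, 3) point cloud."""
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose_cam_in_world @ homo.T).T[:, :3]

rng = np.random.default_rng(1)
views = []
for angle in (0.0, np.pi / 2):                    # two vantage points
    c, s = np.cos(angle), np.sin(angle)
    pose = np.array([[c, -s, 0, 0.5],
                     [s,  c, 0, 0.0],
                     [0,  0, 1, 0.0],
                     [0,  0, 0, 1.0]])
    views.append(to_world(rng.normal(size=(100, 3)), pose))

model_points = np.vstack(views)                   # merged object model
print("model size:", model_points.shape)
```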
SYSTEMS AND METHODS FOR A VISION GUIDED END EFFECTOR
Systems and methods for picking an object from a plurality of objects are disclosed. An image of a scene containing the plurality of objects is obtained, and a segmentation map is generated for the objects in the scene. The shapes of the objects are determined based on the segmentation map. An end effector is adjusted in response to determining the shapes of the objects, the adjustment including shaping the end effector according to at least one of the shapes of the objects. The plurality of objects is approached in response to the shaping of the end effector, and one of the plurality of objects is picked with the end effector.
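One way to read the shape-then-pick flow as code, assuming a one-parameter effector model (label map in, finger opening out); the margin factor and pixel scale are hypothetical:

```python
# Measure each segment's extent from a segmentation map and set a
# (hypothetical) finger opening to match before approaching.
import numpy as np

def segment_extents(seg_map: np.ndarray) -> dict[int, float]:
    """Return each object's widest pixel extent from an integer label map."""
    extents = {}
    for label in np.unique(seg_map):
        if label == 0:                      # 0 = background
            continue
        ys, xs = np.nonzero(seg_map == label)
        extents[int(label)] = float(max(xs.max() - xs.min(), ys.max() - ys.min()))
    return extents

def shape_effector(extent_px: float, px_to_m: float = 0.001) -> float:
    """Choose a finger opening slightly wider than the object's extent."""
    return 1.2 * extent_px * px_to_m        # 20% margin, assumed

seg = np.zeros((100, 100), dtype=int)
seg[20:40, 30:80] = 1                       # one synthetic object
opening = shape_effector(segment_extents(seg)[1])
print(f"open gripper to {opening:.3f} m, then approach and pick")
```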
HANDLING SYSTEM, TRANSPORT SYSTEM, CONTROL DEVICE, STORAGE MEDIUM, AND HANDLING METHOD
According to an embodiment, there is provided a handling system capable of handling a plurality of objects, the handling system including a movable arm, a holder, a sensor, and a controller. The holder is attached to the movable arm and is capable of holding an object. The sensor is capable of detecting the object. The controller controls the movable arm and the holder. The controller determines whether or not to change an arrangement of the object before the object is held, on the basis of information acquired from the sensor. If it is determined to change the arrangement, the controller evaluates the effectiveness of an arrangement change operation for each object and decides on an arrangement change operation on the basis of the evaluation result.
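A minimal sketch of the per-object effectiveness evaluation, with a toy heuristic standing in for the patent's criterion:

```python
# Score the benefit of rearranging each sensed object before holding, and
# act only if the best score clears a threshold. The heuristic is a stub.
from dataclasses import dataclass

@dataclass
class SensedObject:
    name: str
    graspable_area: float   # exposed area reachable by the holder, cm^2
    clutter: float          # 0 (isolated) .. 1 (buried)

def effectiveness(obj: SensedObject) -> float:
    """Gain from nudging: buried objects with little exposed area benefit most."""
    return obj.clutter - 0.01 * obj.graspable_area

objects = [SensedObject("A", 40.0, 0.2), SensedObject("B", 5.0, 0.9)]
best = max(objects, key=effectiveness)
if effectiveness(best) > 0.3:               # assumed decision threshold
    print(f"rearrange {best.name} before holding")
else:
    print("hold directly; no arrangement change needed")
```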
ROBOT CONTROL SYSTEM, ROBOT CONTROL METHOD, AND PROGRAM
A robot control system according to one or more embodiments may include a robot that performs a task in relation to a workpiece, a coordinate measuring machine that measures a three-dimensional shape of the workpiece, a control device that controls the robot in accordance with a measurement result from the coordinate measuring machine, and an image capturing apparatus that captures an image of the workpiece. The image capture interval of the image capturing apparatus is shorter than the measurement interval of the coordinate measuring machine. In the period after the coordinate measuring machine conducts a measurement and before the robot performs the task, the control device is configured to compute the position of the workpiece by referring to an image capture result from the image capturing apparatus.
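The timing idea, slow precise CMM fixes corrected by a faster camera stream, can be sketched as follows; the constant-velocity drift estimate between frames is an illustrative assumption:

```python
# Between slow CMM measurements, estimate workpiece drift from the faster
# camera stream and extrapolate the last precise fix. Values are synthetic.
import numpy as np

CMM_INTERVAL = 1.0      # seconds between CMM measurements
CAM_INTERVAL = 0.1      # shorter image-capture interval

cmm_pos = np.array([0.100, 0.200, 0.050])        # last precise CMM fix (m)
cam_prev = np.array([0.101, 0.201, 0.050])       # successive camera fixes
cam_now = np.array([0.103, 0.203, 0.050])        # (coarser but frequent)

velocity = (cam_now - cam_prev) / CAM_INTERVAL   # workpiece drift per second
elapsed = 0.4                                    # time since the CMM fix
workpiece_pos = cmm_pos + velocity * elapsed     # position used for the task
print("commanded position:", np.round(workpiece_pos, 4))
```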
GRIP DEVICE AND ROBOT DEVICE COMPRISING SAME
A grip device is provided. A grip device according to an embodiment of the present disclosure includes: a first finger; a second finger facing the first finger; a first link part including a first guide slot and supporting the first finger; a second link part supporting the second finger and including a second guide slot, intersecting the first link part; a hinge configured to move inside the first guide slot and second guide slot and connecting the first link part and the second link part at an intersection point of the first link part and second link part; a first actuator configured to adjust a distance between the first finger and second finger by moving the first link part and/or the second link part; and a second actuator configured to move the hinge inside the first guide slot and second guide slot.
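As a rough planar model of the crossing-link geometry: if the hinge sits a fraction t along each link, sliding it in the guide slots (the second actuator's job) rescales how the opening half-angle set by the first actuator maps to finger separation. All dimensions below are illustrative assumptions:

```python
# Simplified scissor-linkage model: two links of length L cross at a hinge
# located a fraction t along each link; the finger-side tips sit (1 - t) * L
# from the hinge, so the gap scales with both the half-angle and t.
import numpy as np

def finger_gap(L: float, t: float, half_angle: float) -> float:
    """Separation of the finger-side link tips for hinge fraction t."""
    return 2.0 * (1.0 - t) * L * np.sin(half_angle)

L = 0.12                                   # link length, m (assumed)
for t in (0.3, 0.5, 0.7):                  # hinge slid along the guide slots
    gap = finger_gap(L, t, half_angle=np.deg2rad(20))
    print(f"hinge at t={t:.1f}: finger gap = {gap * 1000:.1f} mm")
```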
Machine learning control of object handovers
A robotic control system directs a robot to take an object from a human grasp by obtaining an image of a human hand holding an object, estimating the pose of the human hand and the object, and determining a grasp pose for the robot that will not interfere with the human hand. In at least one example, a depth camera is used to obtain a point cloud of the human hand holding the object. The point cloud is provided to a deep network that is trained to generate a grasp pose for a robotic gripper that can take the object from the human's hand without pinching or touching the human's fingers.
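A minimal sketch of the collision-aware selection, with random proposals and a distance threshold standing in for the trained deep network:

```python
# Propose gripper positions around the held object and reject any that come
# too close to estimated human-hand points from the depth camera's cloud.
import numpy as np

rng = np.random.default_rng(2)
object_center = np.array([0.0, 0.0, 0.3])
hand_points = object_center + rng.normal(scale=0.03, size=(200, 3))  # hand cloud

def safe(grasp_pos: np.ndarray, clearance: float = 0.04) -> bool:
    """Reject grasps within `clearance` meters of any hand point."""
    return float(np.linalg.norm(hand_points - grasp_pos, axis=1).min()) > clearance

proposals = object_center + rng.normal(scale=0.08, size=(50, 3))
safe_grasps = [p for p in proposals if safe(p)]
print(f"{len(safe_grasps)} of {len(proposals)} proposals clear the hand")
```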