Patent classifications
G05B2219/40584
Object detecting method, object detecting device, and robot system
An object detecting method includes imaging a plurality of target objects with an imaging section and acquiring a first image, recognizing an object position/posture of one of the plurality of target objects based on the first image, counting the number of successfully recognized object positions/postures of the target object, outputting, based on the object position/posture of the target object, a signal for causing a holding section to hold the target object, calculating, as a task evaluation value, a result about whether the target object was successfully held, updating, based on an evaluation indicator including the number of successfully recognized object positions/postures and the task evaluation value, a model for estimating the evaluation indicator from an imaging position/posture of the imaging section and determining an updated imaging position/posture, acquiring a second image in the updated imaging position/posture, and recognizing the object position/posture of the target object based on the second image.
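The abstract above describes a loop that scores candidate imaging poses by combining the recognition count with a grasp-success result, then picks the next imaging position/posture from the updated model. A minimal Python sketch of that flow follows; every function (`capture`, `recognize_poses`, `attempt_grasp`) and the table standing in for the learned model are invented stand-ins, not the patented implementation:

```python
# Illustrative sketch (not the patented method): update a per-pose
# evaluation model from recognition counts and grasp outcomes, then
# select the next imaging pose from the model.

def capture(pose):
    # Stand-in for the imaging section; returns a fake "image" label.
    return f"image@{pose}"

def recognize_poses(image):
    # Stand-in recognizer: pretend pose 2 recognizes the most objects.
    seen = {0: 1, 1: 2, 2: 4}
    idx = int(image.split("@")[1])
    return list(range(seen.get(idx, 0)))

def attempt_grasp(obj):
    # Stand-in holding section: report success for every object here.
    return True

def evaluation(image_pose):
    # Evaluation indicator: recognized count plus grasp-success count.
    image = capture(image_pose)
    objects = recognize_poses(image)
    n_recognized = len(objects)
    n_held = sum(attempt_grasp(o) for o in objects)
    return n_recognized + n_held

# A lookup table stands in for the model estimating the indicator
# from the imaging position/posture; it is updated per pose tried.
model = {}
for pose in (0, 1, 2):
    model[pose] = evaluation(pose)

best_pose = max(model, key=model.get)  # updated imaging position/posture
```

A second image would then be acquired at `best_pose` and recognition repeated, as the abstract's final steps describe.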
THREE-DIMENSIONAL MEASUREMENT APPARATUS, SYSTEM, AND PRODUCTION METHOD
A three-dimensional measurement apparatus includes an attachment portion for attaching the three-dimensional measurement apparatus to a robot, a flange for attaching an end effector, a sensor configured to receive light from an object, and a calculation unit configured to determine three-dimensional information about the object by performing a calculation using data obtained by the sensor. A shortest distance among distances from a center of the flange to an outer peripheral edge of the calculation unit is less than or equal to a radius of the attachment portion or the flange, as viewed from the flange.
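The geometric condition in this abstract (shortest distance from the flange center to the calculation unit's outer edge not exceeding the flange/attachment radius) can be checked directly. A small sketch under assumed 2-D projected coordinates, with invented sample points:

```python
# Illustrative check (assumed geometry, not from the patent): the
# calculation unit's peripheral edge is sampled as points projected
# onto the flange plane; the compactness condition holds when the
# shortest center-to-edge distance is within the given radius.
import math

def satisfies_compactness(flange_center, edge_points, radius):
    shortest = min(math.dist(flange_center, p) for p in edge_points)
    return shortest <= radius

# Invented sample: edge points at distances 5 and 10 from the center,
# checked against a flange radius of 5.
compact = satisfies_compactness((0.0, 0.0), [(3.0, 4.0), (10.0, 0.0)], radius=5.0)
```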
Image capturing system, image capturing apparatus, and control method of the same
An image capturing apparatus that is attachable to a robot arm comprises an image capturing device configured to capture an object image, a receiving unit configured to receive information relating to the status of the robot arm from a control apparatus for controlling the robot arm, and an imaging control unit configured to control an image capturing operation of the image capturing device that is performed on an object, wherein the imaging control unit controls the image capturing operation based on the information relating to the status of the robot arm that was notified by the control apparatus.
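One way to read the control scheme above is that captures are gated on arm-status notifications. A hedged sketch under that assumption (the `ArmStatus` fields and the "capture only when the arm is at rest" rule are invented for illustration):

```python
# Illustrative sketch (assumed behavior, not the patented control
# method): the receiving unit delivers arm-status notifications from
# the control apparatus, and the imaging control unit triggers a
# capture only when the arm has stopped moving.
from dataclasses import dataclass

@dataclass
class ArmStatus:
    moving: bool
    position: tuple

class ImagingController:
    def __init__(self):
        self.captures = []  # positions at which images were captured

    def on_status(self, status: ArmStatus):
        # Image capturing operation controlled by the notified status.
        if not status.moving:
            self.captures.append(status.position)

ctrl = ImagingController()
ctrl.on_status(ArmStatus(moving=True, position=(0, 0)))   # skipped
ctrl.on_status(ArmStatus(moving=False, position=(1, 2)))  # captured
```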
Method and computing system for performing motion planning based on image information generated by a camera
A system and method for motion planning is presented. The system is configured, when an object is or has been in a camera field of view of a camera, to receive first image information that is generated when the camera has a first camera pose. The system is further configured to determine, based on the first image information, a first estimate of the object structure, and to identify, based on the first estimate of the object structure or based on the first image information, an object corner. The system is further configured to cause an end effector apparatus to move the camera to a second camera pose, and to receive second image information for representing the object's structure. The system is configured to determine a second estimate of the object's structure based on the second image information, and to generate a motion plan based on at least the second estimate.
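The coarse-to-fine flow in this abstract — first estimate, corner identification, second camera pose, refined estimate — can be sketched with a point cloud and an axis-aligned bounding box. All data and the corner-selection rule here are invented stand-ins, not the claimed method:

```python
# Illustrative sketch: a first point cloud yields a rough bounding-box
# structure estimate, the box corner nearest the camera is picked as
# the target for the second camera pose, and a second cloud refines
# the estimate that a motion plan would then be generated from.
import numpy as np

def estimate_structure(points):
    # Axis-aligned bounding box as a minimal "object structure" estimate.
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def pick_corner(lo, hi, camera_pos):
    # Choose the box corner closest to the camera (invented heuristic).
    corners = [(x, y, z) for x in (lo[0], hi[0])
                         for y in (lo[1], hi[1])
                         for z in (lo[2], hi[2])]
    return min(corners, key=lambda c: np.linalg.norm(np.subtract(c, camera_pos)))

first_cloud = [(0, 0, 0), (1, 0, 0), (1, 2, 0), (0, 2, 1)]
lo, hi = estimate_structure(first_cloud)             # first estimate
corner = pick_corner(lo, hi, camera_pos=(0, 0, 5))   # object corner
second_cloud = first_cloud + [(1, 2, 3)]             # view from 2nd pose
lo2, hi2 = estimate_structure(second_cloud)          # refined estimate
```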
ROBOTIC AUTOMATED FILLING AND CAPPING SYSTEM FOR VAPE OIL CARTRIDGES
A robotic automated filling and capping system is made up of a cartridge infeed conveyor, a cap infeed conveyor, an outfeed conveyor, a six-axis robot, and a selective compliance articulated robot arm, all of which are configured to work together to automatically, and sanitarily, fill and cap vape oil cartridges.
METHOD AND COMPUTING SYSTEM FOR PERFORMING OBJECT DETECTION OR ROBOT INTERACTION PLANNING BASED ON IMAGE INFORMATION GENERATED BY A CAMERA
A method and computing system for performing object detection are presented. The computing system may be configured to: receive first image information that represents at least a first portion of an object structure of an object in a camera's field of view, wherein the first image information is associated with a first camera pose; generate or update, based on the first image information, sensed structure information representing the object structure; identify an object corner associated with the object structure; cause the robot arm to move the camera to a second camera pose in which the camera is pointed at the object corner; receive second image information associated with the second camera pose; update the sensed structure information based on the second image information; determine, based on the updated sensed structure information, an object type associated with the object; and determine one or more robot interaction locations based on the object type.
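The last two steps above (object type from sensed structure, then interaction locations from the type) can be sketched as a dimension-matching lookup. The registry contents, tolerance, and grasp points below are invented for illustration:

```python
# Illustrative sketch: match the sensed structure's dimensions against
# a small registry of object types, then look up the robot interaction
# (grasp) locations registered for that type.

OBJECT_TYPES = {
    "small_box": {"dims": (10, 10, 5),  "grasp_points": [(5, 5, 5)]},
    "crate":     {"dims": (40, 30, 30), "grasp_points": [(0, 15, 30), (40, 15, 30)]},
}

def classify(dims, tolerance=2.0):
    # Pick the type whose registered dimensions all fall within tolerance.
    for name, spec in OBJECT_TYPES.items():
        if all(abs(a - b) <= tolerance for a, b in zip(dims, spec["dims"])):
            return name
    return None

def interaction_locations(dims):
    obj_type = classify(dims)
    return OBJECT_TYPES[obj_type]["grasp_points"] if obj_type else []

# Dimensions as refined after the second image (invented values).
locs = interaction_locations((41, 29, 31))
```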
PICKING ROBOT, PICKING METHOD, AND COMPUTER PROGRAM PRODUCT
A picking robot of an embodiment includes an acquisition unit, first and second calculation units, a control unit, and a grip unit. The acquisition unit acquires first area information. The first calculation unit calculates first position/posture information indicating a position/posture of a target object from the first area information. The second calculation unit calculates second position/posture information differing from the first position/posture information. The control unit controls a first operation of gripping the target object based on the first position/posture information and moving the target object to a second area. When a first operation result is inadequate, the control unit controls a second operation of arranging the target object at the position indicated by the second position/posture information, in the posture indicated by the second position/posture information. The grip unit grips the target object and moves the gripped target object, based on the control by the control unit.
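The two-operation control flow above (try the first move; fall back to re-arranging at the second position/posture when the result is inadequate) can be sketched as follows. The function names and the adequacy predicate are hypothetical stand-ins:

```python
# Illustrative sketch of the fallback control flow: the first
# operation moves the gripped object to the second area; if its
# result is judged inadequate, the second operation arranges the
# object at the alternative position/posture instead.

def grip(pose):
    pass  # stand-in for the grip unit acting on the target object

def run_picking(first_pose, second_pose, move_ok):
    # move_ok stands in for the adequacy check of the first operation.
    grip(first_pose)
    if move_ok(first_pose):
        return ("first_operation", "second_area")
    # Second operation: place at the second position/posture.
    return ("second_operation", second_pose)

# Invented example where the first operation's result is inadequate.
result = run_picking(first_pose=(1, 2, 0), second_pose=(5, 5, 90),
                     move_ok=lambda pose: False)
```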
IMAGE VOLUME FOR OBJECT POSE ESTIMATION
Apparatuses, systems, and techniques estimate a pose of an object based on images generated from a combined image volume. In at least one embodiment, the combined image volume is obtained from a plurality of image volumes generated based on a plurality of images of an object.
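A minimal sketch of the "combined image volume" idea above: several per-image volumes are merged (here by averaging, an assumed rule) and a pose reading is taken from the result. The voxel data and the argmax scoring are invented, not the patented technique:

```python
# Illustrative sketch: combine a plurality of image volumes into one
# volume by averaging, then read a crude pose estimate (the strongest
# voxel index) from the combined volume.
import numpy as np

def combine_volumes(volumes):
    return np.mean(np.stack(volumes), axis=0)

def estimate_pose(combined):
    # Strongest voxel as a stand-in translation estimate.
    return np.unravel_index(np.argmax(combined), combined.shape)

# Two invented 2x2x2 volumes agreeing on the same voxel.
v1 = np.zeros((2, 2, 2)); v1[1, 0, 1] = 1.0
v2 = np.zeros((2, 2, 2)); v2[1, 0, 1] = 0.5
pose = estimate_pose(combine_volumes([v1, v2]))
```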
MULTI-MEMBERED ACTUATED KINEMATIC SYSTEM
The present invention relates to a multi-limb actuated kinematic system (1) having a plurality of drive units (11-16) connected to one another as a serial kinematic chain, the drive units (11-16) each having a control unit (11b, 12b, 16b) designed to operate at least one drive (11c, 12c, 16c) of the drive unit (11-16) to carry out the movement of the drive unit (11-16). The control units (11b, 12b, 16b) of the drive units (11-16) are connected to one another for signal transmission by a first data line (A.sub.10, A.sub.11, A.sub.12, A.sub.13, A.sub.16, A.sub.17) and are designed to receive at least data for operating the drive (11c, 12c, 16c) via the first data line (A.sub.10, A.sub.11, A.sub.12, A.sub.13, A.sub.16, A.sub.17). The multi-limb actuated kinematic system (1) is characterised in that the control units (11b, 12b, 16b) of the drive units (11-16) are further connected to one another for signal transmission by a second data line (B.sub.10, B.sub.11, B.sub.12, B.sub.13, B.sub.16, B.sub.17, B.sub.19) and are designed to forward the data of the second data line (B.sub.10, B.sub.11, B.sub.12, B.sub.13, B.sub.16, B.sub.17, B.sub.19).
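The dual-line architecture above — each control unit receives drive data on the first line and forwards payloads along the second line down the chain — can be sketched with a simple linked chain of units. The class and message names are invented stand-ins:

```python
# Illustrative sketch (assumed behavior, not the patented design):
# each control unit in the serial chain accepts drive commands on
# line A and forwards line-B payloads to the next unit, so the second
# data line spans the whole kinematic chain.

class ControlUnit:
    def __init__(self, name):
        self.name = name
        self.next_b = None      # second data line to the next unit
        self.drive_cmds = []    # commands applied to this unit's drive
        self.b_log = []         # traffic seen on the second line

    def receive_a(self, cmd):
        # Line A: data for operating this unit's own drive.
        self.drive_cmds.append(cmd)

    def receive_b(self, payload):
        # Line B: log and forward down the chain.
        self.b_log.append(payload)
        if self.next_b is not None:
            self.next_b.receive_b(payload)

units = [ControlUnit(f"u{i}") for i in range(3)]
for a, b in zip(units, units[1:]):
    a.next_b = b

units[0].receive_a("move-joint-0")  # consumed locally, not forwarded
units[0].receive_b("sensor-frame")  # forwarded along the whole chain
```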