Patent classifications
G05B2219/40584
ROBOT
Provided is a robot capable of automatically confirming and correcting the accuracy of a three-dimensional sensor. The robot comprises: a three-dimensional sensor; a notification unit that, on the basis of a change in a physical quantity related to the three-dimensional sensor, notifies a determination timing for determining a deviation of the sensor's optical system; and a determination unit that determines whether or not there is a deviation in the optical system of the three-dimensional sensor. The change in the physical quantity includes at least one of: the acceleration applied to the three-dimensional sensor; the number of acceleration and deceleration cycles applied to the sensor; a change in the sensor's temperature within a certain period; a change in the sensor's temperature over the total operating period; and the number of temperature changes over the total operating period.
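The notification logic described above can be sketched as a simple accumulator over the monitored physical quantities. This is a minimal illustration; the class name, threshold values, and the exact decision rule are assumptions, not taken from the abstract.

```python
# Sketch of a determination-timing monitor for a 3D sensor's optical system.
# Thresholds and structure are illustrative assumptions.

class DeviationCheckNotifier:
    def __init__(self, accel_limit=50.0, cycle_limit=10000, temp_delta_limit=15.0):
        self.accel_limit = accel_limit            # peak acceleration [m/s^2]
        self.cycle_limit = cycle_limit            # accel/decel cycles before a check
        self.temp_delta_limit = temp_delta_limit  # temperature swing [deg C]
        self.cycles = 0
        self.min_temp = None
        self.max_temp = None

    def record_motion(self, acceleration):
        """Return True when a deviation check should be triggered."""
        self.cycles += 1
        if abs(acceleration) > self.accel_limit:
            return True  # a single large shock warrants an immediate check
        return self.cycles >= self.cycle_limit

    def record_temperature(self, temp):
        """Return True when the accumulated temperature swing warrants a check."""
        self.min_temp = temp if self.min_temp is None else min(self.min_temp, temp)
        self.max_temp = temp if self.max_temp is None else max(self.max_temp, temp)
        return (self.max_temp - self.min_temp) >= self.temp_delta_limit
```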
ROBOT SYSTEM, ROBOT ARM, END EFFECTOR, AND ADAPTER
A robot system including a robot arm with a movable portion comprises: a first imaging device and a second imaging device attached to the robot arm; a control unit that controls the robot system; and a distance information acquisition unit that acquires information on the distance to a target object. The control unit is capable of changing a baseline length, that is, the distance between the first imaging device and the second imaging device, and the distance information acquisition unit acquires the information on the distance to the target object on the basis of the baseline length.
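The distance acquisition described here is, in essence, stereo triangulation: for a rectified image pair, the depth follows from the focal length f, the baseline length B, and the disparity d as Z = f·B/d. A minimal sketch (the function name and units are assumptions):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a rectified stereo pair: Z = f * B / d.

    focal_px     -- focal length in pixels (assumed equal for both imaging devices)
    baseline_m   -- distance between the first and second imaging devices, in metres
    disparity_px -- horizontal pixel shift of the target between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Widening the baseline increases the disparity observed at a given depth, which is why a variable baseline lets the system trade measurement range against depth resolution.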
Device and method for controlling a robot to pick up an object in various positions
A method for controlling a robot to pick up an object in various positions. The method includes: defining a plurality of reference points on the object; mapping a first camera image of the object in a known position onto a first descriptor image; identifying the descriptors of the reference points from the first descriptor image; mapping a second camera image of the object in an unknown position onto a second descriptor image; searching for the identified descriptors of the reference points in the second descriptor image; ascertaining, from the found positions, the positions of the reference points in three-dimensional space for the unknown position; and ascertaining a pickup pose of the object for the unknown position from the ascertained positions of the reference points.
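The descriptor-search step can be sketched as a nearest-neighbour lookup in descriptor space. The dense per-pixel descriptor representation follows the abstract; the brute-force L2 search below is an illustrative choice, not the patented implementation.

```python
import numpy as np

def find_descriptor_pixels(descriptor_image: np.ndarray,
                           reference_descriptors: np.ndarray) -> np.ndarray:
    """Locate each reference descriptor in a dense descriptor image.

    descriptor_image      -- (H, W, D) array, one D-dimensional descriptor per pixel
    reference_descriptors -- (N, D) descriptors identified at the reference points
    returns               -- (N, 2) array of (row, col) best-match pixel positions
    """
    h, w, d = descriptor_image.shape
    flat = descriptor_image.reshape(-1, d)                          # (H*W, D)
    # Squared L2 distance from every pixel descriptor to every reference descriptor.
    dists = ((flat[None, :, :] - reference_descriptors[:, None, :]) ** 2).sum(-1)
    best = dists.argmin(axis=1)                                     # (N,) flat indices
    return np.stack(np.unravel_index(best, (h, w)), axis=1)
```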
Automatic welding system and method for large structural parts based on hybrid robots and 3D vision
Disclosed are an automatic welding system and method for large structural parts based on hybrid robots and 3D vision. The system comprises a hybrid robot system composed of a mobile robot and an MDOF robot, a 3D vision system, and a welding system used for welding. A rough positioning technique based on a mobile platform is combined with an accurate recognition and positioning technique based on high-accuracy 3D vision, so the working range of the MDOF robot in the XYZ directions is expanded and flexible welding of large structural parts is realized. By adopting 3D vision, the invention tolerates errors better and places lower requirements on the machining accuracy of workpieces, the positioning accuracy of the mobile robot, and the placement accuracy of the workpieces; cost is reduced, flexibility is improved, the working range is expanded, labor is saved, and production efficiency and welding quality are improved.
IMAGE CAPTURING SYSTEM, IMAGE CAPTURING APPARATUS, AND CONTROL METHOD OF THE SAME
An image capturing apparatus that is attachable to a robot arm comprises: an image capturing device configured to capture an object image; a receiving unit configured to receive information relating to the status of the robot arm from a control apparatus that controls the robot arm; and an imaging control unit configured to control an image capturing operation performed on an object by the image capturing device, wherein the imaging control unit controls the image capturing operation based on the robot arm status information notified by the control apparatus.
METHOD AND COMPUTING SYSTEM FOR PERFORMING OBJECT DETECTION OR ROBOT INTERACTION PLANNING BASED ON IMAGE INFORMATION GENERATED BY A CAMERA
A method and computing system for object detection are presented. The computing system is configured to: receive first image information representing at least a first portion of an object structure of an object in a camera's field of view, wherein the first image information is associated with a first camera pose; generate or update, based on the first image information, sensed structure information representing the object structure; identify an object corner associated with the object structure; cause a robot arm to move the camera to a second camera pose pointed at the object corner; receive second image information associated with the second camera pose; update the sensed structure information based on the second image information; determine, based on the updated sensed structure information, an object type associated with the object; and determine one or more robot interaction locations based on the object type.
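One simple way to identify an object corner from partially sensed structure is to take the cloud point nearest a corner of the cloud's axis-aligned bounding box. This heuristic is an illustrative assumption for the corner-identification step, not the patented method.

```python
import numpy as np

def pick_object_corner(points: np.ndarray) -> np.ndarray:
    """Pick a convex corner of the sensed structure as the next viewing target.

    points -- (N, 3) point cloud of the partially sensed object structure
    The corner is taken as the cloud point closest to the (+x, +y, +z) corner
    of the axis-aligned bounding box (an illustrative heuristic).
    """
    bbox_corner = points.max(axis=0)
    idx = np.linalg.norm(points - bbox_corner, axis=1).argmin()
    return points[idx]
```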
THREE-DIMENSIONAL MEASUREMENT SYSTEM
A three-dimensional measurement system has a measurement unit that images the measuring object and that has a light projecting device that projects light onto the measuring object and a light receiving device that receives the reflected light. The system has a specular reflection member that forms a mounting surface for mounting the measuring object and reflects the projected light at a specific angle. The system has a data acquisition unit that acquires imaging data and a coordinate calculation unit that calculates three-dimensional coordinates for the part of the imaging data whose brightness is higher than a lower-limit brightness and does not calculate three-dimensional coordinates for the part whose brightness is lower than the lower-limit brightness. The light receiving device is arranged at a position where the brightness of the part in which the specular reflection member is imaged is lower than the lower-limit brightness.
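The brightness gating can be sketched as a per-pixel mask over the imaging data; the array shapes and the NaN convention for excluded pixels are assumptions.

```python
import numpy as np

def triangulate_valid_pixels(brightness: np.ndarray,
                             xyz: np.ndarray,
                             lower_limit: float) -> np.ndarray:
    """Keep 3D coordinates only where the received light is bright enough.

    brightness  -- (H, W) intensity image from the light receiving device
    xyz         -- (H, W, 3) candidate three-dimensional coordinates per pixel
    lower_limit -- lower-limit brightness for a trusted measurement
    Pixels not strictly above the limit (e.g. where only the specular
    mounting surface is imaged) are marked NaN and thus excluded.
    """
    out = xyz.astype(float).copy()
    out[~(brightness > lower_limit)] = np.nan
    return out
```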
TRANSFORMATION FOR COVARIATE SHIFT OF GRASP NEURAL NETWORKS
A covariate shift generally refers to a change in the distribution of the input data (e.g., the noise distribution) between the training and inference regimes. Such covariate shifts can degrade the performance of grasp neural networks, and thus of robotic grasping operations. As described herein, the output of a grasp neural network can be transformed so as to determine appropriate locations on a given object for a robot or autonomous machine to grasp.
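One minimal form of such an output transformation is a per-pixel recalibration of the grasp-quality map, with bias and scale maps assumed to be estimated from calibration data collected under the shifted input distribution. The affine form and all names here are illustrative assumptions, not the patent's method.

```python
import numpy as np

def recalibrate_grasp_map(grasp_quality: np.ndarray,
                          bias_map: np.ndarray,
                          scale_map: np.ndarray):
    """Transform a grasp network's output, then pick a grasp location.

    grasp_quality -- (H, W) raw per-pixel grasp scores from the network
    bias_map      -- (H, W) noise-induced bias, assumed fit on calibration data
    scale_map     -- (H, W) per-pixel scale, assumed fit on calibration data
    Returns the (row, col) of the best transformed score and that score.
    """
    corrected = (grasp_quality - bias_map) / scale_map
    loc = np.unravel_index(np.argmax(corrected), corrected.shape)
    return loc, corrected[loc]
```

Because the correction varies per pixel, it can change which location wins, unlike a single global rescaling of the whole map.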
IMAGING DEVICE FOR CALCULATING THREE-DIMENSIONAL POSITION ON THE BASIS OF IMAGE CAPTURED BY VISUAL SENSOR
This imaging device comprises: an orientation detection unit that detects the orientation of a camera with respect to the direction of gravity; and a parameter setting unit that sets parameters for calculating a three-dimensional position corresponding to a specific position in an image captured by the camera. A storage unit stores setting information for setting parameters corresponding to the orientation of the camera. The parameter setting unit sets parameters on the basis of the direction of gravity, the orientation of the camera, and the setting information.
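Storing one parameter set per camera orientation and back-projecting through a pinhole model gives a minimal sketch of this idea. The pinhole model, dictionary keys, and numeric values below are assumptions; only the notion of orientation-dependent setting information comes from the abstract.

```python
# Setting information: one intrinsic parameter set per camera orientation
# relative to the direction of gravity (values are illustrative assumptions).
SETTING_INFO = {
    'downward':   {'fx': 600.0, 'fy': 600.0, 'cx': 320.0, 'cy': 240.0},
    'horizontal': {'fx': 602.5, 'fy': 602.5, 'cx': 321.0, 'cy': 239.0},
}

def pixel_to_3d(u: float, v: float, depth: float, orientation: str) -> tuple:
    """Back-project an image point to a three-dimensional camera-frame position.

    The parameter setting unit selects the parameters matching the detected
    camera orientation, then applies standard pinhole back-projection.
    """
    p = SETTING_INFO[orientation]
    x = (u - p['cx']) * depth / p['fx']
    y = (v - p['cy']) * depth / p['fy']
    return (x, y, depth)
```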
ROBOTIC AUTOMATED FILLING AND CAPPING SYSTEM FOR VAPE OIL CARTRIDGES
A robotic automated filling and capping system is made up of a cartridge infeed conveyor, a cap infeed conveyor, an outfeed conveyor, a six-axis robot, and a selective compliance articulated robot arm, all of which are configured to work together to automatically, and sanitarily, fill and cap vape oil cartridges.