Patent classifications
B25J9/1697
Controller of robot apparatus for adjusting position of member supported by robot
A controller of the robot apparatus performs approaching control for making a second workpiece approach a first workpiece, and position adjustment control for adjusting a position of the second workpiece with respect to a position of the first workpiece. The approaching control includes control for calculating a movement direction and a movement amount of a position of the robot based on an image captured by a first camera, and making the second workpiece approach the first workpiece. The position adjustment control includes control for calculating a movement direction and a movement amount of a position of the robot based on an image captured by the first camera and an image captured by a second camera, and precisely adjusting the position of the second workpiece with respect to the first workpiece.
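The image-based approaching step can be sketched roughly as follows. This is an illustrative reading of the abstract, not the patented method: the pixel-to-millimetre scale `px_to_mm` and the proportional `gain` are assumed parameters, and the pixel error between the tracked workpiece feature and its target location stands in for the camera-based calculation.

```python
import numpy as np

def approach_step(feature_px, target_px, px_to_mm=0.2, gain=0.5):
    """Convert a pixel error in the first camera's image into a
    movement direction (unit vector) and movement amount (mm).
    px_to_mm and gain are assumed, illustrative parameters."""
    error_px = np.asarray(target_px, float) - np.asarray(feature_px, float)
    move_mm = gain * px_to_mm * error_px           # commanded translation
    amount = float(np.linalg.norm(move_mm))        # movement amount
    direction = move_mm / amount if amount > 0 else np.zeros(2)
    return direction, amount
```

In a real visual-servoing loop this step would be repeated until the pixel error falls below a threshold, at which point the finer two-camera position adjustment would take over.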
System, devices and methods for tele-operated robotics
The system, devices and methods herein enable autonomous operation and tele-operation of tele-operated robots for maintenance of a property around known and unknown obstacles. A method may include using an unmanned aerial vehicle to obtain additional data relating to the property and obstacles within the property, and planning a path around the obstacles using data from sensors on board the tele-operated robot together with the aerial image. A method may also optimize the total time needed for performing the property maintenance, and the labor costs, in situations where manual intervention is needed for navigating the tele-operated robot around obstacles on the property or for removing obstacles from the property.
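The path-planning step around known obstacles can be illustrated with a minimal grid search. This is a generic sketch, not the specific planner claimed: a breadth-first search over an occupancy grid that one might build from the on-board sensor data and the aerial image.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid
    (0 = free, 1 = obstacle), or None if goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}               # visited set + back-pointers
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                # reconstruct path by walking back
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cur
                queue.append((nr, nc))
    return None
```

When the planner returns None, that corresponds to the situation the abstract mentions where manual intervention is needed to steer around, or remove, an obstacle.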
Object manipulation apparatus, handling method, and program product
An object manipulation apparatus according to an embodiment of the present disclosure includes a memory and a hardware processor coupled to the memory. The hardware processor is configured to: calculate, based on an image containing one or more objects to be grasped, an evaluation value of a first behavior manner of grasping the one or more objects; generate information representing a second behavior manner based on the image and a plurality of evaluation values of the first behavior manner; and control grasping actuation for the objects to be grasped in accordance with the generated information.
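One plausible reading of "first behavior" and "second behavior" can be sketched as follows; this is an illustrative interpretation, not the disclosed method. First-behavior grasp candidates are scored with an evaluation function (standing in for the image-based evaluation), and a second behavior is derived by locally refining the best-scoring candidate.

```python
def second_behavior(candidates, evaluate, step=0.1):
    """Score first-behavior grasp candidates (2-D positions here,
    for illustration) and return refined second-behavior candidates
    perturbed around the best one. `evaluate` and `step` are assumed."""
    scores = [evaluate(c) for c in candidates]
    best = candidates[max(range(len(candidates)), key=scores.__getitem__)]
    # second behavior manner: a local grid of refinements around the best
    return [(best[0] + dx, best[1] + dy)
            for dx in (-step, 0.0, step) for dy in (-step, 0.0, step)]
```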
Robot task system
A robot task system includes: a robot; a transfer device configured to be driven to transfer a plurality of workpieces thereon by a specific distance at a time, the plurality of workpieces being placed within the specific distance; a driving management unit configured to manage a driving distance and a driving start timing of the transfer device for driving the transfer device each time; a task position generation unit configured to generate a plurality of task positions at the driving start timing managed by the driving management unit, the plurality of task positions being positions for the robot to execute a predetermined task on the plurality of workpieces; a task unit configured to update, according to the driving of the transfer device, the plurality of task positions generated by the task position generation unit and generate a task command to cause the robot to execute the predetermined task on the plurality of workpieces while following the plurality of workpieces; and a control unit configured to control the transfer device based on the driving distance and the driving start timing of the transfer device, and control the robot based on the task command generated by the task unit.
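The core bookkeeping of following workpieces on an intermittently driven transfer device can be sketched minimally: task positions generated at the driving start are shifted by the conveyor's travel so the robot can follow the workpieces. The class and its names are illustrative assumptions, not the patent's units.

```python
class ConveyorTracker:
    """Track task positions on a transfer device driven by a
    specific distance at a time (illustrative sketch only)."""

    def __init__(self, task_positions):
        self.base = [tuple(p) for p in task_positions]  # positions at driving start
        self.travel = 0.0                               # distance driven so far

    def advance(self, distance):
        """Register conveyor travel reported by the driving management unit."""
        self.travel += distance

    def current_positions(self):
        """Updated task positions; workpieces move along +x with the device."""
        return [(x + self.travel, y) for x, y in self.base]
```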
Method and system for automated plant surveillance and manipulation
A method and a system for automated plant surveillance and manipulation are provided. Pursuant to the method and the system, images of target plants are obtained through a machine vision system having multiple cameras. The obtained images of the target plants are processed to determine tissue candidates of the target plants and to determine a position and an orientation of each tissue candidate. A tool is manipulated, based on the position and the orientation of each tissue candidate, to excise each tissue candidate and obtain tissue samples. The tissue samples are transported for subsequent manipulation, including live processing or destructive processing of the tissue samples.
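A common way to recover a position and orientation from a segmented tissue candidate — offered here only as a generic sketch, since the abstract does not specify the method — is to take the centroid of the candidate's pixel mask and the principal axis of its pixel distribution.

```python
import numpy as np

def candidate_pose(mask_pixels):
    """Centroid and principal-axis orientation (radians) of a set of
    (x, y) pixel coordinates belonging to one tissue candidate."""
    pts = np.asarray(mask_pixels, float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    cov = centered.T @ centered / len(pts)      # 2x2 covariance of the mask
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]       # dominant direction
    angle = float(np.arctan2(axis[1], axis[0]))
    return centroid, angle
```

The sign of the principal axis is ambiguous, so downstream tool planning would typically fix the orientation convention (e.g., pointing away from the stem).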
METHOD AND SYSTEM FOR PERFORMING AUTOMATIC CAMERA CALIBRATION
A system and method for performing automatic camera calibration are provided. The system receives a calibration image and determines a plurality of image coordinates representing the respective locations at which a plurality of pattern elements of a calibration pattern appear in the calibration image. The system determines, based on the plurality of image coordinates and defined pattern element coordinates, an estimate for a first lens distortion parameter of a set of lens distortion parameters, wherein the estimate for the first lens distortion parameter is determined while a second lens distortion parameter of the set is estimated to be zero, or is determined without estimating the second lens distortion parameter. After the estimate of the first lens distortion parameter is determined, the system determines an estimate for the second lens distortion parameter based on the estimate for the first lens distortion parameter.
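The staged estimation can be illustrated with the standard radial distortion model p_d = p·(1 + k1·r² + k2·r⁴): first fit k1 by least squares while holding k2 at zero, then fit k2 on the residual with k1 fixed. This is a simplified sketch under that assumed model, not the claimed procedure.

```python
import numpy as np

def fit_k1_then_k2(ideal, distorted):
    """Stage 1: estimate k1 with k2 held at zero.
    Stage 2: estimate k2 with k1 fixed at its stage-1 value.
    `ideal` and `distorted` are Nx2 normalized point coordinates."""
    p = np.asarray(ideal, float)
    d = np.asarray(distorted, float)
    r2 = (p ** 2).sum(axis=1)                  # r^2 per point
    resid = (d - p).ravel()                    # observed displacement
    a1 = (p * r2[:, None]).ravel()             # coefficient of k1
    k1 = float(a1 @ resid / (a1 @ a1))         # 1-D least squares, k2 = 0
    resid2 = (d - p * (1 + k1 * r2)[:, None]).ravel()
    a2 = (p * (r2 ** 2)[:, None]).ravel()      # coefficient of k2
    k2 = float(a2 @ resid2 / (a2 @ a2))        # refine with k1 fixed
    return k1, k2
```

Estimating the dominant low-order term first keeps the higher-order term from absorbing noise, which is one common motivation for this kind of staged fit.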
ROBOT SYSTEM AND METHOD OF FORMING THREE-DIMENSIONAL MODEL OF WORKPIECE
A robot system includes a robot installed in a work area and controlled by a second control device, a 3D camera operated by an operator, a sensor that is disposed in a manipulation area that is a space different from the work area, and wirelessly detects position information and posture information on the 3D camera, a display, and a first control device. The first control device acquires image information on a workpiece imaged by the 3D camera, acquires, from the sensor, the position information and the posture information when the workpiece is imaged by the 3D camera, displays the acquired image information on the display, forms a three-dimensional model of the workpiece based on the image information, and the acquired position information and posture information, displays the formed three-dimensional model on the display, and outputs first data that is data of the formed three-dimensional model to the second control device.
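Fusing the image information with the sensed camera pose amounts, at its core, to mapping points measured in the camera frame into the work-area frame. A minimal sketch of that transform, assuming the sensor delivers the pose as a rotation matrix and a translation vector:

```python
import numpy as np

def camera_points_to_world(points_cam, rotation, translation):
    """Map 3-D points from the camera frame into the world (work-area)
    frame: p_world = R @ p_cam + t. Accepts a single point or an Nx3 array."""
    return (np.asarray(points_cam, float) @ np.asarray(rotation, float).T
            + np.asarray(translation, float))
```

Applying this per captured view and merging the transformed points is one standard way to accumulate the data from which a three-dimensional model is formed.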
POSITION DETECTION METHOD, CONTROLLER, AND ROBOT SYSTEM
A method includes: (a) causing a robotic arm to position a contacting structure of the arm laterally in a horizontal direction in relation to a first subject on a target object; (b) causing the arm to bring the contacting structure into contact with at least three locations on the first subject; (c) detecting positions of the contacting structure in relation to the robot when it contacts those locations; (d) detecting a position of the first subject in relation to the robot by using the detected positions of the contacting structure; (e) performing the same steps as steps (a) to (d) for a second subject on the target object; and (f) detecting a position of the robot in relation to the target object by using the positions of the subjects in relation to the robot and the positions of the subjects in relation to the target object.
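Step (d) can be illustrated for the common case where the subject is circular (e.g., a boss or hole): three contact points on its perimeter determine the circle's center, i.e., the subject's position. This is an illustrative special case, not necessarily the claimed computation.

```python
def circle_center(p1, p2, p3):
    """Center of the circle through three non-collinear 2-D contact points,
    via the standard circumcenter formula."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy
```

With two such subject positions known in both the robot frame and the target-object frame, step (f) reduces to solving for the rigid transform between the frames.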
CURVED SURFACE FOLLOWING CONTROL METHOD FOR ROBOT
A curved surface following control method for a robot is used for controlling a robot that includes a hand part, an arm part, and a controller. In this surface following control method, processes including a normal direction identification process and a work tool posture control process are performed. In the normal direction identification process, the normal direction of a virtual shape, a shape represented by a formula, is identified at a virtual position where the work tool attached to the hand part contacts the virtual shape. In the work tool posture control process, the work tool attached to the hand part is brought into contact with the target workpiece, at a corresponding position on the surface of the target workpiece that corresponds to the virtual position, in a posture along the normal direction identified in the normal direction identification process.
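When the virtual shape is given as an implicit surface f(x, y, z) = 0, its normal direction at a point is the normalized gradient of f. A minimal numerical sketch of the normal direction identification process, under that assumption:

```python
import numpy as np

def surface_normal(f, p, eps=1e-6):
    """Unit normal of the implicit surface f(x, y, z) = 0 at point p,
    via central-difference approximation of the gradient of f."""
    p = np.asarray(p, float)
    grad = np.array([(f(*(p + eps * e)) - f(*(p - eps * e))) / (2 * eps)
                     for e in np.eye(3)])
    return grad / np.linalg.norm(grad)
```

The tool posture would then be commanded so that the tool axis aligns with this normal at the corresponding position on the real workpiece's surface.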
NEURAL NETWORK FOR CLASSIFYING OBSTRUCTIONS IN AN OPTICAL SENSOR
A neural network is configured to classify whether or not an image from an optical sensor characterizes an obstruction of the optical sensor. The classification is characterized by an output of the neural network for an input of the neural network, where the input is based on the image. The neural network comprises a first convolutional layer that characterizes a 1D convolution along a vertical axis of the convolution output of a preceding convolutional layer, and a second convolutional layer that characterizes a 1D convolution along a horizontal axis of that convolution output. The output of the neural network is based on a first convolution output of the first convolutional layer and on a second convolution output of the second convolutional layer.
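The two parallel 1D convolutions over the same preceding feature map can be sketched with plain NumPy (standing in for actual network layers; kernels, 'valid' padding, and a single channel are simplifying assumptions):

```python
import numpy as np

def separable_branches(feature_map, kv, kh):
    """Apply a 1-D convolution along the vertical axis and, in a parallel
    branch, a 1-D convolution along the horizontal axis of the same 2-D
    feature map ('valid' mode). Returns both branch outputs."""
    vert = np.apply_along_axis(
        lambda col: np.convolve(col, kv, mode='valid'), 0, feature_map)
    horz = np.apply_along_axis(
        lambda row: np.convolve(row, kh, mode='valid'), 1, feature_map)
    return vert, horz
```

The classifier's output would then be computed from both branch outputs together, e.g., by concatenating them before the final layers; elongated 1D kernels of this kind are well suited to the streak-like patterns that obstructions such as rain or smears leave on a lens.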