Patent classifications
G05B2219/40563
METHOD FOR DETECTING AND RE-IDENTIFYING OBJECTS USING A NEURAL NETWORK
A method for detecting and re-identifying objects using a neural network. The method includes the steps: extracting features from an image, the features comprising information about at least one object in the image; detecting the at least one object in the image using anchor-based object detection based on the extracted features, classification data being determined by a classification for detecting the object with the aid of at least one anchor and regression data being determined by a regression; and re-identifying the at least one object by determining embedding data based on the extracted features, the embedding data representing an object description for the at least one object in the image.
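The two stages described above — decoding anchor-based detections and matching embeddings against known objects — can be sketched as follows. All function names, the toy anchors, and the cosine-similarity threshold are illustrative assumptions, not details from the patent:

```python
import numpy as np

def decode_boxes(anchors, regression):
    """Apply predicted center/size offsets to anchor boxes given as (cx, cy, w, h)."""
    cx = anchors[:, 0] + regression[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + regression[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(regression[:, 2])
    h = anchors[:, 3] * np.exp(regression[:, 3])
    return np.stack([cx, cy, w, h], axis=1)

def reidentify(embedding, gallery, threshold=0.5):
    """Match a detection's embedding against stored object descriptions
    by cosine similarity; return the gallery index or -1 if no match."""
    emb = embedding / np.linalg.norm(embedding)
    gal = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = gal @ emb
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else -1

# Toy example: zero regression offsets leave the anchors unchanged.
anchors = np.array([[10.0, 10.0, 4.0, 4.0], [20.0, 20.0, 8.0, 8.0]])
regression = np.zeros((2, 4))
boxes = decode_boxes(anchors, regression)

gallery = np.array([[1.0, 0.0], [0.0, 1.0]])   # stored object descriptions
query = np.array([0.9, 0.1])                    # embedding of a new detection
match = reidentify(query, gallery)
```

In a real network the classification, regression, and embedding heads would all read from the same extracted feature map; here they are replaced by fixed toy values.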
PICKING SYSTEM AND END EFFECTOR OF ROBOT ARM
According to one embodiment, an end effector includes a gripping mechanism and a to-be-detected section. The gripping mechanism is configured to grip an article in a releasable manner. The to-be-detected section is irradiated with incident light from a distance detector. The to-be-detected section possesses optical characteristics such that the group of detection values indicating the distance from the distance detector to the to-be-detected section differs from the group of detection values indicating the distance from the distance detector to the article.
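One way to exploit such a distinction in software is to label each group of distance readings by how scattered its values are. The spread threshold and the use of standard deviation as a proxy for the differing optical behaviour are assumptions for illustration only:

```python
import statistics

def split_by_spread(groups, spread_threshold=1.0):
    """Label each detection-value group as 'marker' (the to-be-detected
    section) or 'article', using the spread of its distance readings as an
    assumed proxy for the section's distinct optical characteristics."""
    labels = []
    for readings in groups:
        spread = statistics.pstdev(readings)
        labels.append("marker" if spread > spread_threshold else "article")
    return labels

# An article reflects consistently; a marker section with unusual optics
# might return noisy or offset distances.
groups = [[50.0, 50.1, 49.9], [30.0, 35.0, 25.0]]
labels = split_by_spread(groups)
```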
Object attitude detection device, control device, and robot system
An object attitude detection device includes a pick-up image acquisition unit, a template image acquisition unit, and an attitude decision unit. The pick-up image acquisition unit acquires a picked-up image of an object. The template image acquisition unit acquires a template image for each attitude of the object. The attitude decision unit decides an attitude of the object based on a template image whose pixels satisfy two conditions: the distance between pixels forming a contour in the picked-up image and pixels forming the contour of the template image is shorter than a first threshold, and the degree of similarity between the gradient of the pixels forming the contour in the picked-up image and the gradient of the pixels forming the contour of the template image is higher than a second threshold.
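The two-threshold test can be sketched as a nearest-contour-pixel distance check combined with a gradient cosine-similarity check. The function name, the nearest-neighbour matching, and the threshold values are illustrative assumptions:

```python
import numpy as np

def attitude_matches(img_contour, img_grad, tmpl_contour, tmpl_grad,
                     dist_threshold=2.0, sim_threshold=0.8):
    """Accept a template if every image contour pixel lies within
    dist_threshold of a template contour pixel (first threshold) and its
    gradient agrees with that pixel's gradient above sim_threshold (second)."""
    dists, sims = [], []
    for p, g in zip(img_contour, img_grad):
        diffs = tmpl_contour - p
        idx = int(np.argmin(np.einsum('ij,ij->i', diffs, diffs)))
        dists.append(float(np.linalg.norm(tmpl_contour[idx] - p)))
        tg = tmpl_grad[idx]
        sims.append(float(g @ tg / (np.linalg.norm(g) * np.linalg.norm(tg))))
    return max(dists) < dist_threshold and min(sims) > sim_threshold

# A template shifted by one pixel with identical gradients should match.
img_contour = np.array([[5.0, 5.0], [6.0, 5.0], [7.0, 5.0]])
img_grad = np.array([[0.0, 1.0]] * 3)
tmpl_contour = img_contour + np.array([1.0, 0.0])
tmpl_grad = img_grad.copy()
ok = attitude_matches(img_contour, img_grad, tmpl_contour, tmpl_grad)
```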
Object conveying system
Provided is an object conveying system including: a conveying apparatus that conveys an object; one or more cameras that capture images of feature points of the object; a position measuring portion that measures positions of the feature points from the acquired images; a detecting portion that detects a position or a movement velocity of the object; a position correcting portion that corrects the positions of the feature points so as to achieve positions at which the feature points are disposed at the same time; a line-of-sight calculating portion that calculates lines of sight that pass through the feature points on the basis of the corrected positions of the feature points and the positions of the cameras; and a position calculating portion that calculates a three-dimensional position of the object by applying a polygon having a known shape to the calculated lines of sight.
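The core geometric step — recovering a 3D point from lines of sight through corrected feature-point positions and camera positions — can be sketched as a least-squares intersection of rays; the subsequent fit of the known polygon would repeat this per feature point. The function name and toy camera layout are assumptions:

```python
import numpy as np

def intersect_lines(origins, directions):
    """Least-squares 3D point closest to all lines of sight, each line given
    by a camera origin and a viewing direction."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two cameras whose lines of sight cross at the feature point (1, 1, 0).
origins = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
directions = np.array([[1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
point = intersect_lines(origins, directions)
```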
METHOD AND ROBOT DEVICE FOR SHARING OBJECT DATA
Provided are a method and robot device for sharing object data. The method, performed by a robot device, of sharing object data includes: obtaining sensing data related to objects in a certain space; classifying the obtained sensing data into a plurality of pieces of object data based on properties of the objects; selecting another robot device from among at least one other robot device; selecting object data to be provided to the selected robot device from among the classified plurality of pieces of object data; and transmitting the selected object data to the selected robot device, wherein the classifying of the sensing data into a plurality of pieces of object data includes generating a plurality of data layers including the classified plurality of pieces of object data.
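The classify-into-layers-then-select flow can be sketched as follows. The property names, the dictionary-of-layers representation, and the selection criterion are illustrative assumptions:

```python
def classify_into_layers(sensing_data):
    """Group sensed objects into data layers keyed by an object property."""
    layers = {}
    for item in sensing_data:
        layers.setdefault(item["property"], []).append(item)
    return layers

def select_for(layers, wanted_properties):
    """Pick only the layers relevant to the selected robot device."""
    return {p: layers[p] for p in wanted_properties if p in layers}

sensing_data = [
    {"id": 1, "property": "movable"},
    {"id": 2, "property": "static"},
    {"id": 3, "property": "movable"},
]
layers = classify_into_layers(sensing_data)
payload = select_for(layers, ["movable"])   # data to transmit to the other robot
```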
Robot system
A robot system provided with: a conveying apparatus; a robot that performs processing on an article being conveyed; a camera that captures a plurality of images of the article being conveyed; a conveying-velocity calculating portion that calculates at least one of a position of the article on the conveying apparatus and a velocity at which the article is conveyed by the conveying apparatus on the basis of the plurality of images captured by the camera; and a control unit that controls the robot on the basis of at least one of the position and the conveying velocity. The control unit determines whether or not the article is present in the images and, in the case in which the article is absent, controls the robot on the basis of at least one of the position and the conveying velocity most recently calculated by the conveying-velocity calculating portion.
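The fallback behaviour — estimating velocity from successive detections and extrapolating from the last estimate when the article drops out of the images — can be sketched as a small tracker. The class name, the 1D position, and the constant-velocity extrapolation are simplifying assumptions:

```python
class ConveyorTracker:
    """Estimate article position/velocity from timestamped detections and
    extrapolate from the immediately preceding estimate when the article
    is absent from the images."""
    def __init__(self):
        self.last_pos = None
        self.last_t = None
        self.velocity = 0.0

    def update(self, t, pos):
        if pos is not None:                     # article present in the image
            if self.last_pos is not None:
                self.velocity = (pos - self.last_pos) / (t - self.last_t)
            self.last_pos, self.last_t = pos, t
            return pos
        # article absent: fall back on the last calculated position/velocity
        return self.last_pos + self.velocity * (t - self.last_t)

tracker = ConveyorTracker()
tracker.update(0.0, 100.0)
tracker.update(1.0, 150.0)            # implies 50 units/s along the conveyor
predicted = tracker.update(2.0, None) # detection lost: extrapolate
```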
ROBOTIC PERCEPTION EXTENSIBILITY
Aspects of the present disclosure generally relate to robot perception extensibility. In certain aspects, a robot maintains perception data that represents its understanding of its surroundings. The perception data relates to a variety of objects and comprises information generated by a variety of components, such as sensors and software processes. In order to extend the perception capabilities of the robot, an extensibility interface is provided, which enables an extensibility device to annotate objects and to provide new objects to the robot based on the additional perception data generated by the extensibility device. As a result of incorporating the objects from the extensibility device into the perception data of the robot, the additional perception data of the extensibility device is available to software executing on the robot without requiring the additional effort typically necessary to extend the capabilities of such a device.
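The extensibility interface described above — letting an external device annotate existing objects or contribute new ones to the robot's perception data — can be sketched as a minimal store. The class, method names, and example annotations are assumptions for illustration, not the disclosure's API:

```python
class PerceptionStore:
    """Robot-side perception data plus an extensibility interface through
    which an extensibility device annotates objects or adds new ones."""
    def __init__(self):
        self.objects = {}

    def add_object(self, obj_id, data):
        """Register an object perceived by the robot's own components."""
        self.objects[obj_id] = dict(data)

    # --- extensibility interface exposed to the external device ---
    def annotate(self, obj_id, key, value):
        """Attach additional perception data; creates the object if unknown."""
        self.objects.setdefault(obj_id, {})[key] = value

store = PerceptionStore()
store.add_object("cup-1", {"pose": (1.0, 2.0)})     # from on-board sensors
store.annotate("cup-1", "temperature", 60)          # annotation from the device
store.annotate("person-1", "identity", "operator")  # new object from the device
```

Because the annotations land in the same store the robot's own software reads, no further integration effort is needed — which is the point of the disclosure.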
CONTROL DEVICE AND METHOD FOR A ROBOT SYSTEM
A control device is provided to reliably locate objects and calculate appropriate grasp points for each object, thereby effectively controlling a robot system. Grasp point calculation is based on, at least, properties of a surface of the object to be grasped by the robot system, to more effectively calculate a grasp point for that surface. An organised point cloud generator generates an organised point cloud of a storage means, from which a robot system having a suction cup end effector can grasp the object. Normals of the organised point cloud and principal curvatures of the organised point cloud can be calculated. A grasp point can be calculated for the suction cup end effector of the robot system to grasp an object based on the organised point cloud, the calculated normals, the calculated principal curvatures, and a normal of a lower surface of the storage means.
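One way such a calculation can combine normals, principal curvatures, and the storage's lower-surface normal is to reject surface points too curved for a suction seal and prefer points whose normal aligns with the bin bottom. The thresholds, the function name, and the toy point cloud are illustrative assumptions:

```python
import numpy as np

def pick_grasp_point(points, normals, curvatures, bin_bottom_normal,
                     max_curvature=0.05, min_alignment=0.95):
    """Pick the point whose surface is flat enough for a suction cup and whose
    normal best aligns with the normal of the storage's lower surface."""
    n = bin_bottom_normal / np.linalg.norm(bin_bottom_normal)
    best, best_align = None, min_alignment
    for p, nv, (k1, k2) in zip(points, normals, curvatures):
        if max(abs(k1), abs(k2)) > max_curvature:
            continue                              # too curved for a good seal
        align = float(nv @ n / np.linalg.norm(nv))
        if align > best_align:
            best, best_align = p, align
    return best

points = np.array([[0.0, 0.0, 0.1], [0.1, 0.0, 0.1]])
normals = np.array([[0.0, 0.0, 1.0], [0.5, 0.0, 0.5]])
curvatures = [(0.01, 0.0), (0.01, 0.02)]          # principal curvatures (k1, k2)
grasp = pick_grasp_point(points, normals, curvatures, np.array([0.0, 0.0, 1.0]))
```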
Remote control system and remote control method
A remote control system includes: an imaging unit that shoots an environment in which a device to be operated including an end effector is located; a recognition unit that recognizes objects that can be grasped by the end effector based on a shot image of the environment shot by the imaging unit; an operation terminal that displays the shot image and receives handwritten input information input to the displayed shot image; and an estimation unit that, based on the objects that can be grasped and the handwritten input information input to the shot image, estimates an object to be grasped which has been requested to be grasped by the end effector from among the objects that can be grasped and estimates a way of performing a grasping motion by the end effector, the grasping motion having been requested to be performed with regard to the object to be grasped.
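The estimation step can be sketched as matching the handwritten stroke against the bounding boxes of the recognized graspable objects. Voting by contained stroke points is an assumed heuristic, not the patent's method:

```python
def estimate_target(stroke_points, graspable):
    """Pick the graspable object whose bounding box (x0, y0, x1, y1) contains
    the most handwritten stroke points."""
    def contains(box, p):
        x0, y0, x1, y1 = box
        return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
    counts = {name: sum(contains(box, p) for p in stroke_points)
              for name, box in graspable.items()}
    return max(counts, key=counts.get)

graspable = {"bottle": (0, 0, 10, 30), "can": (20, 0, 30, 15)}
stroke = [(22, 5), (25, 7), (4, 10)]   # handwritten input on the shot image
target = estimate_target(stroke, graspable)
```

A fuller sketch would also classify the stroke's shape (circle, arrow, line) to estimate the requested way of performing the grasping motion.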
Robot controller which automatically sets interference region for robot
A robot controller able to automatically set an motion range for a robot, in which the robot and an obstacle, such as a peripheral device, do not interfere with each other. The robot controller includes a depth acquisition section acquiring a group of depth data representing depths from a predetermined portion of the robot to points on the surface of an object around the robot, a robot position acquisition section acquiring three-dimensional position information of the predetermined portion, a depth map generator generating depth map information including three-dimensional position information of the points with using the group of depth data and the three-dimensional position information, and an interference region setting section estimating the range occupied by the object from the depth map information and setting the range as an interference region.