Patent classifications
G05B2219/40564
WORK COORDINATE GENERATION DEVICE
A work coordinate generation device includes a shape register section configured to register shape information about the shape of a work region optically defined on a target which is a work target of a work robot; a first recognition section configured to acquire first image data of a first target; a first coordinate generation section configured to generate a first work coordinate representing the work region of the first target based on a recognition result of the first recognition section; a second recognition section configured to acquire second image data of a second target; and a second coordinate generation section configured to generate a second work coordinate representing the work region of the second target based on the first work coordinate and a recognition result of the second recognition section.
Object detection device, control device, and object detection computer program
An object detection device detects the position of a target object on an image in one of two ways. When a camera that generates an image representing the target object and the target object do not satisfy a predetermined positional relationship, the device detects the position of the target object on the image by inputting the image to a classifier. When the camera and the target object satisfy the predetermined positional relationship, the device detects the position by comparing, with the image, a template representing a feature of the appearance of the target object when the target object is viewed from a predetermined direction.
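The two-branch strategy in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: the classifier is passed in as an opaque callable, the template comparison is done with plain normalized cross-correlation, and all function and parameter names here are assumptions for the sketch.

```python
import numpy as np

def match_template(image, template):
    """Slide the template over the image and return the top-left (x, y)
    position with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            score = float((p * t).sum() / denom) if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

def detect(image, template, predetermined_relationship_holds, classifier):
    """Route detection: template matching when the predetermined positional
    relationship holds, otherwise fall back to the learned classifier."""
    if predetermined_relationship_holds:
        return match_template(image, template)
    return classifier(image)
```

A template match is cheap and precise when the viewing direction is known; the classifier branch covers arbitrary viewpoints at higher cost.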
Autonomous task performance based on visual embeddings
A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
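Identifying the keyframe whose pixels best match the current view can be approximated by comparing visual embeddings, as the title suggests. The sketch below assumes keyframes are stored as dictionaries with `embedding` and `task` keys and uses cosine similarity; both the data layout and the similarity measure are assumptions, not details from the patent.

```python
import numpy as np

def nearest_keyframe_task(current_embedding, keyframes):
    """Return the task of the keyframe whose visual embedding is most
    similar (by cosine similarity) to the robot's current view."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best = max(keyframes, key=lambda kf: cos(current_embedding, kf["embedding"]))
    return best["task"]
```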
Image Recognition Method And Robot System
An image recognition method includes obtaining measurement data of a target object; comparing a 3D model having a plurality of feature points with the measurement data and updating importance degrees of the plurality of feature points based on differences between the 3D model and the measurement data; performing learning using the updated importance degrees; and performing object recognition for the target object based on a result of the learning.
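One plausible reading of the importance-degree update is to downweight feature points whose model position deviates strongly from the measurement and upweight consistent ones. The sketch below is an assumed update rule (exponential consistency weight blended at a fixed rate), not the rule claimed in the patent.

```python
import numpy as np

def update_importance(importance, differences, rate=0.5):
    """Blend current importance degrees with a per-feature consistency
    weight: a small model-to-measurement difference maps to a weight
    near 1, a large difference decays toward 0."""
    importance = np.asarray(importance, dtype=float)
    differences = np.asarray(differences, dtype=float)
    consistency = np.exp(-differences)  # small difference -> ~1.0
    updated = (1 - rate) * importance + rate * consistency
    return updated / updated.sum()  # keep importances normalized
```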
METHOD AND PROCESSING SYSTEM FOR UPDATING A FIRST IMAGE GENERATED BY A FIRST CAMERA BASED ON A SECOND IMAGE GENERATED BY A SECOND CAMERA
A method and system for processing camera images is presented. The system receives a first depth map generated based on information sensed by a first type of depth-sensing camera, and receives a second depth map generated based on information sensed by a second type of depth-sensing camera. The first depth map includes a first set of pixels that indicate a first set of respective depth values. The second depth map includes a second set of pixels that indicate a second set of respective depth values. The system identifies a third set of pixels of the first depth map that correspond to the second set of pixels of the second depth map, identifies one or more empty pixels from the third set of pixels, and updates the first depth map by assigning to each empty pixel a respective depth value based on the second depth map.
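The core update step, filling empty pixels of the first depth map from corresponding pixels of the second, reduces to a masked copy once the pixel correspondence is known. The sketch below assumes the two maps are already registered to the same pixel grid and that an empty pixel is encoded as 0; both are simplifying assumptions.

```python
import numpy as np

def fill_empty_pixels(first_depth, second_depth, empty_value=0.0):
    """Assign to each empty pixel of the first depth map the depth value
    of the corresponding pixel of the second depth map."""
    filled = first_depth.copy()
    empty = (first_depth == empty_value) & (second_depth != empty_value)
    filled[empty] = second_depth[empty]
    return filled
```

In practice the second camera has a different pose and intrinsics, so the correspondence step (identifying the "third set of pixels") would involve reprojection before this copy.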
ROBOT CONTROL DEVICE, ROBOT CONTROL METHOD, AND NON-TRANSITORY COMPUTER READABLE STORAGE MEDIUM
A robot control device is provided that accepts information (201) specifying an object (70) to be manipulated by a robot (20) from among objects of a plurality of kinds, and accepts information (202) specifying a target relative positional relationship between the specified object (70) and the distal end of a hand of the robot (20). The robot control device extracts the object (70) from image information (501) obtained by photographing the objects of the plurality of kinds and their surrounding environment, generates information (301) indicating the position and orientation of the object (70), generates an action instruction (401) from the result of learning by a learning module (103), the action instruction (401) serving to match the relative positional relationship between the object (70) and the distal end of the hand of the robot (20) with the target relative positional relationship, and outputs the action instruction (401) to the robot (20).
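The control objective, driving the hand so that its pose relative to the object matches a target relative positional relationship, can be illustrated with a proportional rule over positions. This stands in for the learned module of the abstract and is purely a sketch: positions only (no orientation), and the `gain` parameter is an assumption.

```python
import numpy as np

def action_instruction(object_position, hand_position, target_offset, gain=1.0):
    """Return a velocity-like command that drives the hand toward
    object_position + target_offset, i.e. toward the target relative
    positional relationship between object and hand."""
    desired = np.asarray(object_position, dtype=float) + np.asarray(target_offset, dtype=float)
    return gain * (desired - np.asarray(hand_position, dtype=float))
```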
METHOD FOR GRASPING TEXTURE-LESS METAL PARTS BASED ON BOLD IMAGE MATCHING
A method for grasping texture-less metal parts based on BOLD image matching comprises: obtaining a real image by photographing and obtaining CAD template images; extracting the foreground part of the input part image; calculating the covariance matrix of the foreground image; establishing the direction of a temporary coordinate system and setting the directions of line segments to point to the first or second quadrant of the temporary coordinate system; constructing a descriptor of each line segment from the angle relations between the line segment and its k nearest line segments; matching the descriptors of line segments between the real image and the CAD template images to obtain line-segment pairs; and recognizing the pose with a PnL (Perspective-n-Line) algorithm to obtain the pose of the real texture-less metal part, which is then input to a mechanical arm to grasp the part. The present invention can correctly match line segments, obtain an accurate pose of the part by calculation, successfully grasp the part, and satisfy actual application requirements.
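The temporary coordinate system derived from the foreground covariance matrix can be sketched with an eigendecomposition: the principal eigenvector defines the x-axis, and segment directions are flipped so they point into the first or second quadrant (nonnegative y-component). This is a minimal 2D sketch under those assumptions; function names are illustrative.

```python
import numpy as np

def temporary_axes(foreground_points):
    """Derive a temporary 2D frame from the covariance of foreground
    pixel coordinates: the eigenvector of largest variance is the
    x-axis, the other eigenvector the y-axis."""
    pts = np.asarray(foreground_points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
    return eigvecs[:, -1], eigvecs[:, 0]    # (x_axis, y_axis)

def orient_segment(direction, y_axis):
    """Flip a line-segment direction so it points into the first or
    second quadrant of the temporary frame (nonnegative y-component)."""
    d = np.asarray(direction, dtype=float)
    return -d if np.dot(d, y_axis) < 0 else d
```

Fixing segment directions this way makes the angle-based descriptors invariant to the arbitrary endpoint order of detected line segments.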
Picking Facility
A picking facility is provided that can shorten the time required to transfer an article from a first support body to a second support body. Among a plurality of articles 50 supported by the first support body 51, the article 50 located at the highest position, together with every article 50 whose upper face T1 lies within a set distance D below the upper face T1 of that highest article 50, are set as transfer-target articles 50A. The control device performs a selection control to preferentially select, from the transfer-target articles 50A, a transfer-target article 50A in the normal orientation SC, and a transfer control to control the transfer device so as to transfer the selected transfer-target article 50A from the first support body 51 to the second support body.
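The selection control above can be sketched as a two-stage filter: restrict candidates to articles whose upper face lies within the set distance of the highest one, then prefer a candidate in the normal orientation. The dictionary keys and tie-breaking by height are assumptions of this sketch, not details from the abstract.

```python
def select_transfer_target(articles, set_distance):
    """Pick the next transfer-target article: candidates are articles whose
    upper face lies within set_distance below the highest upper face;
    among them, articles in the normal orientation are preferred."""
    top = max(a["upper_face_height"] for a in articles)
    candidates = [a for a in articles
                  if a["upper_face_height"] >= top - set_distance]
    normal = [a for a in candidates if a["normal_orientation"]]
    pool = normal if normal else candidates
    return max(pool, key=lambda a: a["upper_face_height"])
```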
GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT
Methods and apparatus are provided for generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize using its existing models. The model is generated based on vision sensor data that captures the object from multiple vantage points and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or in estimating the pose of the object.