Patent classifications
G05B2219/40564
Measurement system, measurement device, measurement method, and measurement program
Provided are a measurement system, a measurement device, a measurement method, and a measurement program. 3D data measured while the robot is in motion is registered to 3D data measured while the robot is stopped, based on the displacements of the joints of the robot at the point in time when a 3D sensor measures the 3D data of a measurement object at a specific measurement point while the robot is stopped, and the displacements of the joints of the robot at the point in time when the 3D sensor measures the 3D data of the measurement object at a measurement point other than the specific measurement point while the robot is in motion. The registration is further refined so that the registration error between the two sets of 3D data is less than a threshold value. Similarly, each remaining set of 3D data is registered to the 3D data measured at the specific measurement point.
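The registration described above can be illustrated with a generic rigid-alignment sketch. This is not the patented method: it assumes known point correspondences and uses the standard Kabsch (SVD) solution, with the abstract's threshold check on the residual error; the function name and threshold are illustrative.

```python
import numpy as np

def register(source, target, threshold=1e-3):
    """Rigidly align `source` onto `target` (corresponding points assumed)
    and report whether the residual registration error is below threshold."""
    # Center both point clouds.
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    S, T = source - mu_s, target - mu_t
    # Kabsch: optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(S.T @ T)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T
    t = mu_t - R @ mu_s
    aligned = source @ R.T + t
    rms = np.sqrt(((aligned - target) ** 2).sum(axis=1).mean())
    return aligned, rms, rms < threshold
```

In a pipeline like the one the abstract describes, the initial guess for this alignment would come from forward kinematics on the recorded joint displacements, with the SVD step refining it below the error threshold.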
CREATING TRAINING DATA VARIABILITY IN MACHINE LEARNING FOR OBJECT LABELLING FROM IMAGES
Described is an image labeling system (100) comprising: a support (2) for an object (3) to be labeled; a digital camera (1) configured to capture a plurality of images of a scene including said object (3); a process and control apparatus (5) configured to receive said images and generate corresponding labeling data (21-24, L1-L4) associated with said object (3); a digital display (4) associated with said support (2) and connected to the process and control apparatus (5) to selectively display additional images (7-13) selected from the group comprising: first images (7-11) serving as backgrounds for the plurality of images and introducing a degree of variability into the scene; second images (12) indicating the position and/or orientation according to which said object (3) is to be placed by a user on the support (2); third images (13) to be captured by the digital camera (1) and provided to the process and control apparatus (5) to evaluate a position of the digital camera (1) with respect to the digital display (4); fourth images to be captured by the digital camera (1) and provided to the process and control apparatus (5) to evaluate at least one of the following data of the object (3): position, orientation, 3D shape.
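The role of the "first images" (backgrounds that add scene variability) can be sketched as a simple capture loop. Everything here is illustrative: the `show`/`capture` callbacks stand in for the display and camera hardware, and the record layout is an assumption, not the patent's data format.

```python
from dataclasses import dataclass

@dataclass
class LabelledSample:
    image: str          # stand-in for captured pixel data
    background: str     # which background the display showed
    label: str          # object label assigned by the control apparatus

def collect_with_variability(obj_label, backgrounds, show, capture):
    """Cycle background images on the display behind the object and capture
    one labelled image per background, so the training set varies only in
    the scene while the object label stays fixed."""
    samples = []
    for bg in backgrounds:
        show(bg)                                   # display a "first image"
        samples.append(LabelledSample(capture(), bg, obj_label))
    return samples
```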
PROCESSING APPARATUS
A processing apparatus 1 includes: a workpiece-set-position recognition unit 114 that moves an arm distal-end portion to a specified position measurement point to measure a shape of a workpiece in a workpiece set state in which the workpiece is positioned by a workpiece positioning unit, and thereby recognizes a set position of the workpiece; a processing-point-information generation unit 115 that, based on the set position of the workpiece and processing-target-portion information 124 indicating a position of a target portion of the workpiece for specified processing, generates processing-point information 125 indicating a processing point which is a movement point of the arm distal-end portion to perform the specified processing on the workpiece using a processing tool in the workpiece set state; and a workpiece-processing control unit 116 that moves the arm distal-end portion to the processing point based on the processing-point information 125 to perform the specified processing on the workpiece using the processing tool.
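The core geometric step, turning the recognized set position plus processing-target-portion information into processing points, amounts to mapping workpiece-frame positions through the set pose. A minimal sketch, assuming the set pose is available as a 4x4 homogeneous transform (the function and argument names are hypothetical):

```python
import numpy as np

def processing_points(set_pose, target_offsets):
    """Map target-portion positions given in the workpiece frame into robot
    coordinates using the recognized workpiece set pose (4x4 homogeneous
    transform). Each mapped point is a movement point for the arm tip."""
    R, t = set_pose[:3, :3], set_pose[:3, 3]
    return [R @ np.asarray(p, float) + t for p in target_offsets]
```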
ROBOT MOTION PLANNING ACCOUNTING FOR OBJECT POSE ESTIMATION ACCURACY
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for planning robotic movements to perform a given task while satisfying object pose estimation accuracy requirements. One of the methods includes generating a plurality of candidate measurement configurations for measuring an object to be manipulated by a robot; determining respective measurement accuracies for the plurality of candidate measurement configurations; determining a measurement accuracy landscape for the object including defining a high measurement accuracy region based on the respective measurement accuracies for the plurality of candidate measurement configurations; and generating a motion plan for manipulating the object in the robotic process that moves the robot, a sensor, or both, through the high measurement accuracy region when performing pose estimation for the object.
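The accuracy-landscape idea can be sketched as two steps: score candidate measurement configurations, then route the motion through the high-accuracy region. The thresholding and the closest-to-path via-point heuristic below are assumptions of this sketch, not the claimed planner.

```python
import numpy as np

def high_accuracy_region(candidates, accuracy_of, threshold):
    """Score each candidate measurement configuration and keep the ones
    whose predicted accuracy clears the threshold."""
    return [c for c in candidates if accuracy_of(c) >= threshold]

def plan_through_region(start, goal, region):
    """Insert an in-region configuration as a via point so the motion passes
    through the high-accuracy region before manipulating the object."""
    line = np.asarray(goal, float) - np.asarray(start, float)
    line = line / np.linalg.norm(line)
    def off_path(p):
        # distance from the straight start-goal line (hypothetical heuristic)
        v = np.asarray(p, float) - np.asarray(start, float)
        return np.linalg.norm(v - (v @ line) * line)
    via = min(region, key=off_path)
    return [list(start), list(via), list(goal)]
```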
System and method for piece picking or put-away with a mobile manipulation robot
A method and system for picking or put-away within a logistics facility. The system includes a central server and at least one mobile manipulation robot. The central server is configured to communicate with the robots to send and receive picking data, which includes a unique identification for each item to be picked, a location within the logistics facility of the items to be picked, and a route for the robot to take within the logistics facility. The robots can then autonomously navigate and position themselves within the logistics facility by recognizing landmarks with at least one of a plurality of sensors. The sensors also provide signals related to the detection, identification, and location of an item to be picked or put away, and processors on the robots analyze the sensor information to generate movements of a unique articulated arm and end effector on the robot to pick or put away the item.
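The picking data exchanged between server and robot (item identification, location, route) can be modelled as a small record. The field names and the pipe-delimited wire format below are assumptions for illustration, not the patented protocol.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PickOrder:
    item_id: str        # unique identification of the item to be picked
    location: str       # where in the logistics facility the item sits
    route: tuple        # ordered landmarks the robot should pass

def encode(order):
    """Serialize a pick order for the server -> robot link (format assumed)."""
    return f"{order.item_id}|{order.location}|{'>'.join(order.route)}"

def decode(message):
    """Parse a serialized pick order back into a PickOrder."""
    item_id, location, route = message.split("|")
    return PickOrder(item_id, location, tuple(route.split(">")))
```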
AUTONOMOUS TASK PERFORMANCE BASED ON VISUAL EMBEDDINGS
A method for controlling a robotic device is presented. The method includes capturing an image corresponding to a current view of the robotic device. The method also includes identifying a keyframe image comprising a first set of pixels matching a second set of pixels of the image. The method further includes performing, by the robotic device, a task corresponding to the keyframe image.
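Identifying the matching keyframe can be sketched with a nearest-neighbour search over visual embeddings. The sketch replaces the abstract's pixel-set matching with cosine similarity between embedding vectors, which is an assumption of this illustration.

```python
import numpy as np

def best_keyframe(current_embedding, keyframe_embeddings):
    """Return the index of the keyframe whose embedding is most similar to
    the current view, plus all cosine similarities."""
    c = np.asarray(current_embedding, float)
    c = c / np.linalg.norm(c)
    sims = []
    for k in keyframe_embeddings:
        k = np.asarray(k, float)
        sims.append(float(c @ (k / np.linalg.norm(k))))
    return int(np.argmax(sims)), sims
```

The robot would then execute the task associated with the returned keyframe index.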
APPARATUS AND METHOD FOR TRAINING A MACHINE LEARNING MODEL TO RECOGNIZE AN OBJECT TOPOLOGY OF AN OBJECT FROM AN IMAGE OF THE OBJECT
A method for training a machine learning model to recognize an object topology of an object from an image of the object. The method includes: obtaining a 3D model of the object; determining a descriptor component value for each vertex of the 3D model's grid; and generating training data image pairs, each having a training input image and a target image. The target image is generated by determining the vertex positions in the training input image; assigning the descriptor component value determined for the vertex at the vertex position to the corresponding position in the target image; and adapting at least some of the descriptor component values assigned to the positions in the target image or adding descriptor component values to the positions of the target image.
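The target-image generation step, writing each vertex's descriptor component value at its projected position, can be sketched directly. The projection itself is assumed done elsewhere; the function and its `(u, v)` pixel convention are illustrative.

```python
import numpy as np

def render_target(shape, vertex_pixels, descriptor_values, background=0.0):
    """Build the target image: at every projected vertex pixel (u, v), write
    that vertex's descriptor component value; keep the background elsewhere."""
    target = np.full(shape, background, dtype=float)
    for (u, v), d in zip(vertex_pixels, descriptor_values):
        if 0 <= v < shape[0] and 0 <= u < shape[1]:   # skip off-image vertices
            target[v, u] = d
    return target
```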
Method, System, And Computer Program For Recognizing Position And Attitude Of Object Imaged By Camera
A method of the present disclosure includes (a) extracting distinctive features used for respectively distinguishing a plurality of similar attitudes from which images similar to one another are obtained using a simulation model of an object, (b) capturing an object image of the object using a camera, (c) estimating a position and an attitude of the object using the object image, and (d) when the estimated attitude corresponds to one of the plurality of similar attitudes, determining the one of the plurality of similar attitudes as the attitude of the object using the distinctive features.
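Step (d), resolving an ambiguous estimate using distinctive features, can be sketched as follows. The scalar feature comparison is a simplification of whatever features the simulation step extracts; all names here are hypothetical.

```python
def disambiguate(estimated, similar_attitudes, feature_of, observed_feature):
    """If the estimated attitude falls in a set of look-alike attitudes,
    choose the member whose distinctive feature best matches what the camera
    actually observed; otherwise keep the estimate as-is."""
    if estimated not in similar_attitudes:
        return estimated
    return min(similar_attitudes,
               key=lambda a: abs(feature_of(a) - observed_feature))
```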
DEVICE AND METHOD FOR GENERATING OBJECT IMAGE, RECOGNIZING OBJECT, AND LEARNING ENVIRONMENT OF MOBILE ROBOT
According to the present invention, disclosed are a device and a method for generating an object image, recognizing an object, and learning an environment of a mobile robot. A deep learning algorithm allows the robot to create a map and load environment information acquired during its autonomous movement while the autonomous mobile robot is being charged. The device and method may be used in an application that determines a location by recognizing objects such as furniture, checking the locations of the recognized objects, and marking those locations on the map.
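The final step, marking recognized object locations on the map so they can later be looked up, can be sketched with a labelled grid. The grid representation and return value are assumptions of this sketch, not the disclosed data structure.

```python
def mark_objects(grid, detections):
    """Write recognized object labels (e.g. 'sofa') into an occupancy-grid
    style map and return a label -> cell index for later location queries."""
    for label, (row, col) in detections:
        grid[row][col] = label
    return {label: cell for label, cell in detections}
```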