Patent classifications
B25J9/1671
METHOD AND APPARATUS FOR CALIBRATING POSITION OF ROBOT USING 3D SCANNER
A robot position calibration apparatus is disclosed. The apparatus includes a scan position controller configured to control the position of the robot by individually setting parameter sets related to the position of the robot, causing a scanner mounted on an end of the robot to scan an object from multiple scan positions around the robot; a data receiver configured to receive, from the scanner, multiple scan data items generated by scanning the object at each of the multiple scan positions; and a parameter calibrator configured to calculate calibration values for the individually set parameter sets, using multiple position information items corresponding to the parameter sets together with the multiple scan data items.
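As an illustration only (not taken from the patent), the calibration idea above — scan the same fixed object from several parameterized robot positions and derive a correction from the residuals — could be sketched like this; the function `calibrate`, its arguments, and the simple mean-residual solver are all hypothetical stand-ins:

```python
import numpy as np

def calibrate(param_positions, scan_data, object_world):
    """Estimate a constant position-calibration value (hypothetical sketch).

    param_positions: reported scanner positions, one per parameter set (Nx3)
    scan_data: object position measured in the scanner frame per scan (Nx3)
    object_world: known world-frame position of the scanned object (3,)
    """
    # If the reported positions were exact, reported + scan measurement would
    # equal the object's world position for every scan; the averaged residual
    # is the calibration value for the position parameters.
    residuals = object_world - (np.asarray(param_positions) + np.asarray(scan_data))
    return residuals.mean(axis=0)
```

With scans simulated under a known constant offset, the recovered calibration value matches that offset.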
APPARATUS AND METHOD FOR CAPTURING IMAGE USING ROBOT
A capturing apparatus is proposed. The capturing apparatus may include a setting unit configured to set environment information of a robot equipped with a camera, and a pattern unit configured to set a capturing pattern of the robot based on the environment information.
SYSTEM AND METHOD FOR DETERMINING A GRASPING HAND MODEL
A method for determining a grasping hand model suitable for grasping an object: receiving an image including at least one object; obtaining an object model estimating the pose and shape of the object from the image; selecting a grasp class from a set of grasp classes by means of a neural network trained with a cross-entropy loss, thus obtaining a set of parameters defining a coarse grasping hand model; refining the coarse grasping hand model by minimizing loss functions over the parameters of the hand model to obtain an operable grasping hand model, while minimizing the distance between the fingers of the hand model and the surface of the object and preventing interpenetration; and obtaining a mesh of the hand represented by the refined set of parameters.
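The two-stage idea — classify a coarse grasp, then refine it against the object surface while penalizing interpenetration — could be sketched in one dimension as follows. This is a hypothetical toy, not the patent's method: the fixed logits stand in for the neural network, and `refine_finger` does plain gradient descent on a distance term plus an interpenetration penalty:

```python
import numpy as np

def select_grasp_class(logits):
    # Softmax over class logits (a stand-in for the trained network);
    # the argmax picks the coarse grasp class.
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - np.max(logits))
    probs /= probs.sum()
    return int(np.argmax(probs))

def refine_finger(x, surface, steps=200, lr=0.1, penalty=10.0):
    """Move a 1-D finger position toward the object surface at `surface`.

    Loss: d^2 outside the object (distance term), penalty*d^2 inside
    (interpenetration term), where d = x - surface.
    """
    for _ in range(steps):
        d = x - surface
        grad = 2.0 * d * (penalty if d < 0 else 1.0)
        x -= lr * grad
    return x
```

Starting from either side of the surface, the refined finger position converges to contact (distance near zero) without sinking into the object.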
TASK-ORIENTED GRASPING OF OBJECTS
A computer-implemented method includes obtaining a collection of object models for a plurality of different types of objects belonging to a same object category; generating a canonical representation for objects belonging to the object category; performing a plurality of downstream tasks using a plurality of different robot grasps on instances of objects belonging to the category, and evaluating each grasp according to success or failure of the downstream task; and generating one or more category-level grasping areas for the canonical representation for objects belonging to the object category, including aggregating the evaluations of grasps according to the downstream task.
USING SIMULATED/GENERATED NOISE TO EVALUATE AND REFINE STATE ESTIMATION
A robotic system is disclosed. The system includes a memory configured to store estimated state information associated with a computer simulation of a robotic operation to stack a plurality of items on a pallet or other receptacle. The system includes one or more processors coupled to the communication interface and configured to perform the computer simulation. The computer simulation is performed at least in part by combining geometric model data based on idealized simulated robotic placement of each item with programmatically generated noise data. The programmatically generated noise data reflects an estimation of the effect that one or more sources of noise in a real-world physical workspace with which the computer simulation is associated would have on a real-world state of the plurality of items and/or the pallet or other receptacle if the plurality of items were stacked on the pallet or other receptacle as simulated in the computer simulation.
STATE ESTIMATION USING GEOMETRIC DATA AND VISION SYSTEM FOR PALLETIZING
A robotic system is disclosed. The system includes a communication interface that receives, from a sensor(s) deployed in a workspace, sensor data indicative of a current state of the workspace, the workspace comprising a pallet or other receptacle and a plurality of items stacked on or in the receptacle. The system includes one or more processors that control a robotic arm to place a first set of items on or in, or remove the first set of items from, the pallet or other receptacle, update a geometric model based on the first set of items placed on or in a receptacle, use the geometric model in combination with the sensor data to estimate a stack of one or more items on or in the receptacle, and use the estimated state to generate or update a plan to control the robotic arm to place a second set of items.
SIMULATED BOX PLACEMENT FOR ALGORITHM EVALUATION AND REFINEMENT
A robotic system is disclosed. The system includes a memory that stores for each of a plurality of items a set of attribute values. The system includes a processor(s) that uses the attribute values to simulate the placement of items, including by determining, iteratively, for each next item a placement location at which to place the item on a simulated stack of items on the pallet, using the attribute values and a geometric model of where items have been simulated to have been placed to estimate a state of the stack after each of a subset of simulated placements, and using the estimated state to inform a next placement decision. The steps of determining for each next item a placement location and estimating the state of the stack until all of at least a subset of the plurality of items have been simulated as having been placed on the stack.
System and method for using virtual/augmented reality for interaction with collaborative robots in manufacturing or industrial environment
A method includes determining a movement of an industrial robot in a manufacturing environment from a first position to a second position. The method also includes displaying an image showing a trajectory of the movement of the robot on a wearable headset. The displaying of the image comprises at least one of: displaying an augmented reality (AR) graphical image or video of the trajectory superimposed on a real-time actual image of the robot, or displaying a virtual reality (VR) graphical image or video showing a graphical representation of the robot together with the trajectory.
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, DISPLAY APPARATUS, DISPLAY METHOD, ROBOT SYSTEM, ARTICLE PRODUCTION METHOD, PROGRAM, AND STORAGE MEDIUM
An information processing apparatus includes a display unit configured to display information on an operation of a robot, and a setting unit configured to set a trajectory of a movement related to the operation of the robot to be displayed, wherein the set trajectory can be made for a part of the trajectory.
Robot control device, robot control method, and robot control program
A robot control device includes an obtaining unit that obtains, from an image sensor that captures a workpiece group to be handled by a robot, a captured image, a simulation unit that simulates operation of the robot, and a control unit that performs control such that the captured image is obtained if, in the simulation, the robot is retracted from an image capture forbidden space, in which an image is potentially captured with the workpiece group and the robot overlapping each other, and which is set based on either or both a first space being the visual field range of the image sensor, and a columnar second space obtained by taking a workpiece region including the workpiece group or each of divided regions into which the workpiece region is divided, as a bottom area, and extending the bottom area to the position of the image sensor.