Patent classifications
G05B2219/37015
ROBOTIC GEOMETRIC CAMERA CALIBRATION AND MONITORING ALERT CONFIGURATION AND TESTING
A method of calibrating a system including a camera, the method including detecting a robot navigating within an environment modeled as a geo-polygon space, including a transit of the robot through a scene of the environment captured by the camera, mapping a plurality of points occupied by the robot in images of the scene to the geo-polygon space, recording data about the mapping, and configuring at least one alert using the data recorded about the mapping, the alert executed by a computing system and configured to be triggered by an object transiting the scene.
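The alert idea described above can be illustrated with a minimal sketch: an image point is mapped into the geo-polygon space (here via a hypothetical 3x3 homography H, which a real system would fit from the recorded robot-transit data), and an alert fires when the mapped point falls inside a polygonal zone. All function names here are illustrative, not from the patent.

```python
def map_image_point(H, x, y):
    """Apply a 3x3 homography H (nested lists) to image point (x, y)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def point_in_polygon(pt, polygon):
    """Ray-casting test: is pt inside polygon (a list of (x, y) vertices)?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def check_alert(H, image_point, alert_zone):
    """True when the mapped point lies inside the alert zone."""
    return point_in_polygon(map_image_point(H, *image_point), alert_zone)
```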
METHOD AND SYSTEM FOR PERFORMING AUTOMATIC CAMERA CALIBRATION FOR ROBOT CONTROL
A robot control system and a method for automatic camera calibration are presented. The robot control system includes a control circuit configured to determine all corner locations of an imaginary cube that fits within a camera field of view, and determine a plurality of locations that are distributed on or throughout the imaginary cube. The control circuit is further configured to control a robot arm to move a calibration pattern to the plurality of locations, and to receive a plurality of calibration images corresponding to the plurality of locations, and to determine respective estimates of intrinsic camera parameters based on the plurality of calibration images, and to determine an estimate of a transformation function that describes a relationship between a camera coordinate system and a world coordinate system. The control circuit is further configured to control placement of the robot arm based on the estimate of the transformation function.
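A sketch of the imaginary-cube sampling step: given a cube center and half-width (fitting the cube to the camera field of view is omitted here), compute the eight corner locations and a grid of locations distributed throughout the cube to which the calibration pattern would be moved. The helper names are illustrative.

```python
from itertools import product

def cube_corners(center, half):
    """The 8 corner locations of an axis-aligned imaginary cube."""
    cx, cy, cz = center
    return [(cx + sx * half, cy + sy * half, cz + sz * half)
            for sx, sy, sz in product((-1, 1), repeat=3)]

def cube_grid(center, half, n):
    """n x n x n locations distributed evenly throughout the cube."""
    cx, cy, cz = center
    ticks = [-half + 2 * half * i / (n - 1) for i in range(n)]
    return [(cx + dx, cy + dy, cz + dz)
            for dx, dy, dz in product(ticks, repeat=3)]
```

In practice each grid location would be sent as a target pose to the robot arm, and one calibration image captured per location.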
Method and system for generating training data
A method for generating training data can include: determining a set of images; determining a set of masks based on the images; determining a first mesh based on the set of masks; optionally determining a refined mesh by recomputing the first mesh; optionally determining one or more faces of the refined mesh; optionally adding one or more keypoints to the refined mesh; optionally determining a material property set for the object; optionally generating a full object mesh; determining one or more scenes; optionally determining training data based on the one or more scenes; optionally training one or more object detectors using the training data; and detecting one or more objects using the trained object detector.
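The pipeline above can be outlined as a skeleton in which every step is a stub standing in for the real computation (mask extraction, mesh reconstruction, scene synthesis, and so on); only the control flow, with its optional refinement branch, is shown.

```python
def generate_training_data(images, refine=True):
    """Stub pipeline: images -> masks -> mesh -> scenes -> labeled samples."""
    masks = [f"mask({im})" for im in images]            # masks from images
    mesh = f"mesh({len(masks)} masks)"                  # first mesh from masks
    if refine:
        mesh = f"refined({mesh})"                       # optional recompute
    scenes = [f"scene({mesh}, {im})" for im in images]  # one scene per image
    return [(scene, "label") for scene in scenes]       # (sample, label) pairs
```

A real implementation would replace each string-building stub with the corresponding vision or geometry routine, and the returned pairs would feed an object-detector training loop.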
Method and system for performing automatic camera calibration for robot control
A robot control system and a method for automatic camera calibration are presented. The robot control system includes a control circuit configured to control a robot arm to move a calibration pattern to at least one location within a camera field of view, and to receive a calibration image from a camera. The control circuit determines a first estimate of a first intrinsic camera parameter based on the calibration image. After the first estimate of the first intrinsic camera parameter is determined, the control circuit determines a first estimate of a second intrinsic camera parameter based on the first estimate of the first intrinsic camera parameter. These estimates are used to determine an estimate of a transformation function that describes a relationship between a camera coordinate system and a world coordinate system. The control circuit controls placement of the robot arm based on the estimate of the transformation function.
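The two products of the calibration described above, intrinsic camera parameters and a camera-to-world transformation function, can be sketched with a simplified pinhole model (no lens distortion; all names are illustrative): the intrinsic matrix K maps normalized camera rays to pixels, and a 4x4 rigid transform relates the camera and world coordinate systems.

```python
import numpy as np

def intrinsic_matrix(f, cx, cy):
    """Pinhole intrinsic matrix from focal length and principal point."""
    return np.array([[f, 0.0, cx],
                     [0.0, f, cy],
                     [0.0, 0.0, 1.0]])

def rigid_transform(R, t):
    """4x4 transformation function from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def project(K, T_cam_from_world, p_world):
    """Project a 3D world point to pixel coordinates via the two estimates."""
    p = T_cam_from_world @ np.append(p_world, 1.0)  # world -> camera frame
    u = K @ (p[:3] / p[2])                          # perspective divide, then K
    return u[:2]
```

Once such estimates are available, the inverse of the transformation lets the controller convert targets seen by the camera into world-frame placements for the robot arm.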