Patent classifications
G05B2219/39046
Generating a model for an object encountered by a robot
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
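The abstract describes fusing vision-sensor data captured from multiple vantages into a single object model. A minimal sketch of that idea, assuming each capture is a set of 3-D points in the camera frame plus a known camera-to-world pose (the abstract does not specify a data layout, so these names are illustrative):

```python
import numpy as np

def build_object_model(views):
    """Fuse captures from multiple vantages into one point-cloud model.

    `views` is a list of (points, pose) pairs: `points` is an (N, 3) array of
    3-D points in the camera frame, `pose` a 4x4 homogeneous camera-to-world
    transform for that vantage. (Hypothetical layout; not from the patent.)
    """
    world_points = []
    for points, pose in views:
        homog = np.hstack([points, np.ones((len(points), 1))])  # (N, 4)
        world_points.append((homog @ pose.T)[:, :3])            # into world frame
    return np.vstack(world_points)
```

The fused cloud could then back a detector or a pose estimator, as the abstract suggests; real systems would also register overlapping views and filter noise.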
System With A Medical Instrument And A Recording Means
A method for automatically predetermining an intended movement of a manipulator arrangement of a medical system having a medical instrument and a recording means for generating images, wherein the recording means and/or the instrument is guided by the manipulator arrangement. The method includes establishing an intended transformation between a reference stationary in relation to the recording means and a reference stationary in relation to the instrument; monitoring a deviation between the intended transformation and a current transformation between the reference stationary in relation to the recording means and the reference stationary in relation to the instrument; and determining a reset movement of the manipulator arrangement for returning the current transformation to the intended transformation when the deviation satisfies a predetermined condition.
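The monitoring-and-reset loop in this abstract can be sketched with 4x4 homogeneous transforms: measure the deviation between the intended and current transformation, and once it exceeds a threshold, compute the correction that returns the current relation to the intended one. The translational-distance metric and the threshold condition are assumptions; the patent only says the deviation must "satisfy a predetermined condition".

```python
import numpy as np

def deviation(T_intended, T_current):
    """Translational deviation between two 4x4 transforms (one possible metric)."""
    return np.linalg.norm(T_current[:3, 3] - T_intended[:3, 3])

def reset_transform(T_intended, T_current):
    """Left-multiplied correction that maps the current transform back to the intended one."""
    return T_intended @ np.linalg.inv(T_current)

def check_and_reset(T_intended, T_current, threshold):
    """Return a reset movement when the deviation condition is met, else None."""
    if deviation(T_intended, T_current) > threshold:
        return reset_transform(T_intended, T_current)
    return None
```

Applying the returned correction to the current transform restores the intended relation exactly; a real manipulator controller would convert it into a joint-space motion.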
GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
Generating a model for an object encountered by a robot
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
Robot system and method for operating same
A robot system includes an operating device that receives an operation instruction from an operator, a real robot that is installed in a work space and performs a series of operations made up of a plurality of steps, a camera configured to image the real robot, a display device configured to display video information of the real robot imaged by the camera together with a virtual robot, and a control device. The control device is configured to operate the virtual robot displayed on the display device based on instruction information input from the operating device, and thereafter to operate the real robot, while the virtual robot remains displayed on the display device, when operation execution information to execute an operation of the real robot is input from the operating device.
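The control flow in this abstract is a preview-then-confirm pattern: an instruction first drives the displayed virtual robot, and only an explicit execution input drives the real one. A minimal sketch, assuming both robots expose a `move(instruction)` method (a hypothetical interface; the patent does not name an API):

```python
class PreviewedRobot:
    """Preview an instruction on a virtual robot before running the real one."""

    def __init__(self, virtual, real):
        self.virtual = virtual
        self.real = real
        self.pending = None

    def instruct(self, instruction):
        """Instruction information from the operating device: drive the virtual robot only."""
        self.pending = instruction
        self.virtual.move(instruction)

    def execute(self):
        """Operation execution information: replay the previewed step on the real robot."""
        if self.pending is not None:
            self.real.move(self.pending)
            self.pending = None
```

The separation means an operator can inspect the virtual motion on the display and abandon it without the real robot ever moving.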
Method for managing tracklets in a particle filter estimation framework
A method for managing tracklets in a particle filter estimation framework includes executing a tracklet prediction dependent on a list of previous tracklets, thereby determining persistent tracklets and new tracklets; sampling new measurements for initializing the new tracklets, thereby determining an amount of estimated new tracklets; and determining an amount of the persistent tracklets dependent on the list of previous tracklets. The method further includes determining an amount of the new tracklets and an amount of updated persistent tracklets to be sampled dependent on the amount of estimated new tracklets, the amount of the persistent tracklets, and a memory bound; sampling the updated persistent tracklets from a list of the persistent tracklets dependent on the determined amount of the updated persistent tracklets; and sampling the new tracklets from unassociated measurements dependent on the determined amount of the new tracklets.
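The core bookkeeping here is splitting a fixed memory bound between updated persistent tracklets and new tracklets, then sampling each set. A sketch under one plausible policy (persistent tracklets claim the budget first, the remainder goes to new tracklets); the abstract only states that both amounts depend on the estimates and the bound, so the exact split rule is an assumption:

```python
import random

def plan_tracklet_budget(n_est_new, n_persistent, memory_bound):
    """Split the memory bound between persistent and new tracklets.

    Persistent-first allocation is illustrative, not the patented rule.
    """
    n_keep = min(n_persistent, memory_bound)
    n_new = min(n_est_new, memory_bound - n_keep)
    return n_keep, n_new

def sample_tracklets(persistent, unassociated, memory_bound, rng=random):
    """Sample updated persistent tracklets and new tracklets within the bound."""
    n_keep, n_new = plan_tracklet_budget(
        len(unassociated), len(persistent), memory_bound)
    updated = rng.sample(persistent, n_keep)
    new = rng.sample(unassociated, n_new)
    return updated + new
```

In a full particle-filter framework the sampling would be weighted by measurement likelihoods rather than uniform, but the budget logic stays the same.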
SYSTEMS AND METHODS FOR CAMERA CALIBRATION WITH A FIDUCIAL OF UNKNOWN POSITION ON AN ARTICULATED ARM OF A PROGRAMMABLE MOTION DEVICE
A system is disclosed for providing extrinsic calibration of a camera relative to the working environment of a programmable motion device that includes an end-effector. The system includes a fiducial located at or near the end-effector, at least one camera system for viewing the fiducial as the programmable motion device moves in at least three degrees of freedom, and for capturing a plurality of images containing the fiducial, and a calibration system for analyzing the plurality of images to determine a fiducial location with respect to the camera to permit calibration of the camera with the programmable motion device.
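One standard way to realize this kind of calibration is rigid point-set registration: the end-effector (and hence the fiducial) positions are known in the robot base frame, the fiducial is triangulated in the camera frame across the captured images, and the rigid transform aligning the two point sets gives the camera extrinsics. The Kabsch/SVD alignment below is a common technique for that step; the patent may use a different method.

```python
import numpy as np

def estimate_camera_pose(points_robot, points_camera):
    """Rigid transform (R, t) mapping camera-frame fiducial observations onto
    robot-base-frame positions, via SVD (Kabsch alignment).

    Both inputs are (N, 3) arrays of corresponding points, N >= 3 and
    non-collinear. Returns R (3x3) and t (3,) with points_robot ~ R @ p_c + t.
    """
    mu_r = points_robot.mean(axis=0)
    mu_c = points_camera.mean(axis=0)
    H = (points_camera - mu_c).T @ (points_robot - mu_r)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_c
    return R, t
```

Moving the device through at least three degrees of freedom, as the abstract requires, is what yields the non-degenerate spread of fiducial positions this alignment needs.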
Learning and applying empirical knowledge of environments by robots
Techniques described herein relate to generating a posteriori knowledge about where objects are typically located within environments to improve object location. In various implementations, output from vision sensor(s) of a robot may include visual frame(s) that capture at least a portion of an environment in which a robot operates/will operate. The visual frame(s) may be applied as input across a machine learning model to generate output that identifies potential location(s) of an object of interest. The robot's position/pose may be altered based on the output to relocate one or more of the vision sensors. One or more subsequent visual frames that capture at least a not-previously-captured portion of the environment may be applied as input across the machine learning model to generate subsequent output identifying the object of interest. The robot may perform task(s) that relate to the object of interest.
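The capture–predict–reposition loop this abstract describes can be sketched as follows, with `robot` and `model` as hypothetical interfaces (a frame source with a repositioning method, and a learned model returning a candidate location and a detection flag); the abstract does not fix these signatures:

```python
def locate_object(robot, model, max_moves=5):
    """Search for an object of interest by iterating: capture a visual frame,
    run it through the learned model, and reposition toward the prediction.

    Returns the detected location, or None if the budget of moves runs out.
    """
    for _ in range(max_moves):
        frame = robot.capture_frame()
        prediction = model(frame)              # e.g. {"detected": ..., "location": ...}
        if prediction.get("detected"):
            return prediction["location"]
        robot.move_toward(prediction["location"])  # alter pose to capture a new portion
    return None
```

Each repositioning exposes a not-previously-captured portion of the environment to the model, which is the mechanism the abstract relies on for improving object location.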