Patent classifications
G05B2219/39046
Method for Managing Tracklets in a Particle Filter Estimation Framework
A method for managing tracklets in a particle filter estimation framework includes executing a tracklet prediction dependent on a list of previous tracklets, thereby determining persistent tracklets and new tracklets; sampling new measurements for initializing the new tracklets, thereby determining an amount of estimated new tracklets; and determining an amount of the persistent tracklets dependent on the list of previous tracklets. The method further includes determining an amount of the new tracklets and an amount of updated persistent tracklets to be sampled dependent on the amount of estimated new tracklets, the amount of the persistent tracklets, and a memory bound; sampling the updated persistent tracklets from a list of the persistent tracklets dependent on the determined amount of the updated persistent tracklets; and sampling the new tracklets from unassociated measurements dependent on the determined amount of the new tracklets.
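The abstract describes a budgeting step: predicted persistent tracklets and candidate new tracklets compete for a fixed memory bound before sampling. Below is a minimal Python sketch of that allocation logic under stated assumptions; `predict_persistent` and `associate` are hypothetical callbacks standing in for the patent's tracklet prediction and measurement-association steps, and the proportional split of the memory bound is one possible allocation rule, not the patent's.

```python
import random

def manage_tracklets(previous_tracklets, measurements, memory_bound,
                     predict_persistent, associate):
    """Sketch of the tracklet-budgeting step described in the abstract."""
    # Tracklet prediction: split previous tracklets into persistent and expired.
    persistent = [t for t in previous_tracklets if predict_persistent(t)]

    # Measurements not explained by any persistent tracklet seed new tracklets.
    unassociated = [m for m in measurements if not associate(m, persistent)]
    n_estimated_new = len(unassociated)
    n_persistent = len(persistent)

    # Split the memory bound between updated persistent and new tracklets,
    # here proportionally to their estimated counts (an assumed rule).
    total = n_persistent + n_estimated_new
    if total <= memory_bound or total == 0:
        n_keep_persistent, n_keep_new = n_persistent, n_estimated_new
    else:
        n_keep_persistent = round(memory_bound * n_persistent / total)
        n_keep_new = memory_bound - n_keep_persistent

    # Sample the updated persistent tracklets and the new tracklets.
    updated_persistent = random.sample(persistent,
                                       min(n_keep_persistent, n_persistent))
    new_tracklets = random.sample(unassociated,
                                  min(n_keep_new, n_estimated_new))
    return updated_persistent + new_tracklets
```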
GENERATING A MODEL FOR AN OBJECT ENCOUNTERED BY A ROBOT
Methods and apparatus related to generating a model for an object encountered by a robot in its environment, where the object is one that the robot is unable to recognize utilizing existing models associated with the robot. The model is generated based on vision sensor data that captures the object from multiple vantages and that is captured by a vision sensor associated with the robot, such as a vision sensor coupled to the robot. The model may be provided for use by the robot in detecting the object and/or for use in estimating the pose of the object.
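As a rough illustration of building an object model from vision data captured from multiple vantages, the sketch below merges partial point clouds into a common frame. The `(points, camera_pose)` observation format and the use of kinematics-derived camera poses are assumptions for the example; the abstract only requires vision sensor data captured from multiple vantages.

```python
import numpy as np

def build_object_model(observations):
    """Accumulate partial point clouds of an unrecognized object into one model.

    `observations` is a list of (points, camera_pose) pairs, where `points` is an
    (N, 3) array in the camera frame and `camera_pose` is a 4x4 camera-to-world
    transform (an assumed input format, not prescribed by the abstract).
    """
    model_points = []
    for points, camera_pose in observations:
        # Convert to homogeneous coordinates and map into the world frame.
        homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
        world = (camera_pose @ homogeneous.T).T[:, :3]
        model_points.append(world)
    # The merged cloud could later be matched against new observations to detect
    # the object or estimate its pose (e.g. with ICP).
    return np.vstack(model_points)
```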
System with a medical instrument and a recording means
A method for automatically predetermining an intended movement of a manipulator arrangement of a medical system having a medical instrument and a recording means for generating images, wherein the recording means and/or the instrument is guided by the manipulator arrangement. The method includes establishing an intended transformation between a reference stationary in relation to the recording means and a reference stationary in relation to the instrument; monitoring a deviation between the intended transformation and a current transformation between the reference stationary in relation to the recording means and the reference stationary in relation to the instrument; and determining a reset movement of the manipulator arrangement for returning the current transformation to the intended transformation when the deviation satisfies a predetermined condition.
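A small sketch of the deviation-monitoring condition follows: it compares the intended transformation between the two references with the current one and flags when a reset movement is needed. The 4x4 homogeneous-matrix representation and the numeric thresholds are assumptions made for the example.

```python
import numpy as np

def deviation(T_intended, T_current):
    """Relative transform taking the current pose back to the intended pose."""
    return np.linalg.inv(T_current) @ T_intended

def needs_reset(T_intended, T_current, max_trans=0.005, max_angle_rad=0.02):
    """Check a predetermined condition on the deviation (thresholds assumed)."""
    D = deviation(T_intended, T_current)
    trans_err = np.linalg.norm(D[:3, 3])
    # Rotation angle recovered from the trace of the rotation block.
    cos_angle = np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    angle_err = np.arccos(cos_angle)
    return trans_err > max_trans or angle_err > max_angle_rad
```

When `needs_reset` is true, the relative transform returned by `deviation` is itself a natural candidate for the corrective motion commanded to the manipulator arrangement.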
LEARNING AND APPLYING EMPIRICAL KNOWLEDGE OF ENVIRONMENTS BY ROBOTS
Techniques described herein relate to generating a posteriori knowledge about where objects are typically located within environments to improve object location. In various implementations, output from vision sensor(s) of a robot may include visual frame(s) that capture at least a portion of an environment in which a robot operates/will operate. The visual frame(s) may be applied as input across a machine learning model to generate output that identifies potential location(s) of an object of interest. The robot's position/pose may be altered based on the output to relocate one or more of the vision sensors. One or more subsequent visual frames that capture at least a not-previously-captured portion of the environment may be applied as input across the machine learning model to generate subsequent output identifying the object of interest. The robot may perform task(s) that relate to the object of interest.
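The search loop implied by the abstract can be sketched as follows: run the learned model on the current visual frame, and if the object is not found, reposition the vision sensor toward a promising not-yet-observed region and try again. The `robot` and `model` interfaces (`capture_visual_frame`, `move_to`, the model's return values) are hypothetical stand-ins, not APIs from the patent.

```python
def locate_object(robot, model, object_of_interest, max_steps=10):
    """Search-loop sketch; the robot and model interfaces are assumed."""
    for _ in range(max_steps):
        frame = robot.capture_visual_frame()            # hypothetical sensor API
        detections, candidate_poses = model(frame, object_of_interest)
        if detections:
            return detections                           # object located in this frame
        if not candidate_poses:
            break
        # Reposition the robot (or its vision sensor) toward the most promising
        # not-yet-captured region suggested by the learned location prior.
        robot.move_to(candidate_poses[0])               # hypothetical motion API
    return None
```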
COORDINATE SYSTEM CALIBRATION METHOD, APPARATUS AND SYSTEM FOR ROBOT, AND MEDIUM
The present disclosure provides a coordinate system calibration method, apparatus and system for a robot, and a storage medium. The method includes controlling an execution component of the robot to perform a translational movement, and acquiring first coordinate information of the execution component and second coordinate information of a calibration board collected by a photographic apparatus, so as to determine a rotation matrix; and controlling the execution component to perform a rotational movement, and acquiring third coordinate information of the calibration board collected by the photographic apparatus, so as to determine a translation matrix.
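One common way to realize the two-stage estimation the abstract describes is to solve the rotation from matched displacement vectors gathered during pure translations (the unknown offset cancels), then solve the translation from further observations once the rotation is known. The sketch below uses an orthogonal Procrustes solution; the exact estimation procedure in the patent may differ, so treat this as an assumed, generic formulation.

```python
import numpy as np

def rotation_from_translations(robot_displacements, camera_displacements):
    """Solve R such that robot_disp ≈ R @ camera_disp (orthogonal Procrustes).

    Both inputs are (N, 3) arrays of matched displacement vectors gathered while
    the execution component performs pure translations, so any fixed offset
    between the two coordinate systems cancels out.
    """
    H = camera_displacements.T @ robot_displacements
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # enforce a proper rotation (det = +1)
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R

def translation_from_poses(R, robot_points, camera_points):
    """Given R, recover t so that robot_point ≈ R @ camera_point + t,
    using matched points gathered during the rotational movement."""
    residual = robot_points - camera_points @ R.T
    return residual.mean(axis=0)
```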
ROBOT CONTROL APPARATUS AND CALIBRATION METHOD
A robot control apparatus includes: a robot control unit to control operation of a robot using calibration data; an image processing unit to acquire camera coordinates of a reference marker from image data acquired by a vision sensor; an error calculating unit to calculate an error on the basis of a difference between camera coordinates of the reference marker corresponding to the calibration data and current camera coordinates of the reference marker; a calibration-data calculating unit to calculate new calibration data when an absolute value of the error becomes greater than a threshold; and a calibration-data storing unit to register the new calibration data. The robot control apparatus causes the calibration-data calculating unit to calculate the new calibration data a plurality of times while causing the robot to operate between the calculations, and causes the calibration-data storing unit to register a plurality of pieces of calibration data.
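The recalibrate-when-drifted cycle can be sketched as a loop that operates the robot, re-measures the reference marker, and computes and registers new calibration data whenever the marker error exceeds the threshold. The `robot` and `vision` interfaces below are hypothetical placeholders, not APIs named in the patent.

```python
import numpy as np

def maintain_calibration(robot, vision, reference_marker_cam_coords, threshold,
                         num_updates=3):
    """Sketch of the periodic recalibration loop (robot/vision APIs assumed)."""
    registered = []
    for _ in range(num_updates):
        robot.run_production_motion()            # operate the robot between calculations
        current = vision.locate_marker()         # current camera coordinates of the marker
        # The norm is non-negative, so the abstract's absolute-value test
        # reduces to a simple comparison against the threshold here.
        error = np.linalg.norm(np.asarray(current) -
                               np.asarray(reference_marker_cam_coords))
        if error > threshold:
            calibration = robot.compute_calibration(current)   # hypothetical call
            registered.append(calibration)                     # register the new data
    return registered
```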
Apparatus and method of controlling robot arm
An apparatus for controlling a robot arm includes: the robot arm; a calibration board on which calibration marks for self-diagnosis are shown; a distance sensor mounted on the robot arm and configured to measure a distance; an image sensor mounted on the robot arm and configured to obtain an image; and a processor configured to move the robot arm to a position for the self-diagnosis, measure a distance from a predetermined part of the robot arm to the calibration board by using the distance sensor, obtain an image of the calibration board by using the image sensor, and output a signal indicating a malfunction of the robot arm in response to the measured distance being outside a distance error range, and an image measurement value of the obtained image being outside an image error range.
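A compact sketch of that self-diagnosis check is given below. The sensor and arm interfaces, the `measure_marks` image-measurement callback, and the tolerance parameters are all assumptions for illustration; following the abstract's wording, the malfunction signal is output when both the distance and the image measurement fall outside their respective error ranges.

```python
def self_diagnose(robot_arm, distance_sensor, image_sensor, measure_marks,
                  expected_distance, distance_tolerance,
                  expected_mark_value, image_tolerance):
    """Self-diagnosis sketch; all interfaces and tolerances are assumed."""
    robot_arm.move_to_diagnosis_position()              # hypothetical motion call
    measured_distance = distance_sensor.measure()        # distance to the calibration board
    image = image_sensor.capture()
    mark_value = measure_marks(image)                    # image measurement of the marks

    distance_ok = abs(measured_distance - expected_distance) <= distance_tolerance
    image_ok = abs(mark_value - expected_mark_value) <= image_tolerance

    # Per the abstract, signal a malfunction when both measurements are out of range.
    if not distance_ok and not image_ok:
        robot_arm.signal_malfunction()
    return distance_ok and image_ok
```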