Patent classifications
G05B2219/40425
Robot program generation for robotic processes
A system for generating a path to be followed by a robot used to perform a process on a workpiece has a computing device with program code for operating the robot and obtaining information related to the workpiece, and a vision system that scans the workpiece to obtain images that are provided to the computing device. The computing device processes the images to obtain geometric information about the workpiece, which it uses in combination with process-related reference parameters stored in the computing device to generate program code for a path to be followed by the robot to perform the process on the workpiece. The computing device also includes code configured to verify the quality of the generated program code for the path to be followed by the robot to perform the process on the workpiece.
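As a rough illustration of the kind of transformation this abstract describes (scanned geometry plus stored process parameters yielding path program code), here is a minimal sketch; all names, data shapes, and the MOVE command format are hypothetical and not taken from the patent:

```python
# Hypothetical sketch: turn scanned workpiece edge points plus stored
# process-related reference parameters (tool offset, travel speed) into
# simple robot path commands.
from dataclasses import dataclass

@dataclass
class ProcessParams:
    tool_offset_mm: float   # stand-off distance applied to each point
    speed_mm_s: float       # commanded travel speed

def generate_path(edge_points, params):
    """Offset each scanned (x, y) point along +y by the tool offset and
    attach the process speed, yielding MOVE commands for the controller."""
    commands = []
    for x, y in edge_points:
        commands.append(("MOVE", x, y + params.tool_offset_mm, params.speed_mm_s))
    return commands

path = generate_path([(0.0, 10.0), (5.0, 10.0)], ProcessParams(2.0, 50.0))
```

A real system would of course derive full 6-DOF poses from the scanned geometry rather than planar offsets; the sketch only shows the data flow from scan to program code.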
ROBOTIC HAND TOOL SHARPENING AND CLEANING APPARATUS
An automated hand tool sharpening and cleaning system for sharpening the two opposed cutting edges of a domestic, industrial, sport, or hobby hand tool such as a knife blade is provided by the invention. The apparatus comprises a six-axis robotic arm, a pneumatic gripper, a vision sensor camera for profiling the blade edges, a robotic controller, and sequentially arranged grinding, coarse sharpening, fine sharpening, and buffing rotating wheel assemblies used to grind, sharpen, and buff or polish the cutting edges of the knife blade. The blade cutting edges are profiled from the camera image, which is processed by associated software to define the blade as multiple points along its edge, followed by a set of algorithms used to clean up any discrepancies in the profile data. The resulting corrected profile data is then translated into a set of machine control commands fed to the robotic arm and pneumatic gripper via the robot controller for manipulating the knife blade edges with respect to each of the grinding, coarse sharpening, fine sharpening, and buffing/polishing wheels and an associated wash station for removing bits of metal and other residue resulting from sharpening the knife blade.
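The "clean up any discrepancies in the profile data" step could take many forms; one common approach is local-median outlier rejection. The following sketch is purely illustrative (the function name, window size, and tolerance are assumptions, not from the patent):

```python
# Hypothetical sketch of profile cleanup: replace edge-profile points that
# deviate from the local median by more than a tolerance with that median.
import statistics

def clean_profile(heights, window=3, tol=1.0):
    cleaned = list(heights)
    for i in range(len(heights)):
        lo, hi = max(0, i - window), min(len(heights), i + window + 1)
        med = statistics.median(heights[lo:hi])
        if abs(heights[i] - med) > tol:
            cleaned[i] = med  # replace the outlier with the local median
    return cleaned

# A spurious 9.0 reading amid ~1.1 mm edge heights gets corrected.
profile = clean_profile([1.0, 1.1, 9.0, 1.2, 1.1])
```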
System, Method and Product for Utilizing Prediction Models of an Environment
A first method comprising: predicting a scene of an environment using a model of the environment and based on a first scene of the environment obtained from sensors observing scenes of the environment; comparing the predicted scene with an observed scene from the sensors; and performing an action based on differences determined between the predicted scene and the observed scene. A second method comprising applying a vibration stimulus on an object via a computer-controlled component; and obtaining a plurality of images depicting the object from a same viewpoint, captured during the application of the vibration stimulus. The second method further comprising comparing the plurality of images to detect changes occurring in response to the application of the vibration stimulus, which changes are attributed to a change of a location of a boundary of the object; and determining the boundary of the object based on the comparison.
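The first method's predict-compare-act loop can be sketched very simply. Everything below is a toy stand-in (the drift model, the grid-cell scene representation, and the symmetric-difference comparison are assumptions for illustration only):

```python
# Hypothetical sketch of the first method: predict the next scene with a
# model, compare it with the observed scene, and act on the differences.
def predict_scene(scene):
    """Toy environment model: every object drifts one cell to the right."""
    return [(x + 1, y) for x, y in scene]

def differences(predicted, observed):
    # Cells where model and world disagree (symmetric difference).
    return sorted(set(predicted) ^ set(observed))

observed_t0 = [(0, 0), (3, 2)]
predicted_t1 = predict_scene(observed_t0)      # model says [(1, 0), (4, 2)]
observed_t1 = [(1, 0), (5, 2)]                 # one object moved unexpectedly
diffs = differences(predicted_t1, observed_t1)
action_needed = bool(diffs)                    # trigger an action if they diverge
```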
ROBOT AND ROBOT SYSTEM
A robot includes a shoulder, an arm connected to the shoulder, an imaging device that is connected to the shoulder via a support, an image receiver that receives an image captured by the imaging device, and a robot controller that controls the arm based on the captured image, and in which the imaging device comprises two sets of stereo cameras having different depths of field.
SYSTEMS AND METHODS FOR HUMAN AND ROBOT COLLABORATION
Robotic systems for simultaneous human-performed and robotic operations within a collaborative workspace are described. In some embodiments, the collaborative workspace is defined by a reconfigurable workbench, to which robotic members are optionally added and/or removed according to task need. Tasks themselves are optionally defined within a production system, potentially reducing computational complexity of predicting and/or interpreting human operator actions, while retaining flexibility in how the assembly process itself is carried out. In some embodiments, robotic systems comprise a motion tracking system for motions of individual body members of the human operator. Optionally, the robotic system plans and/or adjusts robotic motions based on motions which have been previously observed during past performances of a current operation.
APPARATUS AND METHOD FOR POSITIONING EQUIPMENT RELATIVE TO A DRILL HOLE
An automated vehicle comprising: a control unit configured to control movement of the automated vehicle to a location adjacent an estimated location of a drill hole; a scanning portion including one or more scanning devices configured to scan an area of terrain in the vicinity of the estimated location of the drill hole in order to determine an actual location of the drill hole, and to generate a point cloud representing at least a portion of the interior of the drill hole; at least one arm associated with the scanning portion, the at least one arm configured to move the scanning portion between a home position and one or more scanning positions; and an end effector associated with the at least one arm, the end effector being configured to perform one or more operations;
wherein, upon generating the point cloud, the at least one arm is configured, based on the point cloud, to position the end effector in substantial alignment with the drill hole so that the end effector can perform the one or more operations.
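One simple way to go from the point cloud to "substantial alignment with the drill hole" is to estimate the hole axis from the scanned points and compute the offset the arm must apply. The sketch below is an assumption-laden illustration (centroid-of-rim estimation; all names are hypothetical), not the patent's method:

```python
# Hypothetical sketch: estimate the drill-hole axis from a point cloud of
# the hole interior by averaging the (x, y) of the scanned points, then
# report the planar offset needed to align the end effector with the hole.
def hole_center_xy(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def alignment_offset(effector_xy, points):
    cx, cy = hole_center_xy(points)
    return (cx - effector_xy[0], cy - effector_xy[1])

# Four scanned (x, y, z) points from inside the hole.
cloud = [(10.0, 4.0, -1.0), (12.0, 4.0, -2.0), (11.0, 6.0, -1.5), (11.0, 2.0, -0.5)]
offset = alignment_offset((10.0, 3.0), cloud)  # move required to center the tool
```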
ROBOTIC SYSTEM FOR GRASPING OBJECTS
A method is provided for grasping randomly sized and randomly located objects. The method may include assigning a score associated with the likelihood of successfully grasping an object. Other features of the method may include orientation of the end effector, a reachability check, and crash recovery.
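The score-plus-reachability idea can be illustrated with a toy ranking function. The scoring formula, reach limit, and candidate encoding below are all invented for the sketch and are not claimed by the patent:

```python
# Hypothetical sketch: rank candidate grasps by a score that rewards
# clearance around the object and penalizes distance, after a reachability
# check; pick the best reachable candidate.
MAX_REACH = 1.0  # assumed arm reach in metres

def grasp_score(candidate):
    x, y, clearance = candidate
    dist = (x * x + y * y) ** 0.5
    if dist > MAX_REACH:        # reachability check: unreachable gets no score
        return None
    return clearance - dist     # prefer near, uncluttered objects

def best_grasp(candidates):
    scored = [(grasp_score(c), c) for c in candidates]
    scored = [(s, c) for s, c in scored if s is not None]
    return max(scored)[1] if scored else None

best = best_grasp([(0.9, 0.9, 0.5), (0.3, 0.4, 0.2), (0.6, 0.0, 0.4)])
```

The first candidate is filtered out as unreachable; of the remaining two, the closer, better-cleared object wins.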
ROBOTIC SYSTEMS AND METHODS FOR OPERATING A ROBOT
A method for operating a robot includes executing program instructions to determine that a robotic control program being executed on a robotic controller to operate the robot has been stopped; executing program instructions to determine whether a cause of the stoppage is a motion supervision error; executing program instructions to request a new target object from a vision system; and executing program instructions to resume normal robotic operation using the robotic control program.
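The recovery flow in this abstract reads naturally as a small decision procedure. The sketch below mirrors those steps with hypothetical names and return values (the string cause codes and dictionary target are assumptions for illustration):

```python
# Hypothetical sketch of the recovery flow: on a stoppage, check whether
# the cause was a motion supervision error; if so, request a new target
# object from the vision system and resume; otherwise escalate the fault.
def handle_stoppage(cause, request_target):
    if cause != "motion_supervision_error":
        return ("escalate", None)       # some other fault: do not auto-resume
    target = request_target()           # ask the vision system for a new target
    if target is None:
        return ("wait", None)           # nothing visible yet; retry later
    return ("resume", target)           # resume normal operation on the new target

result = handle_stoppage("motion_supervision_error", lambda: {"id": 7})
```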
Picking Robot and Picking System
A picking robot executes: an acquisition process for acquiring, from a storage destination, the latest recognition result of a housed area, which is based on the latest captured image of that area; a modification process for accessing the housed area and rearranging its contents by controlling the robot arm on the basis of the recognition result acquired in the acquisition process; an imaging process for imaging the housed area after the modification by controlling the imaging device; a recognition process for recognizing residual articles in the housed area on the basis of the image captured in the imaging process; and a transmission process for transmitting the recognition result of the recognition process to the storage destination.
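The five processes above form a read-modify-observe-write cycle, which can be sketched as follows. The function names and the use of simple lists and lambdas as stand-ins for the storage destination, robot arm, camera, and recognizer are all illustrative assumptions:

```python
# Hypothetical sketch of the five-step picking cycle: acquire the latest
# recognition result, modify the housed area, re-image it, recognize the
# residual articles, and transmit the new result to the storage destination.
def picking_cycle(storage, modify, capture, recognize):
    latest = storage["latest"]          # acquisition process
    modify(latest)                      # modification process (robot arm acts)
    image = capture()                   # imaging process
    result = recognize(image)           # recognition process (residual articles)
    storage["latest"] = result          # transmission process
    return result

store = {"latest": ["bolt", "nut", "washer"]}
residual = picking_cycle(
    store,
    modify=lambda items: items.pop(0),  # the arm picks the first article
    capture=lambda: store["latest"],    # stand-in for a camera image
    recognize=lambda img: list(img),    # stand-in for the recognizer
)
```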