Patent classifications
B25J9/1671
System for Performing an Input on a Robotic Manipulator
A system for performing an input on a robotic manipulator, wherein the system includes: a robotic manipulator having a plurality of limbs connected to one another by articulations and having actuators; a sensor unit configured to record an input variable applied to the robotic manipulator by a user manually guiding the robotic manipulator, wherein the input variable is a kinematic variable, a force, and/or a moment, and wherein the sensor unit is configured to transmit the input variable; and a computing unit connected to the robotic manipulator and to the sensor unit, the computing unit configured to transform the input variable received from the sensor unit via a predefined input variable mapping, wherein the input variable mapping defines a mathematical mapping of the input variable onto a coordinate of a graphical user interface or onto a setting of a virtual control element.
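The input variable mapping described above lends itself to a short illustration. The sketch below is a minimal, assumed example (the function name, gain, and linear form are not from the patent): a normalized planar force measured at the manipulator is mapped linearly onto screen coordinates of a graphical user interface, clamped to the screen bounds.

```python
def map_input_to_gui(force_xy, gain=0.5, screen=(1920, 1080)):
    """Hypothetical input variable mapping: a normalized force in [-1, 1]
    per axis is mapped linearly onto GUI coordinates around the screen
    center, then clamped to the visible area."""
    coords = []
    for f, size in zip(force_xy, screen):
        c = size / 2.0 * (1.0 + gain * f)  # center plus scaled deflection
        coords.append(int(min(max(c, 0), size - 1)))
    return tuple(coords)
```

With zero applied force the mapped coordinate stays at the screen center; saturating forces pin it to the screen edge.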
METHOD AND SYSTEM FOR PREDICTING A COLLISION FREE POSTURE OF A KINEMATIC SYSTEM
A system and a method predict a collision-free posture of a kinematic system. The method includes: receiving a 3D virtual environment; receiving a 3D representation of the kinematic system and a set of 3D postures defined for the 3D virtual kinematic system; receiving a target task to be performed by the kinematic system with respect to the 3D virtual environment; and receiving a prescribed location within the 3D virtual environment. The prescribed location defines the position at which the 3D virtual kinematic system is to be placed within the 3D virtual environment. A collision-free detection (CFD) function is applied to a set of input data containing the 3D virtual environment, the target task, the prescribed location, and the set of postures. The CFD function outputs a set of collision-free postures enabling the kinematic system to perform the target task when located at the prescribed location.
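A toy version of the CFD function can be sketched in a few lines. Everything concrete here is an assumption for illustration only: postures are lists of 2D joint points expressed relative to the prescribed location, obstacles are axis-aligned boxes, and "performing the target task" is reduced to the last joint reaching a task point.

```python
def point_in_box(p, box):
    """True if 2D point p lies inside the axis-aligned box ((x0, y0), (x1, y1))."""
    (x, y), ((x0, y0), (x1, y1)) = p, box
    return x0 <= x <= x1 and y0 <= y <= y1

def collision_free_postures(postures, obstacles, location, task_point, reach_tol=0.1):
    """Hypothetical CFD function: keep postures whose joints, placed at the
    prescribed location, avoid every obstacle box and whose end effector
    (last joint) reaches the task point within reach_tol."""
    free = []
    for posture in postures:
        world = [(location[0] + x, location[1] + y) for x, y in posture]
        if any(point_in_box(p, b) for p in world for b in obstacles):
            continue  # at least one joint collides with an obstacle
        ex, ey = world[-1]
        if abs(ex - task_point[0]) <= reach_tol and abs(ey - task_point[1]) <= reach_tol:
            free.append(posture)
    return free
```

A real CFD function would check swept link geometry against 3D meshes rather than joint points against boxes; the filtering structure, however, is the same.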
Method, apparatus, computer-readable storage media for robotic programming
A method, apparatus, and computer-readable storage media for robotic programming are disclosed. Teach-in techniques cannot work for all kinds of objects, while offline programming requires complicated simulation of the robot and the objects. To address this dilemma, a solution is provided that uses a virtual item, marked by a marker, during programming of the robot and displays the virtual item to a user. As a result, even very large items can be used, and items can be replaced easily during programming, which makes the programming procedure smooth and efficient.
DUAL-ARM ROBOT ASSEMBLING SYSTEM
A dual-arm robot assembling system including a controlling unit, a GUI, a first robotic arm, and a second robotic arm is disclosed. The GUI provides a graphic program editing page, which provides multiple instruction blocks used for editing a graphical program executed by the assembling system. At least one of the first robotic arm and the second robotic arm has a point-teaching tool disposed thereon. Before the controlling unit controls the two robotic arms to perform an assembling operation based on the graphical program, a manager may directly drag the two robotic arms via the point-teaching tool, so as to implement a point-teaching procedure for the two robotic arms. The assembling system may thus accomplish the assembling operation through cooperative movement of the two robotic arms.
Simulation-real world feedback loop for learning robotic control policies
A machine learning system builds and uses computer models for controlling robotic performance of a task. Such computer models may be first trained using feedback on computer simulations of the robotic system performing the task, and then refined using feedback on real-world trials of the robot performing the task. Some examples of the computer models can be trained to automatically evaluate robotic task performance and provide the feedback. This feedback can be used by a machine learning system, for example an evolution strategies system or reinforcement learning system, to generate and refine the controller.
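As one concrete reading of the evolution strategies variant, the loop below is a standard ES sketch, not the patent's actual implementation: controller parameters are perturbed, each perturbation is scored by an evaluation function standing in for the learned feedback model, and the parameters move along the estimated reward gradient.

```python
import random

def evolve_controller(evaluate, dim=3, pop=20, sigma=0.1, lr=0.05, iters=200, seed=0):
    """Basic evolution strategies: estimate the reward gradient from random
    perturbations of the controller parameters and ascend it."""
    rng = random.Random(seed)
    theta = [0.0] * dim
    for _ in range(iters):
        noises, rewards = [], []
        for _ in range(pop):
            eps = [rng.gauss(0.0, 1.0) for _ in range(dim)]
            noises.append(eps)
            rewards.append(evaluate([t + sigma * e for t, e in zip(theta, eps)]))
        mean_r = sum(rewards) / pop
        for j in range(dim):  # per-parameter gradient estimate
            grad = sum((r - mean_r) * n[j] for r, n in zip(rewards, noises)) / (pop * sigma)
            theta[j] += lr * grad
    return theta

# Toy stand-in for the feedback model: reward peaks at a known target.
target = [0.5, -0.3, 0.8]
reward = lambda th: -sum((t - g) ** 2 for t, g in zip(th, target))
```

In the described system, the evaluation function would first be a simulation rollout and later a real-world trial scored by the trained feedback model.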
System, device and method for determining error in robotic manipulator-to-camera calibration
Disclosed herein are a device, system, and method for determining error in robotic manipulator-to-camera calibration. The method includes detecting a test object with a camera coupled to a robotic manipulator. One or more test points are identified on the test object based on a CAD model and pre-defined contact points corresponding to the test object. Arm poses are determined for the robotic manipulator to reach the test points on the 3D test object using the current robotic manipulator-to-camera calibration. While driving an end effector of the robotic manipulator based on the arm poses, any contact of the end effector with the 3D test object is recorded upon receiving feedback from the 3D test object. An error in the current robotic manipulator-to-camera calibration is determined based on the current position of the end effector relative to the one or more test points on the 3D test object.
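The final error computation can be illustrated with a simple metric. The mean Euclidean distance below is an assumed formulation, not necessarily the metric the disclosure uses:

```python
import math

def calibration_error(contact_positions, test_points):
    """Mean Euclidean distance between where the end effector actually made
    contact and the CAD-derived test points predicted by the current
    manipulator-to-camera calibration."""
    dists = [math.dist(c, t) for c, t in zip(contact_positions, test_points)]
    return sum(dists) / len(dists)
```

A perfectly calibrated system would report an error near zero; a single contact at (0, 0, 0) against a test point at (3, 4, 0), for example, yields an error of 5.0.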
Teaching device, teaching method, and robot system
A teaching device constructs, in a virtual space, a virtual robot system in which a virtual 3D model of a robot and a virtual 3D model of a peripheral structure of the robot are arranged, and teaches a moving path of the robot. The teaching device includes an acquisition unit configured to acquire information about a geometric error between the virtual 3D models, and a correction unit configured to correct the moving path of the robot in accordance with the information acquired by the acquisition unit.
Apparatus control systems and method
A system for controlling interactions between a plurality of real and virtual robots includes one or more real robots present in the real environment, one or more virtual robots present in a virtual environment corresponding to the real environment, and a processing device operable to control interactions between one or more of the real robots and one or more of the virtual robots, where the interactions between the real and virtual robots are dependent upon at least the positions of the one or more real robots in the real environment and the positions of the one or more virtual robots in the virtual environment.
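A minimal position-dependent interaction rule might look like the following sketch; the shared coordinate frame, the radius threshold, and the function name are illustrative assumptions:

```python
import math

def interacting_pairs(real_positions, virtual_positions, radius=1.0):
    """Return (real_index, virtual_index) pairs of robots that are close
    enough, in a shared coordinate frame, for an interaction to trigger."""
    return [(i, j)
            for i, r in enumerate(real_positions)
            for j, v in enumerate(virtual_positions)
            if math.dist(r, v) <= radius]
```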
Verification of robotic assets utilized in a production work cell
Embodiments described herein provide verification of robotic assets for a production work cell via simulation. A production workflow for the production work cell is received from a user, and a simulation of the work cell controller design is generated based on the production workflow. The simulation is executed using performance data for the robotic assets, and a determination is made, based on the simulation, whether the robotic assets meet or exceed their design requirements. The result of the determination is presented to the user.
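The pass/fail determination reduces to comparing simulated performance against design requirements. A hedged sketch, with invented field names:

```python
def verify_assets(performance, requirements):
    """An asset passes verification only if its simulated performance meets
    or exceeds every one of its design requirements."""
    return {asset: all(performance[asset].get(metric, 0) >= minimum
                       for metric, minimum in reqs.items())
            for asset, reqs in requirements.items()}
```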
Control system and control method
A control device estimates the position and pose of an imaging device relative to a robot based on an image of the robot captured by the imaging device. A simulation device arranges a robot model at a teaching point and generates a simulation image of the robot model captured by a virtual camera, which is arranged so that the position and pose of the virtual camera relative to the robot model in the virtual space coincide with the estimated position and pose of the imaging device. The control device determines an amount of correction of the position and pose of the robot for the teaching point so that the position and pose of the robot in the actual image, captured after the robot has been driven according to a movement command to the teaching point, approximate the position and pose of the robot model in the simulation image.
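The correction step can be sketched as a simple visual feedback loop. The iteration, gain, and `observe` callback are assumptions for illustration; the abstract itself describes determining a single correction amount:

```python
def correct_teaching_point(teaching_point, simulated_pose, observe, gain=0.5,
                           iters=20, tol=1e-3):
    """Drive toward the (corrected) teaching point, compare the robot pose
    observed in the actual camera image (observe) with the robot model's
    pose in the simulation image, and accumulate corrections until the two
    agree within tol."""
    point = list(teaching_point)
    for _ in range(iters):
        observed = observe(point)  # pose seen in the actual image at this point
        delta = [s - o for s, o in zip(simulated_pose, observed)]
        if max(abs(d) for d in delta) < tol:
            break
        point = [p + gain * d for p, d in zip(point, delta)]
    return point
```

With a systematic +0.2 offset between commanded and observed pose, for instance, the loop converges to a teaching point 0.2 below the simulated pose on each axis.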