Patent classifications
B25J13/06
GRAPHICALLY SUPPORTED ADAPTATION OF ROBOT CONTROL PROGRAMS
A control unit to ascertain one or more parameters of a control program and/or of a control system for a robot manipulator, wherein the control unit includes: an interactive operating unit to display a first adjustment element and a specified region for the first adjustment element, wherein the first adjustment element is moveable within the specified region via an input of a user, the interactive operating unit further to detect a user-specified position of the first adjustment element and transmit the user-specified position; and a computing unit to receive the user-specified position and ascertain weightings for a specified cost function as a function of the position, wherein a sum of the weightings is constant for all positions of the adjustment element, the computing unit further to ascertain the parameters of the control program and/or of the control system for the robot manipulator based on the cost function.
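The constant-sum weighting described above can be sketched as follows. This is a minimal illustration, not the patented method: the slider maps to two weights whose sum never changes, and a toy weighted cost is minimized over a parameter grid. All names (`w_speed`, `w_accuracy`, the cost terms) are illustrative assumptions.

```python
# Hypothetical sketch: map a 1-D adjustment-element position to two
# cost-function weights whose sum is constant for every position.
# Weight names and cost terms are illustrative assumptions.

def weights_from_position(position: float, total: float = 1.0) -> tuple[float, float]:
    """Map a slider position in [0, 1] to two weights summing to `total`."""
    position = min(max(position, 0.0), 1.0)  # clamp to the specified region
    w_speed = total * position
    w_accuracy = total - w_speed             # the sum stays constant
    return w_speed, w_accuracy

def cost(param: float, w_speed: float, w_accuracy: float) -> float:
    """Toy weighted cost trading off two placeholder terms."""
    speed_term = (param - 2.0) ** 2
    accuracy_term = (param - 5.0) ** 2
    return w_speed * speed_term + w_accuracy * accuracy_term

# Ascertain the parameter by minimizing the cost over a coarse grid.
w_s, w_a = weights_from_position(0.25)
best = min((cost(p / 10, w_s, w_a), p / 10) for p in range(0, 101))[1]
```

Because the weights always sum to the same total, moving the slider only trades one cost term against the other rather than rescaling the whole objective.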
SURGICAL ROBOT ARM AND INSTRUMENT DETACHMENT
A surgical robot arm comprises a base connected to a terminal link via a series of intermediate joints. The terminal link comprises a drive assembly interface comprising drive assembly interface elements. Each drive assembly interface element is configured to: engage an instrument interface element of an instrument interface of a robotic surgical instrument when the surgical robot arm engages the robotic surgical instrument; and move relative to the drive assembly interface across a range of motion so as to, when engaged with the instrument interface element, transfer drive to that instrument interface element. The drive assembly interface elements are arranged across a plane perpendicular to the longitudinal axis of the terminal link such that the robotic surgical instrument is detachable from the surgical robot arm in a detachment direction parallel to the plane when each drive assembly interface element is anywhere within its range of motion.
REPEATING PATTERN DETECTION WITHIN USAGE RECORDINGS OF ROBOTIC PROCESS AUTOMATION TO FACILITATE REPRESENTATION THEREOF
Improved techniques for examining a plurality of distinct recordings pertaining to user interactions with one or more software applications, where each recording concerns performing at least one task. The examined recordings can be processed such that the recordings can be organized and/or rendered in a consolidated manner which facilitates a user's understanding of higher-level operations being performed by the examined recordings to carry out the associated task. As one embodiment, a robotic process automation system can, for example, operate to: acquire a plurality of recordings, the recordings including a series of user interactions with one or more application programs, the recordings being acquired via the robotic process automation system; identify repeating sequences of user interactions within the recordings; select at least one of the identified repeating sequences that occurs more often than others; create a first level pattern for the selected at least one of the identified repeating sequences; and associate a descriptive label to the first level pattern created for the selected at least one of the identified repeating sequences.
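The identify/select/label steps above can be sketched with simple n-gram counting. This is a hedged illustration only, not the claimed system: the window length, recordings, and label text are all assumptions made for the example.

```python
# Illustrative sketch: find the most frequent repeating sub-sequence of
# user interactions across recordings and label it as a first-level pattern.
from collections import Counter

def repeating_sequences(recordings, n=3):
    """Count every length-n window of interactions across all recordings."""
    counts = Counter()
    for rec in recordings:
        for i in range(len(rec) - n + 1):
            counts[tuple(rec[i:i + n])] += 1
    # keep only windows that actually repeat
    return {seq: c for seq, c in counts.items() if c > 1}

def first_level_pattern(recordings, n=3):
    """Select the repeating sequence that occurs most often and label it."""
    repeats = repeating_sequences(recordings, n)
    if not repeats:
        return None
    seq = max(repeats, key=repeats.get)
    return {"label": "pattern-1", "steps": list(seq), "count": repeats[seq]}

recs = [
    ["open", "copy", "paste", "save", "open", "copy", "paste"],
    ["login", "open", "copy", "paste", "logout"],
]
pattern = first_level_pattern(recs)  # most frequent window: open→copy→paste
```

In this toy input the window `open, copy, paste` appears three times across both recordings, so it becomes the labeled first-level pattern.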
ROBOTIC PROCESS AUTOMATION SUPPORTING HIERARCHICAL REPRESENTATION OF RECORDINGS
Improved techniques for examining a plurality of distinct recordings pertaining to user interactions with one or more software applications, where each recording concerns performing at least one task. The examined recordings can be processed such that the recordings can be organized and/or rendered in a consolidated manner which facilitates a user's understanding of higher-level operations being performed by the examined recordings to carry out the associated task. Advantageously, the improved techniques enable a robotic process automation (RPA) system to recognize and represent repetitive tasks within multiple recordings as multi-level (e.g., hierarchical) patterns of steps, sub-tasks, or some combination thereof. In doing so, an RPA system can identify and define such patterns within recordings and can also accommodate variants in such patterns. The resulting multi-level representation of the recordings allows users to better understand and visualize what tasks or sub-tasks are being carried out by portions of the recordings.
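The multi-level idea can be sketched as follows: once a lower-level pattern has been labeled, its occurrences can be collapsed into a single step, and pattern detection can run again over the collapsed sequence to yield the next level. This is an assumed mechanism for illustration; the recordings, patterns, and labels are invented.

```python
# Hedged sketch of hierarchical pattern representation: collapse labeled
# pattern occurrences so a second-level pattern can be found on top.

def collapse(recording, pattern, label):
    """Replace each occurrence of `pattern` in `recording` with `label`."""
    out, i, n = [], 0, len(pattern)
    while i < len(recording):
        if recording[i:i + n] == pattern:
            out.append(label)   # one higher-level step stands in for the run
            i += n
        else:
            out.append(recording[i])
            i += 1
    return out

rec = ["open", "copy", "paste", "save", "open", "copy", "paste", "save"]
# Level 1: a repeating copy/paste pair becomes one labeled sub-task.
level1 = collapse(rec, ["copy", "paste"], "transfer")
# Level 2: the collapsed sequence itself repeats and can be labeled again.
level2 = collapse(level1, ["open", "transfer", "save"], "sync-file")
```

Each pass shortens the recording while preserving its meaning, which is what lets users view the same recording at several levels of abstraction.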
Visual annotations in robot control interfaces
Methods, apparatus, systems, and computer-readable media are provided for visually annotating rendered multi-dimensional representations of robot environments. In various implementations, an entity may be identified that is present with a telepresence robot in an environment. A measure of potential interest of a user in the entity may be calculated based on a record of one or more interactions between the user and one or more computing devices. In some implementations, the one or more interactions may be for purposes other than directly operating the telepresence robot. In various implementations, a multi-dimensional representation of the environment may be rendered as part of a graphical user interface operable by the user to control the telepresence robot. In various implementations, a visual annotation may be selectively rendered within the multi-dimensional representation of the environment in association with the entity based on the measure of potential interest.
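The interest-driven annotation logic above can be sketched with a weighted score and a threshold. This is a minimal illustration under assumptions: the interaction kinds, their weights, and the threshold are all invented for the example, not taken from the patent.

```python
# Illustrative sketch only: score an entity's potential interest to the
# user from an interaction record, and annotate it when the score clears
# a threshold. Kinds, weights, and threshold are assumptions.

INTERACTION_WEIGHTS = {"search": 2.0, "email_mention": 1.5, "calendar_event": 1.0}

def interest_score(entity, interaction_log):
    """Sum weighted interactions that reference the entity."""
    return sum(
        INTERACTION_WEIGHTS.get(kind, 0.5)
        for kind, subject in interaction_log
        if subject == entity
    )

def annotations(entities, interaction_log, threshold=2.0):
    """Entities whose interest score warrants a visual annotation."""
    return {e: s for e in entities
            if (s := interest_score(e, interaction_log)) >= threshold}

# Interactions for purposes other than operating the robot (e.g., searches,
# email) still inform which entities get annotated in the control interface.
log = [("search", "printer"), ("email_mention", "printer"), ("search", "plant")]
marked = annotations(["printer", "plant", "door"], log)
```

Only entities whose accumulated score reaches the threshold are annotated, so the interface highlights what the user has recently shown interest in rather than everything in the scene.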